2310.17245
CROP: Conservative Reward for Model-based Offline Policy Optimization
Offline reinforcement learning (RL) aims to optimize policy using collected data without online interactions. Model-based approaches are particularly appealing for addressing offline RL challenges due to their capability to mitigate the limitations of offline data through data generation using models. Prior research has demonstrated that introducing conservatism into the model or Q-function during policy optimization can effectively alleviate the prevalent distribution drift problem in offline RL. However, the investigation into the impacts of conservatism in reward estimation is still lacking. This paper proposes a novel model-based offline RL algorithm, Conservative Reward for model-based Offline Policy optimization (CROP), which conservatively estimates the reward in model training. To achieve a conservative reward estimation, CROP simultaneously minimizes the estimation error and the reward of random actions. Theoretical analysis shows that this conservative reward mechanism leads to a conservative policy evaluation and helps mitigate distribution drift. Experiments on D4RL benchmarks showcase that the performance of CROP is comparable to the state-of-the-art baselines. Notably, CROP establishes an innovative connection between offline and online RL, highlighting that offline RL problems can be tackled by applying online RL techniques to the empirical Markov decision process trained with a conservative reward. The source code is available at https://github.com/G0K0URURI/CROP.git.
Hao Li, Xiao-Hu Zhou, Xiao-Liang Xie, Shi-Qi Liu, Zhen-Qiu Feng, Xiao-Yin Liu, Mei-Jiang Gui, Tian-Yu Xiang, De-Xing Huang, Bo-Xian Yao, Zeng-Guang Hou
2023-10-26T08:45:23Z
http://arxiv.org/abs/2310.17245v1
# CROP: Conservative Reward for Model-based Offline Policy Optimization ###### Abstract Offline reinforcement learning (RL) aims to optimize policy using collected data without online interactions. Model-based approaches are particularly appealing for addressing offline RL challenges due to their capability to mitigate the limitations of offline data through data generation using models. Prior research has demonstrated that introducing conservatism into the model or Q-function during policy optimization can effectively alleviate the prevalent distribution drift problem in offline RL. However, the investigation into the impacts of conservatism in reward estimation is still lacking. This paper proposes a novel model-based offline RL algorithm, Conservative Reward for model-based Offline Policy optimization (CROP), which conservatively estimates the reward in model training. To achieve a conservative reward estimation, CROP simultaneously minimizes the estimation error and the reward of random actions. Theoretical analysis shows that this conservative reward mechanism leads to a conservative policy evaluation and helps mitigate distribution drift. Experiments on D4RL benchmarks showcase that the performance of CROP is comparable to the state-of-the-art baselines. Notably, CROP establishes an innovative connection between offline and online RL, highlighting that offline RL problems can be tackled by applying online RL techniques to the empirical Markov decision process trained with a conservative reward. The source code is available at [https://github.com/G0K0URURI/CROP.git](https://github.com/G0K0URURI/CROP.git). \({}^{1}\)State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China \({}^{2}\)The School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China [email protected], [email protected], [email protected] ## Introduction Reinforcement learning (RL) has achieved impressive performance across many decision-making domains, including electronic games [14], robot control [13], and recommender systems [17]. Conventional RL employs an online training paradigm where the agent optimizes policies based on real-time interactions with the environment [12]. Online interactions can be expensive, time-consuming, or dangerous, posing a significant hurdle for widespread RL applications. To address this issue, a natural idea is to use pre-collected data instead of online interactions in RL, which is known as offline RL [11]. Directly using online RL algorithms in the offline setting often leads to extremely poor results due to distribution shift [13]. Distribution shift arises from the difference between the behavior policy that produced the offline data and the learned policy; it may cause erroneous overestimation of the Q-function and damage performance. To alleviate distribution drift, many model-free offline RL algorithms incorporate conservatism or regularization to constrain the learned policy [13, 14, 15, 16]. However, model-free algorithms can only directly learn about the states in the offline data and cannot generalize environmental information, which may lead to myopia and poor performance in unseen states. Model-based offline RL algorithms address this limitation by training an environment model with which the agent can interact. Nevertheless, distribution drift can also negatively impact model-based offline RL methods.
The model accuracy inherently diminishes for state-action pairs that lie beyond the scope of the offline dataset. Due to distribution drift, the agent may visit these inaccurately modeled regions, which degrades policy optimization. Several methods use uncertainty estimation to penalize situations with low model accuracy. However, these methods rely on strong heuristic assumptions about uncertainty estimation [18, 19] or directly detect out-of-distribution (OOD) state-action tuples [15], which might prove fragile or impractical in complex environments. Furthermore, some researchers have heuristically designed elaborate structures, such as introducing counters [14] or inverse dynamic functions [13], to punish OOD data. To eschew uncertainty estimation or other additional components, several methods introduce conservatism into the model [16, 15] or Q-function [18] during policy optimization and underestimate OOD state-action tuples, indirectly mitigating distribution shift [18, 16, 15]. The main contributions of this paper are as follows: 1. A novel model-based offline RL algorithm, Conservative Reward for model-based Offline Policy optimization (CROP), is proposed in this paper. The proposed method conservatively estimates the reward in model training by minimizing rewards of random actions alongside the estimation error, then uses an existing online RL method in policy optimization. This provides a new perspective bridging offline and online RL, where offline RL can be solved by online RL methods in the empirical MDP with the conservative reward. 2. Theoretical analysis gives a lower bound on the performance of CROP and demonstrates that the proposed method is capable of underestimating the Q-function and mitigating distribution drift. 3. Experimental results show that CROP obtains state-of-the-art results on D4RL benchmark tasks. ## Related Work In offline RL, there is no opportunity to improve the data through exploration during policy optimization. When the offline data sufficiently cover the state-action space, online RL methods can perform well without additional modification [1]. However, in the more common case, the coverage of the offline data is insufficient. Using out-of-distribution (OOD) actions is necessary to find better policies, but it also brings potential risks, and the two must be balanced in policy optimization [11]. Online RL methods often perform extremely poorly in such settings due to overestimation caused by the distribution shift [23, 24]. Below, we discuss how existing offline methods address this challenge. **Model-free offline RL:** Existing model-free offline RL methods can be broadly categorized into policy constraint and value regularization. Policy constraint methods introduce constraints based on the behavior policy, directly restricting the learned policy to be close to the behavior policy [25, 19, 26] or avoiding OOD actions in the Bellman backup operator [26]. These methods directly limit the scope of policy optimization and may perform poorly when the behavior policy is inferior. In contrast, value regularization methods avoid OOD actions indirectly by underestimating the value function. Value regularization can be achieved by conservatively estimating the value function [24, 25, 27], or by penalizing uncertainty based on Q-function ensembles [26, 27]. **Model-based offline RL:** Model-based offline RL methods first train an environment model based on the offline data, then utilize interactions with the trained models to extend the offline data.
A major approach to mitigating distribution drift in offline RL is to penalize model uncertainty [26, 28, 29, 10]. However, methods that rely on model uncertainty often necessitate strong prior assumptions during uncertainty quantification. For example, MOPO and MOBILE respectively assume that the variance and the Bellman inconsistency are reasonable estimates of model uncertainty. These assumptions may not hold in specific cases. To avoid model uncertainty estimators, additional components, such as counters and discriminators, have been incorporated into the model to conservatively estimate OOD actions [26, 27, 28]. Moreover, some research directly introduces conservatism into the model or Q-function and designs offline RL methods without additional components or model uncertainty [26, 28, 29]. The proposed method CROP is similar to the existing methods COMBO [26], RAMBO [25], and ARMOR [29], all of which incorporate conservatism into the fundamental components of model-based RL (the model or Q-function) to avoid overestimation. Compared with these three existing methods, CROP differs mainly in two respects: * While the existing three methods introduce conservatism in the Q-function or the entire model, CROP only introduces conservatism in the model's reward estimate. * CROP conservatively estimates rewards during model training, whereas these methods achieve conservatism during policy optimization. Since the number of steps required for model training is empirically much smaller than that required for policy optimization, CROP can compute and back-propagate conservative losses less frequently. Furthermore, CROP directly adopts online RL methods during policy optimization, providing a new perspective on bridging offline and online RL: offline RL can be solved by applying online RL methods to the empirical MDP with conservative rewards. ## Preliminaries Reinforcement learning (RL) is used for optimization problems in Markov decision processes (MDP). An MDP is defined by a tuple \((\mathcal{S},\mathcal{A},T,R,\mu_{0},\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) represent the state and action spaces. \(T\), \(R\), \(\mu_{0}\), and \(\gamma\) denote the transition probability, reward, initial state distribution, and discount factor, respectively. At time step \(t\) with state \(s\in\mathcal{S}\), the agent chooses an action \(a\) based on a policy \(\pi(a|s)\). Then the state changes from \(s\) to \(s^{\prime}\) based on the transition probability \(T(s^{\prime}|s,a)\), and the agent obtains a reward \(R(s,a)\in[-R_{\max},R_{\max}]\). The goal of RL is to find the optimal policy that maximizes the expected cumulative discounted reward \(\sum_{t}\mathbb{E}_{s,a}\left[\gamma^{t}R(s,a)\right]\) in the MDP. Offline RL is a special formulation of RL that only uses a previously collected dataset \(\mathcal{D}=\{(s,a,R(s,a),s^{\prime})\}\) during training [10]. We use \(\bar{\pi}\) to denote the empirical behavior policy in \(\mathcal{D}\). ## Method ### Model training with conservative reward estimation The proposed algorithm, Conservative Reward for model-based Offline Policy optimization (CROP), integrates a conservative evaluation of out-of-distribution (OOD) actions into model training, so that no additional mechanism for avoiding OOD actions is needed and existing online RL algorithms can be used directly in policy optimization. The model consists of a transition probability estimator \(\hat{T}\) and a reward estimator \(\hat{r}\).
In model training, the transition probability estimator \(\hat{T}\) is updated by maximizing the log-probability: \[l_{T}=\mathbb{E}_{\mathcal{D}}\left\{-\log\hat{T}(s^{\prime}|s,a)\right\}. \tag{1}\] The reward estimator \(\hat{r}\) should minimize the estimation error as well as underestimate actions outside \(\mathcal{D}\) as much as possible, which is achieved by the following loss: \[l_{r}=\mathbb{E}_{\mathcal{D}}\left\{\left[\hat{r}(s,a)-R(s,a)\right]^{2}+ \beta\hat{r}(s,\bar{a})\right\}, \tag{2}\] where \(\bar{a}\) denotes random actions and the hyperparameter \(\beta\) controls the underestimation. By setting the derivative of Equation 2 to zero, we obtain the optimal conservative reward estimation \(r\): \[r\left(s,a\right)=R\left(s,a\right)-\beta\frac{\mu}{\bar{\pi}(a|s)}, \tag{3}\] where \(\mu\) is the probability density of a uniform distribution in the action space \(\mathcal{A}\), and \(\bar{\pi}\) is the behavior policy of \(\mathcal{D}\). Intuitively, in Equation 2 each data action is weighted by \(\bar{\pi}(a|s)\) while the penalty weights actions uniformly with density \(\mu\), so the pointwise minimizer shifts \(R(s,a)\) downward in proportion to \(\mu/\bar{\pi}(a|s)\). The second term on the RHS of Equation 3 thus represents the conservativeness of the reward estimation, which is inversely proportional to the probability of the action appearing in \(\mathcal{D}\). By interacting with models that use the conservative reward estimation, online RL algorithms can avoid OOD actions and improve policies safely for offline RL problems, as will be shown in the theoretical analysis section. Therefore, CROP provides a new perspective connecting offline and online RL: offline RL can be regarded as online RL under a conservative reward estimation, which helps apply the appealing developments of online RL to offline RL. ### Practical implementation Now we describe a practical implementation of CROP using the conservative reward estimation above. The algorithm is summarized in Algorithm 1, which consists of model training and policy optimization. In model training, we learn an ensemble of models, and each model is trained independently. For each model, the offline data \(\mathcal{D}\) is divided into a training set and a validation set, and then \(\hat{T}\) and \(\hat{r}\) are trained using Equations 1 and 2, respectively. After model training, the reward in the offline data is replaced by the mean of \(\hat{r}\). Then an online model-free RL algorithm, Soft Actor-Critic (SAC) [1], is used to optimize the policy from offline data and online interactions with the model ensemble. In interactions with the model ensemble, the reward is computed as the mean of \(\hat{r}\), while the next state is the output of \(\hat{T}\) in a model chosen at random. The initial state of the interaction is randomly sampled from the offline data \(\mathcal{D}\), and the interaction lasts for \(k\) steps. In each step of policy optimization, a mini-batch \(\mathcal{D}_{f}\) is sampled, where the proportion of online interaction data is \(f\). The Q-function of policy \(\pi\), denoted by \(\hat{Q}^{\pi}\), is trained by minimizing the soft Bellman residual \[l_{\hat{Q}^{\pi}}=\mathbb{E}_{\mathcal{D}_{f}}\{[\hat{Q}^{\pi}\left(s,a\right) -\hat{r}\left(s,a\right)-\gamma\hat{V}^{\pi}\left(s^{\prime}\right)]^{2}\}. \tag{4}\] The value function \(\hat{V}^{\pi}\) is estimated by the Monte Carlo method \[\hat{V}^{\pi}(s)=\mathbb{E}_{a\sim\pi\left(\cdot|s\right)}[Q_{\mathrm{tar}}(s,a)-\alpha\log\pi(a|s)], \tag{5}\] where the target Q-function \(Q_{\mathrm{tar}}\) is an exponentially moving average of \(\hat{Q}^{\pi}\). The policy \(\pi\) is trained by maximizing the value function \(V^{\pi}\) while keeping a reasonable entropy via a Lagrangian relaxation.
The policy loss function \(l_{\pi}\) is as follows: \[l_{\pi}=\mathbb{E}_{\mathcal{D}_{f}}\left\{\alpha\mathcal{H}[\pi(\cdot|s)]- \hat{V}^{\pi}(s)\right\}, \tag{6}\] where \(\mathcal{H}(\cdot)\) stands for entropy. \(\alpha\) is a non-negative parameter updated by minimizing the following loss function: \[l_{\alpha}=\mathbb{E}_{\mathcal{D}_{f}}\left\{\alpha\mathcal{H}[\pi(\cdot|s)] -\alpha\bar{\mathcal{H}}\right\}, \tag{7}\] where the hyperparameter \(\bar{\mathcal{H}}\) is the target entropy. \(\hat{T}\), \(\hat{r}\), \(Q_{\mathrm{tar}}\), \(\hat{Q}^{\pi}\) and \(\pi\) are all parameterized with multi-layer perceptrons.
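For concreteness, the two phases of Algorithm 1 can be sketched in PyTorch-style code. This is a minimal illustration, not the released implementation: the network widths follow Table 1, while the diagonal-Gaussian transition head, the assumed action range \([-1,1]\) for sampling \(\bar{a}\), the single Q-network, and the `policy(s) -> (action, log_prob)` interface are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CROPModel(nn.Module):
    """One ensemble member: a diagonal-Gaussian transition head for
    T_hat(s'|s,a) and a scalar reward head for r_hat(s,a)."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        def mlp(out_dim):
            return nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim))
        self.trans_mu = mlp(state_dim)       # mean of s'
        self.trans_logvar = mlp(state_dim)   # log-variance of s'
        self.reward = mlp(1)                 # r_hat(s, a)

    def losses(self, s, a, r, s_next, beta):
        x = torch.cat([s, a], dim=-1)
        mu, logvar = self.trans_mu(x), self.trans_logvar(x)
        # Equation 1: Gaussian negative log-likelihood of the transition.
        l_T = 0.5 * (logvar + (s_next - mu) ** 2 / logvar.exp()).sum(-1).mean()
        # Equation 2: squared reward error plus beta times the predicted
        # reward of uniformly random actions a_bar (the conservative term).
        a_bar = 2.0 * torch.rand_like(a) - 1.0  # assumes actions in [-1, 1]
        r_hat = self.reward(x).squeeze(-1)
        r_rand = self.reward(torch.cat([s, a_bar], dim=-1)).squeeze(-1)
        l_r = ((r_hat - r) ** 2 + beta * r_rand).mean()
        return l_T, l_r

def sac_step(policy, q_net, q_target, log_alpha, batch, target_entropy, gamma=0.99):
    """One policy-optimization step (Equations 4-7) on a mixed batch D_f,
    whose rewards are the conservative model rewards r_hat."""
    s, a, r_hat, s_next = batch
    alpha = log_alpha.exp()
    with torch.no_grad():
        a2, logp2 = policy(s_next)
        v_next = q_target(s_next, a2) - alpha * logp2   # Equation 5
        backup = r_hat + gamma * v_next                 # Equation 4 target
    l_q = F.mse_loss(q_net(s, a), backup)               # Equation 4
    a_pi, logp = policy(s)
    # Equation 6, in the usual SAC form: maximize Q while keeping entropy.
    l_pi = (alpha.detach() * logp - q_net(s, a_pi)).mean()
    # Equation 7: adjust alpha toward the target entropy H_bar.
    l_alpha = (alpha * (-logp.detach() - target_entropy)).mean()
    return l_q, l_pi, l_alpha
```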
### Theoretical analysis of CROP In the following, we theoretically analyze the proposed method CROP and show that it underestimates the Q-function and satisfies a safe policy improvement guarantee. We discuss the tabular case; the proof for the continuous case is similar and omitted for brevity. Let \(\bar{T}\) and \(\bar{r}\) denote the empirical transition probability and the empirical conservative reward, which are the optimal estimations of \(T\) and \(r\) from \(\mathcal{D}\). The difference between \((\bar{T},\bar{r})\) and \((T,r)\) comes from the sample bias during offline data collection, while the difference between \((\bar{T},\bar{r})\) and \((\hat{T},\hat{r})\) comes from the estimation error of the models (neural networks in this paper). The sample bias and the estimation error are the main factors affecting the performance of the algorithm. To express the Q-function update conveniently, we use \(B^{T,r,\pi}\) to denote the Bellman operator for policy \(\pi\) in the MDP with transition probability \(T\) and reward \(r\): \[B^{T,r,\pi}Q^{\pi}=r+\gamma T^{\pi}Q^{\pi}, \tag{8}\] where \(T^{\pi}\) denotes the state transition probability with policy \(\pi\): \(T^{\pi}(s^{\prime}|s)=\sum_{a}\pi(a|s)T(s^{\prime}|s,a)\). Following the standard assumptions in the model-based offline RL literature [21, 1], we assume that the sample bias and the estimation error are bounded as follows: **Assumption 1**.: \(\forall s,a\in D\)_, the following relationships hold_ \[\begin{split}|\bar{r}\left(s,a\right)-r\left(s,a\right)|& \leq\frac{C_{r}}{\sqrt{\left|D\left(s,a\right)\right|}},\\ ||\bar{T}\left(s^{\prime}|s,a\right)-T\left(s^{\prime}|s,a\right) ||_{1}&\leq\frac{C_{T}}{\sqrt{\left|D\left(s,a\right)\right|}}. \end{split} \tag{9}\] **Assumption 2**.: \(\forall s,a\in D\)_, the model estimation bias is bounded:_ \[|\hat{r}\left(s,a\right)-\bar{r}\left(s,a\right)|\leq\epsilon_{r},\quad||\hat{T}\left(s^{\prime}|s,a\right)-\bar{T}\left(s^{\prime}|s,a\right)||_{1}\leq\epsilon_{T}. \tag{10}\] First, we state that the Q-function \(\hat{Q}^{\pi}\) in CROP conservatively estimates the true Q-function: **Proposition 1**.: _For large enough \(\beta\), we have_ \[\mathbb{E}_{s\sim\mu_{0},a\sim\pi\left(\cdot|s\right)}\left[\hat{Q}^{\pi}(s, a)\right]\leq\mathbb{E}_{s\sim\mu_{0},a\sim\pi\left(\cdot|s\right)}\left[Q^{ \pi}(s,a)\right], \tag{11}\] _where \(Q^{\pi}\) is the Q-function of \(\pi\) in the actual MDP, i.e. the fixed point of \(B^{T,R,\pi}\), and \(\mu_{0}\) is the initial state distribution._ Proof.: With Assumption 1, the difference between \(B^{\bar{T},\bar{r},\pi}\) and \(B^{T,r,\pi}\) can be bounded: \[\begin{split}\left|B^{T,r,\pi}Q^{\pi}\left(*\right)-B^{\bar{T}, \bar{r},\pi}Q^{\pi}\left(*\right)\right|\\ =\left|\left(r\left(*\right)-\bar{r}\left(*\right)\right)+\gamma \sum_{s^{\prime}}\left(T\left(s^{\prime}|*\right)-\bar{T}\left(s^{\prime}|* \right)\right)V^{\pi}(s^{\prime})\right|\\ \leq\left|r\left(*\right)-\bar{r}\left(*\right)\right|+\gamma\left|\sum_{s^{\prime}}\left(T\left(s^{\prime}|*\right)-\bar{T} \left(s^{\prime}|*\right)\right)V^{\pi}(s^{\prime})\right|\\ \leq\frac{C_{r}+\gamma C_{T}2R_{\max}/(1-\gamma)}{\sqrt{\left| D\left(*\right)\right|}},\end{split} \tag{12}\] where \(*\) denotes \((s,a)\) and \(V^{\pi}(s)=\sum_{a}\pi(a|s)Q^{\pi}(s,a)\). This gives us an expression, as a function of \(C_{r}\) and \(C_{T}\), that bounds the potential overestimation caused by the sample bias. In the proposed algorithm, only the state transitions of the offline data are kept, and the reward of the offline data is replaced by the reward of the environment model. Thus, the Bellman operator using offline data is \(B^{\bar{T},\hat{r},\pi}\), whose difference to \(B^{\bar{T},\bar{r},\pi}\) is bounded by \(\epsilon_{r}\): \[\left|B^{\bar{T},\hat{r},\pi}Q^{\pi}\left(*\right)-B^{\bar{T},\bar{r},\pi}Q^{\pi}\left(*\right)\right|=|\hat{r}\left(*\right)-\bar{r}\left(*\right)|\leq\epsilon_{r}. \tag{13}\] The Bellman operator using interactions with the model is \(B^{\hat{T},\hat{r},\pi}\), whose difference to \(B^{\bar{T},\bar{r},\pi}\) is bounded as follows: \[\begin{split}\left|B^{\hat{T},\hat{r},\pi}Q\left(*\right)-B^{ \bar{T},\bar{r},\pi}Q\left(*\right)\right|\\ =\left|\hat{r}\left(*\right)-\bar{r}\left(*\right)+\gamma\sum_{s^{ \prime}}\left(\hat{T}\left(s^{\prime}|*\right)-\bar{T}\left(s^{\prime}|* \right)\right)V^{\pi}(s^{\prime})\right|\\ \leq\left|\hat{r}\left(*\right)-\bar{r}\left(*\right)\right|+ \gamma\left|\sum_{s^{\prime}}\left(\hat{T}\left(s^{\prime}|*\right)-\bar{T} \left(s^{\prime}|*\right)\right)V^{\pi}(s^{\prime})\right|\\ \leq\epsilon_{r}+\gamma\epsilon_{T}2R_{\max}/(1-\gamma).\end{split} \tag{14}\] Since offline data and interactions with the model are mixed in the ratio \((1-f):f\) for policy optimization in CROP, the Bellman operator \(B_{\mathrm{CROP}}\) used in CROP can be seen as a mix of \(B^{\bar{T},\hat{r},\pi}\) and \(B^{\hat{T},\hat{r},\pi}\): \[\begin{split} B_{\mathrm{CROP}}Q\left(*\right)=\left[(1-f)B^{\bar{T},\hat{r},\pi}+fB^{\hat{T},\hat{r},\pi}\right]Q\left(*\right)\\ \leq B^{T,r,\pi}Q\left(*\right)+\left|B^{T,r,\pi}Q\left(*\right)-B ^{\bar{T},\bar{r},\pi}Q\left(*\right)\right|\\ +(1-f)\epsilon_{r}+f\left(\epsilon_{r}+\gamma\epsilon_{T}2R_{\max}/ \left(1-\gamma\right)\right)\\ \leq B^{T,R,\pi}Q\left(*\right)-\beta\frac{\mu}{\bar{\pi}(a|s)}+ \frac{C_{r}+\gamma C_{T}2R_{\max}/(1-\gamma)}{\sqrt{\left|D\left(*\right)\right|} }\\ +\epsilon_{r}+f\gamma\epsilon_{T}2R_{\max}/\left(1-\gamma\right).\end{split} \tag{15}\] \(\hat{Q}^{\pi}(*)\) is the fixed point of \(B_{\mathrm{CROP}}\), and \(Q^{\pi}\left(*\right)\) is the fixed point of \(B^{T,R,\pi}\). Define the terms independent of \(\beta\) in the RHS of Equation 15 as \(\phi\): \(\phi=\frac{C_{r}+\gamma C_{T}2R_{\max}/(1-\gamma)}{\sqrt{\left|D\left(s,a \right)\right|}}+\epsilon_{r}+f\gamma\epsilon_{T}2R_{\max}/\left(1-\gamma\right)\).
By computing the fixed point of Equation 15, \(\hat{Q}^{\pi}(*)\) can be bounded as follows: \[\hat{Q}^{\pi}(*)\leq Q^{\pi}\left(*\right)-\beta\left[S^{\pi}\frac{\mu}{\bar{ \pi}}\right]\left(*\right)+\left[S^{\pi}\phi\right]\left(*\right), \tag{16}\] where \(S^{\pi}=\left(I-\gamma T^{\pi}\right)^{-1}\). Thus, by choosing large enough \(\beta\), \(\hat{Q}^{\pi}(*)\leq Q^{\pi}\left(*\right)\) and \(\mathbb{E}_{s\sim\mu_{0},a\sim\pi\left(\cdot|s\right)}\left[\hat{Q}^{\pi}(s,a )\right]\leq\mathbb{E}_{s\sim\mu_{0},a\sim\pi\left(\cdot|s\right)}\left[Q^{ \pi}(s,a)\right]\). Proposition 1 shows that CROP can conservatively estimate the Q-function and avoid the common overestimation problem in offline RL. However, not all conservative estimations help avoid OOD actions. As an extreme example, for any bounded \(\hat{Q}^{\pi}\) there always exists a large enough constant such that \(\hat{Q}^{\pi}(*)\) minus that constant is smaller than \(Q^{\pi}(*)\); however, subtracting a constant from the Q-function does not change the relative merits of actions and has no effect on policy optimization. Thus, it is necessary to prove that CROP is effective in avoiding OOD actions. **Proposition 2**.: \(\forall a_{1},a_{2}\in A,s_{1},s_{2}\in S\)_, if \(\bar{\pi}\left(a_{1}|s_{1}\right)<\bar{\pi}\left(a_{2}|s_{2}\right)\), for large enough \(\beta\),_ \[Q^{\pi}\left(s_{1},a_{1}\right)-\hat{Q}^{\pi}(s_{1},a_{1})>Q^{\pi}\left(s_{2},a_ {2}\right)-\hat{Q}^{\pi}(s_{2},a_{2}). \tag{17}\] Proof.: Similar to Equation 15, \[B_{\mathrm{CROP}}Q^{\pi}\left(*\right)=\left[(1-f)B^{\bar{T},\hat{r}, \pi}+fB^{\hat{T},\hat{r},\pi}\right]Q^{\pi}\left(*\right) \tag{18}\] \[\geq B^{T,r,\pi}Q^{\pi}\left(*\right)-\left|B^{T,r,\pi}Q^{\pi} \left(*\right)-B^{\bar{T},\bar{r},\pi}Q^{\pi}\left(*\right)\right|\] \[-(1-f)\left|B^{\bar{T},\bar{r},\pi}Q^{\pi}\left(*\right)-B^{\bar{ T},\hat{r},\pi}Q^{\pi}\left(*\right)\right|\] \[-f\left|B^{\bar{T},\bar{r},\pi}Q^{\pi}\left(*\right)-B^{\hat{T}, \hat{r},\pi}Q^{\pi}\left(*\right)\right|\] \[\geq B^{T,R,\pi}Q^{\pi}\left(*\right)-\beta\frac{\mu}{\bar{\pi}( a|s)}-\phi.\] Computing the fixed points on both sides of Equation 18 yields the following: \[\hat{Q}^{\pi}(*)\geq Q^{\pi}\left(*\right)-\beta\left[S^{\pi}\frac{\mu}{\bar{ \pi}}\right]\left(*\right)-\left[S^{\pi}\phi\right]\left(*\right). \tag{19}\] Therefore, \[\left(Q^{\pi}\left(s_{1},a_{1}\right)-\hat{Q}^{\pi}(s_{1},a_{1}) \right)-\left(Q^{\pi}\left(s_{2},a_{2}\right)-\hat{Q}^{\pi}(s_{2},a_{2})\right) \geq \tag{20}\] \[\beta\left\{\left[S^{\pi}\frac{\mu}{\bar{\pi}}\right]\left(s_{1},a_{1}\right)-\left[S^{\pi}\frac{\mu}{\bar{\pi}}\right]\left(s_{2},a_{2}\right)\right\}\] \[-\left[S^{\pi}\phi\right]\left(s_{1},a_{1}\right)-\left[S^{\pi} \phi\right]\left(s_{2},a_{2}\right).\] For large enough \(\beta\), \(\left(Q^{\pi}\left(s_{1},a_{1}\right)-\hat{Q}^{\pi}(s_{1},a_{1})\right)- \left(Q^{\pi}\left(s_{2},a_{2}\right)-\hat{Q}^{\pi}(s_{2},a_{2})\right)>0\), i.e., \(Q^{\pi}\left(s_{1},a_{1}\right)-\hat{Q}^{\pi}(s_{1},a_{1})>Q^{\pi}\left(s_{2},a_{2}\right)-\hat{Q}^{\pi}(s_{2},a_{2})\). Proposition 2 states that with suitable hyperparameters, the conservatism (\(Q^{\pi}\) minus \(\hat{Q}^{\pi}\)) in CROP is stronger for actions that occur less frequently under \(\bar{\pi}\), thus avoiding OOD actions in policy optimization. As \(\beta\rightarrow+\infty\), \(\arg\max_{a}\hat{Q}^{\pi}(s,a)=\arg\max_{a}\bar{\pi}(a|s)\), and the optimal policy \(\pi^{*}\) in CROP directly chooses the most likely action under \(\bar{\pi}\).
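The closed form in Equation 3 and the limiting behavior just described can be checked numerically on a toy one-state bandit, where the minimizer of Equation 2 is available per action. The rewards and behavior probabilities below are arbitrary illustrative values:

```python
import numpy as np

# Toy one-state bandit: true rewards R and behavior policy pi_bar.
R = np.array([1.0, 0.8, 0.2])
pi_bar = np.array([0.1, 0.6, 0.3])   # the best action (index 0) is rare in the data
mu = 1.0 / len(R)                    # uniform density over the 3 actions

for beta in [0.0, 0.02, 0.1, 1.0]:
    r_cons = R - beta * mu / pi_bar  # Equation 3, the minimizer of Equation 2
    print(f"beta={beta}: r={np.round(r_cons, 3)}, argmax={r_cons.argmax()}")

# The penalty beta*mu/pi_bar grows fastest for the rare action: for small
# beta the conservative argmax is still action 0, while for larger beta it
# switches to action 1, the most frequent action under pi_bar, matching
# the beta -> infinity limit stated above.
```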
The core evaluation metric for offline RL algorithms is the performance of the learned policy. The proposed algorithm CROP has a safe policy improvement guarantee, as stated in the following. **Proposition 3**.: _The optimal policy \(\pi^{*}\) learned by maximizing \(\hat{Q}^{\pi}\)_ \[\pi^{*}=\arg\max_{\pi}\mathbb{E}_{s\sim\mu_{0},a\sim\pi(\cdot|s)}\left[\hat{Q} ^{\pi}(s,a)\right] \tag{21}\] _performs no worse than the behavior policy \(\bar{\pi}\) up to a tolerance \(\delta\):_ \[\mathbb{E}_{s\sim\mu_{0},a\sim\pi^{*}(\cdot|s)}\left[Q^{\pi^{*}}(s,a)\right] \geq\mathbb{E}_{s\sim\mu_{0},a\sim\bar{\pi}(\cdot|s)}\left[Q^{\bar{\pi}}(s,a) \right]-\delta. \tag{22}\] \(\delta\) _is a function of the sample bias, the estimation errors, and the difference between \(\pi^{*}\) and \(\bar{\pi}\), which is detailed in the proof (\(\delta\)-safe policy improvement)._ Proof.: For ease of writing, we define the expected cumulative reward of policy \(\pi\) in an MDP with transition probability \(T\) and reward \(r\) as \(F(T,r,\pi)=\mathbb{E}_{s\sim\mu_{0},a\sim\pi(\cdot|s)}\left[Q^{\pi}(s,a)\right]\). Similarly define \(F(\bar{T},\bar{r},\pi)\) and \(F(T_{f},\hat{r},\pi)=\mathbb{E}_{s\sim\mu_{0},a\sim\pi(\cdot|s)}\left[\hat{Q} ^{\pi}(s,a)\right]\), where \(T_{f}=(1-f)\bar{T}+f\hat{T}\). **Step 1: Relate \(F(T_{f},R,\pi^{*})\) and \(F(T_{f},R,\bar{\pi})\).** Since \(\pi^{*}\) optimizes Equation 21, \[F(T_{f},\hat{r},\pi^{*})\geq F(T_{f},\hat{r},\bar{\pi}). \tag{23}\] Since \(|\bar{r}-\hat{r}|\leq\epsilon_{r}\), \[F(T_{f},R-\beta\frac{\mu}{\bar{\pi}(a|s)}+\frac{C_{r}}{\sqrt{ \left|D\left(s,a\right)\right|}}+\epsilon_{r},\pi^{*}) \tag{24}\] \[\geq F(T_{f},R-\beta\frac{\mu}{\bar{\pi}(a|s)}-\frac{C_{r}}{\sqrt{ \left|D\left(s,a\right)\right|}}-\epsilon_{r},\bar{\pi}).\] Rearranging Equation 24 gives: \[F(T_{f},R,\pi^{*})\geq F(T_{f},R,\bar{\pi})+F(T_{f},\beta\frac{ \mu}{\bar{\pi}(a|s)},\pi^{*}) \tag{25}\] \[-\beta-\frac{2\epsilon_{r}}{1-\gamma}-F(T_{f},\frac{C_{r}}{\sqrt{ \left|D\left(s,a\right)\right|}},\pi^{*})\] \[-F(T_{f},\frac{C_{r}}{\sqrt{\left|D\left(s,a\right)\right|}},\bar{ \pi}).\] \[F(T_{f},\frac{C_{r}}{\sqrt{\left|D\left(s,a\right)\right|}},\pi)= \sum_{s}\sum_{a}\frac{C_{r}\pi(a|s)d_{T_{f}}^{\pi}(s)}{\sqrt{\left|D\left(s \right)\right|\bar{\pi}(a|s)}} \tag{26}\] \[=\sum_{s}\frac{C_{r}d_{T_{f}}^{\pi}(s)}{\sqrt{\left|D\left( s\right)\right|}}\sum_{a}\frac{\pi(a|s)}{\sqrt{\bar{\pi}(a|s)}}\] \[\leq\sum_{s}\frac{C_{r}d_{T_{f}}^{\pi}(s)}{\sqrt{\left|D \left(s\right)\right|}}\sqrt{\left|A\right|\left[D_{\mathrm{CQL}}(\pi,\bar{ \pi})(s)+1\right]},\] where \(d_{T_{f}}^{\pi}\) is the distribution of states under the
transition probability \(T_{f}\) and policy \(\pi\). \(D_{\mathrm{CQL}}(\pi,\bar{\pi})(s)=\sum_{a}\pi(a|s)\left(\frac{\pi(a|s)}{\bar {\pi}(a|s)}-1\right)\) is defined as in Kumar et al. (2020) and can be bounded as: \[D_{\mathrm{CQL}}(\pi,\bar{\pi})(s)+1 \leq(\sum_{a}\frac{\pi(a|s)}{\sqrt{\bar{\pi}(a|s)}})^{2}, \tag{27}\] \[\left|A\right|\left[D_{\mathrm{CQL}}(\pi,\bar{\pi})(s)+1\right] \geq(\sum_{a}\frac{\pi(a|s)}{\sqrt{\bar{\pi}(a|s)}})^{2}.\] \[F(T_{f},\beta\frac{\mu}{\bar{\pi}(a|s)},\pi) =\sum_{s}\sum_{a}\beta\frac{\mu}{\bar{\pi}(a|s)}\pi(a|s)d_{T_{f}}^{ \pi}(s)\] (28) \[=\sum_{s}\beta d_{T_{f}}^{\pi}(s)\sum_{a}\frac{\pi(a|s)}{\sqrt{ \bar{\pi}(a|s)}}\frac{\mu}{\sqrt{\bar{\pi}(a|s)}}\] \[\geq\sum_{s}\beta d_{T_{f}}^{\pi}(s)\sqrt{D_{\mathrm{CQL}}(\pi, \bar{\pi})(s)+1}\sqrt{D_{\mathrm{CQL}}(\pi_{\mathrm{ran}},\bar{\pi})(s)+1},\] where \(\pi_{\mathrm{ran}}\) is the random policy and \(\pi_{\mathrm{ran}}(s,a)=\mu\) for all \((s,a)\). Therefore, \[F(T_{f},R,\pi^{*})\geq F(T_{f},R,\bar{\pi})-\Delta_{R}, \tag{29}\] where \[\Delta_{R}=\sum_{s}\frac{C_{r}}{\sqrt{\left|D\left(s\right) \right|}}\left\{d_{T_{f}}^{\pi^{*}}(s)\sqrt{\left|A\right|\left[D_{\mathrm{CQL }}(\pi^{*},\bar{\pi})(s)+1\right]}\right. \tag{30}\] \[\left.+d_{T_{f}}^{\bar{\pi}}(s)\sqrt{\left|A\right|}\right\}+ \beta+\frac{2\epsilon_{r}}{1-\gamma}\] \[-\sum_{s}\beta d_{T_{f}}^{\pi^{*}}(s)\sqrt{D_{\mathrm{CQL}}(\pi^{*},\bar {\pi})(s)+1}\sqrt{D_{\mathrm{CQL}}(\pi_{\mathrm{ran}},\bar{\pi})(s)+1}.\] **Step 2: Relate \(F(T_{f},R,\cdot)\) and \(F(T,R,\cdot)\).** \[F(T_{f},R,\pi) =F(T,R,\pi) \tag{31}\] \[+\frac{\gamma}{1-\gamma}\mathbb{E}_{s,a\sim d_{T_{f}}^{\pi}(s)\pi (a|s)}\left[(T_{f}^{\pi}-T^{\pi})Q^{\pi}\right].\] The above equation comes from the Simulation Lemma (Chapter 2, Lemma 2.2) in Agarwal et al. (2019). Therefore, \[\left|F(T_{f},R,\pi)-F(T,R,\pi)\right| \tag{32}\] \[\leq\frac{\gamma R_{\max}}{(1-\gamma)^{2}}\mathbb{E}_{s,a\sim d_{ T_{f}}^{\pi}(s)\pi(a|s)}\left[||T_{f}^{\pi}-T^{\pi}||_{1}\right]\] \[\leq\frac{\gamma R_{\max}}{(1-\gamma)^{2}}\left\{\mathbb{E}_{s,a \sim d_{T_{f}}^{\pi}(s)\pi(a|s)}\left[||\bar{T}^{\pi}-T^{\pi}||_{1}\right]+f \epsilon_{T}\right\},\] \[\mathbb{E}_{s,a\sim d_{T_{f}}^{\pi}(s)\pi(a|s)}\left[||\bar{T}^{ \pi}-T^{\pi}||_{1}\right] \tag{33}\] \[=\sum_{s}\sum_{a}||\bar{T}(\cdot|s,a)-T(\cdot|s,a)||_{1}\pi(a|s)d_ {T_{f}}^{\pi}(s)\] \[\leq\sum_{s}\sum_{a}\frac{C_{T}}{\sqrt{\left|D\left(s,a\right) \right|}}\pi(a|s)d_{T_{f}}^{\pi}(s)\] \[=\sum_{s}\frac{C_{T}d_{T_{f}}^{\pi}(s)}{\sqrt{\left|D\left(s \right)\right|}}\sum_{a}\frac{\pi(a|s)}{\sqrt{\bar{\pi}(a|s)}}\] \[\leq\sum_{s}\frac{C_{T}d_{T_{f}}^{\pi}(s)}{\sqrt{\left|D\left(s \right)\right|}}\sqrt{\left|A\right|\left[D_{\mathrm{CQL}}(\pi,\bar{\pi})(s)+ 1\right]},\] \[\left|F(T_{f},R,\pi)-F(T,R,\pi)\right|\leq\Delta_{T}(\pi)= \tag{34}\] \[\frac{\gamma R_{\max}}{(1-\gamma)^{2}}\left\{f\epsilon_{T}+\sum_ {s}\frac{C_{T}d_{T_{f}}^{\pi}(s)}{\sqrt{\left|D\left(s\right)\right|}}\sqrt{ \left|A\right|\left[D_{\mathrm{CQL}}(\pi,\bar{\pi})(s)+1\right]}\right\}.\] **Step 3: Relate \(F(T,R,\pi^{*})\) and \(F(T,R,\bar{\pi})\).** Combining Step 1 and Step 2, \[F(T,R,\pi^{*})\geq F(T,R,\bar{\pi})-\delta, \tag{35}\] where \(\delta=\Delta_{R}+\Delta_{T}(\pi^{*})+\Delta_{T}(\bar{\pi})\). ## Experiment ### Conservative reward visualization To visualize the proposed conservative reward, we design a simple one-dimensional MDP where the state is always 0 and the action space is \([-1,1]\).
The reward \(R\) is defined as: \[R(a)=0.4\cdot\mathcal{N}(a|0.1,0.2)+1\cdot\mathcal{N}(a|-0.3,0.5)+0.1\cdot b, \tag{36}\] where \(\mathcal{N}(*|x,y)\) denotes the probability density function of a Gaussian distribution with mean \(x\) and variance \(y\), and \(b\) is a noise term following the standard Gaussian distribution. The offline data contains 10000 interactions where the probability of an action is proportional to \(\mathcal{N}(*|0.1,0.5)\). The model hyperparameters are the same as in Table 1. \begin{table} \begin{tabular}{l l} \hline \hline **Hyperparameter** & **Value** \\ \hline Hidden units of model & 200 \\ Hidden units of policy & 256 \\ Number of layers in model & 4 \\ Number of layers in Q-function & 2 \\ Number of layers in policy & 2 \\ Ratio of validation set in \(\mathcal{D}\) & 0.01 \\ Nonlinear activation & ReLU \\ Batch size in model training & 256 \\ Batch size in policy optimization & 512 \\ \(f\) & 0.5 \\ Optimizer & Adam \\ Learning rate of model & 1e-3 \\ Learning rate of Q-function & 3e-4 \\ Learning rate of policy & 1e-4 \\ Learning rate of \(\alpha\) & 3e-5 \\ Discount factor \(\gamma\) & 0.99 \\ Exponentially moving average coefficient of \(Q_{\mathrm{tar}}\) & 0.005 \\ Replay buffer size for model-generated data & 100000 \\ \hline \hline \end{tabular} \end{table} Table 1: Values of the hyperparameters The conservative reward with different \(\beta\) is shown in Fig. 1. The results show that the larger \(\beta\) is, the smaller the conservative reward is, and when \(\beta\) is large enough (\(\beta=10\)), the conservative reward corresponding to \(\arg\max_{a}\bar{\pi}(a|s)\) (the action near \(0.2\)) is the largest. This result is consistent with our theoretical analysis. ### Experiments on D4RL In this section, the proposed method CROP is compared with several prior offline RL methods on the MuJoCo-v2 tasks (HalfCheetah, Hopper, and Walker2d) of the D4RL benchmark (Fu et al. 2020). Each task has four datasets: Random, Medium, Medium-Replay, and Medium-Expert. The Random dataset comprises transitions gathered through a random policy. The Medium dataset consists of suboptimal data collected by an early-stopped SAC policy. The Medium-Replay dataset encompasses the replay buffer generated during the training of an early-stopped SAC policy. Lastly, the Medium-Expert dataset combines expert demonstrations and suboptimal data. Due to the different sizes and behavior policies of the datasets, the coverage of the offline data and the learned model accuracy differ, which affects the selection of the conservatism coefficient \(\beta\) and the roll-out length \(k\). For each dataset, \(\beta\) is searched from \(\{0.01,0.05,0.1,0.2\}\) and \(k\) is searched from \(\{3,5,10\}\). We train an ensemble of 7 models and pick the best 5 models based on their loss on the validation set. For the Hopper task, the Q-function has 256 hidden units, while for the HalfCheetah and Walker2d tasks it has 512. Other hyperparameters are shown in Table 1. The performance is compared with several state-of-the-art model-based (RAMBO [14], CABI+TD3-BC [15], MoREL [13], COMBO [21], and CountMORL [12]) and model-free (ATAC [16], CQL [11], and IQL [11]) offline RL methods. Results of the baselines are taken from their respective papers. The score of CROP is the average of the last five evaluations on three random seeds. The results are shown in Table 2. CROP achieves comparable performance (surpassing 90% of the maximum score) on 6 of the 12 datasets and obtains a mean score of 73.1. The proposed method ranks second only to CountMORL, surpassing the performance of the other baselines. This outcome highlights the efficacy of CROP. It should be emphasized that CROP performs better than methods that incorporate conservatism within the value function (COMBO) or the entire environment model (RAMBO), underscoring the value of the novel design choice to introduce conservatism into the reward estimator. ## Conclusion This paper proposes a novel model-based offline RL method, CROP, which uses a conservative reward estimation. The proposed method estimates the reward by concurrently minimizing both the estimation error and the rewards of random actions during model training. Theoretical analysis shows that CROP can conservatively estimate the Q-function, effectively mitigate distribution drift, and ensure a safe policy improvement.
Experiments on D4RL benchmarks show that CROP is comparable to state-of-the-art offline RL methods. CROP provides a new perspective in which online RL methods can be applied to the empirical MDP with conservative rewards to solve offline RL problems, which is conducive to bringing the latest developments of online RL to offline settings. Future work will consider hyperparameter selection without relying on online evaluation. Additionally, combining the model designs of online model-based RL with CROP will be an appealing way to deal with more complex offline environments. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{6}{c}{Model-based methods} & \multicolumn{3}{c}{Model-free methods} \\ \cline{2-10} & CROP & \multirow{2}{*}{RAMBO} & \multicolumn{1}{c}{CABI+} & \multirow{2}{*}{MoREL} & \multirow{2}{*}{COMBO} & \multicolumn{1}{c}{Count-} & \multirow{2}{*}{ATAC} & \multirow{2}{*}{CQL} & \multirow{2}{*}{IQL} \\ & (ours) & & TD3-BC & & & MORL & & & \\ \hline halfcheetah-random & 33.3\(\pm\)2.8 & **40.0** & 15.1 & 25.6 & **38.8** & **41.0** & 3.9 & 35.4 & - \\ hopper-random & 19.1\(\pm\)11.0 & 11.5 & 11.9 & **37.3** & 7.0 & 30.7 & 17.5 & 10.8 & - \\ walker2d-random & 21.6\(\pm\)0.8 & 21.6 & 6.4 & **53.6** & 17.9 & 21.9 & 6.8 & 7.0 & - \\ halfcheetah-medium & 68.1\(\pm\)0.8 & **77.6** & 45.1 & 42.1 & 54.2 & **76.5** & 53.3 & 44.4 & 47.4 \\ hopper-medium & **100.6\(\pm\)3.2** & 92.8 & **105.0** & 95.4 & **97.2** & **103.6** & 85.6 & 86.6 & 66.3 \\ walker2d-medium & **89.7\(\pm\)0.8** & **86.9** & **82.0** & 77.8 & **81.9** & **87.6** & **89.6** & 74.5 & 78.3 \\ halfcheetah-medium-expert & 91.1\(\pm\)1.1 & 93.7 & **107.6** & 53.3 & 90.0 & **100.0** & 94.8 & 62.4 & 86.7 \\ hopper-medium-expert & 96.5\(\pm\)10.2 & 83.3 & **112.4** & **108.7** & **111.1** & **111.4** & **111.9** & **111.0** & 91.5 \\ walker2d-medium-expert & **109.3\(\pm\)0.3** & 68.3 & **108.6** & 95.6 & **103.3** & **112.3** & **114.2** & 98.7 & **109.6** \\ halfcheetah-medium-replay & **64.9\(\pm\)1.1** & **68.9** & 44.4 & 40.2 & 55.1 & **71.5** & 48.0 & 46.2 & 44.2 \\ hopper-medium-replay & **93.0\(\pm\)2.2** & **96.6** & 31.3 & 93.6 & 89.5 & **101.7** & **102.5** & 48.6 & **94.7** \\ walker2d-medium-replay & **89.7\(\pm\)0.7** & **85.0** & 29.4 & 49.8 & 56.0 & **87.7** & **92.5** & 32.6 & 73.9 \\ \hline mean & **73.1** & 68.9 & 58.3 & 64.4 & 66.8 & **78.8** & 68.4 & 54.9 & - \\ \hline \hline \end{tabular} The highest score on each dataset is underlined. Boldface denotes performance better than 90% of the highest score. \end{table} Table 2: Results on the MuJoCo tasks of the D4RL dataset Figure 1: Conservative reward with different \(\beta\). \(R\) and the behavior policy are also shown for comparison.
2301.01366
Tunneling analysis of null aether black hole theory in the background of Newman-Janis algorithm
We present a new asymptotically flat black hole solution in null aether theory (NAT) by applying the Newman-Janis process. For this purpose, we study the asymptotically flat NAT black hole solution under the Newman-Janis algorithm and then compute the tunneling radiation for the NAT black hole. The Hawking temperature for the NAT black hole depends upon the rotation parameter and the charge of the black hole. The Hawking temperature describes a black hole with an extremal event horizon. Furthermore, we analyze the graphical behavior of the Hawking temperature with respect to the event horizon and check the stability of the black hole under the influence of the different parameters associated with the black hole temperature.
Riasat Ali, Rimsha Babar, Muhammad Asgher, G. Mustafa
2023-01-03T21:45:12Z
http://arxiv.org/abs/2301.01366v1
# Tunneling analysis of null aether black hole theory in the background of Newman-Janis algorithm ###### Abstract We present a new asymptotically flat black hole solution in null aether theory (NAT) by applying the Newman-Janis process. For this purpose, we study the asymptotically flat NAT black hole solution under the Newman-Janis algorithm and then compute the tunneling radiation for the NAT black hole. The Hawking temperature for the NAT black hole depends upon the rotation parameter and the charge of the black hole. The Hawking temperature describes a black hole with an extremal event horizon. Furthermore, we analyze the graphical behavior of the Hawking temperature with respect to the event horizon and check the stability of the black hole under the influence of the different parameters associated with the black hole temperature. **keywords:** Null Aether Black Hole Theory; Newman-Janis algorithm; Quantum gravity; Lagrangian field equation; Hamilton-Jacobi phenomenon; Hawking temperature. ## I Introduction According to general expectations from a quantum theory of gravity, Lorentz symmetry may not hold exactly in nature. Lately, this idea has attracted much attention to Lorentz-breaking theories of gravity. Among these frameworks are certain vector-tensor theories with a preferred direction fixed at each point of space-time via a vector field of fixed norm. Vector-tensor gravity theories are therefore of physical significance nowadays, since they may reveal some aspects of the inner structure of a quantum theory of gravity. Einstein-Aether theory [1] is one of the main theories of this kind, in which the Aether field is assumed to be time-like and thus breaks the boost sector of the Lorentz symmetry. The concept of Aether theory has been explored throughout the years from different aspects [2]. There have also appeared some related works [3]-[5] which discuss the plausibility of a space-like Aether field that breaks rotational invariance. The dynamics and inner formalism of these theories are still under consideration; for instance, the stability of the aether field has been studied [6]. Obviously, to acquire a clear understanding in this regard, one also needs some specific analytic solutions of the genuinely complex equations of motion that these theories possess. Null Aether Theory (NAT) is a new vector-tensor theory of modified gravity [7]. In NAT, the dynamical vector field acts like the Aether, a charged BH solution is possible [8], and the physical properties (ADM mass, thermodynamics, singularity) of the NAT BH have been investigated. Additionally, the NAT charge modifies the horizon thermodynamics similarly to that of the Reissner-Nordstrom-AdS BH and generalizes the circular orbits of massive as well as massless particles around the BH. Xu and Wang analyzed [9] the quintessence field around the Kerr-Newman-AdS BH solution by using the Newman-Janis method and complex calculations. A typical element of different quantum gravity theories, like loop quantum gravity, string theory and non-commutative geometry, is the presence of a minimum observable length [10; 11]. The generalized uncertainty principle (GUP) is a basic approach to understanding this minimal observable length. The generalized commutation relation can be defined as [12] \[[x,p]=i\hbar\left(1+\rho p^{2}\right), \tag{1}\] here \(\rho=\frac{1}{3M_{f}^{2}}\) represents the correction parameter and \(M_{f}\) is the Planck mass.
Moreover, the generalized uncertainty relation is given as \[\Delta x\Delta p\geq\frac{\hbar}{2}\left(1+\rho p^{2}\right), \tag{2}\] where \(x\) and \(p\) stand for the position and momentum operators, respectively. The GUP effect on the tunneling radiation of higher-dimensional BHs, in the context of the boson phenomenon for spin-1 particles, has been studied in [13; 14]. The Kerr-Newman-NUT-Kiselev BH solution, obtained by applying the Newman-Janis approach to a dyonically as well as electrically charged BH surrounded by quintessence, has been examined [15]. The thermodynamical properties of the BH (temperature, heat capacity, angular momentum and entropy) have also been derived. The Hawking temperature phenomenon for particles of different spins has been widely analyzed in the literature [16]-[29]. Moreover, it has been shown that the Hawking temperature remains the same for particles of different spins. The aim of our paper is to study the NAT BH solution in the context of the Newman-Janis algorithm, to investigate the NAT BH Hawking temperature (\(T_{H}\)) under the effects of the rotation parameter, and to compare our new results with the previous literature. Furthermore, we derive the quantum-corrected temperature \(T_{H}\) for the NAT BH with rotation parameter and GUP effects, and analyze the stability condition of the BH in the presence of quantum gravity effects. This article is organized in the following manner: Section **II** contains a brief introduction to the metric of the asymptotically flat NAT BH. Section **III** investigates the Hawking temperature of the BH under the influence of the Newman-Janis algorithm. Sections **IV** and **VI** present the graphical analysis of the Hawking temperature with respect to the event horizon. Section **V** studies the temperature of the NAT BH under the influence of quantum gravity and the rotation parameter. Section **VII** comprises the summary and discussion of all the results. ## II Asymptotically flat BH in null Aether theory The spacetime for an asymptotically flat BH in NAT can be defined as [30] \[ds^{2}=-E(r)dt^{2}+\frac{1}{E(r)}dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d \phi^{2}, \tag{3}\] where \[E(r)=\left\{\begin{array}{ll}1-\frac{2\tilde{a}_{1}^{2}\tilde{b}_{1}}{r^{1+\tilde{q}}}-\frac{2\tilde{a}_{2}^{2}\tilde{b}_{2}}{r^{1-\tilde{q}}}-\frac{2\tilde{m}}{r}&(\mbox{when }\tilde{q}\neq 0),\\ 1-\frac{2m}{r}&(\mbox{when }\tilde{q}=0),\end{array}\right. \tag{4}\] here \(E(r)\) is the metric function; \(\tilde{a}_{1},\tilde{a}_{2},\tilde{b}_{1}\) and \(\tilde{b}_{2}\) are integration constants (associated with the constant null vector denoting the Aether field) treated as free parameters; also \[\tilde{b}_{1}=\frac{1}{8}\left[\tilde{c}_{3}+\tilde{c}_{23}\tilde{q}-3\tilde{ c}_{2}\right],\quad\tilde{b}_{2}=\frac{1}{8}\left[\tilde{c}_{3}-\tilde{c}_{23} \tilde{q}-3\tilde{c}_{2}\right],\quad\tilde{q}\equiv\sqrt{9+8\frac{\tilde{c}_ {1}}{\tilde{c}_{23}}}, \tag{5}\] where \(\tilde{q}\) gives the charge, \(\tilde{m}\) and \(m\) denote the mass parameters, and \(\tilde{c}_{1},\tilde{c}_{2},\tilde{c}_{3}\) represent dimensionless constant parameters. For the case \(\tilde{q}=0\), we can observe that the metric reduces to the usual asymptotically flat Schwarzschild BH; a quick numerical check of this limit is sketched below.
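The reduction is easy to verify numerically from Equation 4: as \(\tilde{q}\to 0\) the two aether terms merge with the mass term, so \(E(r)\) approaches a Schwarzschild form with a shifted mass. The parameter values in this minimal sketch are arbitrary illustrations, not values used in the paper:

```python
import numpy as np

def E(r, m, a1, b1, a2, b2, q):
    """Metric function of Eq. (4) for the q != 0 branch."""
    return 1 - 2*a1**2*b1 / r**(1 + q) - 2*a2**2*b2 / r**(1 - q) - 2*m / r

# Illustrative parameter values (not taken from the paper):
m, a1, b1, a2, b2 = 1.0, 0.5, 0.2, 0.3, -0.1
r = 40.0
for q in [0.9, 0.5, 1e-6]:
    print(f"q={q}: E({r}) = {E(r, m, a1, b1, a2, b2, q):.6f}")

# As q -> 0 the aether terms combine with the mass term, so E(r)
# approaches the Schwarzschild form 1 - 2*m_eff/r with
# m_eff = m + a1**2*b1 + a2**2*b2:
print(f"Schwarzschild-like limit: {1 - 2*(m + a1**2*b1 + a2**2*b2)/r:.6f}")
```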
However, for the case \(\tilde{q}\neq 0\), we get different asymptotically flat boundary conditions by considering the following cases independently (by definition \(\tilde{q}>0\) [30]): \[\left.E(r)\right|_{r\rightarrow\infty}=1\left\{\begin{array}{ll}\mbox{for }0< \tilde{q}<1&(\mbox{if }\tilde{a}_{1}\neq 0\mbox{ and }\tilde{a}_{2}\neq 0)\mbox{ or }\left(\mbox{if }\tilde{a}_{1}=0\mbox{ or }\tilde{b}_{1}=0\right),\\ \mbox{for }0<\tilde{q}&(\mbox{if }\tilde{a}_{2}=0\mbox{ or }\tilde{b}_{2}=0\right)\,.\end{array}\right. \tag{6}\] For \(\tilde{q}=0\), one can obtain the ADM mass as \[\bar{M}_{ADM}=\frac{m}{\bar{G}}, \tag{7}\] where we have defined [32] \[\bar{G}=\frac{G}{1-\tilde{c}_{1}\tilde{b}_{1}^{2}}. \tag{8}\] The effective Newtonian constant \(\bar{G}\), associated with the constant \(G\), can be evaluated through experiments within the solar system [31]. Also, for the case \(\tilde{q}\neq 0\), we get \[\bar{M}_{ADM}=\frac{1}{\bar{G}}\left[\tilde{m}+\frac{\tilde{a}_{1}^{2}\tilde{b}_ {1}}{r^{\tilde{q}}}(1+\tilde{q})+\frac{\tilde{a}_{2}^{2}\tilde{b}_{2}}{r^{-\tilde{q}}} (1-\tilde{q})\right]\Big{|}_{r\rightarrow\infty}. \tag{9}\] From the above equations, we can obtain the ADM mass for the NAT BH in the following form \[\bar{M}_{ADM}=\frac{\tilde{m}}{\bar{G}}\left\{\begin{array}{ll}\text{for }0< \tilde{q}<1&\text{(if }\tilde{a}_{1}=0\text{ or }\tilde{b}_{1}=0\text{)}\\ \text{for }0<\tilde{q}&\text{(if }\tilde{a}_{2}=0\text{ or }\tilde{b}_{2}=0\text{)}\end{array}\right. \tag{10}\] If we consider the condition \(\tilde{a}_{2}=0\), then the Aether field \(\phi(r)\) and \(E(r)\) become \[E(r) = 1-\frac{2\tilde{a}_{1}^{2}\tilde{b}_{1}}{r^{(1+\tilde{q})}}- \frac{2\tilde{m}}{r}, \tag{11}\] \[\phi(r) = \frac{\tilde{a}_{1}}{r^{(1+\tilde{q})/2}}. \tag{12}\] We can obtain the event horizon \(r_{0}\) by setting \(E\left(r_{0}\right)=0\), and the horizon area is \(A=4\pi r_{0}^{2}\). By taking \(\tilde{a}_{1}=\tilde{G}\tilde{Q}r_{0}^{(\tilde{q}-1)/2}\), Eqs. (11) and (12) become \[E(r) = 1-\frac{2\tilde{G}^{2}\tilde{Q}^{2}\tilde{b}_{1}}{r^{2}}\left( \frac{r_{0}}{r}\right)^{\tilde{q}-1}-\frac{2\tilde{m}}{r}, \tag{13}\] \[\phi(r) = \frac{\tilde{G}\tilde{Q}}{r}\left(\frac{r_{0}}{r}\right)^{( \tilde{q}-1)/2}, \tag{14}\] here \(\tilde{Q}\) denotes the charge of the NAT BH. After putting \(r=r_{0}\) in the above equations, we get \[E\left(r_{0}\right) = 1-\frac{2\tilde{G}^{2}\tilde{Q}^{2}\tilde{b}_{1}}{r_{0}^{2}}- \frac{2\tilde{m}}{r_{0}}=0, \tag{15}\] \[\phi\left(r_{0}\right) = \frac{\tilde{G}\tilde{Q}}{r_{0}}. \tag{16}\] It is noteworthy that the horizon condition in Eq. (15) is free of the parameter \(\tilde{q}\). Moreover, \(\phi(r)\) looks like the electric potential at \(r=r_{0}\). After substituting \(\tilde{q}=1\) in the metric (3), \(E(r)\) and \(\phi(r)\) take the form \[\begin{array}{l}E(r)=1-\frac{2\tilde{a}_{1}^{2}\tilde{b}_{1}}{r^{2}}-\frac{ 2\tilde{m}}{r},\\ \phi(r)=\frac{\tilde{a}_{1}}{r^{1/2}}.\end{array} \tag{17}\] ## III Asymptotically flat NAT BH in Newman-Janis algorithm By applying the Newman-Janis algorithm [33; 35; 36], we generalize the asymptotically flat NAT BH solution. We first introduce a coordinate transformation from Boyer-Lindquist (BL) coordinates \((t,r,\theta,\phi)\) to Eddington-Finkelstein (EF) coordinates \((u,r,\theta,\phi)\): \[du=dt-\frac{dr}{E(r)}, \tag{18}\] where \(u\) represents the null coordinate. In the new coordinates, Eq. (3) can be rewritten as \[ds^{2}=-E(r)du^{2}+r^{2}d\theta^{2}-2dudr+r^{2}\sin^{2}\theta d\phi^{2}.
\tag{19}\] The non-zero components of the inverse metric (19) are \[g^{ur}=-1,\ \ g^{rr}=E(r),\ \ g^{\theta\theta}=\frac{1}{r^{2}},\ \ g^{\phi\phi}=\frac{1}{r^{2}\sin^{2}\theta}.\] Moreover, the inverse metric in terms of the complex null tetrad \(Z^{x}=(l^{x},n^{x},m^{x},\bar{m}^{x})\) can be written as \[g^{xy}=-l^{x}n^{y}-l^{y}n^{x}+m^{x}\bar{m}^{y}+m^{y}\bar{m}^{x}. \tag{20}\] The corresponding components are defined as \[l^{x} = \delta^{x}_{r},\quad n^{x}=\delta^{x}_{u}-\frac{1}{2}E(r)\delta^{ x}_{r},\] \[m^{x} = \frac{1}{\sqrt{2}r}\delta^{x}_{\theta}+\frac{i}{\sqrt{2}r\sin \theta}\delta^{x}_{\phi},\] \[\bar{m}^{x} = \frac{1}{\sqrt{2}r}\delta^{x}_{\theta}-\frac{i}{\sqrt{2}r\sin \theta}\delta^{x}_{\phi}.\] These null tetrad vectors are orthonormal and comply with the following defining relations; specifically, all the vectors satisfy \[l_{x}l^{x} = n_{x}n^{x}\quad=m_{x}m^{x}=\bar{m}_{x}\bar{m}^{x}=0,\] \[l_{x}m^{x} = l_{x}\bar{m}^{x}\quad=n_{x}m^{x}\ =n_{x}\bar{m}^{x}=0,\] \[l_{x}n^{x} = m_{x}\bar{m}^{x}=1.\] Following the Newman-Janis method, we allow the coordinates to take complex values while keeping \(l^{x}\) and \(n^{x}\) real, and consider the transformation [10] \[u^{\prime} = u-ia\cos\theta,\] \[r^{\prime} = r+ia\cos\theta, \tag{21}\] here \(a\) represents the spin parameter (introduced by the Newman-Janis algorithm). Furthermore, we consider the transformation \(E(r)\rightarrow\tilde{E}(r,a,\theta)\) and define \(\sigma^{2}=r^{2}+a^{2}\cos^{2}\theta\), whereas the null tetrad vectors transform as \[l^{x} = \delta^{x}_{r},\quad n^{x}=\delta^{x}_{u}-\frac{1}{2}\tilde{E}(r,\theta)\delta^{x}_{r},\] \[m^{x} = \frac{1}{\sqrt{2}r}\left(\delta^{x}_{\theta}+\frac{i}{\sin\theta }\delta^{x}_{\phi}+ia\sin\theta(\delta^{x}_{u}-\delta^{x}_{r})\right),\] \[\bar{m}^{x} = \frac{1}{\sqrt{2}r}\left(\delta^{x}_{\theta}-\frac{i}{\sin\theta }\delta^{x}_{\phi}-ia\sin\theta(\delta^{x}_{u}-\delta^{x}_{r})\right). \tag{22}\] By using Eqs. (20) and (22), the non-zero components \(g^{xy}\) in the EF coordinates can be defined as \[g^{uu} = \frac{a^{2}\sin^{2}\theta}{\sigma^{2}},\quad g^{ur}=g^{ru}=-1- \frac{a^{2}\sin^{2}\theta}{\sigma^{2}},\quad g^{rr}=\tilde{E}(r,\theta)+\frac{ a^{2}\sin^{2}\theta}{\sigma^{2}},\quad g^{\theta\theta}=\frac{1}{\sigma^{2}},\] \[g^{\phi\phi} = \frac{1}{\sigma^{2}\sin^{2}\theta},\quad g^{u\phi}=g^{\phi u}= \frac{a}{\sigma^{2}},\quad g^{r\phi}=g^{\phi r}=-\frac{a}{\sigma^{2}}.\] Furthermore, the lower-index components of the metric in the EF coordinates are \[g_{uu} = -\tilde{E}(r,\theta),\quad g_{ur}=g_{ru}=-1,\quad g_{rr}=0,\quad g _{\theta\theta}=\sigma^{2},\quad g_{u\phi}=g_{\phi u}=a\sin^{2}\theta,\] \[g_{\phi\phi} = \sin^{2}\theta\left(\sigma^{2}+a^{2}(\tilde{E}(r,\theta)-2)\sin^{ 2}\theta\right),\quad g_{r\phi}=g_{\phi r}=a\sin^{2}\theta, \tag{23}\] where \[\tilde{E}(r,\theta)=\frac{r^{2}E+a^{2}\cos^{2}\theta}{\sigma^{2}}.
\tag{24}\] According to the transformed tetrad, the new line element can be written as \[ds^{2} = -\tilde{E}(r,\theta)du^{2}+\sigma^{2}d\theta^{2}+2a\sin^{2}\theta drd \phi-2a\left(1-\tilde{E}(r,\theta)\right)\sin^{2}\theta dud\phi-2dudr \tag{25}\] \[+ \sin^{2}\theta\left(\sigma^{2}+a^{2}\left(2-\tilde{E}(r,\theta) \right)\sin^{2}\theta\right)d\phi^{2}.\] Now we introduce the transformation from EF to BL coordinates as [34] \[du=dt+Y(r)dr,\quad d\phi=d\phi+\chi(r)dr, \tag{26}\] where the functions \(Y(r)\) and \(\chi(r)\) are chosen to eliminate the \(g_{r\phi}\) and \(g_{tr}\) components. However, \(Y(r)\) and \(\chi(r)\) turn out to be functions of \(r\) and \(\theta\), which can be written as \[Y(r)=-\frac{r^{2}+a^{2}}{(r^{2}E+a^{2})},\qquad\chi(r)=-\frac{a}{(r^{2}E+a^{2 })}. \tag{27}\] The \(\theta\)-dependence of the EF-to-BL coordinate transformation reflects the fact that we are dealing with a modified theory of gravity and a non-vacuum surrounding [35]. Furthermore, we will suppress the explicit dependence on \(r\) and \(\theta\) in the functions \(\sigma^{2}\) and \(\Delta_{r}\). The asymptotically flat NAT BH in BL coordinates, in the context of the Newman-Janis algorithm, can be obtained as \[ds^{2} = -\left(\frac{\Delta_{r}-a^{2}\sin^{2}\theta}{\sigma^{2}}\right) dt^{2}+\frac{\sigma^{2}}{\Delta_{r}}dr^{2}-2a\left(1+\frac{a^{2}\sin^{2} \theta-\Delta_{r}}{\sigma^{2}}\right)\sin^{2}\theta dtd\phi+\sigma^{2}d\theta ^{2} \tag{28}\] \[+ \sin^{2}\theta\left[\sigma^{2}+\sin^{2}\theta\left(2-a^{2}\frac{ \Delta_{r}-a^{2}\sin^{2}\theta}{\sigma^{2}}\right)\right]d\phi^{2},\] here \[\Delta_{r}=r^{2}-2mr+a^{2}-\frac{2\tilde{a}_{1}^{2}\tilde{b}_{1}}{\sigma^{2( \tilde{q}-1)/2}}-\frac{2\tilde{a}_{2}^{2}\tilde{b}_{2}}{\sigma^{2(-1-\tilde{q} )/2}}. \tag{29}\] Since a BH behaves as a thermodynamic system, its temperature \(T_{H}\) can be determined from the surface gravity \(\kappa\). We can compute the Hawking temperature of the metric (28) by using the following formula [34] \[T_{H}=\frac{\kappa}{2\pi},\ \ \ \ \ \kappa=\frac{\Delta_{r}^{\prime}}{2(r_{+}^{2}+a^{2})}, \tag{30}\] where \(\Delta_{r}^{\prime}=\frac{d}{dr}(\Delta_{r})\). The corresponding Hawking temperature for the NAT BH with the Newman-Janis algorithm can be derived as \[T_{H}=\left[\frac{r_{+}-m-\tilde{a}_{1}^{2}\tilde{b}_{1}r_{+}(1-\tilde{q})(r_{+}^ {2}+a^{2})^{(-1-\tilde{q})/2}-\tilde{a}_{2}^{2}\tilde{b}_{2}r_{+}(1+\tilde{q})(r_ {+}^{2}+a^{2})^{(\tilde{q}-1)/2}}{2\pi(r_{+}^{2}+a^{2})}\right]. \tag{31}\] The \(T_{H}\) of the BH depends upon the BH mass \(m\), the charge \(\tilde{q}\), the rotation parameter \(a\) and the free parameters \(\tilde{a}_{1},\tilde{a}_{2},\tilde{b}_{1},\tilde{b}_{2}\). The above temperature reduces to the temperature of the Schwarzschild BH for \(a=0,\ \tilde{q}=0\), which implies [37] \[T_{SBH}=\frac{(r_{+}-m)}{2\pi r_{+}^{2}},\ \ \ \mbox{where}\ \ \ r_{+}=2m. \tag{32}\]
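Equation 31 is straightforward to evaluate numerically. The minimal sketch below, with the aether terms switched off (\(\tilde{a}_{1}=\tilde{a}_{2}=0\)) and an illustrative mass value, also checks the Schwarzschild reduction of Equation 32:

```python
import numpy as np

def T_H(r_p, m, a, q, a1, b1, a2, b2):
    """Hawking temperature of Eq. (31); r_p is the event horizon r_+."""
    sig = r_p**2 + a**2
    num = (r_p - m
           - a1**2 * b1 * r_p * (1 - q) * sig**((-1 - q) / 2)
           - a2**2 * b2 * r_p * (1 + q) * sig**((q - 1) / 2))
    return num / (2 * np.pi * sig)

m = 1.0
# Schwarzschild limit (Eq. 32): a = 0, q = 0 and vanishing aether terms,
# evaluated at the Schwarzschild horizon r_+ = 2m.
print(T_H(r_p=2*m, m=m, a=0.0, q=0.0, a1=0.0, b1=0.0, a2=0.0, b2=0.0))
print(1 / (8 * np.pi * m))   # T_SBH = (r_+ - m)/(2 pi r_+^2) = 1/(8 pi m)
```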
**Figure 1** depicts \(T_{H}\) versus \(r_{+}\) for fixed values of mass \(m=1\), rotation parameter \(a=1\), free parameters \(\tilde{a}_{1}=0.1=\tilde{b}_{1},\tilde{a}_{2}=50,\tilde{b}_{2}=-10\), and charge in the range \(0.1\leq\tilde{q}\leq 0.3\). At first \(T_{H}\) increases and attains a maximum, and then it drops down gradually, approaching an asymptotically flat state that indicates the stability of the BH as \(r_{+}\rightarrow\infty\). It can be observed that the temperature of the BH increases with decreasing horizon radius. This behavior satisfies Hawking's phenomenon and guarantees the stability of the BH. For \(0.1\leq\tilde{q}\leq 0.3\), we observe an asymptotically flat behavior of the temperature that exhibits the stable state of the BH. **Figure 2** depicts the behavior of \(T_{H}\) versus \(r_{+}\) with fixed values of mass \(m=1\), charge \(\tilde{q}=0.1\), free parameters \(\tilde{a}_{1}=0.1,\tilde{b}_{1}=0.2,\tilde{a}_{2}=50,\tilde{b}_{2}=-10\), and varying values of the rotation parameter \(a\) in the range \(0\leq r_{+}\leq 15\). An asymptotically flat behavior of the temperature appears after a maximum is attained for the different values of \(a\). As we raise the value of \(a\) the temperature decreases, and the temperature likewise decreases with increasing horizon radius. This Hawking phenomenon depicts the BH stability in the domain \(0\leq r_{+}\leq 15\). It is worth mentioning that for \(T_{H}\geq 0\) the BH exhibits physical behavior and is in a completely stable form. ## V Temperature of NAT BH under the influence of quantum gravity In this section, we analyze \(T_{H}\) under the influence of quantum gravity for spin-1 boson particles. We rewrite Eq. (28) in the form \[ds^{2} = -Fdt^{2}+Gdr^{2}+Hd\theta^{2}+Kd\phi^{2}+2Ldtd\phi, \tag{33}\] where \[F = \frac{\Delta_{r}-a^{2}\sin^{2}\theta}{\sigma^{2}},\qquad G=\frac{\sigma^{2}}{\Delta_{r}},\qquad H=\sigma^{2},\] \[K = \sin^{2}\theta\left[\sigma^{2}+\left(2+\frac{a^{2}\sin^{2}\theta+\Delta_{r}}{\sigma^{2}}\right)a^{2}\sin^{2}\theta\right],\] \[L = -a\left(1+\frac{a^{2}\sin^{2}\theta-\Delta_{r}}{\sigma^{2}}\right)\sin^{2}\theta.\] In order to evaluate the corrected \(T_{H}\) of vector particles emitted from the BH, we recall that vector particles such as the \(Z\) and \(W\) bosons are well known and play a very significant role in the Standard Model [17]. We note that charged bosonic tunneling in the NAT BH background is more complicated than in the uncharged case, owing to the nontrivial interaction between the charged bosonic field, the electromagnetic field and the Aether field. Firstly, we obtain the field equation of charged particles from the GUP-modified Lagrangian, and we apply the Hamilton-Jacobi ansatz and the WKB approximation to derive the set of field equations in the NAT spacetime. Setting the determinant of the coefficient matrix equal to zero, the radial function can then be derived from the resulting linear equations. Accordingly, we compute the tunneling probability of the vector particles from the NAT BH and discuss the corresponding temperature. Therefore, we utilize the generalized Lagrangian equation incorporating the GUP effects of quantum gravity.
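Before writing the field equations, note that the anti-symmetric tensor components below repeatedly involve the inverse of the \((t,\phi)\) block of metric (33). A short sympy sketch (an illustrative check of ours, not part of the original derivation) confirms the \(FK+L^{2}\) structure of that inverse:

```python
import sympy as sp

F, K, L = sp.symbols('F K L', positive=True)

# (t, phi) block of metric (33): ds^2 contains -F dt^2 + K dphi^2 + 2L dt dphi
block = sp.Matrix([[-F, L],
                   [L,  K]])
inv = sp.simplify(block.inv())

print(sp.simplify(block.det()))                  # -F*K - L**2
print(sp.simplify(inv[0, 0] + K/(F*K + L**2)))   # 0, i.e. g^{tt} = -K/(FK+L^2)
print(sp.simplify(inv[0, 1] - L/(F*K + L**2)))   # 0, i.e. g^{t phi} = L/(FK+L^2)
print(sp.simplify(inv[1, 1] - F/(F*K + L**2)))   # 0, i.e. g^{phi phi} = F/(FK+L^2)
```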
The Lagrangian field equation is given [26] by \[\partial_{\mu}(\sqrt{-g}\varphi^{\nu\mu})+\sqrt{-g}\frac{m^{2}}{\hbar^{2}}\varphi^{\nu}+\sqrt{-g}\frac{i}{\hbar}A_{\mu}\varphi^{\nu\mu}+\sqrt{-g}\frac{i}{\hbar}eF^{\nu\mu}\varphi_{\mu}+\varrho\hbar^{2}\partial_{0}\partial_{0}\partial_{0}(\sqrt{-g}g^{00}\varphi^{0\nu})\] \[-\varrho\hbar^{2}\partial_{i}\partial_{i}\partial_{i}(\sqrt{-g}g^{ii}\varphi^{i\nu})=0, \tag{34}\] where \(g\) is the determinant of the coefficient matrix, \(\varphi^{\nu\mu}\) is the anti-symmetric tensor and \(m\) is the particle mass, with \[\varphi_{\nu\mu} = (1-\varrho\hbar^{2}\partial_{\nu}^{2})\partial_{\nu}\varphi_{\mu}-(1-\varrho\hbar^{2}\partial_{\mu}^{2})\partial_{\mu}\varphi_{\nu}+(1-\varrho\hbar^{2}\partial_{\nu}^{2})\frac{i}{\hbar}eA_{\nu}\varphi_{\mu}-(1-\varrho\hbar^{2}\partial_{\mu}^{2})\frac{i}{\hbar}eA_{\mu}\varphi_{\nu},\] \[F_{\nu\mu} = \nabla_{\nu}A_{\mu}-\nabla_{\mu}A_{\nu},\] where \(\varrho,\ A_{\mu},\ \nabla_{\mu}\) and \(e\) represent the GUP (quantum gravity) parameter, the vector potential, the covariant derivative and the particle charge, respectively. The non-zero components of the anti-symmetric tensor can be calculated as \[\varphi^{0}=\frac{-K\varphi_{0}+L\varphi_{3}}{FK+L^{2}},\quad\varphi^{1}=\frac{1}{G}\varphi_{1},\quad\varphi^{2}=\frac{1}{H}\varphi_{2},\quad\varphi^{3}=\frac{L\varphi_{0}+F\varphi_{3}}{FK+L^{2}},\ \ \varphi^{12}=\frac{1}{GH}\varphi_{12},\ \ \varphi^{13}=\frac{L\varphi_{01}+F\varphi_{13}}{G(FK+L^{2})},\] \[\varphi^{01}=\frac{-K\varphi_{01}+L\varphi_{13}}{G(FK+L^{2})},\quad\varphi^{02}=\frac{-K\varphi_{02}}{H(FK+L^{2})},\quad\varphi^{03}=\frac{-(FK+L^{2})\varphi_{03}}{(FK+L^{2})^{2}},\ \ \varphi^{23}=\frac{L\varphi_{02}+F\varphi_{23}}{H(FK+L^{2})}.\] The WKB approximation can be expressed as \[\varphi_{\nu}=c_{\nu}\exp\Big{[}\frac{i}{\hbar}\Big{(}Q_{0}(t,r,\theta,\phi)+\sum_{n}\hbar^{n}Q_{n}(t,r,\theta,\phi)\Big{)}\Big{]}. \tag{35}\] Using the technique of separation of variables, we choose \[Q_{0}=-\tilde{E}t+W(r)+\nu(\phi)+J\theta, \tag{36}\] where \(\tilde{E}=E-J\omega\), and \(E\) and \(J\) denote the particle energy and the angular momentum corresponding to the angle \(\theta\), respectively.
Substituting Eqs. (35) and (36) into the field equations (34), we obtain a \(4\times 4\) matrix equation \[Y(c_{0},c_{1},c_{2},c_{3})^{T}=0, \tag{37}\] whose elements are given as follows: \[Y_{00} = \frac{-K}{G(FK+L^{2})}\Big{[}W_{1}^{2}+\varrho W_{1}^{4}\Big{]}-\frac{K}{H(FK+L^{2})}\Big{[}\nu_{1}^{2}+\varrho\nu_{1}^{4}\Big{]}-\frac{FK}{(FK+L^{2})^{2}}\Big{[}J^{2}+\varrho J^{4}\Big{]}-\frac{m^{2}K}{(FK+L^{2})},\] \[Y_{01} = \frac{-K}{G(FK+L^{2})}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}+eA_{0}+\varrho eA_{0}\tilde{E}^{2}\Big{]}W_{1}+\frac{L}{G(FK+L^{2})}\Big{[}\nu_{1}+\varrho\nu_{1}^{3}\Big{]},\] \[Y_{02} = \frac{-K}{H(FK+L^{2})}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}-eA_{0}-\varrho eA_{0}\tilde{E}^{2}\Big{]}J,\] \[Y_{03} = \frac{-\tilde{E}}{G(FK+L^{2})}\Big{[}W_{1}^{2}+\varrho W_{1}^{4}\Big{]}-\frac{FK}{H(FK+L^{2})^{2}}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}-eA_{0}-\varrho eA_{0}\tilde{E}^{2}\Big{]}J+\frac{m^{2}L}{(FK+L^{2})^{2}},\] \[Y_{11} = \frac{-K}{G(FK+L^{2})}\Big{[}\tilde{E}^{2}+\varrho\tilde{E}^{4}-eA_{0}\tilde{E}-\varrho eA_{0}\tilde{E}W_{1}^{2}\Big{]}+\frac{L}{G(FK+L^{2})}-\frac{m^{2}}{G}\] \[+ \Big{[}J+\varrho J^{3}\Big{]}\tilde{E}-\frac{1}{GH}\Big{[}\nu_{1}^{2}+\varrho\nu_{1}^{4}\Big{]}-\frac{1}{G(FK+L^{2})}\Big{[}J+\varrho J^{3}\Big{]}+\frac{eA_{0}L}{G(FK+L^{2})}\Big{[}J+\varrho J^{3}\Big{]}\] \[- \frac{eA_{0}K}{G(FK+L^{2})}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}-eA_{0}-\varrho eA_{0}\tilde{E}^{2}\Big{]},\qquad Y_{12}=\frac{1}{GH}[W_{1}+\varrho W_{1}^{3}]\nu_{1},\] \[Y_{13} = \frac{-E}{G(FK+L^{2})}\Big{[}W_{1}+\varrho W_{1}^{3}\Big{]}\tilde{E}+\frac{1}{G(FK+L^{2})^{2}}\Big{[}W_{1}+\varrho W_{1}^{3}\Big{]}J+\frac{LeA_{0}}{G(FK+L^{2})}\Big{[}W_{1}+\varrho W_{1}^{3}\Big{]},\] \[Y_{22} = \frac{K}{H(FK+L^{2})}\Big{[}\tilde{E}^{2}+\varrho\tilde{E}^{4}-eA_{0}\tilde{E}-\varrho eA_{0}\tilde{E}\Big{]}-\frac{1}{GH}\Big{[}W_{1}^{2}+\varrho W_{1}^{4}\Big{]}-\frac{m^{2}}{H} \tag{38}\] \[- \frac{F}{H(FK+L^{2})}\Big{[}\nu_{1}^{2}+\varrho\nu_{1}^{4}\Big{]}-\frac{eA_{0}K}{H(FK+L^{2})}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}-eA_{0}-\varrho eA_{0}\tilde{E}^{2}\Big{]}\] \[+ \frac{L}{H(FK+L^{2})}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}-eA_{0}-\varrho eA_{0}\tilde{E}^{2}\Big{]}J,\] \[Y_{23} = \frac{F}{G(FK+L^{2})}\Big{[}\nu_{1}+\varrho\nu_{1}^{3}\Big{]}J, \tag{39}\] \[Y_{33} = \frac{(FK-F^{2})}{(FK+L^{2})}\Big{[}\tilde{E}^{2}+\varrho\tilde{E}^{4}-eA_{0}\tilde{E}-\varrho eA_{0}\tilde{E}^{3}\Big{]}-\frac{1}{G(FK+L^{2})}\Big{[}W_{1}^{2}+\varrho W_{1}^{4}\Big{]}\] \[- \frac{F}{H(FK+L^{2})}\Big{[}\nu_{1}^{2}+\varrho\nu_{1}^{4}\Big{]}-\frac{m^{2}F}{(FK+L^{2})}-\frac{eA_{0}(FK-F^{2})}{(FK+L^{2})}\Big{[}\tilde{E}+\varrho\tilde{E}^{3}-eA_{0}-\varrho eA_{0}\tilde{E}^{2}\Big{]},\] where \(\nu_{1}=\partial_{\phi}Q_{0}\), \(W_{1}=\partial_{r}Q_{0}\) and \(J=\partial_{\theta}Q_{0}\).
Setting the determinant of \(Y\) equal to zero for a non-trivial solution, we get \[ImW^{\pm} = \pm\int\sqrt{\frac{(E-J\omega-A_{0}e)^{2}+X_{1}\Big{[}1+\varrho\frac{X_{2}}{X_{1}}\Big{]}}{(FK+L^{2})/GK}}dr,\] \[= \pm\pi\frac{(\tilde{E}-A_{0}e)}{2\kappa(r_{\pm})}\Big{[}1+\varrho\Xi\Big{]}, \tag{40}\] where \[X_{1} = \frac{GL}{(FK+L^{2})}\Big{[}\tilde{E}-eA_{0}\Big{]}\nu_{1}+\frac{FG}{(FK+L^{2})}J^{2}-Gm^{2},\] \[X_{2} = \frac{GK}{(FK+L^{2})}\Big{[}\tilde{E}^{4}-2eA_{0}\tilde{E}^{3}+(eA_{0})^{2}\tilde{E}^{2}\Big{]}-\frac{FG}{(FK+L^{2})}J^{4}-W_{1}^{4}\] \[+ \frac{GL}{H(FK+L^{2})}\Big{[}\tilde{E}^{3}-eA_{0}\tilde{E}^{2}\Big{]}J.\] The bosonic tunneling probability can then be expressed as \[\Gamma=\frac{\Gamma_{\rm emission}}{\Gamma_{\rm absorption}}=\exp\left[-2\pi\frac{(E-J\omega-A_{0}e)}{\kappa(r_{+})}\Big{(}1+\varrho\Xi\Big{)}\right], \tag{41}\] where \[\kappa=\frac{\Delta_{r}^{\prime}}{2(r_{+}^{2}+a^{2})}.\] The modified temperature can be calculated by applying the Boltzmann factor \(\Gamma_{B}=\exp\left[-(E-J\omega-A_{0}e)/T_{H}^{\prime}\right]\) as \[T_{H}^{\prime}=\left[\frac{r_{+}-m-\tilde{a}_{1}^{2}\tilde{b}_{1}r_{+}(1-\tilde{q})(r_{+}^{2}+a^{2})^{(-1-\tilde{q})/2}-\tilde{a}_{2}^{2}\tilde{b}_{2}r_{+}(1+\tilde{q})(r_{+}^{2}+a^{2})^{(\tilde{q}-1)/2}}{2\pi(r_{+}^{2}+a^{2})}\right]\Big{[}1-\varrho\Xi\Big{]}. \tag{42}\] The corrected Hawking temperature of the BH depends upon the mass \(m\), charge \(\tilde{q}\), quantum gravity parameter \(\varrho\), spin parameter \(a\), arbitrary parameter \(\Xi\) and free parameters \(\tilde{a}_{1}\), \(\tilde{a}_{2}\), \(\tilde{b}_{1}\), \(\tilde{b}_{2}\). The expression (42) reduces to the uncorrected BH temperature of Eq. (31) for \(\varrho=0\). It is noteworthy that the quantum corrections decelerate the increase of the temperature; a numerical comparison is sketched below.
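Numerically, the corrected temperature (42) is simply the uncorrected \(T_{H}\) of Eq. (31) rescaled by the factor \((1-\varrho\Xi)\). A brief illustrative sketch (it reuses the hawking_temperature helper defined in the earlier sketch; parameter values are for illustration only):

```python
import numpy as np

def corrected_temperature(rp, varrho=0.8, Xi=1.0, **kwargs):
    """T'_H of Eq. (42): the uncorrected T_H rescaled by (1 - varrho * Xi)."""
    return hawking_temperature(rp, **kwargs) * (1 - varrho * Xi)

rp = np.linspace(0.1, 15.0, 300)
T = hawking_temperature(rp, q=0.1)
for varrho in (0.2, 0.5, 0.8):
    Tp = corrected_temperature(rp, varrho=varrho, q=0.1)
    # For varrho * Xi < 1 the correction uniformly lowers the temperature curve
    print(f"varrho = {varrho}: peak T'_H = {Tp.max():.4f} vs peak T_H = {T.max():.4f}")
```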
## VI Stability analysis of NAT BH with quantum corrections This section presents the graphical behavior of \(T_{H}^{\prime}\) with respect to the event horizon radius \(r_{+}\) for a fixed value of the arbitrary parameter \(\Xi=1\). We study the physical content of the plots and observe the effects of the correction parameter \(\varrho\) and the spin parameter \(a\) on the corrected Hawking temperature, in order to study the BH stability condition under quantum effects. **Figure 3(i)** describes the behavior of \(T_{H}^{\prime}\) versus the event horizon for fixed values of mass \(m=1\), spin parameter \(a=1\), free parameters \(\Xi=1\), \(\tilde{a}_{1}=0.1=\tilde{b}_{1}\), \(\tilde{a}_{2}=50\), \(\tilde{b}_{2}=-10\), charge \(\tilde{q}=0.1\), and varying values of the correction parameter \(\varrho\). The temperature attains a maximum and then drops down gradually, reaching an asymptotically flat state that indicates the stability of the BH as \(r_{+}\rightarrow\infty\). It can be observed that \(T_{H}^{\prime}\) decreases as we increase the correction parameter, while the temperature of the BH increases with decreasing event horizon radius. This physical behavior reflects the stable state of the BH. The maximum temperature at a non-zero horizon leaves a BH remnant. **Figure 3(ii)** represents the behavior of \(T_{H}^{\prime}\) versus \(r_{+}\) with fixed values of mass \(m=1\), correction parameter \(\varrho=0.8\), charge \(\tilde{q}=0.1\), free parameters \(\tilde{a}_{1}=1\), \(\tilde{b}_{1}=0.1\), \(\tilde{a}_{2}=50\), \(\tilde{b}_{2}=-10\), and varying values of \(a\). For the different values of \(a\), the corrected temperature attains a maximum and then shows an asymptotically flat behavior. It is notable that when we increase the value of \(a\) the corrected temperature decreases, and it also decreases for increasing horizon radius. This Hawking phenomenon represents the BH stability condition in the domain \(0\leq r_{+}\leq 15\). From both plots, we observe that for \(T_{H}^{\prime}\geq 0\) the BH attains its stable form, while for \(T_{H}^{\prime}<0\) a BH with negative temperature is always unstable. We can also observe graphically that \(T_{H}^{\prime}\) is less than the original temperature, so we conclude that the quantum corrections decelerate the increase of the temperature. ## VII Summary and discussion Null Aether theory is a vector-tensor theory of gravity in which a null Aether vector field exists at every point of spacetime. In this paper, we have constructed a new asymptotically flat BH solution by using the Newman-Janis algorithm. To do so, we first reviewed the asymptotically flat BH solution in NAT and then, by applying the Newman-Janis algorithm, derived a new asymptotically flat NAT BH spacetime characterized by a rotation parameter. Taking the spin parameter limit \(a\to 0\) in Eq. (28), we recover the asymptotically flat BH solution [30] of general relativity. Furthermore, using the surface gravity \(\kappa\), we computed the temperature of the NAT BH in the presence of the rotation parameter. The BH temperature depends upon the charge, mass, spin and free parameters of the BH. The NAT BH temperature in Eq. (31) recovers the temperature of the Schwarzschild BH for \(\tilde{q}=0=a\), as in Eq. (32). Moreover, we presented the graphical behavior of the Hawking temperature with respect to the event horizon in order to check the stability of the BH. We have studied the radiation spectrum through the bosonic tunneling of spin-1 particles from the NAT BH, involving both the spin and quantum gravity parameters. To this end, we utilized the generalized Lagrangian equation incorporating the GUP effects of quantum gravity. For this investigation, we applied the Hamilton-Jacobi ansatz and the WKB approximation to the generalized Lagrangian field equation for boson particles. We obtained the corrected bosonic tunneling rate of the emitted particles and the corresponding corrected temperature \(T_{H}^{\prime}\). It is noteworthy that when we ignore the quantum gravity effects, i.e., \(\varrho=0\), the corrected Hawking temperature in Eq. (42) reduces to the original temperature in Eq. (31). The corrected temperature of the BH depends upon the spin parameter, the quantum gravity parameter and the Aether field. \(T_{H}^{\prime}\) reduces to the Schwarzschild BH temperature when the spin parameter, the quantum gravity parameter and the Aether field approach zero. It has been observed that quantum gravity decelerates the increase of \(T_{H}^{\prime}\) during the radiation process. Moreover, we analyzed the physical significance of the corrected temperature to check the effects of quantum gravity and the rotation parameter on \(T_{H}^{\prime}\) through the stability of the NAT BH over the Aether field.
The results from the plots of the Hawking temperature with respect to the horizon, in the presence and absence of the gravity parameter, are summarized as follows: * In the absence of the gravity parameter, the temperature shows asymptotically flat behavior in the charge range \(0.1\leq\tilde{q}\leq 0.3\), and \(T_{H}\) decreases with increasing \(r_{+}\). This physical behavior of \(T_{H}\) with respect to \(r_{+}\) depicts the stable condition of a BH with positive temperature. * For varying values of the rotation parameter \(a\), \(T_{H}\) shows an asymptotically flat behavior; after a maximum the temperature decreases, as it also does with increasing horizon radius. This Hawking phenomenon depicts the BH stability in the domain \(0\leq r_{+}\leq 15\). * In the presence of the gravity parameter, \(T_{H}^{\prime}\) decreases with increasing values of the correction parameter as well as of the horizon radius. We observed a BH remnant at a non-zero horizon with maximum temperature for different values of \(\varrho\) in the domain \(0\leq r_{+}\leq 15\). * For different values of \(a\), the corrected temperature attains a maximum and then shows an asymptotically flat behavior. It is notable that the corrected temperature decreases with increasing values of \(a\) as well as with increasing horizon radius. This Hawking phenomenon represents the BH stability condition in the domain \(0\leq r_{+}\leq 15\). * From all the plots, we observed that for \(T^{\prime}_{H}\geq 0\) the BH attains its stable form. We also observed graphically that \(T^{\prime}_{H}\) is less than the original temperature, so we conclude that the quantum corrections decelerate the increase of the temperature. ## VIII Appendix Substituting all values into Eq. (34), we obtain the following set of field equations: \[\frac{K}{G(FK+L^{2})}\Big{[}c_{1}(\partial_{0}Q_{0})(\partial_{1}Q_{0})+\varrho c_{1}(\partial_{0}Q_{0})^{3}(\partial_{1}Q_{0})-c_{0}(\partial_{1}Q_{0})^{2}-\varrho c_{0}(\partial_{1}Q_{0})^{4}+c_{1}eA_{0}(\partial_{1}Q_{0})\] \[+c_{1}\varrho eA_{0}(\partial_{0}Q_{0})^{2}(\partial_{1}Q_{0})\Big{]}-\frac{L}{G(FK+L^{2})}\Big{[}c_{3}(\partial_{1}Q_{0})^{2}+\varrho c_{3}(\partial_{1}Q_{0})^{4}-c_{1}(\partial_{1}Q_{0})(\partial_{3}Q_{0})-\varrho c_{1}(\partial_{1}Q_{0})(\partial_{3}Q_{0})^{2}\Big{]}\] \[+\frac{K}{H(FK+L^{2})}\Big{[}c_{2}(\partial_{0}Q_{0})(\partial_{2}Q_{0})+\varrho c_{2}(\partial_{0}Q_{0})^{3}(\partial_{2}Q_{0})-c_{0}(\partial_{2}Q_{0})^{2}-\varrho c_{0}(\partial_{2}Q_{0})^{4}+c_{2}eA_{0}(\partial_{2}Q_{0})+c_{2}eA_{0}\varrho\] \[(\partial_{0}Q_{0})^{2}(\partial_{1}Q_{0})\Big{]}+\frac{FK}{(FK+L^{2})^{2}}\Big{[}c_{3}(\partial_{0}Q_{0})(\partial_{3}Q_{0})+\varrho c_{3}(\partial_{0}Q_{0})^{3}(\partial_{3}Q_{0})-c_{0}(\partial_{3}Q_{0})^{2}-\varrho c_{0}(\partial_{3}Q_{0})^{4}+c_{3}eA_{0}\] \[(\partial_{3}Q_{0})+c_{3}eA_{0}(\partial_{0}Q_{0})^{2}(\partial_{3}Q_{0})\Big{]}-m^{2}\frac{Kc_{0}-Lc_{3}}{(FK+L^{2})}=0, \tag{43}\] \[\frac{-K}{G(FK+L^{2})}\Big{[}c_{1}(\partial_{0}Q_{0})^{2}+\varrho c_{1}(\partial_{0}Q_{0})^{4}-c_{0}(\partial_{0}Q_{0})(\partial_{1}Q_{0})-\varrho c_{0}(\partial_{0}Q_{0})(\partial_{1}Q_{0})^{3}+c_{1}eA_{0}(\partial_{0}Q_{0})\] \[+\varrho c_{1}eA_{0}(\partial_{0}Q_{0})^{3}\Big{]}+\frac{L}{G(FK+L^{2})}\Big{[}c_{3}(\partial_{0}Q_{0})(\partial_{1}Q_{0})+\varrho c_{3}(\partial_{0}Q_{0})(\partial_{1}Q_{0})^{3}-c_{1}(\partial_{0}Q_{0})(\partial_{3}Q_{0})-\varrho c_{1}(\partial_{0}Q_{0})(\partial_{3}Q_{0})^{3}\Big{]}\]
\[+\frac{1}{GH}\Big{[}c_{2}(\partial_{1}Q_{0})(\partial_{2}Q_{0})+ \varrho c_{2}(\partial_{1}Q_{0})(\partial_{2}Q_{0})^{3}-c_{1}(\partial_{2}Q _{0})^{2}-\varrho c_{1}(\partial_{2}Q_{0})^{4}\Big{]}+\frac{1}{G(FK+L^{2})} \Big{[}c_{3}(\partial_{1}Q_{0})(\partial_{3}Q_{0})+\varrho c_{3}\] \[(\partial_{1}Q_{0})(\partial_{3}Q_{0})^{3}-c_{1}(\partial_{3}Q_{ 0})^{2}-\varrho c_{1}(\partial_{3}Q_{0})^{4}\Big{]}+\frac{eA_{0}K}{G(FK+L^{2}) }\Big{[}c_{1}(\partial_{0}Q_{0})+\varrho c_{1}(\partial_{0}Q_{0})^{3}-c_{0} (\partial_{1}Q_{0})-\varrho c_{0}(\partial_{1}Q_{0})^{3}\] \[+eA_{0}c_{1}+\varrho c_{1}eA_{0}(\partial_{0}Q_{0})^{2}\Big{]}+ \frac{eA_{0}L}{G(FK+L^{2})}\Big{[}c_{3}(\partial_{1}Q_{0})+\varrho c_{3}( \partial_{1}Q_{0})^{3}-c_{1}(\partial_{3}Q_{0})-\varrho c_{1}(\partial_{1}Q _{0})^{3}\Big{]}-\frac{m^{2}c_{1}}{G}=0,\] (44) \[\frac{K}{H(FK+L^{2})}\Big{[}c_{2}(\partial_{0}Q_{0})^{2}+ \varrho c_{2}(\partial_{0}Q_{0})^{4}-c_{0}(\partial_{0}Q_{0})(\partial_{2}Q_ {0})-\varrho c_{0}(\partial_{0}Q_{0})(\partial_{2}Q_{0})^{3}+c_{2}eA_{0}( \partial_{0}Q_{0})+\varrho c_{2}eA_{0}(\partial_{0}Q_{0})^{3}\Big{]}\] \[+\frac{1}{GH}\Big{[}c_{2}(\partial_{1}Q_{0})^{2}+\varrho c_{2}( \partial_{1}Q_{0})^{4}-c_{1}(\partial_{1}Q_{0})(\partial_{2}Q_{0})-\varrho c _{1}(\partial_{1}Q_{0})(\partial_{2}Q_{0})^{3}\Big{]}-\frac{L}{H(FK+L^{2})} \Big{[}c_{2}(\partial_{0}Q_{0})(\partial_{3}Q_{0})\] \[+\frac{eC_{2}(\partial_{0}Q_{0})^{3}(\partial_{3}Q_{0})-c_{0}( \partial_{0}Q_{0})(\partial_{3}Q_{0})-\varrho c_{0}(\partial_{0}Q_{0})^{3}( \partial_{3}Q_{0})+c_{2}eA_{0}(\partial_{3}Q_{0})+\varrho c_{2}eA_{0}(\partial_{ 3}Q_{0})^{3}\Big{]}\] \[+\frac{F}{H(FK+L^{2})}\Big{[}c_{3}(\partial_{2}Q_{0})(\partial_{3} Q_{0})+\varrho c_{3}(\partial_{2}Q_{0})^{3}(\partial_{3}Q_{0})-c_{2}( \partial_{3}Q_{0})^{2}-\varrho c_{2}(\partial_{3}Q_{0})^{4}\Big{]}-\frac{m^{2}c_{ 2}}{H}\] \[+\frac{eA_{0}K}{H(FK+L^{2})}\Big{[}c_{2}(\partial_{0}Q_{0})+ \varrho c_{2}(\partial_{0}Q_{0})^{3}-c_{0}(\partial_{2}Q_{0})-\varrho c_{0}( \partial_{2}Q_{0})^{3}+c_{2}eA_{0}+c_{2}\varrho eA_{0}(\partial_{0}Q_{0})^{2} \Big{]}=0,\] (45) \[\frac{FK-F^{2}}{(FK+L^{2})^{2}}\Big{[}c_{3}(\partial_{0}Q_{0})^{2} +\varrho c_{3}(\partial_{0}Q_{0})^{4}-c_{0}(\partial_{0}Q_{0})(\partial_{3}Q_{0}) -\varrho c_{0}(\partial_{0}Q_{0})(\partial_{3}Q_{0})^{3}+eA_{0}c_{3}(\partial_{0}Q _{0})\] \[+\varrho c_{3}eA_{0}(\partial_{0}Q_{0})^{3}\Big{]}-\frac{K}{H(FK+L^{2}) }\Big{[}c_{3}(\partial_{1}Q_{0})^{2}+\varrho c_{3}(\partial_{1}Q_{0})^{4}-c_{1}( \partial_{1}Q_{0})(\partial_{3}Q_{0})-\varrho c_{1}(\partial_{1}Q_{0})( \partial_{3}Q_{0})^{3}\Big{]}\] \[-\frac{L}{H(FK+L^{2})}\Big{[}c_{2}(\partial_{0}Q_{0})(\partial_ {2}Q_{0})+\varrho c_{2}(\partial_{0}Q_{0})^{3}(\partial_{2}Q_{0})-c_{0}( \partial_{2}Q_{0})^{2}+\varrho c_{0}(\partial_{2}Q_{0})^{4}+eA_{0}c_{2}( \partial_{2}Q_{0})+\varrho c_{2}eA_{0}\] \[(\partial_{0}Q_{0})^{2}(\partial_{2}Q_{0})\Big{]}-\frac{eA_{0}F}{H(FK+L^{2}) }\Big{[}c_{3}(\partial_{2}Q_{0})^{2}+\varrho c_{3}(\partial_{2}Q_{0})
2308.15618
RACR-MIL: Weakly Supervised Skin Cancer Grading using Rank-Aware Contextual Reasoning on Whole Slide Images
Cutaneous squamous cell cancer (cSCC) is the second most common skin cancer in the US. It is diagnosed by manual multi-class tumor grading using a tissue whole slide image (WSI), which is subjective and suffers from inter-pathologist variability. We propose an automated weakly-supervised grading approach for cSCC WSIs that is trained using WSI-level grade and does not require fine-grained tumor annotations. The proposed model, RACR-MIL, transforms each WSI into a bag of tiled patches and leverages attention-based multiple-instance learning to assign a WSI-level grade. We propose three key innovations to address general as well as cSCC-specific challenges in tumor grading. First, we leverage spatial and semantic proximity to define a WSI graph that encodes both local and non-local dependencies between tumor regions and leverage graph attention convolution to derive contextual patch features. Second, we introduce a novel ordinal ranking constraint on the patch attention network to ensure that higher-grade tumor regions are assigned higher attention. Third, we use tumor depth as an auxiliary task to improve grade classification in a multitask learning framework. RACR-MIL achieves 2-9% improvement in grade classification over existing weakly-supervised approaches on a dataset of 718 cSCC tissue images and localizes the tumor better. The model achieves 5-20% higher accuracy in difficult-to-classify high-risk grade classes and is robust to class imbalance.
Anirudh Choudhary, Angelina Hwang, Jacob Kechter, Krishnakant Saboo, Blake Bordeaux, Puneet Bhullar, Nneka Comfere, David DiCaudo, Steven Nelson, Emma Johnson, Leah Swanson, Dennis Murphree, Aaron Mangold, Ravishankar K. Iyer
2023-08-29T20:25:49Z
http://arxiv.org/abs/2308.15618v1
# RACR-MIL: Weakly Supervised Skin Cancer Grading using Rank-Aware Contextual Reasoning on Whole Slide Images ###### Abstract Cutaneous squamous cell cancer (cSCC) is the second most common skin cancer in the US. It is diagnosed by manual multi-class tumor grading using a tissue whole slide image (WSI), which is subjective and suffers from inter-pathologist variability. We propose an automated weakly-supervised grading approach for cSCC WSIs that is trained using WSI-level grade and does not require fine-grained tumor annotations. The proposed model, RACR-MIL, transforms each WSI into a bag of tiled patches and leverages attention-based multiple-instance learning to assign a WSI-level grade. We propose three key innovations to address general as well as cSCC-specific challenges in tumor grading. First, we leverage spatial and semantic proximity to define a WSI graph that encodes both local and non-local dependencies between tumor regions and leverage graph attention convolution to derive contextual patch features. Second, we introduce a novel ordinal ranking constraint on the patch attention network to ensure that higher-grade tumor regions are assigned higher attention. Third, we use tumor depth as an auxiliary task to improve grade classification in a multitask learning framework. RACR-MIL achieves 2-9% improvement in grade classification over existing weakly-supervised approaches on a dataset of 718 cSCC tissue images and localizes the tumor better. The model achieves 5-20% higher accuracy in difficult-to-classify high-risk grade classes and is robust to class imbalance. 1 University of Illinois Urbana-Champaign 2 Mayo Clinic, Arizona 3 Mayo Clinic, Rochester ## Introduction Cutaneous squamous cell carcinoma (cSCC) is the second most prevalent skin cancer in the United States, and its occurrence is increasing rapidly [17]. cSCC tumor grade is an important prognostic factor, reflecting the level of cancer aggressiveness, and is strongly linked to outcomes [13]. The current practice for grading cSCC tumors involves a manual examination of whole slide images (WSI) of skin tissues by pathologists, which is inherently subjective, prone to inter-observer variability, and leads to under-staging of high-risk cSCC tumors [16]. AI-assisted grading has emerged as a promising approach for objective tumor grading for a wide range of tumors but has not been explored for cSCC tumor grading yet [12, 13, 14]. This paper proposes the first weakly-supervised machine learning-based approach to predict cSCC grade using a model trained on WSI-level grade labels assigned by pathologists. Our primary objective is to classify cSCC WSI into one of four grading classes: normal (tumor not present), well-differentiated, moderately-differentiated, and poorly-differentiated. We address this problem in the multiple-instance learning (MIL) paradigm because of the success of previous studies that used MIL for weakly-supervised cancer grading and, thus, transform each WSI into a bag of tiled patches (instances) [15, 16]. cSCC tumor grading presents three main challenges: (i) grade difference of tumor regions within the same WSI, (ii) the need for contextual information for determining tumor grade, and (iii) limited data. (i) A given cSCC WSI might comprise multiple tumor regions with varying grades. Pathologists implicitly rank the tumor regions based on their cellular differentiation (from well to poor), providing the grade of the most severe tumor region as the overall label (NCI 2023). Thus, the model must determine the implicit grade order to identify the most severe tumor region and de-emphasize the importance of irrelevant tumor regions, even if the less severe tumor captures a larger portion of the WSI. (ii) cSCC grading is context-aware because pathologists consider the local tumor neighborhood (tumor microenvironment) as well as long-range relations between distant tumor regions to determine the WSI-level grade label. On the one hand, existing studies [1] employ spatial proximity-based graphs to capture information from the tumor microenvironment, but they often overlook semantic dependencies between tumor regions with similar morphology. Figure 1: (a) Non-local semantic (blue) and local spatial (black) dependencies between tumor regions with the same grade. (b) Correlation between depth and grade: worse-grade tumor invades deeper into the skin tissue, reaching greater depth from the skin surface.
Thus, the model must determine the implicit grade order to identify the most severe tumor region and de-emphasize the importance of irrelevant tumor regions, even if the less severe tumor captures a larger portion of the WSI. (ii) cSCC grading is context-aware because pathologists consider the local tumor neighborhood (tumor microenvironment) as well as long-range relations between distant tumor regions to determine the WSI-level grade label. On the one hand, existing studies [1] employ spatial proximity-based graphs to capture information from the tumor microenvironment, but they oft Figure 1: (a) Non-local semantic (blue) and local spatial (black) dependencies between tumor regions with the same grade. (b) Correlation between depth and grade: Worsegrade tumor invades deeper into the skin tissue reaching higher depth from the skin surface. semantic dependencies between tumor regions with similar morphology. On the other hand, although graph transformer-based methods Zheng et al. (2022) incorporate all pairwise patch relationships, they are prone to overfitting on limited-size WSI datasets. Thus, there is a need for approaches that balance the information from local and long-range relations between patches. (iii) Limited number of WSIs and an imbalance in the number of low-risk (well-differentiated) vs. high-risk (moderately and poorly-differentiated) cases further exacerbate the previous two challenges. To overcome these challenges, we propose an approach that utilizes rank-aware contextual reasoning (RACR) to leverage contextual information and maintain the ordinal ranking of tumor grades. Our model, RACR-MIL, predicts WSI tumor grade via four main steps. First, we divide the tissue into patches and extract local patch features using a self-supervised pre-trained encoder, in line with existing methods Li et al. (2021). Second, we define a graph on the tissue patches that captures both local (spatial) and non-local (semantic) dependencies between the patches. We derive multiscale features using self-attention-based graph convolution to incorporate contextual information. Third, we utilize attention-based patch feature aggregation Ilse et al. (2018) to derive a WSI-level feature for grade classification. We augment the attention computation with a rank ordering mechanism that assigns higher weights to higher-grade tumor regions. Finally, we incorporate tumor depth as an auxiliary prediction task for regularizing attention to relevant tumor patches. Our work presents three key innovations. First, to emulate the ordinal grading protocol implicitly followed by pathologists, we introduce a two-part rank-ordering loss to train the attention network. It consists of (i) an _interclass_ constraint, which compares patches from different grades and imposes higher attention on more severe tumor patches, and (ii) an _intraclass_ constraint, which imposes higher attention on more likely patches within the same grade. We obtain the grade of each patch by pseudo-labeling the patches based on their grade class-likelihoods. Rank ordering enables our model to consistently assign higher importance to the most severe tumor region(s), improving tumor localization. Second, we demonstrate the effectiveness of combining local and non-local dependencies between tissue regions for grading. We construct a WSI graph with patches as nodes and edges defined using a combination of _spatial_ proximity and _semantic_ similarity, i.e., patch feature similarity. 
This enables long-range message passing extending beyond the immediate neighbors in WSI during graph convolution, allowing us to capture broader tumor structure. Incorporating spatial and semantic context improves the localization and classification of higher-risk tumors (moderately and poorly differentiated), which existing methods find difficult to classify correctly. Finally, we introduce the use of tumor depth as an auxiliary training signal to enhance grade classification. Prior studies have shown that tumor grade is significantly associated with tumor depth Derwinger et al. (2010); Cruz et al. (2007); Kudo et al. (2022). Well-differentiated cSCC tumors have lower depth, while poorly-differentiated tumors invade deeper into the tissue Derwinger et al. (2010). To capture the relationship between depth and grade, we develop a multi-task framework that predicts depth and grade jointly, sharing the patch features between them. We evaluated our approach on a real-world cSCC dataset of 718 WSIs. RACR-MIL achieves an F1-score of 0.796 and a 19.6% improvement in classifying challenging moderate-grade tumors compared to existing MIL methods. Qualitative analysis of the attention distribution revealed that the tumor region(s) localized by the model aligns well with fine-grained tumor annotations by pathologists, and the attention distribution is consistent over tumor region(s) of interest. Ablation analyses showed that each proposed innovation contributes to improving certain aspects of model performance. Our key contributions are as follows: 1. We propose the first weakly supervised framework for multi-class cSCC grading in pathology images. 2. Our model captures spatial and semantic dependency within a WSI using a graph network, enabling us to capture higher-order relationships between tumor patches. To our knowledge, we are the first to leverage semantic edges for WSI grading. 3. We introduce an ordinal ranking constraint on the attention of the patches that mimics pathologists' implicit tumor grade ordering. 4. We exploit the additional training signal from tumor depth, a related cSCC tumor prognostic factor, using multitask learning. 5. Our innovations lead to state-of-the-art grade classification performance which is resilient to class imbalance and results in greater alignment with fine-grained tumor annotations compared to existing methods. ## Methodology Each WSI is represented as a bag \(b\) of patches \(X_{b}=\{x_{n}\}_{n=1}^{N}\). \(N\) denotes the number of non-overlapping patches in a WSI, and \(b\in\{1,2...B\}\), where \(B\) is the number of training samples. The patch-level labels \(y_{n}\) are unknown, and we have access only to the bag label \(Y_{b}\in\) {normal, well, moderate, poor}. ### Tissue Feature Encoding Tissue feature extraction consists of local feature extraction using self-supervised learning followed by contextual feature extraction using a graph convolution network (GCN). Local feature extractionWe leverage self-supervised learning (SSL) to pre-train the patch feature extractor using unlabeled patches extracted from WSIs. To capture fine-grained pathological features (e.g., nuclei details, cell distribution, tumor microenvironment), we extract \(448\times 448\) sized non-overlapping patches at \(20X\) magnification. We use Nest-S Zhang et al. (2022), a hierarchical transformer, to extract patch features. We pre-train the feature extractor using DINO (knowledge distillation-based SSL) Caron et al. 
(2021) because of its promising performance in MIL-based classification tasks Chen et al. (2022). After pre-training, the transformer network is used as an offline feature extractor to derive a \(d\)-dimensional feature \(f_{n}\in\mathcal{R}^{d}\) for each patch \(x_{n}\), leading to a WSI representation of \(\{f_{n}\}_{n=1}^{N}\). Since the pre-trained features \(f_{n}\) are agnostic to the downstream task, we project them into a lower-dimensional space using a multi-layer perceptron (MLP) with nonlinear activation to get local patch features \(h_{n}^{0}\). We train the MLP along with the graph convolution network and the rank-aware grade classifier. Thus, the resulting local patch feature \(h_{n}^{0}\) potentially captures information specific to grade and de-emphasizes information irrelevant to the downstream task. **Contextual feature extraction** **Graph Definition**: To extract contextual information capturing spatial and semantic dependency, we derive an undirected graph from each WSI, \(G=(V,A)\), with patches as nodes \(V\) and adjacency matrix \(A\in\mathcal{R}^{N\times N}\), which represents the connections between the nodes. The edges capture the pathology-related structure and interdependence among tumor regions via two types of contextual dependencies incorporated into the adjacency matrix \(A\): (i) \(A_{sem}\), which represents non-local dependencies between patches that may be spatially distant but are similar in terms of their tissue structure (grade) in the feature space; and (ii) \(A_{sp}\), which captures local dependencies between a tumor patch and its spatially neighboring patches in the tumor microenvironment. We only consider edges to the K-nearest neighbors in both the semantic and spatial space. \(A_{sem}\) is defined using pairwise feature similarity between patches: \[A_{sem}(i,j)=\exp{\left(\frac{-d_{ij}^{sem}}{0.1}\right)};d_{ij}^{sem}=1-\frac{f_{i}\cdot f_{j}}{||f_{i}||_{2}||f_{j}||_{2}} \tag{1}\] where \(d_{ij}^{sem}\) represents the semantic distance between patches \(i,j\) with features \(f_{i}\) and \(f_{j}\). \(j\in[1,N]\) denotes the \(K\)-nearest neighbors of the \(i^{th}\) patch (\(K=4\)). We pre-process the derived semantic graph \(A_{sem}\) using personalized PageRank kernel-based graph diffusion [1]. This amplifies long-range connections between tumor regions by generating additional edges beyond 1-hop neighbors in the feature space. \(A_{sp}\) is computed using inverse distance weighting across the spatially K-nearest patches: \[A_{sp}(i,j)=\exp{\left(\frac{-d_{ij}^{sp}}{2}\right)};d_{ij}^{sp}=\sqrt{(s_{i}-s_{j})^{2}+(t_{i}-t_{j})^{2}} \tag{2}\] where \(d_{ij}^{sp}\) represents the spatial distance between patches \(i,j\) with spatial coordinates \((s_{i},t_{i})\) and \((s_{j},t_{j})\), respectively. We use \(K=8\) to connect each patch to its immediate neighboring patches in the tissue. The adjacency matrix \(A\) is the average of the spatial and semantic components \(\left(A=\frac{A_{sem}+A_{sp}}{2}\right)\); a sketch of this construction is given below.
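A minimal numpy sketch of the graph construction follows. It is illustrative only: the function names are ours, the symmetrization step is an implementation choice, and the personalized-PageRank diffusion applied to \(A_{sem}\) in the paper is omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist

def semantic_adjacency(feats, k=4):
    """A_sem of Eq. (1): cosine-distance kernel over the k nearest
    semantic neighbors of each patch. feats: (N, d) patch features."""
    d_sem = cdist(feats, feats, metric='cosine')       # d_ij^sem
    A = np.zeros_like(d_sem)
    for i in range(len(feats)):
        nn = np.argsort(d_sem[i])[1:k + 1]             # skip self (distance 0)
        A[i, nn] = np.exp(-d_sem[i, nn] / 0.1)
    return np.maximum(A, A.T)                          # symmetrize

def spatial_adjacency(coords, k=8):
    """A_sp of Eq. (2): inverse-distance weighting over the k spatially
    nearest patches. coords: (N, 2) patch grid coordinates (s, t)."""
    d_sp = cdist(coords, coords, metric='euclidean')   # d_ij^sp
    A = np.zeros_like(d_sp)
    for i in range(len(coords)):
        nn = np.argsort(d_sp[i])[1:k + 1]
        A[i, nn] = np.exp(-d_sp[i, nn] / 2.0)
    return np.maximum(A, A.T)

feats = np.random.randn(100, 128)                      # toy patch features
coords = np.stack(np.unravel_index(np.arange(100), (10, 10)), axis=1).astype(float)
A = 0.5 * (semantic_adjacency(feats) + spatial_adjacency(coords))
```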
**Graph Feature Aggregation**: We leverage graph convolution with residual mapping [1] to derive multiscale contextual patch features \((h_{n}^{1},h_{n}^{2})\) from the local patch feature \(h_{n}^{0}\) and adjacency matrix \(A\). Each convolution layer uses message-passing to propagate feature information from the 1-hop connected nodes \(j\) of a node \(i\) (\(A_{ij}>0\)) and updates node features via the following operation: \[H^{l+1}=H^{l}+GConv(H^{l},A;W^{l}) \tag{3}\] where \(H^{l}=[h_{1}^{l},h_{2}^{l},...,h_{N}^{l}]\) and \(W^{l}\) are the trainable parameters of layer \(l\). We use two \(GConv(\cdot)\) layers and consider two design choices for \(GConv(\cdot)\). 1. _Fixed edge weights_: We use a vanilla graph convolution network (GCN) [11] with weighted message passing using predefined edge weights from adjacency matrix \(A\): \[GConv(\cdot)=ReLU(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{l}W^{l}) \tag{4}\] where \(\tilde{D}=\sum_{j}\tilde{A}(i,j)\) is the diagonal degree matrix of \(\tilde{A}=A+I\). \(\tilde{A}\) is used to avoid oversmoothing [11]. 2. _Dynamic edge weights_: Alternatively, we can leverage graph attention-based convolution [14] to dynamically define edge weights. This allows us to dynamically aggregate information from the patches connected to a patch based on their pairwise similarity. We use a single attention head with masking to aggregate the neighboring patch features: \[GConv(\cdot)=ReLU\bigg{[}softmax\bigg{(}\frac{Q^{l}K^{l}}{\sqrt{d}}+M\bigg{)}H^{l}W^{l}\bigg{]} \tag{5}\] where \(Q^{l}=H^{l}W^{l}_{Q},K^{l}=H^{l}W^{l}_{K}\) and \(M\) is the attention mask (\(M_{ij}=0\) if \(A_{ij}>0\), else \(M_{ij}=-\infty\)). \(W^{l}_{Q},W^{l}_{K}\), \(W^{l}\) are trainable parameters of layer \(l\), and \(W^{l}_{Q},W^{l}_{K}\) are used to learn the edge weights through the dot product \(Q^{l}K^{l}\). Figure 2: The proposed model. (a) Tissue tiling and local feature extraction. (b) Derivation of contextual patch features using self-attention-based graph convolution, taking spatial and semantic dependency into account. (c) Joint prediction of grade and depth using multiscale contextual features. (d) Rank-order constraint on the attention network.
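A PyTorch-style sketch of the masked single-head graph attention convolution of Eq. (5), with the residual update of Eq. (3), is shown below (an illustration under our own naming, not the authors' released code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedGraphAttentionConv(nn.Module):
    """Single-head masked graph attention convolution with a residual update,
    following Eqs. (3) and (5)."""
    def __init__(self, d):
        super().__init__()
        self.W_q = nn.Linear(d, d, bias=False)  # W_Q^l
        self.W_k = nn.Linear(d, d, bias=False)  # W_K^l
        self.W = nn.Linear(d, d, bias=False)    # W^l
        self.d = d

    def forward(self, H, A):
        # H: (N, d) patch features; A: (N, N) adjacency matrix
        scores = self.W_q(H) @ self.W_k(H).T / self.d ** 0.5
        # M_ij = 0 where A_ij > 0, else -inf: attention restricted to neighbors
        mask = torch.where(A > 0, torch.zeros_like(A),
                           torch.full_like(A, float('-inf')))
        attn = F.softmax(scores + mask, dim=-1)
        return H + F.relu(attn @ self.W(H))     # residual update of Eq. (3)

# Two stacked layers give the 1-hop and 2-hop contextual features h^1, h^2
conv1, conv2 = MaskedGraphAttentionConv(256), MaskedGraphAttentionConv(256)
H0 = torch.randn(100, 256)
A = (torch.rand(100, 100) > 0.9).float()        # toy adjacency, ~10 neighbors each
H1 = conv1(H0, A)
H2 = conv2(H1, A)
```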
### Multi-task training Multi-task learning allows us to use additional information from auxiliary labels (depth) to aid main task prediction (grade). We selected depth as an auxiliary label because (i) it is easily available from diagnostic reports, (ii) it is derived from pathology images and reflects the tissue structure surrounding the tumor, and (iii) it is well correlated with grade (_Dunn's pairwise p-values_ are significant at the 10% level). We designed separate predictors for depth and grade, while the graph network is shared between them to allow feature sharing and to ensure parameter efficiency. #### Grade Classification We use attention-based feature aggregation to determine the contribution of each patch to the overall WSI prediction. Attention-based aggregation outperforms traditional pooling-based methods by up-weighing the most relevant patches. We leverage the 1-hop contextual features to determine the normalized attention weight for each patch using a two-layer network: \[w_{n}=\frac{\exp[a^{T}\tanh(U\cdot h^{1}_{n})]}{\sum_{i}\exp[a^{T}\tanh(U\cdot h^{1}_{i})]} \tag{6}\] where \(a\in\mathcal{R}^{d}\) and \(U\in\mathcal{R}^{d\times d}\) are learnable parameters. The attention score is based on the formulation proposed by Ilse et al. (2018). The overall WSI representation is the attention-weighted average of normalized 1-hop patch features: \(W^{diff}_{b}=\sum_{n=1}^{N}w_{n}\frac{h^{1}_{n}}{\|h^{1}_{n}\|}\). The WSI class-likelihood is computed using a single-layer MLP cosine softmax classifier \(\phi\): \[p_{b,c}=\frac{\exp[-\mathcal{D}(\phi(W^{diff}),z_{c})]}{\sum_{c^{\prime}}\exp[-\mathcal{D}(\phi(W^{diff}),z_{c^{\prime}})]} \tag{7}\] where \(c\in\{0,1,2,3\}\) represents {normal, well, moderate, poor}, \(\mathcal{D}\) denotes cosine distance and \(z_{c}\) is the prototype (class centroid) of grade class \(c\), defined as \(z_{c}=\frac{1}{\mathcal{N}_{c}}\sum_{n:y_{n}=c}\phi(\frac{h^{1}_{n}}{\|h^{1}_{n}\|})\). To counter class imbalance due to the lower proportion of poorly differentiated cases, we leverage class-balanced sampling [10] during training. We use the ground-truth WSI grade label \(Y_{b}\) to compute our MIL-based grade classification loss: \[\mathcal{L}_{grade}=-\sum_{b=1}^{B}\sum_{c}\mathbf{1}(Y_{b}=c)\log(p_{b,c}). \tag{8}\] #### Ordinal Ranking of Patches We apply two ranking constraints on the attention network to ensure consistent ranking of tumor regions: (i) an interclass constraint to impose higher attention values for worse patches, and (ii) an intraclass constraint to impose higher attention values for more likely patches within the same class. #### Interclass ranking Pathologists determine the WSI grade based on the grade of the most severe tumor region by implicitly ranking the different tumor sections based on their severity. Motivated by this, we propose to enforce a ranking by using pairwise inequality constraints between patches, such that a more severe patch is ranked higher by the attention network (\(w^{normal}_{n}<w^{well}_{n}<w^{moderate}_{n}<w^{poor}_{n}\)). To do so, we require the grade of each patch, which is unavailable. Therefore, we pseudo-label the patches using a threshold on their class probabilities \(p_{n,c}\): \[p_{n,c}=p(y_{n}=c|h^{1}_{n})=\frac{\exp[-\mathcal{D}(\phi(h^{1}_{n}),z_{c})]}{\sum_{c^{\prime}}\exp[-\mathcal{D}(\phi(h^{1}_{n}),z_{c^{\prime}})]} \tag{9}\] We set \(y^{pseudo}_{n}=c\) if \(p_{n,c}>0.6\), where \(c\) is the grade class. We derive a set of pairs \((i,j)\) of pseudo-labeled patches belonging to two adjacent classes, \(Z=\{(i,j):y^{pseudo}_{i}=c,\ y^{pseudo}_{j}=c+1,\ \text{for some }c\in\{0,1,2\}\}\), and impose a soft ordinal constraint on their attention weights (\(w_{i}<w_{j}\)) using the pairwise ranking loss: \[\mathcal{L}_{inter}=\sum_{i,j\in Z}\log[1+\exp(w_{i}-w_{j})] \tag{10}\] #### Intraclass ranking In addition, we impose intraclass ranking on the patches to ensure that the most confident patches (with higher \(p_{n,c}\)) within a particular grade class are weighted higher during feature aggregation. We focus on the patches with the highest attention weights or class probability, i.e., \(\mathcal{S}=\{n;n\in TopK(w_{n})\cup TopK(p_{n,c})\}\) for \(K=50\). Next, we derive a set of pairs \((i,j)\) using their class probabilities, \(\widehat{Z}=\{(i,j):p_{i,c}<p_{j,c}-\delta,\ \text{for }i,j\in\mathcal{S},\ c\in\{0,1,2,3\}\}\), where \(\delta=0.1\) is used to limit the number of pairs due to computational constraints. We impose a pairwise ranking loss on the attention network for patches in \(\widehat{Z}\): \[\mathcal{L}_{intra}=\sum_{i,j\in\hat{Z}}\log[1+\exp\left(w_{i}-w_{j}\right)] \tag{11}\] For a pair of patches \((i,j)\), this loss enforces that if \(p_{j,c}>p_{i,c}\), then attention weight \(w_{j}>w_{i}\); sketches of these ranking losses follow.
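The two pairwise ranking losses admit a compact PyTorch sketch (illustrative: pair mining follows the text, the patch class probabilities are assumed given, and the interclass case of Eq. (10) is shown; the intraclass loss of Eq. (11) applies the same softplus penalty to pairs mined by class probability):

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(w, pairs):
    """log(1 + exp(w_i - w_j)) summed over pairs, pushing w_j > w_i (Eqs. 10-11)."""
    i, j = pairs[:, 0], pairs[:, 1]
    return F.softplus(w[i] - w[j]).sum()   # softplus(x) = log(1 + exp(x))

def interclass_pairs(pseudo, confident):
    """Pairs (i, j) of confidently pseudo-labeled patches with adjacent
    grades c and c+1 (Eq. 10); j carries the more severe grade."""
    idx = torch.arange(len(pseudo))[confident]
    pairs = []
    for c in (0, 1, 2):
        lo, hi = idx[pseudo[idx] == c], idx[pseudo[idx] == c + 1]
        pairs += [(a.item(), b.item()) for a in lo for b in hi]
    if not pairs:
        return torch.empty(0, 2, dtype=torch.long)
    return torch.tensor(pairs, dtype=torch.long)

w = torch.rand(100)                 # attention weights from Eq. (6)
probs = torch.rand(100, 4)          # patch class likelihoods, as in Eq. (9)
conf, pseudo = probs.max(dim=1)
pairs = interclass_pairs(pseudo, conf > 0.6)
if len(pairs):
    loss_inter = pairwise_ranking_loss(w, pairs)
```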
#### Depth Prediction Depth values are continuous and depend on the global tissue structure; we therefore formulate depth prediction as a regression task. The depth predictor uses a single-layer MLP regressor \(W_{reg}\) and a two-layer attention network. We use 2-hop patch features \(h^{2}_{n}\) for both the regression and attention networks, since they capture more contextual information, which is useful for depth prediction. The WSI-level feature for depth prediction is \(H^{depth}_{b}=\sum_{n=1}^{N}w^{d}_{n}\cdot h^{2}_{n}\), where \(w^{d}_{n}\) is the attention weight computed from \(h^{2}_{n}\) using an equation similar to Equation 6. The depth predictor is trained using the robust Geman-McClure loss [1] to reduce sensitivity to large errors from outlier depth values (\(>10mm\)): \[\mathcal{L}_{depth}=\frac{2(\hat{Y}_{b}^{depth}-Y_{b}^{depth})^{2}/c^{2}}{2(\hat{Y}_{b}^{depth}-Y_{b}^{depth})^{2}/c^{2}+4} \tag{12}\] where \(\hat{Y}_{b}^{depth}=W_{reg}\cdot(H_{b}^{depth})^{T}\) and \(c=2\) is the scale parameter. **Overall Loss**: The overall loss combines the grade classification loss, the depth prediction loss, and the interclass and intraclass attention ranking losses. To balance the losses, we use the weighing factors \(\lambda_{0}\), \(\lambda_{1}\) and \(\lambda_{2}\), which are determined via hyperparameter tuning: \[\mathcal{L}_{total}=\mathcal{L}_{grade}+\lambda_{0}\mathcal{L}_{depth}+\lambda_{1}\mathcal{L}_{inter}+\lambda_{2}\mathcal{L}_{intra} \tag{13}\] A sketch of the depth loss and the combined objective is given below.
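A sketch of the robust depth loss (Eq. 12) and the combined objective (Eq. 13); the λ defaults shown are the tuned values reported in the experimental setup below, and batch averaging is our choice:

```python
import torch

def geman_mcclure_loss(pred, target, c=2.0):
    """Robust Geman-McClure loss of Eq. (12); saturates for outlier depths."""
    sq = 2.0 * (pred - target) ** 2 / c ** 2
    return (sq / (sq + 4.0)).mean()        # averaged over the batch

def total_loss(l_grade, l_depth, l_inter, l_intra,
               lam0=1.0, lam1=0.5, lam2=0.25):
    """Overall objective of Eq. (13)."""
    return l_grade + lam0 * l_depth + lam1 * l_inter + lam2 * l_intra

pred = torch.randn(16)                     # predicted tumor depths (mm)
target = torch.randn(16).abs() * 3.0       # ground-truth depths (mm)
print(geman_mcclure_loss(pred, target))
```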
## Experimental Setup ### Datasets We utilized a cSCC dataset from a leading US-based hospital consisting of 718 hematoxylin and eosin (H&E) stained WSIs scanned at \(40X\) magnification for training and evaluation. The dataset was collected from 2017-2022 and reviewed by a group of 4 expert dermatopathologists. The dataset contains 150 normal, 383 well-differentiated, 108 moderately differentiated, and 77 poorly differentiated cases. A majority of patients were white. ### Pre-processing To remove background regions and irrelevant tissue sections, we pre-processed each WSI using thresholding and morphological operations [11]. Each tissue region was downsampled to \(20X\) magnification and tiled into non-overlapping \(448\times 448\) patches. Patches with minimal texture were removed using image gradient-based entropy. ### Training details We performed stratified 5-fold cross-validation using a 64:16:20 split between training/validation/test sets. The GCN feature extractor and task-specific predictors were jointly trained using the Adam optimizer with a batch size of 16 and a learning rate of \(1e^{-4}\) for 60 epochs. The evaluation metrics were: cross-validated average of classwise accuracy (ACC), macro-averaged F1 score, AUC score, and Matthews Correlation Coefficient (MCC). ### Tumor localization Fine-grained tumor annotations were obtained to determine the extent of overlap of tumors with the most-probable tumor regions as predicted by the model. 24 WSIs from the test set were randomly chosen and annotated by two senior pathologists. Each pathologist marked the grades of up to seven most relevant tumor regions in each WSI. ### Baselines We compared our model with state-of-the-art attention-based MIL models, including methods that treat patches independently (ABMIL (Ilse et al., 2018), Gated ABMIL (Ilse et al., 2018), CLAM-MB (Lu et al., 2021)) and contextual-dependency-based methods (PatchGCN (Chen et al., 2021), DSMIL (Li et al., 2021), TransMIL (Shao et al., 2021)). We also compared variants of RACR-MIL that included only some of the proposed innovations (spatial vs. semantic contextual features, fixed vs. learned edge weights, attention ranking) to evaluate their contribution to overall performance. In order to ensure a fair comparison, we utilized the same pre-trained model as the feature extractor for all approaches. We adjusted the hyperparameters of existing approaches (embedding size, dropout rate, learning rate) to achieve optimal performance on our dataset. We achieve the best test accuracy using \(\lambda_{0}=1.0\), \(\lambda_{1}=0.5\) and \(\lambda_{2}=0.25\). ## Results ### Grade Classification RACR-MIL outperforms state-of-the-art attention models, achieving a 2-9% improvement in F1-score over existing non-contextual methods (Max/Mean Pooling, CLAM-MB, GABMIL, ABMIL; Table 1). It achieves a higher classification accuracy for higher-risk tumors (Mod + Poor) compared to the self-attention-based contextual methods TransMIL and DSMIL. Our model outperforms TransMIL, which learns pairwise dependencies between all patches, by 12%, because it explicitly incorporates spatial and semantic dependency while creating the graph. Moreover, compared to DSMIL and PatchGCN, which incorporate semantic dependency and spatial dependency respectively, our approach achieves a slightly higher F1-score and is less prone to overfitting on the dominant well-differentiated class. Our model achieves the best performance in classifying the most challenging class (moderately differentiated), with 19.6% higher accuracy compared to the next best method (DSMIL). All three innovations in the framework contribute to improvement in grade classification. Graph network-based contextual features combined with the rank-ordering loss achieve the highest improvement in F1-score. The higher accuracy is due to the improved feature space, which enables better rank ordering and separation of patches that belong to different grade classes (see Appendix). Including depth as an auxiliary task leads to a reduction in F1-score. This might be because depth is a geometric concept, and using 2-hop contextual features might not capture all of the relevant structural information for predicting depth accurately. However, qualitative analysis of tumor localization shows that using depth allows the model to localize the tumor better, capturing more tumor patches corresponding to the grade label. ### Tumor Localization - Qualitative Analysis We evaluated the impact of the proposed innovations on tumor localization by studying the normalized attention heatmaps of two representative WSIs (Figure 3). The heatmaps were derived by min-max scaling the attention weights across each WSI, \(a_{n}=\frac{w_{n}-w^{min}}{w^{max}-w^{min}}\). Our model localizes the tumor accurately, achieving high consistency with ground-truth fine-grained tumor annotations. Leveraging depth with the graph and ranking captures a larger portion of the tumor (cases I and II), while leveraging the graph leads to fewer false positives (case II) compared to the baseline approach ABMIL. ### Ablation Analysis Spatial and Semantic Dependency: Combining spatial and semantic dependencies improves classification accuracy (Table 3). Using spatial dependency allows the model to give consistent importance to nearby tumor regions, and using semantic dependency allows it to aggregate and focus on the relevant spatially distant tumor regions with similar morphology. Depth as Auxiliary Task: We find that the graph-based contextual features and depth information are complementary, and incorporating depth further guides the attention weights toward key tumor sections, leading to fewer false positives (Figure 3).
Rank Constraint: Using the rank-ordering constraint allows us to suitably weigh the tumor regions by focusing higher attention on the higher-grade tumor within a WSI (Figure 3). The rank-ordering constraint is widely applicable: applying it to the baseline ABMIL framework leads to an improved F1-score with higher accuracy in classifying high-risk tumors. Furthermore, imposing this constraint improves the F1-score by 2% for ABMIL and up to 5% for our framework. Additionally, it improves classification accuracy across all grade classes in our framework. We speculate that the rank constraint assigns a higher score to worse-grade tumor regions during feature aggregation and allows the prediction probability to align with the WSI-level label assigned by the pathologist. Fixed vs. Learnt Edge Weights: Learning edge weights using graph attention is more effective than using predefined edge weights for higher-risk cases (Table 2). It is possible that dynamically learned edge weights resulted in better performance because they better reflected the similarities between task-relevant patches. ## Related Work A majority of weakly-supervised tumor grading studies focus on prostate cancer or breast cancer, with a few studies on skin cancer grading primarily focusing on melanoma. These studies have explored attention-based MIL and graph-based contextual features. We build upon and extend existing methods to address cSCC-specific challenges. Attention-based MIL: Attention-based multiple instance learning (ABMIL) (Ilse et al., 2018) was the first attention-based approach that aggregated patch information using vanilla attention and a gating mechanism for tissues with limited tumor proportion. Later methods built on MIL by improving the patch feature representation. CLAM (Lu et al., 2021) extended ABMIL by introducing an additional hinge loss that discriminated between high- and low-attention patches to improve the patch representation. The authors also proposed a multiclass version named CLAM-MB with class-specific attention weights. DSMIL (Li et al., 2021) and TransMIL (Shao et al., 2021) employed self-attention to learn global dependencies within the tissue. Recently, self-supervision techniques based on contrastive learning (Caron et al., 2021) have been employed to pre-train and enhance feature extractors during training. Previous studies have not explored the use of auxiliary task labels (such as depth) or rank ordering of tumor regions. Graph-based methods: Studies using graph networks with attention primarily focus on incorporating the spatial dependency between neighboring patches (PatchGCN (Chen et al., 2021)) and learning multiscale relationships using self-attention (Graph Transformer (Zheng et al., 2022)). Graph transformer-based approaches encode all pairwise dependencies for learning global relationships, resulting in increased complexity and a possibility of overfitting on limited datasets. GCN-MIL (Xiang et al., 2023) proposed a graph convolution network for prostate cancer grading by defining a graph using sampled patches and edges based on spatial proximity. Existing methods have not leveraged long-distance semantic relationships for creating the graph. ## Conclusion We developed RACR-MIL, a novel approach for cSCC WSI tumor grading that incorporates spatial and semantic graph-based contextual features, auxiliary task information (tumor depth), and rank-ordering of tumor regions.
The model achieved improved tumor grading on a real-world dataset compared to existing methods and also led to improved tumor localization. The proposed innovations are generic and applicable to existing WSI tumor grading methods, as shown in the ablation study. Our approach has the potential to enhance manual tumor grading as an AI assistant. There are several limitations of this study that we will address in future work. (i) The model can confuse non-tumor patches with tumor patches, resulting in false positives. Improved feature and contextual information extraction might help to separate different cell types in the feature space. (ii) Including depth as an auxiliary task did not improve grading. Depth is a geometric concept that requires identifying the tissue surface (epidermis); explicitly modeling the global structure relevant to depth prediction may help. (iii) Clinical translation of the proposed method requires quantification of the uncertainty associated with the predictions to build trust. (iv) Finally, the clinical utility of the proposed approach in reducing inter-pathologist variability needs to be evaluated.
2307.03827
Effect of Intensity Standardization on Deep Learning for WML Segmentation in Multi-Centre FLAIR MRI
Deep learning (DL) methods for white matter lesion (WML) segmentation in MRI suffer a reduction in performance when applied on data from a scanner or centre that is out-of-distribution (OOD) from the training data. This is critical for translation and widescale adoption, since current models cannot be readily applied to data from new institutions. In this work, we evaluate several intensity standardization methods for MRI as a preprocessing step for WML segmentation in multi-centre Fluid-Attenuated Inversion Recovery (FLAIR) MRI. We evaluate a method specifically developed for FLAIR MRI called IAMLAB along with other popular normalization techniques such as White-strip, Nyul and Z-score. We proposed an Ensemble model that combines predictions from each of these models. A skip-connection UNet (SC UNet) was trained on the standardized images, as well as the original data and segmentation performance was evaluated over several dimensions. The training (in-distribution) data consists of a single study, of 60 volumes, and the test (OOD) data is 128 unseen volumes from three clinical cohorts. Results show IAMLAB and Ensemble provide higher WML segmentation performance compared to models from original data or other normalization methods. IAMLAB & Ensemble have the highest dice similarity coefficient (DSC) on the in-distribution data (0.78 & 0.80) and on clinical OOD data. DSC was significantly higher for IAMLAB compared to the original data (p<0.05) for all lesion categories (LL>25mL: 0.77 vs. 0.71; 10mL<= LL<25mL: 0.66 vs. 0.61; LL<10mL: 0.53 vs. 0.52). The IAMLAB and Ensemble normalization methods are mitigating MRI domain shift and are optimal for DL-based WML segmentation in unseen FLAIR data.
Abdollah Ghazvanchahi, Pejman Jahbedar Maralani, Alan R. Moody, April Khademi
2023-07-07T20:51:38Z
http://arxiv.org/abs/2307.03827v1
# Effect of Intensity Standardization on Deep Learning for WML Segmentation in Multi-Centre FLAIR MRI ###### Abstract Deep learning (DL) methods for white matter lesion (WML) segmentation in MRI suffer a reduction in performance when applied to data from a scanner or centre that is out-of-distribution (OOD) from the training data. This is critical for translation and widescale adoption, since current models cannot be readily applied to data from new institutions. In this work, we evaluate several intensity standardization methods for MRI as a preprocessing step for WML segmentation in multicentre Fluid-Attenuated Inversion Recovery (FLAIR) MRI. We evaluate a method specifically developed for FLAIR MRI called IAMLAB along with other popular normalization techniques such as White Stripe, Nyul and Z-score. We propose an Ensemble model that combines predictions from each of these models. A skip-connection UNet (SC UNet) was trained on the standardized images as well as the original data, and segmentation performance was evaluated over several dimensions. The training (in-distribution) data consists of a single study of 60 volumes, and the test (OOD) data is 128 unseen volumes from three clinical cohorts. Results show IAMLAB and Ensemble provide higher WML segmentation performance compared to models from original data or other normalization methods. IAMLAB & Ensemble have the highest dice similarity coefficient (DSC) on the in-distribution data (0.78 & 0.80) and on clinical OOD data. DSC was significantly higher for IAMLAB compared to the original data (p\(<\)0.05) for all lesion categories (LL\(>\)25mL: 0.77 vs. 0.71; 10mL\(\leq\) LL\(<\)25mL: 0.66 vs. 0.61; LL\(<\)10mL: 0.53 vs. 0.52). The IAMLAB and Ensemble normalization methods mitigate MRI domain shift and are optimal for DL-based WML segmentation in unseen FLAIR data. (Accepted for publication at MIDL 2023.) ## 1 Introduction White matter lesions (WML), or leukoaraiosis, are routinely found in the aging brain and are established cerebral vascular disease (CVD) markers (Wardlaw et al., 2015)(Pantoni, 2010)(Azizyan et al., 2011). WML represent increased and altered water content in hydrophobic white matter fibers and tracts. Changes in white matter vasculature likely contribute to WML pathogenesis (Gorelick et al., 2011). WML may be the result of ischemic injury from decreases in regional cerebral blood flow (Pantoni and Garcia, 1997). Demyelination and axonal degeneration have also been suggested as probable mechanisms (Wardlaw et al., 2015). Typically, WML manifest as multifocal, diffuse periventricular or subcortical lesions of varying morphologies (Marek et al., 2018). The presence of WML is associated with cognitive decline, dementia, stroke, and death, and lesion progression increases these risks (Debette and Markus, 2010)(Alber et al., 2019). Therefore, WML are significant clinical biomarkers for investigation. In T2-weighted and fluid-attenuated inversion recovery (FLAIR) magnetic resonance images (MRI), WML appear as hyperintense signals in the cerebral white matter (Marek et al., 2018). FLAIR MRI is preferred for WML analysis (Azizyan et al., 2011), (Badji and Westman, 2020), (Wardlaw et al., 2013), since the high signal from the cerebrospinal fluid (CSF) in T2 is suppressed, thus highlighting white matter disease (Lao et al., 2008). This is due to increased water content secondary to ischemia and demyelination, which is seen much more robustly in FLAIR than in T1/T2 (Gorelick et al., 2011).
WML classification is typically performed by a radiologist using visual rating systems such as the Fazekas scale (Fazekas et al., 1993) or by manual segmentation (Caligiuri et al., 2015). Manual segmentation is time-consuming, laborious, and has high inter- and intra-rater variability (Caligiuri et al., 2015). For objective, consistent, and efficient WML analysis, automated WML segmentation methods have been the focus of extensive research efforts in recent decades. Many WML segmentation frameworks for FLAIR MRI have been proposed, considering unsupervised (Caligiuri et al., 2015) (Khademi et al., 2011)(Khademi et al., 2014), supervised (Anbeek et al., 2004) (De Boer et al., 2009) (Simoes et al., 2013) (Knight et al., 2018) (Schmidt, 2017) and, more recently, deep learning methods. Comparisons of WML algorithms, such as in (Heinen et al., 2019), evaluated the performance of five automated WML segmentation methods in a multicentre FLAIR and T1 dataset. The methods mainly consisted of traditional machine learning (ML) algorithms, and performance was reported for 60 volumes from six centres. Using similar WML segmentation methods, in (de Sitter et al., 2017), the authors investigated five WML segmentation tools for multiple sclerosis (MS) lesion segmentation using FLAIR and T1 images for 70 patients from six centres. In (Vanderbecq et al., 2020), the authors considered seven open-source traditional WML segmentation methods for T1 and FLAIR and studied performance on research and clinical datasets. In (Frey et al., 2019), the authors provide a meta-review of the current WML segmentation methods applied in large-scale MRI studies. One of the key limitations of machine learning models is poor testing performance on out-of-distribution (OOD) data, i.e., data that is not within the training distribution (in-distribution, ID). This is especially true for MRI, as variations in hardware and software create non-standard intensities, contrasts, and noise distributions across scanners. As shown in (Khademi et al., 2021), CNN algorithms typically perform the best for WML segmentation, but do not generalize equally across scanners and datasets. This domain gap is a significant problem for deployment and limits wide-scale adoption, since models will not work equally well in new centres. One method to reduce the domain gap is intensity standardization (Reiche et al., 2019). Intensity standardization is the process of aligning the intensity histogram to some known distribution so that the same tissues map to the same intensity ranges. In this work, we evaluate several intensity normalization methods for FLAIR MRI and assess WML segmentation performance on OOD data. ## 2 Methods and Materials ### Data Experimental data for this work comes from 4 multicentre FLAIR MRI datasets for a total of 188 volumes with pixel-wise WML annotations. Sixty volumes from the MICCAI WML Segmentation Challenge (Kuijf et al., 2019) are used to train the models (ID) and the remainder is used for held-out OOD testing. The three OOD clinical datasets are from the Alzheimer's disease Neuroimaging Initiative (ADNI) (Aisen et al., 2015), the Canadian Atherosclerosis Imaging Network (CAIN) (Tardif et al., 2013), a pan-Canadian clinical study on vascular disease, and the Canadian Consortium on Neurodegeneration in Aging (CCNA), a pan-Canadian clinical study to analyze different types of dementia (Chertkow et al., 2019)(Mohaddes et al., 2018). Annotations for CAIN, ADNI and CCNA were developed by the authors.
See (Khademi et al., 2021) for the annotation protocol and Figure 7 for inter-rater agreement between the two raters. Table 1 shows the acquisition parameters. ### Intensity Standardization Intensity standardization is performed to remove variability caused by the multicentre effect using a modified version of our original method in (Reiche et al., 2019), called IAMLAB. The original method performs 3\(\times\)3 median filter denoising, bias field correction through lowpass filtering, and intensity standardization. Intensity standardization is achieved through a novel scaling factor that aligns the histogram modes of two volumes. As shown in (Reiche et al., 2019), the intensity intervals of tissues in 350K FLAIR MRI are more consistent across multicentre data using this approach. Compared to the original method, slice refinement was removed, which improves robustness since peak detection failed in the upper and lower slices (and reduced alignment performance), and N4 bias field correction was used instead. Our method is compared to several other methods in the literature, including Nyul (Nyul and Udupa, 1999), which performs piece-wise histogram matching; z-score normalization; and White Stripe (Shinohara et al., 2014), which applies z-score normalization within a specific percentile range. \begin{table} \begin{tabular}{||c|c|c|c|c|c|c||} \hline & \multicolumn{6}{c|}{**Patient Information**} \\ \hline **Database** & **Disease** & **Volumes** & **Images** & **Patients** & **Centres** & **LL (mL)** \\ \hline ADNI & Dementia & 35 & 1225 & 35 & 22 & 11.8 \(\pm\) 10.1 \\ \hline CAIN & Vascular & 63 & 3024 & 63 & 8 & 12.2 \(\pm\) 12.3 \\ \hline CCNA & Dementia & 30 & 1440 & 30 & 7 & 22.8 \(\pm\) 18.8 \\ \hline MICCAI & Vascular & 60 & 3580 & 60 & 3 & 17.6 \(\pm\) 17.4 \\ \hline Total & All & 188 & 9.27K & 188 & 39 & 15.0 \(\pm\) 15.2 \\ \hline \hline \multicolumn{7}{||c||}{**Acquisition Parameters**} \\ \hline **Database** & **GE/Phil./Siem.** & **TR (ms)** & **TE (ms)** & **TI (ms)** & **X (mm)** & **Y (mm)** \\ \hline ADNI & 10/7/18 & 9000-11000 & 90-154 & 2250-2500 & 0.8594 & 0.8594 \\ \hline CAIN & 12/35/16 & 9000-11000 & 117-150 & 2200-2800 & 0.4285-1 & 0.4285-1 \\ \hline CCNA & 2/3/25 & 9000-9840 & 125-144 & 2250-2500 & 0.9375 & 0.9375 \\ \hline MICCAI & 20/20/20 & 4800-11000 & 82-279 & 1650-2500 & 0.9583-1.2 & 0.9583-1.2 \\ \hline Total & 44/65/79 & 4800-11000 & 82-279 & 1650-2800 & 0.4295-1.2 & 0.4295-1.2 \\ \hline \end{tabular} \end{table} Table 1: FLAIR MRI ground truth datasets. All data is 3T and 3-5mm slice thickness. ### WML Segmentation The skip connection (SC) U-Net proposed in (Wu et al., 2019) is used in this work as it was found to be optimal for FLAIR-only WML segmentation (Khademi et al., 2021). SC U-Net adds skip connections between the shallow and deep layers of a CNN architecture. The outputs from each max-pooling layer in the encoder arm are inputs for each transposed convolution layer in the decoder. Skip connections ease training through improved information and back-propagation flow (Wu et al., 2019), (Drozdzal et al., 2016), which has been shown to diminish vanishing gradients (Drozdzal et al., 2016). Generalized dice loss (Sudre et al., 2017), the Adam optimizer with a learning rate of 1e-4 over 100 epochs, and a batch size of 64 were used. Images were patched into 64 x 64 regions with 50% overlap. Slight data augmentations were applied for rotation, scaling, shearing and translation (Li et al., 2018). Models were trained on a computer with an NVIDIA Tesla P100 GPU and 16GB RAM.
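For intuition, the sketch below implements two of the simpler normalizations in Python/NumPy: z-score over brain voxels, and a mode-alignment scaling in the spirit of (but not identical to) the IAMLAB scaling factor. The function names are ours and this is not the authors' released pipeline.

```python
import numpy as np

def zscore_normalize(vol: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Z-score normalization computed over brain voxels only."""
    brain = vol[mask > 0]
    return (vol - brain.mean()) / (brain.std() + 1e-8)

def mode_align(vol: np.ndarray, mask: np.ndarray, target_mode: float = 1.0, bins: int = 256) -> np.ndarray:
    """Scale intensities so the histogram mode of brain tissue lands at
    target_mode (mode alignment in the spirit of IAMLAB, not the exact method)."""
    brain = vol[mask > 0]
    hist, edges = np.histogram(brain, bins=bins)
    k = hist.argmax()
    mode = 0.5 * (edges[k] + edges[k + 1])  # centre of the modal bin
    return vol * (target_mode / (mode + 1e-8))
```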
### Performance Metrics The KL-divergence is used to measure alignment between the average volume histogram of the dataset and each individual volume histogram. A low KL divergence indicates high alignment in intensities across the dataset. The evaluation metrics used in the MICCAI WML segmentation competition were adopted, which include the dice similarity coefficient (DSC), the 95th percentile Hausdorff distance (H95), average volume difference (AVD), F1-score and recall (Kuijf et al., 2019). The extra fraction (EF) was also used to measure the relative false-positive rate. To determine whether segmentation performance is significantly improved using intensity standardized data, a t-test is conducted between performance metrics for predictions from standardized and original data. Box-Cox transformation is used to normalize distributions (except for AVD). Stochastic neighbour embedding (t-SNE) graphs (Hinton and Roweis, 2003) are also investigated to examine patterns in the data. The t-SNE method uses a pre-trained CNN and a projection of the feature representations onto two dimensions. Features that are similar to one another overlap in this space. The t-SNE graphs for original and normalized data are examined. ## 3 Results The multicentre datasets listed in Table 1 are standardized using IAMLAB, White Stripe, Nyul and z-score. SC U-Net was trained separately on the original data as well as on the IAMLAB, White Stripe, Nyul and Z-score standardized data for WML segmentation, resulting in five models in total. An Ensemble method is considered, which takes a pixel-wise majority vote across predictions generated by the five models trained on different intensity standardized images. The entire MICCAI dataset (which is balanced between GE, Philips and Siemens) is used for training all the models, and the held-out (unseen) clinical data (CAIN, CCNA, ADNI) are used to examine generalization. Three folds are used (approximately 67% for training and 33% for testing) for all experiments. Prior to intensity standardization and WML segmentation, skull-stripping is performed on the volumes using U-Net for intracranial volume (ICV) segmentation (DiGregorio et al., 2021). ### Intensity Standardization Intensity standardized images are shown in Figure 9. Histograms of original and standardized volumes, for WML regions only, are shown for all datasets in Figure 1. To quantify the degree of alignment to the mean intensity distribution for each method, the KL-distance was computed and is shown in Figure 2. IAMLAB normalization has the best alignment (lowest KL) of all the methods, with KL = 0.06, compared to the original data with KL = 0.83. For reference, the intensity normalized histograms for the entire brain and FLAIR MRI slices of original and standardized images are shown in Figures 8 and 9. The t-SNE results for the various standardized and original datasets are shown in Figure 10, which shows features from different scanner vendors are more overlapping in the standardized images. The original data has non-overlapping clusters for the different scanners, indicating different feature mappings. ### WML Segmentation: In Distribution In-distribution results for SC U-Net trained and tested on MICCAI are summarized in Table 2 and Figure 3; some errors appear related to under-estimation in Figure 3. Segmentation performance of the top intensity standardized models (IAMLAB, Ensemble) was statistically different from the original data.
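As an implementation note, the KL alignment metric reported in Figure 2 can be computed along the following lines. This is a minimal sketch with our own naming; the paper does not specify the direction of the divergence, so one direction is chosen here.

```python
import numpy as np

def kl_to_mean_histogram(vols, masks, bins=128, eps=1e-10):
    """KL divergence of each volume's brain-intensity histogram from the
    dataset-average histogram; lower values mean tighter intensity alignment."""
    lo = min(v[m > 0].min() for v, m in zip(vols, masks))
    hi = max(v[m > 0].max() for v, m in zip(vols, masks))
    ps = []
    for vol, mask in zip(vols, masks):
        h, _ = np.histogram(vol[mask > 0], bins=bins, range=(lo, hi))
        ps.append(h / h.sum() + eps)   # per-volume probability histogram
    ps = np.stack(ps)
    q = ps.mean(axis=0)                # dataset-average histogram
    return (ps * np.log(ps / q)).sum(axis=1)  # one KL value per volume
```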
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & **DSC** & **EF** & **H95** & **AVD** & **F1-score** & **Recall** \\ \hline **Original** & 0.75 & **0.14** & 5.34 & 26.60 & 0.71 & 0.66 \\ \hline **Nyul** & 0.76 & 0.16 & 5.32 & 22.47* & 0.68 & 0.63 \\ \hline **White Stripe** & 0.78* & 0.17 & 5.25 & 19.72* & **0.72*** & 0.66 \\ \hline **Z-Score** & 0.78* & 0.17 & 4.78 & 20.01* & 0.72* & 0.69 \\ \hline **IAMLAB** & 0.78* & 0.18 & 4.59 & 20.08* & 0.71 & 0.69 \\ \hline **Ensemble** & **0.80*** & 0.28 & **4.52*** & **19.46*** & 0.71 & **0.78*** \\ \hline \end{tabular} \end{table} Table 2: Model validation: WML segmentation performance on the 60 MICCAI volumes. **Bold** is the best and * indicates the mean of the metric is significantly different from the original data using t-tests (p\(<\)0.05). Figure 3: SC U-Net trained and tested on MICCAI (ID). Figure 2: KL divergence for all methods on the ground truth data. ### WML Segmentation: Out of Distribution Next, SC U-Net is trained using all 60 MICCAI volumes for the original and standardized data and tested on the held-out OOD clinical data from ADNI, CAIN and CCNA (128 volumes, approximately 5700 image slices). Example segmentations for all models are shown in Figure 11, and mean validation metrics are shown in Table 3. IAMLAB (0.64) and Ensemble (0.65) have the highest DSCs compared to the original (DSC = 0.60). Ensemble is also a top performer, with the lowest EF (0.21) and H95 (11.21) and the highest F1-score (0.60). Nyul has the highest recall (0.76). When testing differences in DSC means, all normalization methods were statistically different from the original data, indicating the segmentation improvement is significant. To analyze the effect of IAMLAB and the Ensemble method on WML segmentation further, the change in DSC is plotted to investigate cases where segmentation was improved or hindered by normalization (Figure 4). The change in DSC is calculated as the DSC of the standardized model minus that of the original-data model for each volume. A positive value means standardization improved performance, while a negative value means it was more optimal to use the original data and model. IAMLAB improved in 77% of the cases (98/128) and the Ensemble method improved in 86% of the cases. See example predictions in Figure 11, for cases with an average negative DSC change of -0.12 (A, B, C) and cases with an average positive DSC change of 0.17 (D-J) for IAMLAB standardized data compared to the original data. The improvement over most of the cases indicates standardization improves generalization to unseen data (OOD). Performance was also stratified by lesion load (LL); for the smallest lesion group (LL\(<\)10mL), IAMLAB had the highest DSC (0.53). DSC from IAMLAB and Ensemble was statistically different from the original data DSC, indicating the gains from standardization on OOD data are significant. For 10-25mL, Ensemble had the top DSC (0.67) and IAMLAB had the second highest DSC (0.66). For 25+mL, Ensemble had the highest DSC (0.77) by a large margin compared to the original data (0.71). For both groups, DSC was statistically different for IAMLAB and Ensemble compared to the original data. Models trained on the original data had the lowest performance across the board, especially in the large lesion group. Of all metrics, original data was best only in EF for the 10-25mL and 25mL+ groups, which may be due to undersegmentation.
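The pixel-wise majority vote used by the Ensemble model above is straightforward to reproduce. Here is a minimal sketch over binary masks from the five per-normalization models; the names are ours, not from the paper's code.

```python
import numpy as np

def majority_vote(masks):
    """Pixel-wise majority vote over binary segmentation masks of equal shape.

    masks: list of arrays with values in {0, 1}.
    A pixel is labelled WML if more than half of the models predict WML.
    """
    stack = np.stack(masks, axis=0)  # (n_models, ...) e.g. (5, H, W)
    return (stack.sum(axis=0) > stack.shape[0] / 2).astype(np.uint8)

# e.g. wml = majority_vote([pred_orig, pred_nyul, pred_ws, pred_zscore, pred_iamlab])
```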
To investigate differences in normalization methods in terms of segmentation consistency, the coefficient of variation (CoV) of the DSC over different lesion loads is shown in Figure 6. The Ensemble and IAMLAB methods have the highest consistency (lowest CoV), which indicates models developed on these datasets are more consistent and reliable. We postulate this is because these methods have better feature representation across imaging scanners due to the aligned intensity profiles of the imaging volumes. This is supported by the t-SNE feature representations in the clinical datasets (CCNA, CAIN and ADNI), which are unseen, OOD data, in Figure 10. Models trained on the original data have features that are more separated (and different) for each scanner type. In contrast, features extracted from the standardized data are more overlapping and similar across scanners, likely leading to improved generalization on OOD data. This suggests intensity normalization minimizes the generalization gap between datasets for WML segmentation. It is interesting to note that Ensemble and IAMLAB consistently provide the highest DSC and the lowest H95 and AVD. These two algorithms may be providing complementary information for optimal segmentation on OOD datasets. Overall, Ensemble and IAMLAB are the best performing algorithms on OOD data for WML segmentation, which provides significant motivation for using intensity normalization methods when testing on unseen multicentre FLAIR MRI or when deploying algorithms on new scanners. Figure 5: SC U-Net trained with MICCAI, tested on ADNI (A), CAIN (B) and CCNA (C). \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline & **DSC** & **EF** & **H95** & **AVD** & **F1-score** & **Recall** \\ \hline **Original** & 0.60 & 0.21 & 13.44 & 35.16 & 0.56 & 0.69 \\ \hline **Nyul** & 0.54 & 1.09 & 16.99 & 82.84 & 0.38 & **0.76*** \\ \hline **White Stripe** & 0.62* & 0.48 & 15.06 & 33.89* & 0.53 & 0.73* \\ \hline **Z-Score** & 0.62 & 0.55 & 13.61 & 44.19 & 0.48 & 0.73* \\ \hline **IAMLAB** & **0.64*** & 0.29 & 13.01 & **24.35*** & 0.53* & 0.71* \\ \hline **Ensemble** & **0.65*** & **0.21** & **11.21*** & 26.57* & **0.60*** & 0.69 \\ \hline \end{tabular} \end{table} Table 3: WML segmentation performance on CCNA, CAIN, ADNI (N=128). **Bold** is the best and * indicates the mean of the metric is significantly different from the original data using t-tests (p\(<\)0.05). ## 4 Conclusion We investigate intensity normalization methods for deep learning-based WML segmentation on out-of-distribution FLAIR MRI datasets. An SC U-Net was trained using MICCAI competition data for the original dataset along with the four normalization methods. Models were tested on a diverse OOD test set from three clinical datasets comprising 128 imaging volumes. It was observed that intensity normalization using IAMLAB and the ensemble segmentation from the IAMLAB, White Stripe and Z-score normalization models leads to a statistically significant improvement in segmentation performance on OOD data. Therefore, the IAMLAB and Ensemble methods are excellent candidates to improve generalization across clinical datasets from different centres, which is key for translation. ## Acknowledgments We acknowledge the Natural Sciences and Engineering Research Council (NSERC) of Canada (Discovery Grant), the Alzheimer's Society Research Program (ASRP) (New Investigator Grant) and the Ontario Government (Early Researcher Award) for funding this research.
Figure 6: CoV of DSC metric from CAIN, CCNA and ADNI over lesion load (left) and scanner (right) group for different normalizations.
2308.02364
Matrix Completion When Missing Is Not at Random and Its Applications in Causal Panel Data Models
This paper develops an inferential framework for matrix completion when missing is not at random and without the requirement of strong signals. Our development is based on the observation that if the number of missing entries is small enough compared to the panel size, then they can be estimated well even when missing is not at random. Taking advantage of this fact, we divide the missing entries into smaller groups and estimate each group via nuclear norm regularization. In addition, we show that with appropriate debiasing, our proposed estimate is asymptotically normal even for fairly weak signals. Our work is motivated by recent research on the Tick Size Pilot Program, an experiment conducted by the Securities and Exchange Commission (SEC) to evaluate the impact of widening the tick size on the market quality of stocks from 2016 to 2018. While previous studies were based on traditional regression or difference-in-difference methods by assuming that the treatment effect is invariant with respect to time and unit, our analyses suggest significant heterogeneity across units and intriguing dynamics over time during the pilot program.
Jungjun Choi, Ming Yuan
2023-08-04T14:54:29Z
http://arxiv.org/abs/2308.02364v1
# Matrix Completion When Missing Is Not at Random and Its Applications in Causal Panel Data Models ###### Abstract This paper develops an inferential framework for matrix completion when missing is not at random and without the requirement of strong signals. Our development is based on the observation that if the number of missing entries is small enough compared to the panel size, then they can be estimated well even when missing is not at random. Taking advantage of this fact, we divide the missing entries into smaller groups and estimate each group via nuclear norm regularization. In addition, we show that with appropriate debiasing, our proposed estimate is asymptotically normal even for fairly weak signals. Our work is motivated by recent research on the Tick Size Pilot Program, an experiment conducted by the Securities and Exchange Commission (SEC) to evaluate the impact of widening the tick size on the market quality of stocks from 2016 to 2018. While previous studies were based on traditional regression or difference-in-difference methods by assuming that the treatment effect is invariant with respect to time and unit, our analyses suggest significant heterogeneity across units and intriguing dynamics over time during the pilot program. _Keywords:_ Matrix completion; Missing not at random (MNAR); Weak signal-to-noise ratio; Multiple treatments; Tick size pilot program ## 1 Introduction The problem of noisy matrix completion, in which we are interested in reconstructing a low-rank matrix from partial and noisy observations of its entries, arises naturally in numerous applications. It has attracted a considerable amount of attention in recent years, and many impressive results have been obtained from both statistical and computational perspectives. See, e.g., Candes and Plan (2010); Mazumder et al. (2010); Koltchinskii et al. (2011); Negahban and Wainwright (2012); Chen et al. (2019, 2020); Jin et al. (2021); Xia and Yuan (2021); Bhattacharya and Chatterjee (2022) among many others. A common and crucial premise underlying these developments is that observations of the entries are missing at random. Although this is a reasonable assumption for some applications, it could be problematic for many others. In the past several years, there has been growing interest in investigating how to deal with situations where missing is not at random and to what extent the techniques and insights that were initially developed assuming missing at random can be extended to these cases. See, e.g., Agarwal et al. (2020, 2021); Athey et al. (2021); Bai and Ng (2021); Chernozhukov et al. (2021); Cahan et al. (2023); Xiong and Pelger (2023) among others. This fruitful line of research is largely inspired by the development of synthetic control methods in causal inference. See, e.g., Abadie and Gardeazabal (2003); Abadie et al. (2010); Abadie (2021). The close connection between noisy matrix completion and synthetic control methods for panel data was first made formal by Athey et al. (2021), who showed that powerful matrix completion techniques such as nuclear norm regularization can be very useful for many causal panel data models where missing is not at random. It also helps bring together two complementary perspectives of noisy matrix completion: one focuses on statistical inferences assuming a strong factor structure and the other aims at recovery guarantees with minimum signal strength requirement.
The main objective of this work is to further bridge the gap between these two schools of ideas and develop a general and flexible inferential framework for matrix completion when missing is not at random and without the requirement of strong factors. In particular, we shall follow Athey et al. (2021) and investigate how the technique of nuclear norm regularization can be used to infer individual treatment effects under a variety of missing mechanisms. One of the key observations underlying our development is the fact that if the number of missing entries is sufficiently small compared to the panel size, then they can be estimated well even when missing is not at random. For more general missing patterns with an arbitrary proportion of missingness, we can judiciously divide the missing entries into smaller groups and leverage this fact by applying the nuclear norm regularization to a submatrix with a small number of missing entries. This is where our approach differs from that of Athey et al. (2021), who suggest applying the nuclear norm regularized estimation to the full matrix. We shall show that subgrouping is essential in producing more accurate estimates and more efficient inferences about individual treatment effects. It is worth noting that it is computationally more efficient to estimate all missing entries together, as suggested by Athey et al. (2021). But estimating too many missing entries simultaneously can be statistically suboptimal. In a way, our results suggest how to trade off between computational cost and statistical efficiency. Our proposal of subgrouping is similar in spirit to the approach taken by Agarwal et al. (2021), who suggested estimating the missing entries one at a time. For estimating a single missing entry, they propose a matching scheme that constructs multiple "synthetic" neighbors and averages the observed outcomes associated with each synthetic neighbor. Separating the observations into different sets of neighbors, however, could lead to a loss in efficiency. For example, when estimating the mean matrix of an \(N\times N\) panel with one missing entry, the estimation error of the approach from Agarwal et al. (2021) for the missing entry converges at the rate of \(N^{-1/4}\), which is far slower than the rate of \(N^{-1/2}\) attained by our method. Furthermore, we show that, with appropriate debiasing, our proposed estimate is asymptotically normal even with fairly weak signals. More specifically, the asymptotic normality holds if \(\psi_{\min}^{2}\gg\sigma^{2}N\), where \(\psi_{\min}\) is the smallest nonzero singular value of the \(N\times N\) mean matrix and \(\sigma^{2}\) is the variance of the observed entries. Our development builds upon and complements a series of recent works that show that statistical inference for matrix completion is possible with a low signal-to-noise ratio when the data are missing uniformly at random. See, e.g., Chen et al. (2019, 2020); Xia and Yuan (2021). Our results also draw an immediate comparison with the recent works by Bai and Ng (2021); Cahan et al. (2023), who developed an inferential theory for the asymptotic principal component (APC) based approaches when the signal is much stronger, e.g., \(\psi_{\min}^{2}\gtrsim\sigma^{2}N^{2}\). It is worth pointing out that the nuclear norm regularization and APC-based approach each has its own merits and requires different treatment.
For example, APC-based methods usually assume that the factors are random and impose moment conditions to ensure that the factor structure is strong and identifiable, whereas our development assumes that the factors are deterministic but incoherent and allows for weaker signals. Our work is motivated by a number of recent studies on the Tick Size Pilot Program, an experiment conducted by the Securities and Exchange Commission (SEC) to evaluate the impact of widening the tick size on the market quality of small and illiquid stocks from 2016 to 2018. See, e.g., Albuquerque et al. (2020); Chung et al. (2020); Werner et al. (2022). The pilot consisted of three treatment groups and a control group: 1) the first treatment group was quoted in $0.05 increments but still traded in $0.01 increments (only Q rule); 2) the second treatment group was quoted and traded in $0.05 increments (Q+T rule); 3) the third treatment group was quoted and traded in $0.05 increments, and also subject to the trade-at rule (Q+T+TA rule). The trade-at rule, in general, prevents price matching by exchanges that are not displaying the best price. The control group was quoted and traded in $0.01 increments. Previous studies (see, e.g., Chung et al., 2020) on the effects of the quote rule (Q), trade rule (T), and trade-at rule (TA) on the liquidity measure are based on traditional regression or difference-in-difference methods and assume that the treatment effect is invariant with respect to time and unit. As we shall demonstrate, this assumption is problematic for the Tick Size Pilot Program data and there is significant heterogeneity in the treatment effect across both time and units. Indeed, more insights can be obtained using a potential outcome model with interactive fixed effects to capture such heterogeneity. To do so, we extend our methodology from estimating a single matrix to the simultaneous completion of multiple matrices, accounting for the multiple potential situations. The remainder of this paper is organized as follows. Section 2 introduces the method of using the nuclear norm penalized estimation when missing is not at random and provides the convergence rates of the estimator. Section 3 discusses how to reduce bias and provides inferential theory using the debiased estimator. Section 4 shows how our proposed methodology can be applied to infer the treatment effect in the Tick Size Pilot Program and presents the empirical findings of our analysis. Section 5 examines the finite sample performance of our estimators using simulation studies. Finally, we conclude with a few remarks in Section 6. All proofs are relegated to the Appendix due to the space limit. In what follows, we use \(\|\cdot\|_{\text{F}}\), \(\|\cdot\|\), and \(\|\cdot\|_{*}\) to denote the matrix Frobenius norm, spectral norm, and nuclear norm, respectively. In addition, \(\|\cdot\|_{\infty}\) denotes the entrywise \(\ell_{\infty}\) norm, and \(\|\cdot\|_{2,\infty}\) the largest \(\ell_{2}\) norm of all rows of the matrix, i.e., \(\|A\|_{2,\infty}=\max_{i}(\sum_{j}a_{ij}^{2})^{1/2}\). For any vector \(a\), \(\|a\|\) denotes its \(\ell_{2}\) norm. For any set \(\mathcal{A}\), \(|\mathcal{A}|\) is the number of elements in \(\mathcal{A}\). We use \(\circ\) to denote the Hadamard product or the entry-by-entry product between matrices of conformable dimensions. \(a\lesssim b\) means \(|a|/|b|\leq C_{1}\) for some constant \(C_{1}>0\) and \(a\gtrsim b\) means \(|a|/|b|\geq C_{2}\) for some constant \(C_{2}>0\).
\(c\asymp d\) means that both \(c/d\) and \(d/c\) are bounded. \(a\ll b\) indicates \(|a|\leq c_{1}|b|\) for some sufficiently small constant \(c_{1}>0\) and \(a\gg b\) indicates \(c_{2}|a|\geq|b|\) for some sufficiently small constant \(c_{2}>0\). In addition, \([K]=\{1,\ldots,K\}\). ## 2 Noisy Matrix Completion Consider a panel data setting where \(M=(m_{it})_{1\leq i\leq N,1\leq t\leq T}\) is an \(N\times T\) matrix of rank \(r\) (\(\ll\min\{N,T\}\)). We use \(i\) as the cross-section index and \(t\) as the time index. Following the convention of the matrix completion literature, we shall assume that the singular vectors of \(M\) are incoherent in that there is a \(\mu\geq 1\) such that \(\|U_{M}\|_{2,\infty}\leq\sqrt{\mu r/N}\), \(\|V_{M}\|_{2,\infty}\leq\sqrt{\mu r/T}\), where \(U_{M}\) and \(V_{M}\) denote the left and right singular vectors of \(M\), respectively. The incoherence condition requires the singular vectors to be de-localized, in the sense that entries are not dominated by a small number of rows or columns. Instead of \(M\), we observe a subset of the entries of \(Y=M+E\), where \(E\) is a noise matrix whose entries are independent and identically distributed zero-mean sub-Gaussian random variables, i.e., \(\mathbb{E}[\epsilon_{it}^{2}]=\sigma^{2}\), \(\mathbb{E}[\exp(s\epsilon_{it})]\leq\exp(Cs^{2}\sigma^{2})\), \(\forall s\in\mathbb{R}\) and some constant \(C>0\). Let \(\Omega=(\omega_{it})_{1\leq i\leq N,1\leq t\leq T}\in\{0,1\}^{N\times T}\) indicate the observed entries: \(\omega_{it}=1\) if and only if \(y_{it}\) is observed. The goal of noisy matrix completion is to estimate \(M\) from \(Y_{\Omega}:=\{y_{it}:\omega_{it}=1\}\). A popular approach to do so is the nuclear norm penalization: \[\widetilde{M}=\operatorname*{arg\,min}_{A\in\mathbb{R}^{N\times T}}\left\{\|\Omega\circ(Y-A)\|_{\mathrm{F}}^{2}+\lambda\|A\|_{*}\right\},\] where \(\lambda\geq 0\) is a tuning parameter. The properties of \(\widetilde{M}\) are by now well understood in the case of missing completely at random, especially when the entries of \(\Omega\) are independently sampled from a Bernoulli distribution. See, e.g., Koltchinskii et al. (2011); Chen et al. (2020). Instead, we are interested here in the situation where \(\Omega\) is not random. Situations when missing is not at random arise naturally in many causal panel models. Consider, for example, the evaluation of a program that takes effect after time \(T_{0}\) for the last \(N-N_{0}\) units. If \(M\) is the potential outcome under the control, then we do not have observations of its entries for \(i>N_{0}\) and \(t>T_{0}\), e.g., \(\Omega=1\{t\leq T_{0}\text{ or }i\leq N_{0}\}\), yielding a block missing pattern as shown in the left panel of Figure 1. A more general setting that often arises in causal panel data is the staggered adoption, where units may differ in the time they are first exposed to the treatment, yielding a missing pattern as shown in the right panel of Figure 1. See Athey et al. (2021); Agarwal et al. (2021) for other similar missing patterns that are common in the context of recommendation systems and A/B testing. Note that if the entries are observed uniformly at random, then \[\|\Omega\circ(Y-A)\|_{\mathrm{F}}^{2}\approx\frac{|\Omega|}{NT}\mathbb{E}\|Y-A\|_{\mathrm{F}}^{2}\] for sufficiently large \(N\) and \(T\). The right-hand side is minimized by \(M\), which justifies \(\widetilde{M}\) as a plausible estimate of \(M\).
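Before turning to structured missingness, a minimal numerical sketch of the nuclear norm penalized estimator may be helpful. It uses the soft-impute iteration of Mazumder et al. (2010), which is cited above; all function names and the toy data are ours, and the threshold \(\lambda/2\) reflects the particular scaling of the objective displayed above.

```python
import numpy as np

def svt(B, tau):
    """Singular-value soft-thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_impute(Y, omega, lam, n_iter=500, tol=1e-7):
    """Soft-impute iteration for min_A ||omega o (Y - A)||_F^2 + lam ||A||_*.
    With this scaling of the objective the effective threshold is lam / 2."""
    A = np.zeros_like(Y)
    for _ in range(n_iter):
        A_new = svt(omega * Y + (1.0 - omega) * A, lam / 2.0)
        if np.linalg.norm(A_new - A) <= tol * (1.0 + np.linalg.norm(A)):
            return A_new
        A = A_new
    return A

# Toy block missing pattern: entry (i, t) observed iff i < N0 or t < T0
# (0-indexed), so the missing block is the bottom-right corner.
N, T, N0, T0, r = 80, 80, 60, 60, 2
rng = np.random.default_rng(0)
M = rng.normal(size=(N, r)) @ rng.normal(size=(r, T))
Y = M + 0.1 * rng.normal(size=(N, T))
omega = ((np.arange(N)[:, None] < N0) | (np.arange(T)[None, :] < T0)).astype(float)
M_tilde = soft_impute(Y, omega, lam=2.0)  # lam is a tuning parameter, chosen arbitrarily here
```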
This missing-at-random intuition, however, no longer applies when \(\Omega\) is not random and has more structured patterns. Our proposal to overcome this problem is dividing the missing entries into smaller groups and estimating each group via nuclear norm regularization. The main inspiration behind our method is the observation that \(\widetilde{M}\) is a good estimate of \(M\) when there are only a few missing entries, even if they are missing not at random. Figure 1: Two typical observation patterns of the potential outcomes under the control in the causal panel model: Here, the blue area is the observed area, and the white area is the missing area. Missingness occurs because we cannot observe the potential outcomes under the control for the treated entries. It is instructive to start with a single treated period, e.g., \(\Omega=1\{t\leq T-1\text{ or }i\leq N_{0}\}\). In this case, the number of missing entries is \(|\Omega^{c}|=N-N_{0}\). Denote by \(\psi_{\max}\) and \(\psi_{\min}\) the largest and smallest nonzero singular values of \(M\), respectively, and \(\kappa=\psi_{\max}/\psi_{\min}\) its condition number. The following theorem provides bounds for the estimation error of \(\widetilde{M}\). **Theorem 2.1**.: _Assume that_ (i) \(\sigma\kappa^{2}\mu^{\frac{1}{2}}r^{\frac{1}{2}}\max\{N\sqrt{\log N},T\sqrt{\log T}\}\ll\psi_{\min}\min\{\sqrt{N},\sqrt{T}\}\)_;_ (ii) \(\kappa^{4}\mu^{2}r^{2}\max\{N\log^{3}N,T\log^{3}T\}\ll\min\{N^{2},T^{2}\}\)_;_ (iii) \(|\Omega^{c}|\kappa^{2}\mu r\ll\min\{N,T\}\)_._ _Then, with probability at least \(1-O(\min\{N^{-9},T^{-9}\})\), we have_ \[\left\|\widetilde{M}-M\right\|_{\infty}\leq\frac{C\sigma\mu r^{\frac{3}{2}}\kappa^{2}\max\{\sqrt{\log N},\sqrt{\log T}\}}{\min\{\sqrt{N},\sqrt{T}\}},\] _for some absolute constant \(C>0\)._ Some immediate remarks are in order. Consider the situation where \(\kappa,\mu,r=O(1)\), and \(N\asymp T\). Ignoring the logarithmic term, the signal-to-noise ratio requirement given by Assumption (i) reduces to \(\psi_{\min}\gg\sigma N^{1/2}\), which is significantly weaker than those in the existing literature. More specifically, if there is a single missing entry, e.g., \(N_{0}=N-1\), Agarwal et al. (2021) suggest partitioning the submatrix \((m_{it})_{1\leq i<N,1\leq t<T}\) into \(K\) smaller matrices. In particular, their Theorem 2 states that the best estimation error for their estimate is given by \[|\widehat{m}_{NT}^{\text{ADSS}}-m_{NT}|=O_{p}\left(\frac{1}{N^{1/4}}+\frac{1}{T^{1/4}}\right)\] by setting \(K\asymp N^{1/2}\). In contrast, under the assumptions of Agarwal et al. (2021), \(\sigma,\kappa,\mu,r\) are bounded and hence the convergence rate of our estimator is \[|\widetilde{m}_{NT}-m_{NT}|=O_{p}\left(\left(\frac{1}{N^{1/2}}+\frac{1}{T^{1/2}}\right)\sqrt{\log(NT)}\right).\] Theorem 2.1 serves as our building block for dealing with more general and common missing patterns, which we shall now discuss in detail. **Single Treated Period.** Note that Assumption (iii) of Theorem 2.1 requires that the number of missing entries not be large compared to \(N\) and \(T\). In particular, if \(\kappa,\mu,r=O(1)\) and \(N\asymp T\), then it requires that \(|\Omega^{c}|=o\left(N\right)\). To deal with a larger number of missing entries, we shall leverage this result by splitting the missing entries into small groups and estimating them separately, as illustrated in Figure 2.
Specifically, we split the missing entries into small groups, denoted by \(\{\mathcal{G}_{l}\}_{1\leq l\leq L}\), and construct the submatrices \(\{Y_{l}\}_{1\leq l\leq L}\) as illustrated in Figure 2. For each \(1\leq l\leq L\), we estimate \(M_{l}\), the corresponding submatrix of \(M\), using the nuclear norm penalization: \[\widetilde{M}_{l}=\operatorname*{arg\,min}_{A\in\mathbb{R}^{N_{l}\times T}}\left\{\|\Omega_{l}\circ(Y_{l}-A)\|_{\mathrm{F}}^{2}+\lambda_{l}\|A\|_{*}\right\}, \tag{2.1}\] where \(N_{l}=N_{0}+|\mathcal{G}_{l}|\) and \(\Omega_{l}\) is the corresponding submatrix of \(\Omega\). We shall then assemble these estimated submatrices into an estimate \(\widetilde{M}\) of \(M\). Note that each missing entry appears in one and only one of the submatrices and can therefore be estimated accordingly. The entries from \(O\) in Figure 2, e.g., the \(N_{0}\times(T-1)\) principal submatrix of \(M\), on the other hand, are estimated for all groups. We can estimate these entries by averaging all of these estimates. Figure 2: How to construct the submatrix: We divide the missing entries into \(L\) groups. For each \(1\leq l\leq L\), we estimate the entries in \(\mathcal{G}_{l}\) using the nuclear norm penalized estimation on the submatrix \(Y_{l}\) after making the submatrix \(Y_{l}\) as described in the right panel. Let the smallest nonzero singular value of \(M_{O}\) be \(\psi_{\min,O}\), where \(M_{O}\) is the submatrix of \(M\) corresponding to \(O\). Denote by \(u_{i}^{\top}\) and \(v_{t}^{\top}\) the \(i\)-th row of \(U_{M}\) and \(t\)-th row of \(V_{M}\), respectively. We can then derive the following bounds from Theorem 2.1. **Corollary 2.2**.: _Assume that_ (i) \(\sigma\kappa^{\frac{9}{4}}\mu^{\frac{1}{2}}r^{\frac{1}{2}}\max\{N_{0}\sqrt{\log N_{0}},T\sqrt{\log T}\}\ll\psi_{\min,O}\min\{\sqrt{N_{0}},\sqrt{T}\}\)_;_ (ii) \(\kappa^{5}\mu^{2}r^{2}\max\{N_{0}\log^{3}N_{0},T\log^{3}T\}\ll\min\{N_{0}^{2},T^{2}\}\)_;_ (iii) \(|\mathcal{G}_{l}|\kappa^{\frac{5}{2}}\mu r\ll\min\{N_{0},T\}\)_,_ \(l=1,\ldots,L\)_;_ (iv) _there are constants_ \(C,c>0\) _such that_ \[c\leq\lambda_{\min}\left(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\right)\leq\lambda_{\max}\left(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\right)\leq C,\] _where_ \(\lambda_{\max}(A)\) _and_ \(\lambda_{\min}(A)\) _are the largest and smallest singular values of_ \(A\)_, respectively._
In general, if \(\{u_{i}\}_{i\in[N]}\) is exchangeable or if the treated units are uniformly selected, then this condition is satisfied with high probability, at least for sufficiently large \(N_{0}\), since \(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\approx\sum_{i\leq N}u_{i}u_{i}^{\top}=I_{r}\) by means of matrix concentration inequalities (see, e.g., Tropp et al., 2015). **Single Treated Unit.** A similar estimating strategy can also be used to deal with a single treated unit. Without loss of generality, let \(\Omega=1\{t\leq T_{0}\text{ or }i\leq N-1\}\). Then the fully observed submatrix is \(O=(y_{it})_{1\leq i\leq N-1,1\leq t\leq T_{0}}\). As in the case of a single treated period, we split the missing entries into smaller groups, denoted by \(\mathcal{G}_{1},\ldots,\mathcal{G}_{L}\), by periods, and estimate them separately as before. Similar to Corollary 2.2, we have the following bounds for the resulting estimate. **Corollary 2.3**.: _Assume that_ (i) \(\sigma\kappa^{\frac{9}{4}}\mu^{\frac{1}{2}}r^{\frac{1}{2}}\max\{N\sqrt{\log N},T_{0}\sqrt{\log T_{0}}\}\ll\psi_{\min,O}\min\{\sqrt{N},\sqrt{T_{0}}\}\)_;_ (ii) \(\kappa^{5}\mu^{2}r^{2}\max\{N\log^{3}N,T_{0}\log^{3}T_{0}\}\ll\min\{N^{2},T_{0}^{2}\}\)_;_ (iii) \(|\mathcal{G}_{l}|\kappa^{\frac{5}{2}}\mu r\ll\min\{N,T_{0}\}\)_,_ \(l=1,\ldots,L\)_;_ (iv) _there are constants_ \(C,c>0\) _such that_ \[c\leq\lambda_{\min}\left(\frac{T}{T_{0}}\sum_{t\leq T_{0}}v_{t}v_{t}^{\top}\right)\leq\lambda_{\max}\left(\frac{T}{T_{0}}\sum_{t\leq T_{0}}v_{t}v_{t}^{\top}\right)\leq C.\] _Then, with probability at least \(1-O(\min\{N^{-9},T_{0}^{-9}\}L)\), we have_ \[\left\|\widetilde{M}-M\right\|_{\infty}\leq C\frac{\sigma\kappa^{\frac{5}{2}}\mu r^{\frac{3}{2}}\max\{\sqrt{\log N},\sqrt{\log T_{0}}\}}{\min\{\sqrt{N},\sqrt{T_{0}}\}},\] _for some absolute constant_ \(C>0\)_._ **General Block Missing Pattern.** We can also apply the grouping and estimating procedure to general block missing structures such as that depicted in the left panel of Figure 1, e.g., \(\Omega=1\{t\leq T_{0}\text{ or }i\leq N_{0}\}\), by estimating missing entries one period at a time (or one unit at a time). Denote by \(\mathcal{G}_{1},\mathcal{G}_{2},\ldots,\mathcal{G}_{L}\) the groups of missing units (or periods). The following result again follows from Theorem 2.1: **Corollary 2.4**.: _Assume that_ (i) \(\sigma\kappa^{\frac{9}{4}}\mu^{\frac{1}{2}}r^{\frac{1}{2}}\max\{N_{0}\sqrt{\log N_{0}},T_{0}\sqrt{\log T_{0}}\}\ll\psi_{\min,O}\min\{\sqrt{N_{0}},\sqrt{T_{0}}\}\)_;_ (ii) \(\kappa^{5}\mu^{2}r^{2}\max\{N_{0}\log^{3}N_{0},T_{0}\log^{3}T_{0}\}\ll\min\{N_{0}^{2},T_{0}^{2}\}\)_;_ (iii) \(|\mathcal{G}_{l}|\kappa^{\frac{5}{2}}\mu r\ll\min\{N_{0},T_{0}\}\)_,_ \(l=1,\ldots,L\)_;_ (iv)
It is also of interest to compare the rates of convergence with those of Athey et al. (2021). Athey et al. (2021) considered a direct application of the nuclear norm penalized estimation to the full matrix. Their Theorem 2 states that \[\frac{1}{\sqrt{NT}}\left\|\widetilde{M}-M\right\|_{\mathrm{F}}=O_{p}\left( \sqrt{\frac{T}{N}}+\sqrt{\frac{1}{T}}\right),\] ignoring the logarithmic factors and \(\sigma\), \(r\), and \(\left\|M\right\|_{\infty}\). In other words, the estimate could be inconsistent when \(N=O(T)\). On the other hand, the convergence rate of our estimator is given by \[\left\|\widetilde{M}-M\right\|_{\infty}=O_{p}\left(\sqrt{\frac{1}{N_{0}}}+\sqrt {\frac{1}{T_{0}}}\right),\] up to a logarithmic factor when we assume \(\kappa,\mu=O_{p}(1)\). Hence, our estimator is consistent as long as \(\min\{N_{0},T_{0}\}\) diverges. Furthermore, the simulation results in Section 5 also show that applying the nuclear norm penalized estimation to the submatrix indeed performs much better than applying it to the full matrix as long as \(N_{0}\) and \(T_{0}\) are not too small. Staggered Adoption.More generally, we can take advantage of our estimation strategy for staggered adoption where there are \(D\) number of adoption time points, says \(T_{1}<\cdots<T_{D}\), and \(D\) number of corresponding groups of treated units, says \(G_{1},\ldots,G_{D}\). That is, for each \(d\in[D]\), the units in \(G_{d}\) adopt the treatment in the time period \(T_{d}\). We can utilize the strategy for block missing patterns to estimate the missing entries. More specifically, denote by \(M_{d,d^{\prime}}\) the submatrix with missing entries corresponding to units in \(G_{d}\) and time periods in \([T_{d^{\prime}},T_{d^{\prime}+1})\), with the convention that \(T_{D+1}=T+1\), where \(d\leq d^{\prime}\leq D\). To estimate these missing entries, we can assemble a submatrix, denoted by \(Y_{d,d^{\prime}}\), with units untreated prior to \(T_{d^{\prime}+1}\) and time periods in \([1,T_{d})\cup[T_{d^{\prime}},T_{d^{\prime}+1})\), as well as units in \(G_{d}\) and time periods in \([1,T_{d})\). As shown in Figure 3, \(M_{d,d^{\prime}}\) is now the missing block of \(Y_{d,d^{\prime}}\), and can be estimated as described in the previous case. Denote by \(\mathcal{G}_{1},\mathcal{G}_{2},\ldots,\mathcal{G}_{L}\) the groups for missing units in \(M_{d,d^{\prime}}\) such as \(\cup_{l\in[L]}\mathcal{G}_{l}=G_{d}\), \(N_{d^{\prime}}\) the number of units that are untreated prior to \(T_{d^{\prime}+1}\), and \(\psi_{\min,O_{d,d^{\prime}}}\) the smallest singular value of the submatrix \(M_{O_{d,d^{\prime}}}=(m_{it})_{1\leq i\leq N_{d^{\prime}},1\leq t\leq T_{d}}\). The performance of the resulting estimate is given by Corollary 2.5. **Corollary 2.5**.: _Assume that_ 1. \(\sigma\kappa^{\frac{9}{4}}\mu^{\frac{1}{2}}r^{\frac{1}{2}}\max\{N_{d^{\prime} }\sqrt{\log N_{d^{\prime}}},T_{d}\sqrt{\log T_{d}}\}\ll\psi_{\min,O_{d,d^{ \prime}}}\min\{\sqrt{N_{d^{\prime}}},\sqrt{T_{d}}\}\)_;_ 2. \(\kappa^{5}\mu^{2}r^{2}\max\{N_{d^{\prime}}\log^{3}N_{d^{\prime}},T_{d}\log^{3 }T_{d}\}\ll\min\{N_{d^{\prime}}^{2},T_{d}^{2}\}\)_;_ 3. \(|\mathcal{G}_{l}|\kappa^{\frac{5}{2}}\mu r\ll\min\{N_{d^{\prime}},T_{d}\}\)_,_ \(l=1,\ldots,L\)_;_ 4. 
_there are constants_ \(C,c>0\) _such that_ \[c\leq\lambda_{\min}\left(\frac{N}{N_{d^{\prime}}}\sum_{i\leq N_{d^{\prime}}}u_{i}u_{i}^{\top}\right)\leq\lambda_{\max}\left(\frac{N}{N_{d^{\prime}}}\sum_{i\leq N_{d^{\prime}}}u_{i}u_{i}^{\top}\right)\leq C,\] \[c\leq\lambda_{\min}\left(\frac{T}{T_{d}}\sum_{t\leq T_{d}}v_{t}v_{t}^{\top}\right)\leq\lambda_{\max}\left(\frac{T}{T_{d}}\sum_{t\leq T_{d}}v_{t}v_{t}^{\top}\right)\leq C.\] _Then, with probability at least \(1-O(\min\{N_{d^{\prime}}^{-9},T_{d}^{-9}\}L(T_{d^{\prime}+1}-T_{d^{\prime}}))\), we have_ \[\left\|\widetilde{M}_{d,d^{\prime}}-M_{d,d^{\prime}}\right\|_{\infty}\leq C\frac{\sigma\kappa^{\frac{5}{2}}\mu r^{\frac{3}{2}}\max\{\sqrt{\log N_{d^{\prime}}},\sqrt{\log T_{d}}\}}{\min\{\sqrt{N_{d^{\prime}}},\sqrt{T_{d}}\}},\] _for some absolute constant \(C>0\)._ It is worth comparing the rates of convergence with those of Bai and Ng (2021), who apply their TW algorithm to the full matrix. For all missing entries, the convergence rates of the estimators in Bai and Ng (2021) are \(O_{p}\left(\frac{1}{\sqrt{N_{D}}}+\frac{1}{\sqrt{T_{1}}}\right)\). On the other hand, if we assume \(\kappa,\mu=O_{p}(1)\), the convergence rate of our estimator is \(O_{p}\left(\frac{1}{\sqrt{N_{d^{\prime}}}}+\frac{1}{\sqrt{T_{d}}}\right)\) up to a logarithmic factor. Since \(N_{d^{\prime}}>N_{D}\) and \(T_{d}>T_{1}\) for all \(d^{\prime}<D\) and \(d>1\), our convergence rate is faster than that of Bai and Ng (2021) except for the estimation of missing entries in part \(M_{1,D}\), for which both estimates have similar rates of convergence. This shows the advantage of exploiting submatrices for the imputation of missing entries. Figure 3: How to construct the general block missing pattern: Consider the case of \(d=1\) and \(d^{\prime}=2\). When we estimate the missing entries in \(M_{1,2}\), we make the block missing matrix \(Y_{1,2}\) by assembling four red matrices. Then, we can estimate the missing entries in \(M_{1,2}\) using the estimation method for the general block missing pattern. ## 3 Debiasing and Statistical Inferences We now turn our attention to inferences. While the nuclear norm regularized estimator \(\widetilde{M}\) enjoys good rates of convergence, it is not directly suitable for statistical inferences due to the bias induced by the penalty. To overcome this challenge, we propose an additional projection step after applying the nuclear norm penalization in recovering missing entries from group \(\mathcal{G}_{l}\): \[\widehat{M}_{l}=\mathcal{P}_{r}\left(\Omega_{l}^{c}\circ\widetilde{M}_{l}+\Omega_{l}\circ Y_{l}\right), \tag{3.1}\] where \(\mathcal{P}_{r}(B)=\arg\min_{A:\,\mathrm{rank}(A)\leq r}\left\|A-B\right\|_{\mathrm{F}}\) is the best rank-\(r\) approximation of \(B\). We now discuss how this enables us to develop an inferential theory for estimating the missing entries. To fix ideas, we shall focus on inferences about the average of a group of entries at a given time period, e.g., \(\sum_{i\in\mathcal{G}}m_{it_{0}}/|\mathcal{G}|\), where \(\mathcal{G}\subseteq[N]\). **Block Missing Patterns.** We shall begin with general block missing patterns, e.g., \(\omega_{it}=1\) if \(t\leq T_{0}\) or \(i\leq N_{0}\). Note that both the single treated period and single treated unit examples from the previous section can be viewed as special cases with \(T_{0}=T-1\) and \(N_{0}=N-1\), respectively.
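As a complement, the projection step (3.1) is simple to implement on top of the soft-impute sketch from Section 2. As before, this is a minimal sketch with our own naming, assuming NumPy and a 0/1 observation mask `omega`.

```python
import numpy as np

def debias_rank_r(M_tilde, Y, omega, r):
    """Projection step (3.1): keep the observed entries of Y, fill the missing
    entries with the nuclear-norm estimate, then take the best rank-r
    approximation P_r via a truncated SVD."""
    B = omega * Y + (1.0 - omega) * M_tilde
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```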
Suppose that we are interested in the inference of the average of a group of entries at the time \(t_{0}\), \(\sum_{i\in\mathcal{G}}m_{it_{0}}/|\mathcal{G}|\), where \(\mathcal{G}\subseteq\{1,\cdots,N\}\) and \(t_{0}>T_{0}\). Similar to before, we split the group of interest, \(\mathcal{G}\), into smaller subgroups, denoted by \(\{\mathcal{G}_{l}\}_{0\leq l\leq L}\) with the convention that \(\mathcal{G}_{0}=\mathcal{G}\cap\{1,\cdots,N_{0}\}\), construct the corresponding submatrices \(\{Y_{l}\}_{1\leq l\leq L}\) as illustrated in Figure 4, and construct \(Y_{0}=[(y_{it})_{i\leq N_{0},t\leq T_{0}}\ \ (y_{it})_{i\leq N_{0},t=t_{0}}]\) if \(\mathcal{G}_{0}\neq\emptyset\). Recall that \(\psi_{\min,O}\) is the smallest nonzero singular value of the \(N_{0}\times T_{0}\) matrix \(M_{O}=(m_{it})_{1\leq i\leq N_{0},1\leq t\leq T_{0}}\). Figure 4: How to construct the submatrix: The blue area is the observed area and the white area is the missing area. We estimate the entries in \(\mathcal{G}_{l}\) using the submatrix \(Y_{l}\) as described in the figure. The following theorem establishes the asymptotic normality of the group average estimator, \(\sum_{i\in\mathcal{G}}\widehat{m}_{it_{0}}/|\mathcal{G}|\). **Theorem 3.1**.: _Assume that_ (i) \(\sigma\kappa^{\frac{23}{4}}\mu^{\frac{3}{2}}r^{\frac{3}{2}}\min\{\sqrt{N_{0}},\sqrt{|\mathcal{G}|T_{0}}\}\max\{N_{0}\sqrt{\log N_{0}},T_{0}\sqrt{\log T_{0}}\}=o_{p}\left(\psi_{\min,O}\min\{N_{0},T_{0}\}\right)\)_;_ (ii) \(\kappa^{\frac{11}{2}}\mu^{3}r^{3}\min\{\sqrt{N_{0}},\sqrt{|\mathcal{G}|T_{0}}\}\max\{\sqrt{N_{0}\log^{3}N_{0}},\sqrt{T_{0}\log^{3}T_{0}}\}=o_{p}\left(\min\{N_{0}^{\frac{3}{2}},T_{0}^{\frac{3}{2}}\}\right)\)_;_ (iii) \(|\mathcal{G}_{l}|\kappa^{\frac{17}{4}}\mu^{\frac{5}{2}}r^{\frac{5}{2}}\max\{\sqrt{N_{0}\log N_{0}},\sqrt{T_{0}\log T_{0}}\}=o_{p}\left(\sqrt{N_{0}}\min\{N_{0},T_{0}\}\right)\)_,_ \(l=1,\ldots,L\)_;_ (iv) _there are constants_ \(C,c>0\) _such that_ \[c\leq\lambda_{\min}\left(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\right)\leq\lambda_{\max}\left(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\right)\leq C,\] \[c\leq\lambda_{\min}\left(\frac{T}{T_{0}}\sum_{t\leq T_{0}}v_{t}v_{t}^{\top}\right)\leq\lambda_{\max}\left(\frac{T}{T_{0}}\sum_{t\leq T_{0}}v_{t}v_{t}^{\top}\right)\leq C;\] (v) \(\sqrt{N}\left\|\bar{u}_{\mathcal{G}}\right\|\geq c\) _and_ \(\sqrt{T}\left\|v_{t_{0}}\right\|\geq c\) _for some constant_ \(c>0\)_, where_ \(\bar{u}_{\mathcal{G}}=|\mathcal{G}|^{-1}\sum_{i\in\mathcal{G}}u_{i}\)_._ _Then, we have_ \[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}\widehat{m}_{it_{0}}-\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}m_{it_{0}}\right)\stackrel{{ D}}{{\longrightarrow}}\mathcal{N}(0,1),\] _where_ \[\mathcal{V}_{\mathcal{G}}=\sigma^{2}\left(\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}+\frac{1}{|\mathcal{G}|}v_{t_{0}}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}v_{t_{0}}\right).\] **Staggered Adoption.** More generally, consider the case of staggered adoption when there are \(D\) adoption time points, \(T_{1}<T_{2}<\cdots<T_{D}\), and \(D\) corresponding groups of treated units, \(G_{1},\ldots,G_{D}\). As in the previous situation, suppose that we are interested in inference for the group average at time \(t_{0}\).
Denote by \(N_{0}\) the number of units that are untreated until \(t_{0}\), and by \(T_{0}\) the number of time periods where \(\{1,\ldots,N_{0}\}\) is untreated. We proceed by first splitting \(\mathcal{G}\) into smaller groups, denoted by \(\{\mathcal{G}_{l}\}_{0\leq l\leq L}\) with the convention that \(\mathcal{G}_{0}=\mathcal{G}\cap\{1,\cdots,N_{0}\}\). In doing so, we want to make sure that all units in each subgroup \(\{\mathcal{G}_{l}\}_{1\leq l\leq L}\) have the same adoption time point, e.g., \(\mathcal{G}_{l}\subseteq G_{d_{l}}\), as illustrated in Figure 5. Denote by \(D_{\mathcal{G}}=\{d_{l}:1\leq l\leq L\}\) and by \(\psi_{\min,O_{d}}\) the smallest singular value of the submatrix \(M_{O_{d}}=(m_{it})_{1\leq i\leq N_{0},1\leq t\leq T_{d}}\). **Theorem 3.2**.: _Assume that for any \(d\in D_{\mathcal{G}}\cup\{0\}\) and \(l=1,\ldots,L\),_ (i) \(\sigma\kappa^{\frac{23}{4}}\mu^{\frac{3}{2}}r^{\frac{3}{2}}\sqrt{N_{0}}\max\{N_{0}\sqrt{\log N_{0}},T_{d}\sqrt{\log T_{d}}\}=o_{p}\left(\psi_{\min,O_{d}}\min\{N_{0},T_{d}\}\right)\)_;_ (ii) \(\kappa^{\frac{11}{2}}\mu^{3}r^{3}\sqrt{N_{0}}\max\{\sqrt{N_{0}\log^{3}N_{0}},\sqrt{T_{d}\log^{3}T_{d}}\}=o_{p}\left(\min\{N_{0}^{\frac{3}{2}},T_{d}^{\frac{3}{2}}\}\right)\)_;_ (iii) \(|\mathcal{G}_{l}|\kappa^{\frac{17}{4}}\mu^{\frac{5}{2}}r^{\frac{5}{2}}\max\{\sqrt{N_{0}\log N_{0}},\sqrt{T_{d_{l}}\log T_{d_{l}}}\}=o_{p}\left(\sqrt{N_{0}}\min\{N_{0},T_{d_{l}}\}\right)\)_;_ (iv) _there are constants_ \(C,c>0\) _such that_ \[c\leq\lambda_{\min}\left(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\right)\leq\lambda_{\max}\left(\frac{N}{N_{0}}\sum_{i\leq N_{0}}u_{i}u_{i}^{\top}\right)\leq C,\] \[c\leq\lambda_{\min}\left(\frac{T}{T_{d}}\sum_{t\leq T_{d}}v_{t}v_{t}^{\top}\right)\leq\lambda_{\max}\left(\frac{T}{T_{d}}\sum_{t\leq T_{d}}v_{t}v_{t}^{\top}\right)\leq C;\] (v) \(\sqrt{N}\left\|\bar{u}_{\mathcal{G}}\right\|\geq c\) _and_ \(\sqrt{T}\left\|v_{t_{0}}\right\|\geq c\) _for some constant_ \(c>0\)_._ _Then, we have_ \[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}\widehat{m}_{it_{0}}-\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}m_{it_{0}}\right)\stackrel{{ D}}{{\longrightarrow}}\mathcal{N}(0,1),\] _where_ \[\mathcal{V}_{\mathcal{G}}=\sigma^{2}\left(\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}+\frac{1}{|\mathcal{G}|}v_{t_{0}}^{\top}\left[\sum_{d\in D_{\mathcal{G}}\cup\{0\}}\frac{|G_{d}\cap\mathcal{G}|}{|\mathcal{G}|}\left(\sum_{s\leq T_{d}}v_{s}v_{s}^{\top}\right)^{-1}\right]v_{t_{0}}\right),\] _with the convention that \(G_{0}=\{1,\ldots,N_{0}\}\)._ Figure 5: Submatrix construction: For each \(1\leq l\leq 3\), we make the submatrix \(Y_{l}\) by putting \(O_{l}\), \(p_{l}\), \(q\), and \(\mathcal{G}_{l}\) together. In addition, we estimate the entries in \(\mathcal{G}_{0}\) using the fully observed part \(Y_{0}=(y_{it})_{1\leq i\leq N_{0},1\leq t\leq T_{0}}\). **Variance Estimation.** In practice, to use the results above for inferences, we also need to estimate the variance. To this end, let \(\widetilde{U}_{l}\widetilde{D}_{l}\widetilde{V}_{l}^{\top}\) be the SVD of \(\mathcal{P}_{r}(\widetilde{M}_{l})\). Denote by \(\widetilde{X}_{l}=\widetilde{U}_{l}\widetilde{D}_{l}^{1/2}\) and \(\widetilde{Z}_{l}=\widetilde{V}_{l}\widetilde{D}_{l}^{1/2}\). They can be viewed as estimates of rescaled left and right singular vectors.
However, as such, they are significantly biased, and the bias can be reduced by considering instead

\[\widehat{X}_{l}=\widetilde{X}_{l}\left(I_{r}+\lambda_{l}(\widetilde{X}_{l}^{\top}\widetilde{X}_{l})^{-1}\right)^{1/2},\;\;\widehat{Z}_{l}=\widetilde{Z}_{l}\left(I_{r}+\lambda_{l}(\widetilde{Z}_{l}^{\top}\widetilde{Z}_{l})^{-1}\right)^{1/2}.\]

We can then use \(\widehat{X}_{l}\) and \(\widehat{Z}_{l}\) in place of the left and right singular vectors in defining \(\mathcal{V}_{\mathcal{G}}\), leading to the following variance estimate

\[\widehat{\mathcal{V}}_{\mathcal{G}}=\widehat{\sigma}^{2}\sum_{i\leq N_{0}}\left(\sum_{0\leq l\leq L}\frac{|\mathcal{G}_{l}|}{|\mathcal{G}|}\bar{\widehat{X}}_{\mathcal{G}_{l}}^{\top}\left(\sum_{j\leq N_{0}}\widehat{X}_{l,j}\widehat{X}_{l,j}^{\top}\right)^{-1}\widehat{X}_{l,i}\right)^{2}+\frac{\widehat{\sigma}^{2}}{|\mathcal{G}|}\sum_{0\leq l\leq L}\frac{|\mathcal{G}_{l}|}{|\mathcal{G}|}\widehat{Z}_{l,t_{0}}^{\top}\left(\sum_{s\leq T_{d_{l}}}\widehat{Z}_{l,s}\widehat{Z}_{l,s}^{\top}\right)^{-1}\widehat{Z}_{l,t_{0}},\]

where \(\bar{\widehat{X}}_{\mathcal{G}_{l}}=\frac{1}{|\mathcal{G}_{l}|}\sum_{j\in\mathcal{G}_{l}}\widehat{X}_{l,j}\), \(\widehat{\sigma}^{2}=\frac{1}{N_{0}T_{0}}\sum_{i\leq N_{0},t\leq T_{0}}\widehat{\epsilon}_{it}^{2}\), and \(\widehat{\epsilon}_{it}=y_{it}-\widehat{m}_{it}\). The following corollary shows that the asymptotic normality established in Theorem 3.2 continues to hold if we use this variance estimate.

**Corollary 3.3**.: _Suppose that the assumptions in Theorem 3.2 hold. In addition, suppose that for any \(d\in D_{\mathcal{G}}\cup\{0\}\),_

\[\sigma\kappa^{5}\mu^{3}r^{3}N_{0}\max\{\sqrt{N_{0}\log N_{0}},\sqrt{T_{d}\log T_{d}}\}=o_{p}\left(\psi_{\min,O_{d}}\min\{N_{0},T_{d}\}\right).\]

_Then_

\[\widehat{\mathcal{V}}_{\mathcal{G}}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}\widehat{m}_{it_{0}}-\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}m_{it_{0}}\right)\stackrel{{D}}{{\longrightarrow}}\mathcal{N}(0,1).\]

Since Theorem 3.1 is a special case of Theorem 3.2, the variance estimator can also be used for Theorem 3.1; it suffices to replace \(T_{d_{l}}\) with \(T_{0}\) in \(\widehat{\mathcal{V}}_{\mathcal{G}}\).

## 4 Application to Tick Size Pilot Program

Our work was motivated by the analysis of the Tick Size Pilot Program, which we shall now discuss in detail to demonstrate how the proposed methodology can be applied to causal panel data models.

### Data and Methods

Background. In October 2016, the SEC launched the Tick Size Pilot Program to evaluate the impact of an increase in tick sizes on the market quality of stocks. As noted before, the pilot consisted of a control group and three treatment groups:

* Control group: stocks were quoted and traded in $0.01 increments;
* Q group: stocks were quoted in $0.05 increments but still traded in $0.01 increments;
* Q+T group: stocks were quoted and traded in $0.05 increments;
* Q+T+TA group: stocks were quoted and traded in $0.05 increments and were also subject to the additional trade-at rule, a regulation which requires exchanges to display the NBBO (National Best Bid and Offer) when they execute a trade at the NBBO.

This pilot program has attracted considerable attention and, since its conclusion in 2018, there has been a growing number of studies on the impact of these changes on market quality, often measured by a liquidity measure such as the effective spread. See, e.g., Albuquerque et al. (2020); Chung et al. (2020); Griffith and Roseman (2019); Rindi and Werner (2019); Werner et al. (2022).
Data. Data for the control variables were obtained from the Center for Research in Security Prices (CRSP), and the daily share-weighted dollar effective spread data were obtained from the Millisecond Intraday Indicators by Wharton Research Data Services (WRDS). A key control variable introduced by Chung et al. (2020) is TBC, which measures the extent to which the new tick size ($0.05) is a binding constraint on the quoted spreads in the pilot periods. It is estimated by the percentage of quoted spreads during the day that are equal to or less than 5 cents, the new minimum quoted tick size under the Q rule. Specifically, we calculate, for each day, the percentage of NBBO updates with quoted spread less than or equal to 5 cents. Using the TBC variable, we can check the effect of an increase in the minimum quoted spread (from 1 cent to 5 cents) on the effective spread. A data-cleaning process similar to that of Chung et al. (2020) yields a total of \(N=1,461\) stocks with \(N_{0}=735\) in the control group, \(N_{1}=254\) in the Q group, \(N_{2}=244\) in the Q+T group, and \(N_{3}=228\) in the Q+T+TA group. Following Chung et al. (2020), data from Oct 1, 2015 to Sep 30, 2016 were used as the pre-pilot periods and Nov 1, 2016 to Oct 31, 2017 as the pilot periods, i.e., \(T_{0}=253\) and \(T_{1}=252\) for daily data. See Chung et al. (2020) for further discussion of data collection.

As is common in previous studies, we consider the daily effective spread in cents as a measure of liquidity. Denote by \(y_{it}^{(d)}\) the potential outcome for stock \(i\) at time \(t\) under treatment \(d\) with the convention that \(d=0,1,2,3\) corresponds to the control, the Q rule, the Q+T rule, and the Q+T+TA rule, respectively. The four matrices \(Y^{(d)}=(y_{it}^{(d)})_{1\leq i\leq N,1\leq t\leq T}\) have block missing patterns, as shown in Figure 6.

Figure 6: Missing pattern in the pilot program: The blue area is the observed area and the white area is the missing area. In the case of the controlled situation (\(d=0\)), we can observe the outcomes of all units in the pre-pilot periods and those of the control group in the pilot periods. In the case of the treated situation by the treatment \(d\), we can only observe the outcomes of the treatment group \(\mathcal{I}_{d}\) in the pilot periods.

Model. Previous studies of the effects of the quote (Q) rule, the trade (T) rule, and the trade-at (TA) rule on the liquidity measure are usually based on traditional regression or difference-in-difference methods, assuming that the treatment effect is constant across all units and time periods. For instance, Chung et al. (2020) postulated \(y_{it}=y_{it}^{(d)}\) if unit \(i\) receives treatment \(d\) at time \(t\), where the potential outcomes

\[y_{it}^{(d)}=m_{it}^{(d)}+x_{it}^{\top}\beta+\epsilon_{it}\]

and

\[m_{it}^{(d)}=\mu^{(d)}+\alpha_{i}+\delta_{t}. \tag{4.1}\]

Here, \(\mu^{(0)}=0\); \(\mu^{(1)},\mu^{(2)},\mu^{(3)}\), the \(\alpha_{i}\)s, and the \(\delta_{t}\)s are unknown parameters; and \(x_{it}\) is a set of control variables that includes typical stock characteristics like stock prices and trading volumes, as well as the TBC variable defined above. See Section E in the Appendix for further details.
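As an aside on the data construction, the daily TBC variable described above can be computed directly from intraday NBBO quote data. The following is a minimal sketch, assuming a hypothetical table of NBBO updates with columns `symbol`, `date`, `bid`, and `ask` (not the exact WRDS schema):

```python
import pandas as pd

def daily_tbc(nbbo: pd.DataFrame) -> pd.DataFrame:
    """TBC: percentage of NBBO updates per stock-day whose quoted
    spread (ask - bid) is less than or equal to $0.05."""
    nbbo = nbbo.copy()
    nbbo["binding"] = (nbbo["ask"] - nbbo["bid"]) <= 0.05
    return (
        nbbo.groupby(["symbol", "date"])["binding"]
        .mean()                 # fraction of binding quote updates
        .mul(100)               # expressed as a percentage
        .rename("TBC")
        .reset_index()
    )
```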
It is worth noting that, in addition to the treatment effects (\(\mu^{(1)}\), \(\mu^{(2)}\), and \(\mu^{(3)}\)), their differences \(\theta^{(d)}:=\mu^{(d)}-\mu^{(d-1)}\) are also of interest, as they represent the treatment effects of the quote rule, the trade rule, and the trade-at rule, respectively. However, (4.1) fails to account for the significant heterogeneity in the treatment effects across units and time periods. To this end, we shall consider a more flexible model:

\[m_{it}^{(d)}=\zeta_{i}^{\top}\eta_{t}^{(d)},\qquad d=0,1,2,3, \tag{4.2}\]

where \(\zeta_{i}\) is an \(r\)-dimensional vector of (latent) unit-specific characteristics and \(\eta_{t}^{(d)}\) is the corresponding vector of coefficients of \(\zeta_{i}\) at time \(t\) in the potential situation \(d\). As we shall see later in this section, (4.2) allows us to gain more insights into the treatment effects of the pilot program.

One of the key assumptions of Model (4.2) is that the subspace spanned by the left singular vectors of \(M^{(d)}=(m_{it}^{(d)})_{1\leq i\leq N,1\leq t\leq T}\) for all \(d=1,2,3\) is included in the subspace spanned by the left singular vectors of \(M^{(0)}\). Agarwal et al. (2020) propose a subspace inclusion test to check the validity of this assumption. We carried out this test on the pilot data, which confirms that this is a reasonable assumption.

We note that similar low-rank models have also been considered earlier by Agarwal et al. (2020) and Chernozhukov et al. (2021). However, it is unclear how their methodology can be adapted for the analysis of the Tick Size Program. For example, Chernozhukov et al. (2021) impose conditions on the missing pattern that are clearly violated by the pilot data; Agarwal et al. (2020) only study the average treatment effect and so cannot be used to assess the heterogeneity or dynamics of the treatment effects across units and time periods, respectively.

Estimation. We now discuss how we can apply the methodology from the previous sections to analyze the tick size program, and in particular to estimate and make inferences about (4.2). More specifically, we are interested in estimating the group-averaged treatment effects: for a group \(\mathcal{G}\) of treated units of interest,

\[\mu_{t}^{(d)}:=\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}[m_{it}^{(d)}-m_{it}^{(0)}],\]

and their differences:

\[\theta_{t}^{(d)}:=\mu_{t}^{(d)}-\mu_{t}^{(d-1)},\]

for \(t>T_{0}\). In particular, when \(\mathcal{G}\) is a single unit, this reduces to the individual treatment effect, and when \(\mathcal{G}\) is the group of all treated units, it becomes the cross-sectional averaged treatment effect. To this end, we shall derive estimates for \(m_{it}^{(d)}\) under Model (4.2).

First, note that, for this particular application, one of the covariates (TBC) is only present for the pilot periods. Therefore, we cannot hope to estimate the regression coefficient \(\beta\) using the pre-pilot data alone, as suggested by Bai and Ng (2021). Nonetheless, under (4.2), the \(y_{it}\)s follow an interactive fixed effect model:

\[y_{it}=x_{it}^{\top}\beta+L_{it}+\epsilon_{it}\]

for some low-rank component \(L_{it}\), and therefore the regression coefficient \(\beta\) can be estimated at the rate of \(O_{p}(1/\sqrt{NT})\). See Bai (2009) for details. This is much faster than the rate for the estimates of \(m_{it}^{(d)}\). For brevity, we shall therefore treat the regression coefficient \(\beta\) as known in what follows, without loss of generality.
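For concreteness, \(\beta\) in the interactive fixed effect model can be computed by alternating between an OLS update for \(\beta\) and a rank-\(r\) truncated SVD update for the low-rank component, in the spirit of Bai (2009). The sketch below is one simple variant of this idea under the simplifying assumption of a fully observed balanced panel; it is illustrative rather than the exact estimator used here.

```python
import numpy as np

def estimate_beta(Y, X, r, n_iter=100):
    """Alternating least squares for Y = sum_k X[k]*beta[k] + L + noise.

    Y: (N, T) outcome panel; X: (p, N, T) covariate panels; r: rank of L.
    Returns (beta_hat, L_hat). A sketch in the spirit of Bai (2009)."""
    p = X.shape[0]
    Xmat = X.reshape(p, -1).T            # (N*T, p) stacked design matrix
    beta = np.linalg.lstsq(Xmat, Y.ravel(), rcond=None)[0]
    L = np.zeros_like(Y)
    for _ in range(n_iter):
        # Rank-r truncated SVD of the current residual panel.
        U, s, Vt = np.linalg.svd(Y - np.einsum("pnt,p->nt", X, beta),
                                 full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # OLS of the de-factored residuals on the covariates.
        beta = np.linalg.lstsq(Xmat, (Y - L).ravel(), rcond=None)[0]
    return beta, L
```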
For \(d=0\), we can apply the method proposed in the previous sections to the potential outcome panel \(\tilde{Y}^{(0)}=(y_{it}^{(0)}-x_{it}^{\top}\beta)_{1\leq i\leq N,1\leq t\leq T}\). As illustrated in Figure 6, it has a block missing pattern with \(\omega_{it}^{(0)}=1\) if and only if \(t\leq T_{0}\) or \(i\leq N_{0}\). As such, we can derive estimates \(\widehat{m}_{it}^{(0)}\) for \(t>T_{0}\).

When \(d>0\), we can only observe \(y_{it}^{(d)}\) if unit \(i\) receives treatment \(d\) and \(t>T_{0}\), so our method cannot be applied directly. Instead, we shall combine all observations from the pre-pilot periods with these observations to form a panel \(\tilde{Y}^{(d)}\) whose \((i,t)\) entry is \(y_{it}^{(d)}-x_{it}^{\top}\beta\) if \(i\) receives treatment \(d\) and \(t>T_{0}\), is \(y_{it}^{(0)}-x_{it}^{\top}\beta\) if \(t\leq T_{0}\), and is missing otherwise. Let \(\tilde{M}^{(d)}\) be an \(N\times T\) matrix whose \((i,t)\) entry is \(m_{it}^{(0)}\) if \(t\leq T_{0}\), and \(m_{it}^{(d)}\) otherwise. \(\tilde{Y}^{(d)}\) can be viewed as a noisy observation of \(\tilde{M}^{(d)}\) with a block missing pattern: \(\omega_{it}^{(d)}=1\) if and only if unit \(i\) receives treatment \(d\) or \(t\leq T_{0}\). Under (4.2), \(\tilde{m}_{it}^{(d)}=\zeta_{i}^{\top}\tilde{\eta}_{t}^{(d)}\) where \(\tilde{\eta}_{t}^{(d)}=\eta_{t}^{(0)}\) if \(t\leq T_{0}\) and \(\eta_{t}^{(d)}\) otherwise. Therefore, we can again apply our method to \(\tilde{Y}^{(d)}\) to obtain estimates \(\widehat{m}_{it}^{(d)}\) for \(t>T_{0}\). We shall then proceed to estimate the treatment effects by

\[\widehat{\mu}_{t}^{(d)}:=\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}[\widehat{m}_{it}^{(d)}-\widehat{m}_{it}^{(0)}]\qquad\text{and}\qquad\widehat{\theta}_{t}^{(d)}:=\frac{1}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}[\widehat{m}_{it}^{(d)}-\widehat{m}_{it}^{(d-1)}].\]

Inferences. We can also use the results from the last section to derive the asymptotic distributions of \(\widehat{\mu}_{t}^{(d)}\) and \(\widehat{\theta}_{t}^{(d)}\). More specifically, let \(M\) be an \(N\times(T+3T_{1})\) matrix that combines all observed outcomes: the first \(T\) columns of \(M\) consist of the potential outcomes under the control for the whole period \((m_{it}^{(0)})_{i\leq N,t\leq T}\), the next \(T_{1}\) columns the potential outcomes under the Q rule for the pilot periods \((m_{it}^{(1)})_{i\leq N,t>T_{0}}\), followed by those under the Q+T rule, again for the pilot periods \((m_{it}^{(2)})_{i\leq N,t>T_{0}}\), and finally those under the Q+T+TA rule \((m_{it}^{(3)})_{i\leq N,t>T_{0}}\). Note that \(M\) is also a rank-\(r\) matrix. Let \(M=UDV^{\top}\) be its singular value decomposition. Denote by \(u_{i}^{\top}\) and \(v_{t}^{\top}\) the \(i\)-th row vector of \(U\) and the \(t\)-th row vector of \(V\), respectively. In addition, denote by \(\mathcal{I}_{d}\) the group of units treated by treatment \(d\), with the convention that \(\mathcal{I}_{0}\) is the control group.
Then, under suitable conditions, we have

\[\mathcal{V}_{\mu}^{-\frac{1}{2}}\left(\widehat{\mu}_{t_{0}}^{(d)}-\mu_{t_{0}}^{(d)}\right)\overset{D}{\longrightarrow}\mathcal{N}(0,1),\ \ \mathcal{V}_{\theta}^{-\frac{1}{2}}\left(\widehat{\theta}_{t_{0}}^{(d)}-\theta_{t_{0}}^{(d)}\right)\overset{D}{\longrightarrow}\mathcal{N}(0,1),\]

with \(\mathcal{V}_{\mu}=\mathcal{V}_{\mathcal{G}}(d,0)\) and \(\mathcal{V}_{\theta}=\mathcal{V}_{\mathcal{G}}(d,d-1)\), where

\[\mathcal{V}_{\mathcal{G}}(d,d^{\prime})=\sigma^{2}\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I}_{d}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}+\sigma^{2}\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I}_{d^{\prime}}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}+\frac{\sigma^{2}}{|\mathcal{G}|}\left(v_{(d\cdot T_{1}+t_{0})}-v_{(d^{\prime}\cdot T_{1}+t_{0})}\right)^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}\left(v_{(d\cdot T_{1}+t_{0})}-v_{(d^{\prime}\cdot T_{1}+t_{0})}\right).\]

Similar to before, the variance can be replaced by its estimate. Due to space limits, we defer the formal statements and proofs, as well as the derivation of the variance estimator, to the Appendix.

### Empirical Findings

Fixed Effects vs Interactive Effects. We begin with some exploratory analyses to illustrate the impact of the pilot program. The top left panel of Figure 7 gives the boxplots of the difference in the time-averaged effective spread after and before the pilot. There are a few units with differences that are much larger in magnitude than usual. For better visualization, the top right panel zooms in on differences between -10 cents and 10 cents. Taken together, it is clear that the three treatment groups have a significant impact on the effective spread. The treatment effect of the pilot, however, differs between units. The bottom panels of Figure 7 show barplots of the time series of the effective spread of two typical stocks. The impact of the treatment is much clearer for the stock depicted in the bottom right panel.

Figure 7: Top panels: Boxplot of difference in averaged effective spread after and before the tick size program. Bottom panels: two stocks treated with Q rule and with different treatment effects.

The difference in treatment effect among the units suggests that the interactive effect model is more suitable than the fixed effect model used in the previous studies. Note that the fixed effect model (4.1) can be viewed as a special case of the interactive effect model (4.2) with \(\zeta_{i}=[1\ \ \alpha_{i}]^{\top}\), \(\eta_{t}^{(d)}=[\delta_{t}+\mu^{(d)}\ \ 1]^{\top}\). We conducted a Hausman-type model specification test to further show that the fixed effect model is inadequate in capturing the heterogeneity of the treatment effect. More specifically, denote our estimator of \(\theta_{it}^{(d)}\coloneqq m_{it}^{(d)}-m_{it}^{(d-1)}\) by \(\hat{\theta}_{it}^{(d)}\) and the two-way fixed effect estimator of \(\theta^{(d)}\coloneqq\mu^{(d)}-\mu^{(d-1)}(=m_{it}^{(d)}-m_{it}^{(d-1)})\) in Model (4.1) by \(\tilde{\theta}^{(d)}\). We considered the following test statistic for model specification:

\[T\text{-stat}_{\text{ms}}=\max_{i\in\mathcal{N}_{tr},\,T_{0}<t\leq T}\max_{1\leq d\leq 3}|\hat{\tau}_{it}^{(d)}|\]
where \(\mathcal{N}_{tr}\) is the group of all treated stocks, \(\hat{\tau}^{(d)}_{it}=\hat{\mathcal{V}}^{-1/2}_{d,it}(\hat{\theta}^{(d)}_{it}-\tilde{\theta}^{(d)})\), and \(\hat{\mathcal{V}}_{d,it}\) is the estimator of the asymptotic variance of \(\hat{\theta}^{(d)}_{it}-\tilde{\theta}^{(d)}\). Moreover, to test whether \(\theta^{(d)}_{it}\) is time and unit invariant or not, we also considered the test statistic

\[T\text{-stat}_{(d)}=\max_{i\in\mathcal{N}_{tr},\,T_{0}<t\leq T}\left|\hat{\mathcal{V}}^{-1/2}_{d,it}(\hat{\theta}^{(d)}_{it}-\bar{\hat{\theta}}^{(d)})\right|\]

where \(\bar{\hat{\theta}}^{(d)}=\frac{1}{|\mathcal{N}_{tr}|T_{1}}\sum_{i\in\mathcal{N}_{tr},T_{0}<t\leq T}\hat{\theta}^{(d)}_{it}\). We derived the large-sample distributions of the test statistics under the null, and the corresponding critical values, using the Gaussian bootstrap method (see, e.g., Belloni et al., 2018). The null hypothesis that Model (4.1) is well specified and the null hypotheses that \(\{\theta^{(d)}_{it}\}_{1\leq d\leq 3}\) are time and unit invariant are all rejected at the \(1\%\) significance level, again indicating that Model (4.1) is misspecified and that \(\{\theta^{(d)}_{it}\}_{1\leq d\leq 3}\) vary across time and units.

To further illustrate the heterogeneity of the treatment effect, we compute the estimated unit-specific treatment effects averaged over time, \(\bar{\hat{\theta}}^{(d)}_{i}:=T_{1}^{-1}\sum_{t>T_{0}}\hat{\theta}^{(d)}_{it}\), and Figure 8 gives the kernel density estimates of these unit-specific treatment effects for the Q rule, T rule, and TA rule, respectively. It is evident from these density plots that there is a considerable amount of variation and skewness among the estimated treatment effects across units.

Figure 8: Kernel density estimates of the estimated unit-specific treatment effect averaged over time.

Note that a key assumption behind the interactive effect model is that the unit-specific characteristic \(\zeta_{i}\) remains the same across all treatment groups as well as the control group, so that it can be learned from the pre-pilot periods and utilized for the estimation of \(m_{it}^{(d)}\) during the pilot period. This amounts to the assumption that the left singular space of \(M^{(d)}\) is included in that of \(M^{(0)}\). To check the validity of the assumption, we carry out the subspace inclusion test for \(d=1,2,3\) introduced in Agarwal et al. (2020); the test statistics are 0.15, 0.19, and 0.11, with corresponding critical values at the 95% level of 0.43, 0.48, and 0.28. Additionally, we also confirm that the ranks of \((m_{it}^{(0)})_{i\in\mathcal{I}_{d},t\leq T_{0}}\) and \([(m_{it}^{(0)})_{i\in\mathcal{I}_{d},t\leq T_{0}}\ \ (m_{it}^{(d)})_{i\in\mathcal{I}_{d},t>T_{0}}]\) are the same for all \(1\leq d\leq 3\) using a typical rank estimation method (e.g., Ahn and Horenstein, 2013), which implies the validity of this assumption. The rank test also indicates that \(r=1\) is an appropriate choice for the pilot data. The associated \(R^{2}\) is 0.79. This is to be compared with the fixed effect model (4.1), whose \(R^{2}\) is 0.67 with the same degrees of freedom. This again suggests that the interactive effect model (4.2) is preferable.

Dynamics of Treatment Effects. Next, we examine the dynamics of the treatment effects of the Q rule, the T rule, and the TA rule. To better visualize the dynamics, we plot in Figure 9 the estimated daily treatment effects along with their 95% confidence intervals, adjusted with Bonferroni correction.
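For the daily bands in Figure 9, the Bonferroni adjustment amounts to using the uniform critical value \(\Phi^{-1}(1-0.025/252)\) across the 252 pilot days. A minimal sketch of the band construction, assuming arrays `theta_hat` and `se` of daily point estimates and standard errors:

```python
import numpy as np
from scipy.stats import norm

def bonferroni_band(theta_hat, se, alpha=0.05):
    """Uniform 100(1-alpha)% confidence band over all time points,
    using the Bonferroni-adjusted critical value Phi^{-1}(1 - alpha/(2T))."""
    T = len(theta_hat)
    z = norm.ppf(1 - alpha / (2 * T))  # e.g., Phi^{-1}(1 - 0.025/252) for T = 252
    return theta_hat - z * se, theta_hat + z * se
```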
To gain further insights, we also plot in Figure 10 the weekly averages of the estimated daily treatment effects, again with their 95% confidence intervals adjusted with Bonferroni correction. Note that to do so, we need to consider the estimator of the form

\[\frac{1}{|\mathcal{S}|}\frac{1}{|\mathcal{N}_{tr}|}\sum_{t\in\mathcal{S}}\sum_{i\in\mathcal{N}_{tr}}\hat{\theta}_{it}^{(d)}\]

where \(\mathcal{S}\) is a week of interest. We can generalize the inferential theory from the previous section straightforwardly with the new variance:

\[\sum_{\rho\in\{d,d-1\}}\left[\frac{\sigma^{2}}{|\mathcal{S}|}\bar{u}_{\mathcal{N}_{tr}}^{\top}\left(\sum_{j\in\mathcal{I}_{\rho}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{N}_{tr}}\right]+\frac{\sigma^{2}}{|\mathcal{N}_{tr}|}\bar{v}_{\text{diff}}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}\bar{v}_{\text{diff}},\]

where

\[\bar{v}_{\text{diff}}=\frac{1}{|\mathcal{S}|}\sum_{t\in\mathcal{S}}v_{(d\cdot T_{1}+t)}-v_{((d-1)\cdot T_{1}+t)}.\]

\(\theta_{it}^{(2)}\) and \(\theta_{it}^{(3)}\) can be interpreted as the treatment effects of the T rule and the TA rule, respectively. As predicted by theory in the literature, the estimated treatment effects of the T rule are positive most of the time. The T rule has a negative effect on price improvements, as liquidity providers are less likely to offer them when the minimum possible price improvement is larger. For example, if the T rule makes the minimum possible price improvement 5 cents, liquidity providers who would have been willing to provide less than 5 cents of price improvement are unlikely to offer any price improvement at all. Since the effective spread is "quoted spread - price improvement", we can expect the treatment effects of the T rule to be positive. Here, we use the following definitions: \(\texttt{Quoted Spread}_{t}=A_{t}-B_{t}\), \(\texttt{Effective Spread}_{t}=2(P_{t}-\frac{A_{t}+B_{t}}{2})\), and \(\texttt{Price Improvement}_{t}=2(A_{t}-P_{t})\), where \(A_{t}\) is the national best ask price at time \(t\), \(B_{t}\) is the national best bid price at time \(t\), and \(P_{t}\) is the transaction price.

Figure 9: The dynamics of the daily cross-sectional average of \(\theta_{it}^{(d)}\): For the confidence band, we use the 95% uniform critical value, \(\Phi^{-1}(1-0.025/252)\). The dots denote the daily cross-sectional average of \(\theta_{it}^{(d)}\).

Figure 10: The dynamics of the weekly cross-sectional average of \(\theta_{it}^{(d)}\): For the confidence band, we use the 95% uniform critical value, \(\Phi^{-1}(1-0.025/53)\). The dots denote the weekly cross-sectional average of \(\theta_{it}^{(d)}\).

Interestingly, one can observe that the periods associated with large effects of the T rule usually correspond to periods with large trading volumes. In particular, there were large trading volumes in November and early and mid-December in 2016, and in March, mid and late June, early August, early September, and late October in 2017; by and large, these periods coincide with the periods with a larger impact of the T rule. Overall, the correlation coefficient between the estimated effect of the T rule and the trading volume is 0.33. This suggests that the effect of the T rule becomes stronger when transactions are more active. This agrees with the well-known fact that price improvement is more likely to occur when stocks are actively traded, and therefore the effect of the T rule through price improvement is amplified when trades are active.
Moreover, we find that the treatment effects of the TA rule are negative most of the time. The TA rule increases visible liquidity by exposing hidden liquidity because, under the TA rule, a venue must display the best bid or ask in order to execute incoming market orders at the NBBO. This implies a decrease in the quoted spread and a smaller room for price improvements. Chung et al. (2020) expect the effect on the quoted spread to be greater than the effect on price improvements, so that the TA rule decreases the effective spread. Our result corroborates their conjecture.

Further discussion of the empirical findings is given in Section E in the Appendix.

## 5 Simulated Experiments

To further demonstrate the practical merits and finite-sample performance of our methodology, we conducted several sets of simulation experiments.

### Basic Setting

The first set of simulations was designed to compare the performance of the proposed estimator with that of other existing estimators in a staggered adoption setting. Here, the size of the "no adoption" group (G0) was set to 200. There are three adoption groups (G1, G2, G3), and the size of each adoption group was set to 100. The number of time points was 500, with G1 adopting the intervention at the 201st time period, G2 at the 301st time period, and G3 at the 401st time period. The potential outcome under the control follows a low-rank model \(y_{it}^{(0)}=\zeta_{i}^{\top}\eta_{t}^{(0)}+\varepsilon_{it}\), where the noise \(\varepsilon_{it}\) was sampled independently from the standard normal distribution. The unit-specific characteristics \(\zeta_{i}\)s were sampled independently from \(\mathcal{N}((2.5/\sqrt{2},2.5/\sqrt{2})^{\top},I_{2})\) for G0, \(\mathcal{N}((1/\sqrt{2},1/\sqrt{2})^{\top},I_{2})\) for G1, \(\mathcal{N}((1.5/\sqrt{2},1.5/\sqrt{2})^{\top},I_{2})\) for G2, and \(\mathcal{N}((\sqrt{2},\sqrt{2})^{\top},I_{2})\) for G3. In addition, the corresponding coefficients \(\eta_{t}^{(0)}\)s were sampled independently from \(\mathcal{N}((1/\sqrt{2},1/\sqrt{2})^{\top},I_{2})\).

To fix ideas, we consider estimating the missing potential outcome \(m_{it}^{(0)}\) of a randomly chosen unit in G2 during the last time period (\(t=500\)) using different estimators, including ours (CY) along with those from Bai and Ng (2021) (BN), Agarwal et al. (2021) (ADSS), and Athey et al. (2021) (ABDIK). For ADSS, following the recommendation in Agarwal et al. (2021), we set the number of sub-subgroups \(K\) to be \(K\asymp|AR^{(k)}|_{o}^{1/3}\). Table 1 reports the RMSE, summarized from 1,000 simulation runs. The performance of CY, BN, and ADSS is superior to that of ABDIK, with CY slightly better than BN and ADSS. In addition, we recorded the coverage probabilities of the (asymptotic) confidence intervals associated with each method, with the exception of ABDIK, for which such inferential tools have not been developed in the literature. From Table 2, we can see that the coverage probabilities of ADSS are not close to the nominal level, indicating that the asymptotic distributional properties may not provide good approximations in this setting. On the other hand, our method and BN are more accurate, with ours more closely following the target probabilities.

\begin{table}
\begin{tabular}{c|c c c c} \hline \hline  & CY & BN & ADSS & ABDIK \\ \hline RMSE & 0.1157 & 0.1176 & 0.1193 & 0.3507 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Root mean square error for different methods.

\begin{table}
\begin{tabular}{c|c c c} \hline \hline  & \multicolumn{3}{c}{Target probability} \\ Estimator & 90\% & 95\% & 99\% \\ \hline CY & 90.50\% & 95.90\% & 99.30\% \\ BN & 94.20\% & 97.50\% & 99.50\% \\ ADSS & 68.90\% & 76.10\% & 84.80\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Coverage probability of the confidence interval.

### Interactive Effect Model

Our next set of simulations mimics the setting of the pilot program studied in the previous section. More specifically, we considered Model (4.2) with two treatment groups, \(\mathcal{I}_{1}\) and \(\mathcal{I}_{2}\), and a control group, \(\mathcal{I}_{0}\).
Each treatment group receives a different treatment in the pilot periods. We set \(r=2\) and generated the unit-specific characteristics from \(\zeta_{i}\sim\mathcal{N}((1/\sqrt{2},1/\sqrt{2})^{\top},I_{2})\), the noise from \(\varepsilon_{it}\sim\mathcal{N}(0,1)\), and the coefficients from \(\eta_{t}^{(0)}\sim\mathcal{N}((1/\sqrt{2},1/\sqrt{2})^{\top},I_{2})\), \(\eta_{t}^{(1)}\sim\mathcal{N}((1.5/\sqrt{2},1.5/\sqrt{2})^{\top},I_{2})\), and \(\eta_{t}^{(2)}\sim\mathcal{N}((\sqrt{2},\sqrt{2})^{\top},I_{2})\). In addition, two control variables were included: \(x_{1,it}\) is generated from \(\mathcal{N}(0,1)\), while \(x_{2,it}\) is generated from \(\mathcal{N}(0,1)\) if \(t\) is in the pilot periods and set to 0 otherwise. We set the regression coefficient \(\beta=(1,1)^{\top}\) and estimated it using the interactive fixed effect estimation with data from the whole period. The sizes of \(\mathcal{I}_{0}\), \(\mathcal{I}_{1}\), and \(\mathcal{I}_{2}\) were all set to 250, and the numbers of pre-pilot periods and pilot periods were both set to 250. As before, we estimated \(\mu_{it}^{(d)}\) and \(\theta_{it}^{(d)}\) for \(1\leq d\leq 2\) of a randomly chosen unit in \(\mathcal{I}_{2}\) at the last period (\(t=500\)).

Table 3 reports the coverage probabilities of our method for \(\mu_{it}^{(1)}\), \(\mu_{it}^{(2)}\), and \(\theta_{it}^{(2)}\), summarized from 1,000 simulation runs. It is evident that our coverage probabilities are quite close to the corresponding target probabilities. This is complemented by Figure 11, which shows the histograms of the standardized estimates (t-statistics) along with the standard normal distribution, again confirming the asymptotic normality of our estimates.

\begin{table}
\begin{tabular}{c|c c c} \hline \hline  & \multicolumn{3}{c}{Target probability} \\ Parameter & 90\% & 95\% & 99\% \\ \hline \(\mu_{it}^{(1)}\) (\(=\theta_{it}^{(1)}\)) & 90.20\% & 95.60\% & 98.70\% \\ \(\mu_{it}^{(2)}\) & 90.70\% & 95.80\% & 99.00\% \\ \(\theta_{it}^{(2)}\) & 89.20\% & 94.20\% & 98.50\% \\ \hline \end{tabular}
\end{table}
Table 3: Coverage probability of the confidence interval.

Figure 11: Histograms for standardized estimates (t-statistics).

### Simulated Tobacco Sales Experiments

Our final experiment is similar to that from Agarwal et al. (2021) and Athey et al. (2021) and is based on the tobacco sales data of Abadie et al. (2010). In 1988, California introduced the first anti-tobacco legislation in the United States (Proposition 99), and to study the effect of this legislation on tobacco sales, Abadie et al. (2010) used per capita cigarette sales data collected across 39 U.S. states from 1970 to 2000. We considered the time horizon of \(n=31\) years and restricted our focus to the \(m=38\) untreated states (excluding California) in their dataset.
This data was encoded into a \(38\times 31\) matrix, \(Y\), where the entry \(y_{it}\) represents the potential outcome of per capita cigarette sales (in packs) for state \(i\) in year \(t\) under control, i.e., without any intervention in place. To generate MNAR data, we artificially introduced interventions to a subset of states, where the probability that a state adopts an intervention (e.g., a tobacco control program) depends on its change in cigarette sales pre-1986 and post-1986. More specifically, we considered the following adoption protocol: First, we clustered states into four categories -- severe, moderate, mild, and good -- based on their percentage change in average cigarette sales during 1986-2000 compared to that during 1970-1985. The severe states are the states where average cigarette sales are hardly reduced (\(-0\%\sim-10\%\): MO, WV, SC, AL, AR, TN), and the moderate states are the states whose percentage change is between \(-10\%\) and \(-15\%\) (KY, DE, GA, IN, OH, MS). The mild states are the states where the percentage change is between \(-15\%\) and \(-20\%\) (NE, LA, IA, SD, WI, PA). The rest are good states (\(-20\%\sim\)). We then designated the timing and probability of intervention for the mild, moderate, severe, and good states differently. Half of the severe states adopt an intervention in 1986 and the other half in 1991. Half of the moderate states adopt the intervention in 1991 and the other half in 1996. Half of the mild states adopt the intervention in 1996, and the other half do not adopt the intervention. In addition, the good states do not adopt the intervention at all. This setup reflects the scenario in which a state whose average sales may not be reduced sufficiently without the intervention is more likely to adopt the intervention early.

Table 4 shows the average RMSE of the missing components caused by the intervention over 10 experiments. Here, the missing components are the potential "control (no adoption)" outcomes in the intervention period. The only randomization lies in the resampling of the observation patterns. We can see that ABDIK performs relatively poorly. In addition, the performance of our estimator is slightly better than that of BN and ADSS.

\begin{table}
\begin{tabular}{c|c c c c} \hline \hline  & CY & BN & ADSS & ABDIK \\ \hline average RMSE & 18.362 (0.431) & 19.692 (0.400) & 19.619 (0.432) & 25.522 (0.414) \\ \hline \end{tabular}
\end{table}
Table 4: Average RMSE: The values inside brackets are the standard errors.

## 6 Concluding Remarks

This article develops an inference framework for matrix completion when missingness is not at random and without the need for strong signals. One of the key observations behind our development is that if the number of missing entries is small enough compared to the size of the panel, they can be well estimated even if missingness is not at random. We judiciously divide the missing entries into smaller groups and use this observation to provide accurate estimates and efficient inferences. Moreover, we showed that our proposed estimate, even with fairly weak signals, is asymptotically normal after suitable debiasing. As an application, we studied the treatment effects in the tick size pilot program, an experiment conducted by the SEC to assess the impact of tick size extension on the market quality of small and illiquid stocks from 2016 to 2018.
While previous studies on this program were based on traditional regression or difference-in-difference methods, assuming that the treatment effect is invariant with respect to time and unit, we observed significant heterogeneity in the treatment effects and gained further insights about the treatment effects in the pilot program using our estimation method. Lastly, we conducted simulation experiments to further demonstrate the practical merits of our methodology.

## Appendix A Estimation of submatrix where missing occurs only at one column

We shall first present the statistical properties of our estimators when missing occurs only at one column, since the estimation in this case serves as the main tool for dealing with more general and common missing patterns. More specifically, we consider the estimation of an arbitrary \(N_{o}\times T_{o}\) submatrix of \(M\) that is constructed using the indices \(\mathcal{I}_{o}\subseteq[N]\) and \(\mathcal{T}_{o}\subseteq[T]\). Without loss of generality, assume that \(\mathcal{I}_{o}=\{1,\cdots,N_{o}\}\) and \(\mathcal{T}_{o}=\{1,\cdots,T_{o}\}\). The model we consider is the following:

\[Y_{o}=M_{o}+\mathcal{E}_{o}=X_{o}Z_{o}^{\top}+\mathcal{E}_{o},\]

where \(X_{o}=U_{o}D_{o}^{\frac{1}{2}}\) and \(Z_{o}=V_{o}D_{o}^{\frac{1}{2}}\), and \(U_{o}D_{o}V_{o}^{\top}\) is the SVD of \(M_{o}=(m_{it})_{i\in\mathcal{I}_{o},t\in\mathcal{T}_{o}}\). Denote by \(\Omega_{o}=(\omega_{it})_{i\in\mathcal{I}_{o},t\in\mathcal{T}_{o}}\) the observation pattern, which we treat as given. Importantly, missing occurs only in the column \(t_{o}\in\mathcal{T}_{o}\): \(\omega_{it}=0\) if \(i\in\mathcal{Q}_{o}\subset\mathcal{I}_{o}\) and \(t=t_{o}\), and \(\omega_{it}=1\) otherwise. Denote the number of missing entries by \(|\mathcal{Q}_{o}|=\vartheta_{o}\). In addition, we put the subscript '\(o\)' on all parameters regarding the submatrix \(M_{o}\) to distinguish them from the parameters of the full matrix \(M\).

### Definitions of estimators

Our proof follows a general strategy recently developed by Chen et al. (2019, 2020): we first establish the statistical properties of a certain non-convex estimator and then show that it is close to the nuclear norm penalized estimator. There are two main reasons why this approach is more suitable for our purpose than the usual techniques based on the restricted strong convexity (RSC) condition; see, e.g., Negahban and Wainwright (2012); Klopp (2014); Athey et al. (2021); Hamdi and Bayati (2022). First, this approach is more amenable to deriving estimation errors in the max norm. Moreover, the RSC-based approach has difficulty handling situations where the observation probabilities of some entries are deterministically zero. We shall show that even though the strategy was developed for missing at random, it can be used to deal with deterministic missing patterns, and in particular when some entries are missing with probability one.

Recall that the nuclear norm penalized estimator is

\[\widetilde{M}_{o}\coloneqq\operatorname*{arg\,min}_{A\in\mathbb{R}^{N_{o}\times T_{o}}}\ \ \frac{1}{2}||\Omega_{o}\circ(A-Y_{o})||_{F}^{2}+\lambda_{o}||A||_{*},\]

and the corresponding debiased estimator is

\[\widehat{M}_{o}\coloneqq\mathcal{P}_{r}\left[\mathcal{P}_{\Omega_{o}^{c}}(\widetilde{M}_{o})+\mathcal{P}_{\Omega_{o}}(Y_{o})\right].\]

Here, \(\mathcal{P}_{\Omega_{o}}(B)=\Omega_{o}\circ B\), and \(\mathcal{P}_{\Omega_{o}^{c}}(B)=\Omega_{o}^{c}\circ B\) where \(\Omega_{o}^{c}=\mathbf{11}^{\top}-\Omega_{o}\).
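Both \(\widetilde{M}_{o}\) and \(\widehat{M}_{o}\) are straightforward to compute. As a concrete reference point, the sketch below solves the nuclear norm penalized problem by proximal gradient descent (singular value soft-thresholding) and then applies the rank-\(r\) debiasing step; it is a minimal illustration with a fixed step size and iteration count, not a tuned implementation.

```python
import numpy as np

def svt(A, tau):
    """Singular value soft-thresholding: prox of tau*||.||_* at A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_estimator(Y, Omega, lam, r, n_iter=500, step=1.0):
    """M_tilde: prox-gradient for 0.5*||Omega o (A - Y)||_F^2 + lam*||A||_*;
    M_hat: rank-r projection of Y on observed entries combined with
    M_tilde on unobserved entries, i.e. P_r[P_{Omega^c}(M_tilde) + P_{Omega}(Y)]."""
    A = np.zeros_like(Y)
    for _ in range(n_iter):
        grad = Omega * (A - Y)              # gradient of the smooth part
        A = svt(A - step * grad, step * lam)
    M_tilde = A
    combined = np.where(Omega == 1, Y, M_tilde)
    U, s, Vt = np.linalg.svd(combined, full_matrices=False)
    M_hat = (U[:, :r] * s[:r]) @ Vt[:r]     # rank-r truncation P_r
    return M_tilde, M_hat
```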
The estimators for \(X_{o}\) and \(Z_{o}\) are defined as \(\widetilde{X}_{o}\coloneqq\widetilde{U}_{o}\widetilde{D}_{o}^{\frac{1}{2}}\) and \(\widetilde{Z}_{o}\coloneqq\widetilde{V}_{o}\widetilde{D}_{o}^{\frac{1}{2}}\) where \(\widetilde{U}_{o}\widetilde{D}_{o}\widetilde{V}_{o}^{\top}\) is the SVD of \(\mathcal{P}_{r}(\widetilde{M}_{o})\). In addition, their corresponding debiased estimators are defined as

\[\widehat{X}_{o}\coloneqq\widetilde{X}_{o}\left(I_{r}+\lambda_{o}(\widetilde{X}_{o}^{\top}\widetilde{X}_{o})^{-1}\right)^{\frac{1}{2}},\ \ \widehat{Z}_{o}\coloneqq\widetilde{Z}_{o}\left(I_{r}+\lambda_{o}(\widetilde{Z}_{o}^{\top}\widetilde{Z}_{o})^{-1}\right)^{\frac{1}{2}}.\]

These quantities will also be useful in defining the variance estimator later on.

We now introduce the non-convex estimators. We start by defining the following two loss functions, one for the typical non-convex estimator and the other for the leave-one-out estimator:

\[f(X,Z)\coloneqq\frac{1}{2}\|\mathcal{P}_{\Omega_{o}}\left(XZ^{\top}-Y_{o}\right)\|_{F}^{2}+\frac{\lambda_{o}}{2}\|X\|_{F}^{2}+\frac{\lambda_{o}}{2}\|Z\|_{F}^{2},\tag{A.1}\]

\[f^{(m)}(X,Z)\coloneqq\begin{cases}\frac{1}{2}\left\|\mathcal{P}_{\Omega_{-m,\cdot}}(XZ^{\top}-Y_{o})\right\|_{F}^{2}+\frac{1}{2}\left\|\mathcal{P}_{m,\cdot}(XZ^{\top}-M_{o})\right\|_{F}^{2}+\frac{\lambda_{o}}{2}\left\|X\right\|_{F}^{2}+\frac{\lambda_{o}}{2}\left\|Z\right\|_{F}^{2},&\text{if }1\leq m\leq N_{o},\\ \frac{1}{2}\left\|\mathcal{P}_{\Omega_{\cdot,-(m-N_{o})}}(XZ^{\top}-Y_{o})\right\|_{F}^{2}+\frac{1}{2}\left\|\mathcal{P}_{\cdot,(m-N_{o})}(XZ^{\top}-M_{o})\right\|_{F}^{2}+\frac{\lambda_{o}}{2}\left\|X\right\|_{F}^{2}+\frac{\lambda_{o}}{2}\left\|Z\right\|_{F}^{2},&\text{if }N_{o}+1\leq m\leq N_{o}+T_{o},\end{cases}\tag{A.2}\]

where \(X\) and \(Z\) are \(N_{o}\times r\) and \(T_{o}\times r\) matrices, respectively. Here, for each \(1\leq m\leq N_{o}\), \(\mathcal{P}_{\Omega_{-m,\cdot}}(B)\coloneqq\Omega_{-m,\cdot}\circ B\) where \(\Omega_{-m,\cdot}\coloneqq(\omega_{js}1\{j\neq m\})_{j\leq N_{o},s\leq T_{o}}\). Also, \(\mathcal{P}_{m,\cdot}(B)\coloneqq E_{m,\cdot}\circ B\) where \(E_{m,\cdot}\coloneqq(1\{j=m\})_{j\leq N_{o},s\leq T_{o}}\). Note that, for \(1\leq m\leq N_{o}\), the estimator constructed from the loss function \(f^{(m)}\) is independent of \(\{\epsilon_{ms}\}_{s\leq T_{o}}\). Similarly, for each \(N_{o}+1\leq m\leq N_{o}+T_{o}\), we define \(\mathcal{P}_{\Omega_{\cdot,-(m-N_{o})}}(B)\coloneqq\Omega_{\cdot,-(m-N_{o})}\circ B\) where \(\Omega_{\cdot,-(m-N_{o})}\coloneqq(\omega_{js}1\{s\neq m-N_{o}\})_{j\leq N_{o},s\leq T_{o}}\), and \(\mathcal{P}_{\cdot,(m-N_{o})}(B)\coloneqq E_{\cdot,(m-N_{o})}\circ B\) where \(E_{\cdot,(m-N_{o})}\coloneqq(1\{s=m-N_{o}\})_{j\leq N_{o},s\leq T_{o}}\). In this case, the estimator constructed from \(f^{(m)}\) is independent of \(\{\epsilon_{j,(m-N_{o})}\}_{j\leq N_{o}}\).

Then, based on (A.1), we define the following gradient descent iterates:

\[\begin{bmatrix}X_{o}^{\tau+1}\\ Z_{o}^{\tau+1}\end{bmatrix}=\begin{bmatrix}X_{o}^{\tau}-\eta_{o}\nabla_{X}f(X_{o}^{\tau},Z_{o}^{\tau})\\ Z_{o}^{\tau}-\eta_{o}\nabla_{Z}f(X_{o}^{\tau},Z_{o}^{\tau})\end{bmatrix}\tag{A.3}\]

where \(X_{o}^{0}=X_{o}\), \(Z_{o}^{0}=Z_{o}\), \(\tau=0,1,\ldots,\bar{\tau}-1\), and \(\bar{\tau}=\max\{N_{o}^{23},T_{o}^{23}\}\). Here, \(\eta_{o}>0\) is the step size.
Similarly, for (A.2), we define

\[\begin{bmatrix}X_{o}^{\tau+1,(m)}\\ Z_{o}^{\tau+1,(m)}\end{bmatrix}=\begin{bmatrix}X_{o}^{\tau,(m)}-\eta_{o}\nabla_{X}f^{(m)}(X_{o}^{\tau,(m)},Z_{o}^{\tau,(m)})\\ Z_{o}^{\tau,(m)}-\eta_{o}\nabla_{Z}f^{(m)}(X_{o}^{\tau,(m)},Z_{o}^{\tau,(m)})\end{bmatrix}\tag{A.4}\]

where \(X_{o}^{0,(m)}=X_{o}\) and \(Z_{o}^{0,(m)}=Z_{o}\). Note that the gradient descent iterates in (A.3) and (A.4) are not computable because the initial value \((X_{o},Z_{o})\) is unknown. However, this does not cause any problems in the paper, since we do not need to actually compute \(X_{o}^{\tau}\), \(Z_{o}^{\tau}\), \(X_{o}^{\tau,(m)}\), and \(Z_{o}^{\tau,(m)}\); we only use their existence and theoretical properties in the proofs.

In addition, we define the corresponding debiased iterates:

\[X_{o}^{d,\tau}\coloneqq X_{o}^{\tau}\left(I_{r}+\lambda_{o}(X_{o}^{\tau\top}X_{o}^{\tau})^{-1}\right)^{\frac{1}{2}},\ \ Z_{o}^{d,\tau}\coloneqq Z_{o}^{\tau}\left(I_{r}+\lambda_{o}(Z_{o}^{\tau\top}Z_{o}^{\tau})^{-1}\right)^{\frac{1}{2}},\]

\[X_{o}^{d,\tau,(m)}\coloneqq X_{o}^{\tau,(m)}\left(I_{r}+\lambda_{o}(X_{o}^{\tau,(m)\top}X_{o}^{\tau,(m)})^{-1}\right)^{\frac{1}{2}},\ \ Z_{o}^{d,\tau,(m)}\coloneqq Z_{o}^{\tau,(m)}\left(I_{r}+\lambda_{o}(Z_{o}^{\tau,(m)\top}Z_{o}^{\tau,(m)})^{-1}\right)^{\frac{1}{2}}.\]

Moreover, we define the corresponding rotation matrices:

\[H_{o}^{\tau}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{o}^{\tau}R-\mathcal{F}_{o}\right\|_{F},\ \ H_{o}^{\tau,(m)}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{o}^{\tau,(m)}R-\mathcal{F}_{o}\right\|_{F},\]

\[Q_{o}^{\tau,(m)}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{o}^{\tau,(m)}R-\mathcal{F}_{o}^{\tau}H_{o}^{\tau}\right\|_{F},\ \ H_{o}^{d,\tau}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{o}^{d,\tau}R-\mathcal{F}_{o}\right\|_{F},\]

\[H_{o}^{d,\tau,(m)}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{o}^{d,\tau,(m)}R-\mathcal{F}_{o}\right\|_{F},\]

where

\[\mathcal{F}_{o}^{\tau}\coloneqq\begin{bmatrix}X_{o}^{\tau}\\ Z_{o}^{\tau}\end{bmatrix},\ \ \mathcal{F}_{o}^{\tau,(m)}\coloneqq\begin{bmatrix}X_{o}^{\tau,(m)}\\ Z_{o}^{\tau,(m)}\end{bmatrix},\ \ \mathcal{F}_{o}^{d,\tau}\coloneqq\begin{bmatrix}X_{o}^{d,\tau}\\ Z_{o}^{d,\tau}\end{bmatrix},\ \ \mathcal{F}_{o}^{d,\tau,(m)}\coloneqq\begin{bmatrix}X_{o}^{d,\tau,(m)}\\ Z_{o}^{d,\tau,(m)}\end{bmatrix},\ \ \mathcal{F}_{o}\coloneqq\begin{bmatrix}X_{o}\\ Z_{o}\end{bmatrix},\]

and \(\mathcal{O}^{r\times r}\) is the set of \(r\times r\) orthogonal matrices. Finally, we define the non-convex estimators using the gradient descent iterates.
Let

\[\tau_{o}^{*}\coloneqq\operatorname*{arg\,min}_{0\leq\tau<\bar{\tau}}\left\|\nabla f(X_{o}^{\tau},Z_{o}^{\tau})\right\|_{F}.\]

Then, the non-convex estimators are defined as

\[(\breve{X}_{o},\breve{Z}_{o})\coloneqq(X_{o}^{\tau_{o}^{*}},Z_{o}^{\tau_{o}^{*}})\quad\text{from (A.3)},\quad(\breve{X}_{o}^{(m)},\breve{Z}_{o}^{(m)})\coloneqq(X_{o}^{\tau_{o}^{*},(m)},Z_{o}^{\tau_{o}^{*},(m)})\quad\text{from (A.4)},\]

and the corresponding debiased estimators are defined as

\[(\breve{X}_{o}^{d},\breve{Z}_{o}^{d})\coloneqq(X_{o}^{d,\tau_{o}^{*}},Z_{o}^{d,\tau_{o}^{*}}),\quad(\breve{X}_{o}^{d,(m)},\breve{Z}_{o}^{d,(m)})\coloneqq(X_{o}^{d,\tau_{o}^{*},(m)},Z_{o}^{d,\tau_{o}^{*},(m)}),\]

with the corresponding rotation matrices \(\breve{H}_{o}\coloneqq H_{o}^{\tau_{o}^{*}}\), \(\breve{H}_{o}^{(m)}\coloneqq H_{o}^{\tau_{o}^{*},(m)}\), \(\breve{H}_{o}^{d}\coloneqq H_{o}^{d,\tau_{o}^{*}}\), and \(\breve{H}_{o}^{d,(m)}\coloneqq H_{o}^{d,\tau_{o}^{*},(m)}\). Lastly, we define the rotation matrix for \((\widehat{X}_{o},\widehat{Z}_{o})\) as \(\widehat{H}_{o}=B_{o}\breve{H}_{o}^{d}\) where \(B_{o}=\arg\min_{R\in\mathcal{O}^{r\times r}}||\widehat{X}_{o}R-\breve{X}_{o}^{d}||_{F}^{2}+||\widehat{Z}_{o}R-\breve{Z}_{o}^{d}||_{F}^{2}\).

### Key propositions for inferential theory

This subsection provides several key propositions for developing the inferential theory of our debiased estimator \(\widehat{M}_{o}\). First, we derive a suitable decomposition for the asymptotic normality of the debiased estimator \((\widehat{X}_{o},\widehat{Z}_{o})\) (Propositions A.1 and A.2). Using the proximity between \(\widehat{M}_{o}\) and \(\widehat{X}_{o}\widehat{Z}_{o}^{\top}\) (Proposition A.3) together with this decomposition, we derive a decomposition of \(\widehat{m}_{o,it}-m_{o,it}\), which is used to show the asymptotic normality of \(\widehat{m}_{o,it}\) (Proposition A.4). We begin by introducing several assumptions.

**Assumption A.1** (Noise).: _The \(\epsilon_{it}\) are i.i.d. zero-mean sub-Gaussian random variables such that \(\mathbb{E}[\epsilon_{it}]=0\), \(\mathbb{E}[\epsilon_{it}^{2}]=\sigma^{2}\), and \(\mathbb{E}[\exp(s\epsilon_{it})]\leq\exp(Cs^{2}\sigma^{2})\), \(\forall s\in\mathbb{R}\), for some constant \(C>0\)._

**Assumption A.2** (Incoherence).: _There is \(\mu_{o}\geq 1\) such that \(||U_{M_{o}}||_{2,\infty}\leq\sqrt{\frac{\mu_{o}r}{N_{o}}}\), \(||V_{M_{o}}||_{2,\infty}\leq\sqrt{\frac{\mu_{o}r}{T_{o}}}\). Here, \(U_{A}\) and \(V_{A}\) denote the left and right singular vectors of \(A\), respectively._

**Assumption A.3** (Signal to noise ratio).:

\[\sigma\kappa_{o}^{2}\mu_{o}^{\frac{1}{2}}r^{\frac{1}{2}}\max\{N_{o}\sqrt{\log N_{o}},T_{o}\sqrt{\log T_{o}}\}\ll\psi_{\min,o}\min\{\sqrt{N_{o}},\sqrt{T_{o}}\},\]

_where \(\psi_{\min,o}\) is the smallest nonzero singular value of \(M_{o}\)._

**Assumption A.4** (Size of \(\vartheta_{o}\) and parameters).: _(i) \(\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{N_{o}\log^{3}N_{o},T_{o}\log^{3}T_{o}\}\ll\min\{N_{o}^{2},T_{o}^{2}\}\) and (ii) \(\vartheta_{o}\kappa_{o}^{2}\mu_{o}r\ll\min\{N_{o},T_{o}\}\)._

Denote by \(\Omega_{o,i}\) the diagonal matrix consisting of \(\{\omega_{is}\}_{1\leq s\leq T_{o}}\) and by \(\Omega_{o,t}\) the diagonal matrix consisting of \(\{\omega_{jt}\}_{1\leq j\leq N_{o}}\).

**Proposition A.1**.: _Suppose that Assumptions A.1 - A.4 hold._
_Then, with probability at least \(1-O(\min\{N_{o}^{-9},T_{o}^{-9}\})\), we have for all \(1\leq i\leq N_{o}\),_

\[e_{i}^{\top}(\widehat{X}_{o}\widehat{H}_{o}-X_{o})=e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}+\mathcal{R}_{o,i}^{X},\]

_where_

\[\max_{i}||\mathcal{R}_{o,i}^{X}||\leq C_{X}\frac{\sigma}{\sqrt{\psi_{\min,o}}}\left(\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{9}\mu_{o}r\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{\min\{N_{o},T_{o}\}}}+\sqrt{\frac{\kappa_{o}^{7}\mu_{o}^{3}r^{3}\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{N_{o}\min\{N_{o}^{2},T_{o}^{2}\}}}\right)\]

_for an absolute constant \(C_{X}>0\)._

**Proposition A.2**.: _Suppose that Assumptions A.1 - A.4 hold. Then, with probability at least \(1-O(\min\{N_{o}^{-9},T_{o}^{-9}\})\), we have for all \(1\leq t\leq T_{o}\),_

\[e_{t}^{\top}(\widehat{Z}_{o}\widehat{H}_{o}-Z_{o})=e_{t}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})^{\top}X_{o}(X_{o}^{\top}\Omega_{o,t}X_{o})^{-1}+\mathcal{R}_{o,t}^{Z},\]

_where_

\[\max_{t}||\mathcal{R}_{o,t}^{Z}||\leq C_{Z}\frac{\sigma}{\sqrt{\psi_{\min,o}}}\left(\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{9}\mu_{o}r\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{\min\{N_{o},T_{o}\}}}+\sqrt{\frac{\kappa_{o}^{7}\mu_{o}^{3}r^{3}\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{T_{o}\min\{N_{o}^{2},T_{o}^{2}\}}}+\vartheta_{o}\sqrt{\frac{\mu_{o}^{3}r^{3}\kappa_{o}^{5}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{N_{o}\min\{N_{o}^{2},T_{o}^{2}\}}}\right)\]

_for an absolute constant \(C_{Z}>0\)._

**Proposition A.3**.: _Suppose that Assumptions A.1 - A.4 hold. With probability at least \(1-O(\min\{N_{o}^{-10},T_{o}^{-10}\})\), we have_

\[\left\|\widehat{M}_{o}-\widehat{X}_{o}\widehat{Z}_{o}^{\top}\right\|_{F}\leq C_{prx}\frac{\sigma}{\max\{N_{o}^{7/2},T_{o}^{7/2}\}}\]

_for an absolute constant \(C_{prx}>0\)._

**Proposition A.4**.: _Suppose that Assumptions A.1 - A.4 hold. With probability at least \(1-O(\min\{N_{o}^{-9},T_{o}^{-9}\})\), we have_

\[\widehat{m}_{o,it_{o}}-m_{o,it_{o}}=X_{o,i}^{\top}\left(\sum_{j\in\mathcal{I}_{o}}\omega_{jt_{o}}X_{o,j}X_{o,j}^{\top}\right)^{-1}\sum_{j\in\mathcal{I}_{o}}\omega_{jt_{o}}\epsilon_{jt_{o}}X_{o,j}+Z_{o,t_{o}}^{\top}\left(\sum_{s\in\mathcal{T}_{o}}\omega_{is}Z_{o,s}Z_{o,s}^{\top}\right)^{-1}\sum_{s\in\mathcal{T}_{o}}\omega_{is}\epsilon_{is}Z_{o,s}+\mathcal{R}_{o,i}^{M},\]

_where_

\[\max_{i}||\mathcal{R}_{o,i}^{M}||\leq C_{M}\left(\frac{\sigma^{2}}{\psi_{\min,o}}\frac{\kappa_{o}^{5}\mu_{o}r\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o},T_{o}\}}+\sigma\frac{\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{\sqrt{N_{o}\log N_{o}},\sqrt{T_{o}\log T_{o}}\}}{\min\{N_{o}^{\frac{3}{2}},T_{o}^{\frac{3}{2}}\}}+\sigma\frac{\vartheta_{o}\mu_{o}^{2}r^{2}\kappa_{o}^{3}\max\{\sqrt{N_{o}\log N_{o}},\sqrt{T_{o}\log T_{o}}\}}{N_{o}\min\{N_{o},T_{o}\}}\right)\]

_for an absolute constant \(C_{M}>0\)._

### Proofs of Propositions A.1-A.4

Proof of Proposition A.1.: We first derive a decomposition of \(e_{i}^{\top}(\breve{X}_{o}^{d}\breve{H}_{o}^{d}-X_{o})\).
From the definition of the gradient \(\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o})=\mathcal{P}_{\Omega_{o}}(\breve{X}_{o}\breve{Z}_{o}^{\top}-Y_{o})\breve{Z}_{o}+\lambda_{o}\breve{X}_{o}\) and the decomposition

\[\mathcal{P}_{\Omega_{o}}(\breve{X}_{o}\breve{Z}_{o}^{\top}-Y_{o})=\breve{X}_{o}\breve{Z}_{o}^{\top}-X_{o}Z_{o}^{\top}+A-\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o}),\]

where \(A\coloneqq\Omega_{o}\circ(\breve{X}_{o}\breve{Z}_{o}^{\top}-X_{o}Z_{o}^{\top})-(\breve{X}_{o}\breve{Z}_{o}^{\top}-X_{o}Z_{o}^{\top})\), we have

\[\breve{X}_{o}\left(\breve{Z}_{o}^{\top}\breve{Z}_{o}+\lambda_{o}I_{r}\right)=X_{o}Z_{o}^{\top}\breve{Z}_{o}+\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\breve{Z}_{o}-A\breve{Z}_{o}+\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o}).\]

In addition, a simple calculation shows that \(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d}=\breve{Z}_{o}^{\top}\breve{Z}_{o}+\lambda_{o}I_{r}\). Then, by combining these two equations, we have

\[\breve{X}_{o}\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d}=X_{o}Z_{o}^{\top}\breve{Z}_{o}+\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\breve{Z}_{o}-A\breve{Z}_{o}+\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o}).\]

Multiplying both sides by \((I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1})^{1/2}\), we have

\[\breve{X}_{o}\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d}(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1})^{1/2}=X_{o}Z_{o}^{\top}\breve{Z}_{o}^{d}+\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\breve{Z}_{o}^{d}-A\breve{Z}_{o}^{d}+\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o})(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1})^{1/2}.\]

Moreover, because the left-hand side can also be represented as

\[\breve{X}_{o}\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d}(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1})^{1/2}=\breve{X}_{o}(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1})^{1/2}\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d}=\breve{X}_{o}^{d}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})-\breve{X}_{o}\Delta_{balance}\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d},\]

where \(\Delta_{balance}\coloneqq(I_{r}+\lambda_{o}(\breve{X}_{o}^{\top}\breve{X}_{o})^{-1})^{\frac{1}{2}}-(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1})^{\frac{1}{2}}\), we have, after multiplying both sides by \((\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}\),

\[\breve{X}_{o}^{d}=\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\breve{Z}_{o}^{d}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}+X_{o}Z_{o}^{\top}\breve{Z}_{o}^{d}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}-A\breve{Z}_{o}^{d}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}+\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o})\left(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1}\right)^{1/2}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}+\breve{X}_{o}\Delta_{balance}.\]
Then, using the identity \(\breve{Z}_{o}^{d}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}\breve{H}_{o}^{d}=\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}\) where \(\bar{\breve{Z}}_{o}^{d}=\breve{Z}_{o}^{d}\breve{H}_{o}^{d}\), we have the following decomposition:

\[e_{i}^{\top}(\breve{X}_{o}^{d}\breve{H}_{o}^{d}-X_{o})=e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}+\sum_{k=1}^{5}\delta_{k,i},\]

where

\[\delta_{1,i}=e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left(\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}-Z_{o}(Z_{o}^{\top}Z_{o})^{-1}\right),\]
\[\delta_{2,i}=e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left(Z_{o}(Z_{o}^{\top}Z_{o})^{-1}-Z_{o}(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}\right),\]
\[\delta_{3,i}=e_{i}^{\top}X_{o}[Z_{o}^{\top}\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}-I_{r}],\]
\[\delta_{4,i}=e_{i}^{\top}A\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1},\]
\[\delta_{5,i}=e_{i}^{\top}\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o})\left(I_{r}+\lambda_{o}\big{(}\breve{Z}_{o}^{\top}\breve{Z}_{o}\big{)}^{-1}\right)^{1/2}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}\breve{H}_{o}^{d}+e_{i}^{\top}\breve{X}_{o}\Delta_{balance}\breve{H}_{o}^{d}.\]

Furthermore, by defining \(\delta_{6,i}=e_{i}^{\top}(\widehat{X}_{o}B_{o}-\breve{X}_{o}^{d})\breve{H}_{o}^{d}\) where

\[B_{o}=\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}||\widehat{X}_{o}R-\breve{X}_{o}^{d}||_{F}^{2}+||\widehat{Z}_{o}R-\breve{Z}_{o}^{d}||_{F}^{2},\]

we have the following decomposition for \(e_{i}^{\top}(\widehat{X}_{o}\widehat{H}_{o}-X_{o})\):

\[e_{i}^{\top}(\widehat{X}_{o}\widehat{H}_{o}-X_{o})=e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}+\sum_{k=1}^{6}\delta_{k,i}\]

where \(\widehat{H}_{o}=B_{o}\breve{H}_{o}^{d}\).

Part 1. First, we bound \(\delta_{1,i}\). By defining \(\bar{\breve{Z}}_{o}^{d,(i)}=\breve{Z}_{o}^{d,(i)}\breve{H}_{o}^{d,(i)}\), we have

\[||\delta_{1,i}||_{2}\leq\left\|e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left[\bar{\breve{Z}}_{o}^{d,(i)}\left(\bar{\breve{Z}}_{o}^{d,(i)\top}\bar{\breve{Z}}_{o}^{d,(i)}\right)^{-1}-Z_{o}\left(Z_{o}^{\top}Z_{o}\right)^{-1}\right]\right\|_{2}+\left\|e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left[\bar{\breve{Z}}_{o}^{d}\left(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d}\right)^{-1}-\bar{\breve{Z}}_{o}^{d,(i)}\left(\bar{\breve{Z}}_{o}^{d,(i)\top}\bar{\breve{Z}}_{o}^{d,(i)}\right)^{-1}\right]\right\|_{2}.\]

The first part is bounded in Lemma A.6.
For the second part, note that

\[\left\|e_{i}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left[\bar{\breve{Z}}_{o}^{d}\left(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d}\right)^{-1}-\bar{\breve{Z}}_{o}^{d,(i)}\left(\bar{\breve{Z}}_{o}^{d,(i)\top}\bar{\breve{Z}}_{o}^{d,(i)}\right)^{-1}\right]\right\|_{2}\leq\left\|\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\right\|\left\|\bar{\breve{Z}}_{o}^{d}\left(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d}\right)^{-1}-\bar{\breve{Z}}_{o}^{d,(i)}\left(\bar{\breve{Z}}_{o}^{d,(i)\top}\bar{\breve{Z}}_{o}^{d,(i)}\right)^{-1}\right\|\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}}\frac{1}{\psi_{\min,o}}\left\|\bar{\breve{Z}}_{o}^{d}-\bar{\breve{Z}}_{o}^{d,(i)}\right\|\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}}\frac{1}{\psi_{\min,o}}\kappa_{o}\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}\]

by Lemmas A.5 and D.4. Hence, we have with probability at least \(1-O(\min\{N_{o}^{-9},T_{o}^{-9}\})\),

\[\max_{i}||\delta_{1,i}||_{2}\leq C_{\delta,1}\frac{\sigma}{\sqrt{\psi_{\min,o}}}\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{3}\mu_{o}r\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{\min\{N_{o},T_{o}\}}}\]

for some absolute constant \(C_{\delta,1}>0\).

Part 2. Note that

\[\delta_{2,i}=\sum_{s=1}^{T_{o}}\omega_{is}\epsilon_{is}Z_{o,s}\left((Z_{o}^{\top}Z_{o})^{-1}-(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}\right).\]

Because

\[\left\|Z_{o}^{\top}Z_{o}-Z_{o}^{\top}\Omega_{o,i}Z_{o}\right\|=||Z_{o,t_{o}}Z_{o,t_{o}}^{\top}||\leq\frac{\kappa_{o}\mu_{o}r}{T_{o}}\psi_{\min,o}\]

and \(||(Z_{o}^{\top}Z_{o})^{-1}||=\psi_{\min,o}^{-1}\), we have

\[\left\|(Z_{o}^{\top}Z_{o})^{-1}-(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}\right\|\lesssim\left\|Z_{o}^{\top}Z_{o}-Z_{o}^{\top}\Omega_{o,i}Z_{o}\right\|\,||(Z_{o}^{\top}Z_{o})^{-1}||^{2}\leq\frac{\kappa_{o}\mu_{o}r}{T_{o}}\psi_{\min,o}^{-1}.\]

In addition, by the matrix Bernstein inequality, we have

\[\left\|\sum_{s=1}^{T_{o}}\omega_{is}\epsilon_{is}Z_{o,s}\right\|\lesssim\sigma\sqrt{\log T_{o}}||Z_{o}||_{F}\lesssim\sigma\sqrt{\log T_{o}}\kappa_{o}^{\frac{1}{2}}r^{\frac{1}{2}}\psi_{\min,o}^{\frac{1}{2}}\]

with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\). So, we have with probability at least \(1-O(\min\{N_{o}^{-9},T_{o}^{-9}\})\),

\[\max_{i}||\delta_{2,i}||_{2}\leq C_{\delta,2}\frac{\sigma}{\sqrt{\psi_{\min,o}}}\frac{\kappa_{o}^{\frac{3}{2}}\mu_{o}r^{\frac{3}{2}}\sqrt{\log T_{o}}}{T_{o}}\]

for some absolute constant \(C_{\delta,2}>0\).

Part 3. Note that

\[\left\|e_{i}^{\top}X_{o}\right\|_{2}\leq\sqrt{\frac{\kappa_{o}\mu_{o}r}{N_{o}}\psi_{\min,o}}\]

by the incoherence condition. By Lemma A.5 and the fact that \(\left\|(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}\right\|\lesssim\psi_{\min,o}^{-1}\), we have

\[||\delta_{3,i}||_{2}=\left\|e_{i}^{\top}X_{o}[Z_{o}^{\top}\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}-\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}]\right\|_{2}\leq\left\|e_{i}^{\top}X_{o}\right\|_{2}\left\|(Z_{o}-\bar{\breve{Z}}_{o}^{d})^{\top}\bar{\breve{Z}}_{o}^{d}\right\|\left\|(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}\right\|\lesssim\sqrt{\frac{\kappa_{o}\mu_{o}r}{N_{o}}}\frac{1}{\sqrt{\psi_{\min,o}}}\left\|(Z_{o}-\bar{\breve{Z}}_{o}^{d})^{\top}\bar{\breve{Z}}_{o}^{d}\right\|.\]

Next, we bound \(\left\|(Z_{o}-\bar{\breve{Z}}_{o}^{d})^{\top}\bar{\breve{Z}}_{o}^{d}\right\|\).
Let \(\Delta_{X}\coloneqq\bar{\breve{X}}_{o}^{d}-X_{o}\) and \(\Delta_{Z}\coloneqq\bar{\breve{Z}}_{o}^{d}-Z_{o}\). Then, \((Z_{o}-\bar{\breve{Z}}_{o}^{d})^{\top}\bar{\breve{Z}}_{o}^{d}=\Delta_{Z}^{\top}Z_{o}+\Delta_{Z}^{\top}\Delta_{Z}\). Following the proof of Lemma 6 in Chen et al. (2019), we obtain
\[\left\|(Z_{o}-\bar{\breve{Z}}_{o}^{d})^{\top}\bar{\breve{Z}}_{o}^{d}\right\|\leq\left\|\Delta_{Z}^{\top}Z_{o}\right\|+\left\|\Delta_{Z}^{\top}\Delta_{Z}\right\|\lesssim\frac{1}{\psi_{\min,o}}\underbrace{\left\|\bar{\breve{X}}_{o}^{d\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}\right\|}_{=\alpha_{1}}+\frac{1}{\psi_{\min,o}}\underbrace{\left\|\bar{\breve{X}}_{o}^{d\top}AZ_{o}\right\|}_{=\alpha_{2}}+\kappa_{o}\underbrace{\left(\left\|\Delta_{X}^{\top}\Delta_{X}\right\|+\left\|\Delta_{Z}^{\top}\Delta_{Z}\right\|\right)}_{=\alpha_{3}},\]
plus a residual term \(\alpha_{4}\) involving the gradient \(\nabla_{Z}f(\breve{X}_{o},\breve{Z}_{o})\) and the balancing error, which is bounded at the end of this part. We bound each term in turn. First,
\[\alpha_{1}\leq\left\|X_{o}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}\right\|+\left\|\Delta_{X}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}\right\|.\]
By the Bernstein inequality, we have
\[\left\|X_{o}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}\right\|=\left\|\sum_{i\in\mathcal{I}_{o},t\in\mathcal{T}_{o}}\omega_{it}\epsilon_{it}X_{o,i}Z_{o,t}^{\top}\right\|\lesssim\sigma r\kappa_{o}\psi_{\min,o}\sqrt{\max\{\log N_{o},\log T_{o}\}}.\]
In addition, we have by Lemmas A.5 and D.4 that \(\left\|\Delta_{X}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})Z_{o}\right\|\leq\sigma^{2}\kappa_{o}^{2}\max\{N_{o},T_{o}\}\). Hence, we have
\[\alpha_{1}\lesssim\sigma r\kappa_{o}\psi_{\min,o}\sqrt{\max\{\log N_{o},\log T_{o}\}}+\sigma^{2}\kappa_{o}^{2}\max\{N_{o},T_{o}\}.\]
Moreover, since
\[||A||\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}}\sqrt{\frac{\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}\]
by Lemma D.7, we have
\[\alpha_{2}\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}}\sqrt{\frac{\kappa_{o}^{6}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}\psi_{\min,o}.\]
By Lemma A.5, we know
\[\alpha_{3}\lesssim\max\{||\Delta_{X}||^{2},||\Delta_{Z}||^{2}\}\leq\sigma^{2}\frac{\kappa_{o}^{3}\max\{N_{o},T_{o}\}}{\psi_{\min,o}}.\]
Lastly, the term \(\alpha_{4}\) is bounded as
\[\alpha_{4}\leq\left\|\left(I_{r}+\lambda_{o}(\breve{X}_{o}^{\top}\breve{X}_{o})^{-1}\right)^{1/2}\right\|\left\|\nabla_{Z}f(\breve{X}_{o},\breve{Z}_{o})\right\|\left\|Z_{o}\right\|+\left\|\breve{X}_{o}^{d\top}\breve{X}_{o}^{d}\right\|\left\|\Delta_{balance}\right\|\left\|\breve{Z}_{o}^{\top}Z_{o}\right\|+\left\|\Delta_{XZ}^{d}\right\|\left\|D_{o}\right\|\lesssim\sigma\frac{\kappa_{o}^{2}}{\max\{N_{o}^{\frac{9}{2}},T_{o}^{\frac{9}{2}}\}}\psi_{\min,o},\]
due to Lemmas A.5 and A.9, and the relation (A.17).
Therefore, we have
\[\max_{i}||\delta_{3,i}||_{2}\lesssim\sqrt{\frac{\kappa_{o}\mu_{o}r}{N_{o}}}\frac{1}{\sqrt{\psi_{\min,o}}}\left\|(Z_{o}-\bar{\breve{Z}}_{o}^{d})^{\top}\bar{\breve{Z}}_{o}^{d}\right\|\lesssim\frac{\sigma}{\sqrt{\psi_{\min,o}}}\left(\kappa_{o}\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{7}\mu_{o}r\max\{N_{o}^{2},T_{o}^{2}\}}{N_{o}}}+\sqrt{\frac{\kappa_{o}^{7}\mu_{o}^{3}r^{3}\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{N_{o}\min\{N_{o}^{2},T_{o}^{2}\}}}\right).\]

**Part 4.** Note that
\[\left\|\delta_{4,i}\right\|_{2}=\left\|e_{i}^{\top}A\bar{\breve{Z}}_{o}^{d}(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}\right\|_{2}\leq\left\|e_{i}^{\top}A\bar{\breve{Z}}_{o}^{d}\right\|_{2}\left\|(\bar{\breve{Z}}_{o}^{d\top}\bar{\breve{Z}}_{o}^{d})^{-1}\right\|\lesssim\frac{1}{\psi_{\min,o}}\left\|e_{i}^{\top}A\bar{\breve{Z}}_{o}^{d}\right\|_{2}.\]
Let \(\nu=[\nu_{1},\ldots,\nu_{T_{o}}]\coloneqq e_{i}^{\top}(\breve{X}_{o}\breve{Z}_{o}^{\top}-X_{o}Z_{o}^{\top})\). Then, we have by Lemma A.8
\[\left\|\nu\right\|_{\infty}\leq\left\|\breve{X}_{o}\breve{Z}_{o}^{\top}-X_{o}Z_{o}^{\top}\right\|_{\infty}\leq\left\|\breve{X}_{o}\breve{H}_{o}-X_{o}\right\|_{2,\infty}\left\|\breve{Z}_{o}\right\|_{2,\infty}+\left\|X_{o}\right\|_{2,\infty}\left\|\breve{Z}_{o}\breve{H}_{o}-Z_{o}\right\|_{2,\infty}\lesssim\sigma\kappa_{o}^{2}\sqrt{\frac{\mu_{o}^{2}r^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}.\]
Note that
\[\left\|e_{i}^{\top}A\bar{\breve{Z}}_{o}^{d}\right\|_{2}=\left\|\sum_{s=1}^{T_{o}}(\omega_{is}-1)\nu_{s}\bar{\breve{Z}}_{o,s,\cdot}^{d}\right\|_{2}=\left\|(\omega_{it_{o}}-1)\nu_{t_{o}}\bar{\breve{Z}}_{o,t_{o},\cdot}^{d}\right\|_{2}\leq||\nu||_{\infty}||Z_{o}||_{2,\infty}.\]
Then, since
\[||\nu||_{\infty}||Z_{o}||_{2,\infty}\lesssim\sigma\sqrt{\psi_{\min,o}}\sqrt{\frac{\mu_{o}^{3}r^{3}\kappa_{o}^{5}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{T_{o}\min\{N_{o}^{2},T_{o}^{2}\}}},\]
we obtain
\[\max_{i}\left\|\delta_{4,i}\right\|_{2}\lesssim\frac{\sigma}{\sqrt{\psi_{\min,o}}}\sqrt{\frac{\mu_{o}^{3}r^{3}\kappa_{o}^{5}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{T_{o}\min\{N_{o}^{2},T_{o}^{2}\}}}.\]

**Part 5.** It is easy to check from Lemmas A.5 and A.9, and the relation (A.17), that
\[\left\|e_{i}^{\top}\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o})\left(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1}\right)^{1/2}(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}\breve{H}_{o}^{d}\right\|\leq\left\|\nabla_{X}f(\breve{X}_{o},\breve{Z}_{o})\right\|\left\|\left(I_{r}+\lambda_{o}(\breve{Z}_{o}^{\top}\breve{Z}_{o})^{-1}\right)^{1/2}\right\|\left\|(\breve{Z}_{o}^{d\top}\breve{Z}_{o}^{d})^{-1}\right\|\lesssim\frac{\sigma}{\sqrt{\psi_{\min,o}}}\frac{1}{\max\{N_{o}^{4},T_{o}^{4}\}},\]
\[\left\|e_{i}^{\top}\breve{X}_{o}\Delta_{balance}\breve{H}_{o}^{d}\right\|\leq\left\|\breve{X}_{o}\right\|\left\|\Delta_{balance}\right\|\lesssim\frac{\sigma}{\sqrt{\psi_{\min,o}}}\sqrt{\frac{\kappa_{o}^{3}\mu_{o}r}{\max\{N_{o}^{9},T_{o}^{9}\}\min\{N_{o},T_{o}\}}}.\]
Hence, we have \(\max_{i}\left\|\delta_{5,i}\right\|_{2}\lesssim\frac{\sigma}{\sqrt{\psi_{\min,o}}}\frac{1}{\max\{N_{o}^{4},T_{o}^{4}\}}\).

**Part 6.** Lastly, we check the proximity between the non-convex debiased estimator and the convex debiased estimator to bound \(\max_{i}||\delta_{6,i}||\). The proof is basically the same as Section C.2 of Chen et al. (2019b). Denote the SVD of \(\breve{X}_{o}\breve{Z}_{o}^{\top}\) by \(L_{o}\Sigma_{o}R_{o}^{\top}\).
First, we show that \(\breve{X}_{o}^{d}\) is close to \(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\). By Lemma 20 of Chen et al. (2020b), there is an invertible matrix \(G\) such that \(\breve{X}_{o}=L_{o}\Sigma_{o}^{1/2}G\) and \(\breve{Z}_{o}=R_{o}\Sigma_{o}^{1/2}G^{-\top}\). Denote the SVD of \(G\) by \(L_{G}\Sigma_{G}R_{G}^{\top}\). Then, we have by Lemma 20 of Chen et al. (2020b) that
\[\left\|\breve{X}_{o}-L_{o}\Sigma_{o}^{1/2}L_{G}R_{G}^{\top}\right\|=\left\|L_{o}\Sigma_{o}^{1/2}L_{G}\Sigma_{G}R_{G}^{\top}-L_{o}\Sigma_{o}^{1/2}L_{G}R_{G}^{\top}\right\|\leq\left\|\Sigma_{o}^{1/2}\right\|\left\|\Sigma_{G}-I_{r}\right\|\lesssim\sqrt{\psi_{\max,o}}\frac{1}{\psi_{\min,o}}\left\|\breve{X}_{o}^{\top}\breve{X}_{o}-\breve{Z}_{o}^{\top}\breve{Z}_{o}\right\|_{F}\lesssim\frac{\sigma}{\max\{N_{o}^{\frac{7}{2}},T_{o}^{\frac{7}{2}}\}}\sqrt{\frac{\kappa_{o}}{\psi_{\min,o}}}.\]
Here, we use the fact \(\left\|\Sigma_{G}-I_{r}\right\|\lesssim\left\|\Sigma_{G}-\Sigma_{G}^{-1}\right\|_{F}\) and Lemma A.8. Let \(\dddot{X}\coloneqq L_{o}\Sigma_{o}^{1/2}L_{G}R_{G}^{\top}\). Then, we have by Lemma 13 of Chen et al. (2019b) with the above result
\[\left\|\breve{X}_{o}^{d}-\dddot{X}\left(I_{r}+\lambda_{o}(\dddot{X}^{\top}\dddot{X})^{-1}\right)^{1/2}\right\|\leq\left\|\breve{X}_{o}-\dddot{X}\right\|\left\|\left(I_{r}+\lambda_{o}(\breve{X}_{o}^{\top}\breve{X}_{o})^{-1}\right)^{1/2}\right\|+\left\|\dddot{X}\right\|\left\|\left(I_{r}+\lambda_{o}(\dddot{X}^{\top}\dddot{X})^{-1}\right)^{1/2}-\left(I_{r}+\lambda_{o}(\breve{X}_{o}^{\top}\breve{X}_{o})^{-1}\right)^{1/2}\right\|\lesssim\frac{\sigma}{\max\{N_{o}^{\frac{7}{2}},T_{o}^{\frac{7}{2}}\}}\sqrt{\frac{\kappa_{o}}{\psi_{\min,o}}}.\]
A similar bound holds for \(\breve{Z}_{o}^{d}\). Note that
\[\dddot{X}\left(I_{r}+\lambda_{o}(\dddot{X}^{\top}\dddot{X})^{-1}\right)^{1/2}=L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{1/2}L_{G}R_{G}^{\top}.\]
Hence, we have
\[\min_{O\in\mathcal{O}^{r\times r}}\sqrt{\left\|\breve{X}_{o}^{d}O-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}^{2}+\left\|\breve{Z}_{o}^{d}O-R_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}^{2}}\leq\sqrt{\left\|\breve{X}_{o}^{d}-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{1/2}L_{G}R_{G}^{\top}\right\|_{F}^{2}+\left\|\breve{Z}_{o}^{d}-R_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}L_{G}R_{G}^{\top}\right\|_{F}^{2}}\lesssim\frac{\sigma}{\max\{N_{o}^{\frac{7}{2}},T_{o}^{\frac{7}{2}}\}}\sqrt{\frac{\kappa_{o}r}{\psi_{\min,o}}}.\]
Next, we show that \(\widehat{X}_{o}\) is also close to \(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\). Because \((\widetilde{X}_{o},\widetilde{Z}_{o})\) is a balanced factorization of \(\mathcal{P}_{r}(\widetilde{M}_{o})\), and \((L_{o}\Sigma_{o}^{\frac{1}{2}},R_{o}\Sigma_{o}^{\frac{1}{2}})\) is that of \(\breve{X}_{o}\breve{Z}_{o}^{\top}\), we have by the theory for the perturbation bounds on the balanced factorization (Appendix B.7 of Ma et al. (2020), Appendix B.2.1 of Chen et al.
(2020a)),
\[\min_{O\in\mathcal{O}^{r\times r}}\sqrt{\left\|\widetilde{X}_{o}O-L_{o}\Sigma_{o}^{\frac{1}{2}}\right\|_{F}^{2}+\left\|\widetilde{Z}_{o}O-R_{o}\Sigma_{o}^{\frac{1}{2}}\right\|_{F}^{2}}\lesssim\sqrt{\frac{\kappa_{o}^{4}r}{\psi_{\min,o}}}\left\|\mathcal{P}_{r}(\widetilde{M}_{o})-\breve{X}_{o}\breve{Z}_{o}^{\top}\right\|_{F}\leq\sqrt{\frac{\kappa_{o}^{4}r}{\psi_{\min,o}}}\frac{\sigma}{\max\{N_{o}^{\frac{9}{2}},T_{o}^{\frac{9}{2}}\}}.\] (A.5)
Then, by repeating the same argument as above, we can conclude from (A.5) that
\[\min_{O\in\mathcal{O}^{r\times r}}\sqrt{\left\|\widehat{X}_{o}O-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}^{2}+\left\|\widehat{Z}_{o}O-R_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}^{2}}\lesssim\sqrt{\frac{\kappa_{o}^{4}r}{\psi_{\min,o}}}\frac{\sigma}{\max\{N_{o}^{\frac{9}{2}},T_{o}^{\frac{9}{2}}\}}.\] (A.6)
Hence, we have
\[\max_{i}||\delta_{6,i}||\leq\left\|\widehat{X}_{o}B_{o}-\breve{X}_{o}^{d}\right\|\left\|\breve{H}_{o}^{d}\right\|\lesssim\sqrt{\frac{\kappa_{o}^{4}r}{\psi_{\min,o}}}\frac{\sigma}{\max\{N_{o}^{\frac{7}{2}},T_{o}^{\frac{7}{2}}\}}.\]

Proof of Proposition A.2.: The proof is basically the same as that of Proposition A.1 except for a few parts. Here, we check the parts that differ from the proof of Proposition A.1.

**Part 2.** In this case, we have
\[\left\|(X_{o}^{\top}X_{o})^{-1}-(X_{o}^{\top}\Omega_{o,t}X_{o})^{-1}\right\|\lesssim\left\|X_{o}^{\top}X_{o}-X_{o}^{\top}\Omega_{o,t}X_{o}\right\|\left\|(X_{o}^{\top}X_{o})^{-1}\right\|^{2}\leq\frac{\vartheta_{o}\kappa_{o}\mu_{o}r}{N_{o}}\psi_{\min,o}^{-1},\]
because \(\left\|X_{o}^{\top}X_{o}-X_{o}^{\top}\Omega_{o,t}X_{o}\right\|\leq\left\|\sum_{j\in\mathcal{Q}_{o}}X_{o,j}X_{o,j}^{\top}\right\|.\) So, we have with probability at least \(1-O(\min\{N_{o}^{-9},T_{o}^{-9}\})\) that
\[\max_{t}||\delta_{2,t}||_{2}\leq C_{\delta,2}\frac{\sigma}{\sqrt{\psi_{\min,o}}}\frac{\vartheta_{o}\kappa_{o}^{\frac{3}{2}}\mu_{o}r^{\frac{3}{2}}\sqrt{\log N_{o}}}{N_{o}}.\]

**Part 4.** Note that
\[\left\|\delta_{4,t}\right\|_{2}=\left\|e_{t}^{\top}A^{\top}\bar{\breve{X}}_{o}^{d}(\bar{\breve{X}}_{o}^{d\top}\bar{\breve{X}}_{o}^{d})^{-1}\right\|_{2}\leq\left\|e_{t}^{\top}A^{\top}\bar{\breve{X}}_{o}^{d}\right\|_{2}\left\|(\bar{\breve{X}}_{o}^{d\top}\bar{\breve{X}}_{o}^{d})^{-1}\right\|\lesssim\frac{1}{\psi_{\min,o}}\left\|e_{t}^{\top}A^{\top}\bar{\breve{X}}_{o}^{d}\right\|_{2}.\]
Let \(\nu=[\nu_{1},\ldots,\nu_{N_{o}}]\coloneqq e_{t}^{\top}(\breve{Z}_{o}\breve{X}_{o}^{\top}-Z_{o}X_{o}^{\top})\). Then, because
\[\left\|e_{t}^{\top}A^{\top}\bar{\breve{X}}_{o}^{d}\right\|_{2}=\left\|\sum_{j=1}^{N_{o}}(\omega_{jt}-1)\nu_{j}\bar{\breve{X}}_{o,j,\cdot}^{d}\right\|_{2}=\left\|\sum_{j\in\mathcal{Q}_{o}}(\omega_{jt}-1)\nu_{j}\bar{\breve{X}}_{o,j,\cdot}^{d}\right\|_{2}\text{ and}\]
\[\left\|\sum_{j\in\mathcal{Q}_{o}}(\omega_{jt}-1)\nu_{j}\bar{\breve{X}}_{o,j,\cdot}^{d}\right\|_{2}\leq\vartheta_{o}||\nu||_{\infty}||X_{o}||_{2,\infty}\lesssim\sigma\vartheta_{o}\sqrt{\psi_{\min,o}}\sqrt{\frac{\mu_{o}^{3}r^{3}\kappa_{o}^{5}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{N_{o}\min\{N_{o}^{2},T_{o}^{2}\}}},\]
we have
\[\max_{t}\left\|\delta_{4,t}\right\|_{2}\lesssim\frac{\sigma\vartheta_{o}}{\sqrt{\psi_{\min,o}}}\sqrt{\frac{\mu_{o}^{3}r^{3}\kappa_{o}^{5}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{N_{o}\min\{N_{o}^{2},T_{o}^{2}\}}}.\]
The other parts are the same as in the proof of Proposition A.1.
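Two linear-algebraic building blocks recur in these proofs: the optimal rotation \(B_{o}\), which is an orthogonal Procrustes problem with the standard closed-form solution \(R=UV^{\top}\) from the SVD \(U\Sigma V^{\top}\) of \(\widehat{X}_{o}^{\top}\breve{X}_{o}^{d}+\widehat{Z}_{o}^{\top}\breve{Z}_{o}^{d}\), and (in the next proof) the rank-\(r\) projection \(\mathcal{P}_{r}\), i.e., the top-\(r\) truncated SVD. For reference, here is a minimal NumPy sketch of both; the function names and toy dimensions are ours, not the paper's.

```python
import numpy as np

def procrustes_rotation(X_hat, Z_hat, X_brv, Z_brv):
    """Closed-form minimizer of ||X_hat R - X_brv||_F^2 + ||Z_hat R - Z_brv||_F^2
    over orthogonal R: SVD the r x r cross term and take R = U V^T."""
    U, _, Vt = np.linalg.svd(X_hat.T @ X_brv + Z_hat.T @ Z_brv)
    return U @ Vt

def P_r(M, r):
    """Projection onto rank-r matrices via the top-r truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# toy check: the Procrustes solution recovers a planted rotation exactly
rng = np.random.default_rng(0)
N, T, r = 60, 40, 3
X, Z = rng.normal(size=(N, r)), rng.normal(size=(T, r))
Q, _ = np.linalg.qr(rng.normal(size=(r, r)))   # a random orthogonal matrix
B = procrustes_rotation(X @ Q, Z @ Q, X, Z)
assert np.allclose(B, Q.T)                     # B undoes the rotation
assert np.linalg.matrix_rank(P_r(X @ Z.T, r)) == r
```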
Proof of Proposition A.3.: Note that
\[\widehat{M}_{o}=\mathcal{P}_{r}\left[\mathcal{P}_{\Omega_{o}^{c}}(\widetilde{M}_{o})+\mathcal{P}_{\Omega_{o}}(Y_{o})\right].\]
Replacing \(\widetilde{M}_{o}\) by \(\breve{X}_{o}\breve{Z}_{o}^{\top}\) results in
\[\mathcal{P}_{\Omega_{o}^{c}}(\widetilde{M}_{o})+\mathcal{P}_{\Omega_{o}}(Y_{o})=\mathcal{P}_{\Omega_{o}^{c}}(\breve{X}_{o}\breve{Z}_{o}^{\top})+\mathcal{P}_{\Omega_{o}}(Y_{o})+\Delta_{Y},\]
where \(\Delta_{Y}=\mathcal{P}_{\Omega_{o}^{c}}(\widetilde{M}_{o}-\breve{X}_{o}\breve{Z}_{o}^{\top})\). Then, by Lemma A.9, we can bound
\[||\Delta_{Y}||_{F}\leq\left\|\widetilde{M}_{o}-\breve{X}_{o}\breve{Z}_{o}^{\top}\right\|_{F}\lesssim\frac{\lambda_{o}}{8}.\]
Denote the SVD of \(\breve{X}_{o}\breve{Z}_{o}^{\top}\) by \(L_{o}\Sigma_{o}R_{o}^{\top}\). By a simple modification of Claim 2 in Chen et al. (2020b) for our missing pattern, we obtain
\[\mathcal{P}_{\Omega_{o}}(\breve{X}_{o}\breve{Z}_{o}^{\top}-Y_{o})=-\lambda_{o}L_{o}R_{o}^{\top}+\mathfrak{R}\]
where \(\mathfrak{R}\) is a residual matrix such that
\[\left\|\mathcal{P}_{T}(\mathfrak{R})\right\|_{F}\leq 72\kappa_{o}\frac{1}{\sqrt{\psi_{\min,o}}}\left\|\nabla f(\breve{X}_{o},\breve{Z}_{o})\right\|_{F}\leq\frac{1}{8}\lambda_{o},\ \ \left\|\mathcal{P}_{T^{\perp}}(\mathfrak{R})\right\|\leq\frac{1}{2}\lambda_{o}\]
with probability at least \(1-O(\min\{N_{o}^{-10},T_{o}^{-10}\})\). Here, \(T\) is the tangent space of \(\breve{X}_{o}\breve{Z}_{o}^{\top}\). Then, we have
\[\widehat{M}_{o}=\mathcal{P}_{r}\left[\mathcal{P}_{\Omega_{o}^{c}}(\breve{X}_{o}\breve{Z}_{o}^{\top})+\mathcal{P}_{\Omega_{o}}(Y_{o})+\Delta_{Y}\right]=\mathcal{P}_{r}\left[\breve{X}_{o}\breve{Z}_{o}^{\top}+\lambda_{o}L_{o}R_{o}^{\top}+\Delta_{Y}-\mathfrak{R}\right]=\mathcal{P}_{r}\left[L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}+\Delta_{Y}-\mathfrak{R}\right]=\mathcal{P}_{r}\left[\underbrace{L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}+\mathcal{P}_{T^{\perp}}(\Delta_{Y}-\mathfrak{R})}_{\coloneqq C}+\underbrace{\mathcal{P}_{T}(\Delta_{Y}-\mathfrak{R})}_{\coloneqq\Delta}\right].\]
Note that \(\psi_{k}\left(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}\right)\geq\lambda_{o}\) for all \(1\leq k\leq r\) and
\[\|\mathcal{P}_{T^{\perp}}(\Delta_{Y}-\mathfrak{R})\|\leq||\Delta_{Y}||_{F}+\|\mathcal{P}_{T^{\perp}}(\mathfrak{R})\|\leq\frac{5}{8}\lambda_{o},\]
where \(\psi_{k}(A)\) is the \(k\)-th largest singular value of \(A\). Then, because \(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}\) and \(\mathcal{P}_{T^{\perp}}(\Delta_{Y}-\mathfrak{R})\) are orthogonal to each other, we know \(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}\) is the top-\(r\) SVD of \(C\), \(\psi_{k}(C)=\psi_{k}\left(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}\right)\) for all \(1\leq k\leq r\), and \(\psi_{r+1}(C)=\|\mathcal{P}_{T^{\perp}}(\Delta_{Y}-\mathfrak{R})\|\). In addition, denote the top-\(r\) SVD of \(C+\Delta\) by \(\check{L}_{o}\check{\Sigma}_{o}\check{R}_{o}^{\top}\). Note that
\[\psi_{r+1}(C+\Delta)\leq\psi_{r+1}(C)+||\Delta||\leq\|\mathcal{P}_{T^{\perp}}(\Delta_{Y}-\mathfrak{R})\|+||\Delta||_{F}\leq\frac{5}{8}\lambda_{o}+\frac{\lambda_{o}}{\max\{N_{o}^{4},T_{o}^{4}\}}\leq\frac{3}{4}\lambda_{o}\]
since \(||\Delta||_{F}\leq||\Delta_{Y}||_{F}+\|\mathcal{P}_{T}(\mathfrak{R})\|\leq\frac{\lambda_{o}}{\max\{N_{o}^{4},T_{o}^{4}\}}\) by Lemma A.9.
Hence, we have
\[\psi_{r}(C)-\psi_{r+1}(C+\Delta)\geq\psi_{r}(L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top})-\frac{3}{4}\lambda_{o}\geq\psi_{r}(\Sigma_{o})+\frac{1}{4}\lambda_{o}\geq\frac{\psi_{\min,o}}{2}.\]
Then, because \(\widehat{M}_{o}=\check{L}_{o}\check{\Sigma}_{o}\check{R}_{o}^{\top}\), we can apply Lemma 14 of Chen et al. (2019b) to obtain
\[\left\|\widehat{M}_{o}-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}\right\|_{F}\leq\left(\frac{12||\Sigma_{o}+\lambda_{o}I_{r}||}{\psi_{\min,o}}+1\right)||\Delta||_{F}\lesssim\kappa_{o}||\Delta||_{F}\lesssim\frac{\lambda_{o}}{\max\{N_{o}^{4},T_{o}^{4}\}}.\] (A.7)
Moreover, we can also obtain from (A.6) that
\[\left\|\widehat{X}_{o}\widehat{Z}_{o}^{\top}-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})R_{o}^{\top}\right\|_{F}\lesssim\left\|\widehat{X}_{o}O_{o}-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}||\widehat{Z}_{o}||+\left\|\widehat{Z}_{o}O_{o}-R_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}||\widehat{X}_{o}||\lesssim\sqrt{\kappa_{o}^{5}r}\frac{\sigma}{\max\{N_{o}^{\frac{9}{2}},T_{o}^{\frac{9}{2}}\}},\] (A.8)
where
\[O_{o}=\operatorname*{arg\,min}_{O\in\mathcal{O}^{r\times r}}\sqrt{\left\|\widehat{X}_{o}O-L_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}^{2}+\left\|\widehat{Z}_{o}O-R_{o}(\Sigma_{o}+\lambda_{o}I_{r})^{\frac{1}{2}}\right\|_{F}^{2}}.\]
Then, we get the desired result from (A.7) and (A.8).

Proof of Proposition A.4.: By Propositions A.1, A.2, and A.3, we have the following decomposition:
\[\widehat{m}_{o,it_{o}}-m_{o,it_{o}}=(\widehat{X}_{o,i}^{\top}\widehat{Z}_{o,t_{o}}-X_{o,i}^{\top}Z_{o,t_{o}})+(\widehat{m}_{o,it_{o}}-\widehat{X}_{o,i}^{\top}\widehat{Z}_{o,t_{o}})=X_{o,i}^{\top}(X_{o}^{\top}\Omega_{o,t_{o}}X_{o})^{-1}X_{o}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})e_{t_{o}}+Z_{o,t_{o}}^{\top}(Z_{o}^{\top}\Omega_{o,i}Z_{o})^{-1}Z_{o}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})^{\top}e_{i}+\mathcal{R}_{i}^{X\top}Z_{o,t_{o}}+X_{o,i}^{\top}\mathcal{R}_{t_{o}}^{Z}+e_{i}^{\top}(\widehat{X}_{o}\widehat{H}_{o}-X_{o})(\widehat{Z}_{o}\widehat{H}_{o}-Z_{o})^{\top}e_{t_{o}}+(\widehat{m}_{o,it_{o}}-\widehat{X}_{o,i}^{\top}\widehat{Z}_{o,t_{o}}).\]
First, by Proposition A.1 and the inequality \(\|Z_{o,t_{o}}\|\leq\sqrt{\frac{\kappa_{o}\mu_{o}r}{T_{o}}\psi_{\min,o}}\), we have
\[\max_{i}\left\|\mathcal{R}_{i}^{X\top}Z_{o,t_{o}}\right\|\leq\max_{i}\left\|\mathcal{R}_{i}^{X}\right\|\left\|Z_{o,t_{o}}\right\|\leq C_{X}\left(\frac{\sigma^{2}}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{10}\mu_{o}^{2}r^{2}\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}+\sigma\sqrt{\frac{\kappa_{o}^{8}\mu_{o}^{4}r^{4}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o}^{3},T_{o}^{3}\}}}\right).\]
Similarly, due to Proposition A.2, we have
\[\max_{i}\left\|X_{o,i}^{\top}\mathcal{R}_{t_{o}}^{Z}\right\|\leq\max_{i}\left\|X_{o,i}\right\|\left\|\mathcal{R}_{t_{o}}^{Z}\right\|\leq C_{Z}\left(\frac{\sigma^{2}}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{10}\mu_{o}^{2}r^{2}\max\{N_{o}^{2}\log N_{o},T_{o}^{2}\log T_{o}\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}+\sigma\sqrt{\frac{\kappa_{o}^{8}\mu_{o}^{4}r^{4}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o}^{3},T_{o}^{3}\}}}+\sigma\frac{\vartheta_{o}}{N_{o}}\sqrt{\frac{\mu_{o}^{4}r^{4}\kappa_{o}^{6}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}\right).\]
In addition, by Lemma A.5 with the assertion in Part 6 of the proof
for Proposition A.1 that
\[\max\{\left\|\widehat{X}_{o}B_{o}-\breve{X}_{o}^{d}\right\|,\left\|\widehat{Z}_{o}B_{o}-\breve{Z}_{o}^{d}\right\|\}\lesssim\sqrt{\frac{\kappa_{o}^{4}r}{\psi_{\min,o}}}\frac{\sigma}{\max\{N_{o}^{\frac{7}{2}},T_{o}^{\frac{7}{2}}\}},\]
we obtain
\[\max_{i}\left\|e_{i}^{\top}(\widehat{X}_{o}\widehat{H}_{o}-X_{o})(\widehat{Z}_{o}\widehat{H}_{o}-Z_{o})^{\top}e_{t_{o}}\right\|\leq\left\|\widehat{X}_{o}\widehat{H}_{o}-X_{o}\right\|_{2,\infty}\left\|\widehat{Z}_{o}\widehat{H}_{o}-Z_{o}\right\|_{2,\infty}\leq 2C_{d,\infty}^{2}\frac{\sigma^{2}}{\psi_{\min,o}}\frac{\kappa_{o}^{3}\mu_{o}r\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o},T_{o}\}}.\]
Lastly, we have
\[||\widehat{m}_{o,it_{o}}-\widehat{X}_{o,i}^{\top}\widehat{Z}_{o,t_{o}}||\leq C_{prx}\frac{\sigma}{\max\{N_{o}^{7/2},T_{o}^{7/2}\}}\]
by Proposition A.3. This completes the proof.

### Technical lemmas: Statistical properties of the debiased estimators

This section presents the statistical properties of the debiased estimators. Although this section studies the convergence rates of the nonconvex debiased estimators \((\breve{X}_{o}^{d},\breve{Z}_{o}^{d})\), these results are used frequently when proving the propositions in Section A.2, because the nonconvex debiased estimators are very close to the convex debiased estimators \((\widehat{X}_{o},\widehat{Z}_{o})\), as noted in Part 6 of the proof of Proposition A.1. Recall that
\[\mathcal{F}_{o}^{d,\tau}\coloneqq\begin{bmatrix}X_{o}^{d,\tau}\\ Z_{o}^{d,\tau}\end{bmatrix}\in\mathbb{R}^{(N_{o}+T_{o})\times r},\quad\mathcal{F}_{o}^{d,\tau,(m)}\coloneqq\begin{bmatrix}X_{o}^{d,\tau,(m)}\\ Z_{o}^{d,\tau,(m)}\end{bmatrix}\in\mathbb{R}^{(N_{o}+T_{o})\times r},\quad\mathcal{F}_{o}\coloneqq\begin{bmatrix}X_{o}\\ Z_{o}\end{bmatrix}\in\mathbb{R}^{(N_{o}+T_{o})\times r}.\]

**Lemma A.5**.: _Suppose that Assumptions A.1 - A.4 hold.
With probability at least \(1-O(\min\{N_{o}^{-10},T_{o}^{-10}\})\), the iterates \(\{\mathcal{F}_{o}^{d,\tau}\}_{0\leq\tau\leq\bar{\tau}}\) and \(\{\mathcal{F}_{o}^{d,\tau,(m)}\}_{0\leq\tau\leq\bar{\tau}}\) satisfy_
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|\leq C_{d,op1}\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\left\|X_{o}\right\|,\] (A.9)
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}\right\|\leq C_{d,op2}\frac{\kappa_{o}\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\left\|X_{o}\right\|,\] (A.10)
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}\right\|_{F}\leq C_{d,F}\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\left\|X_{o}\right\|_{F},\] (A.11)
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}\right\|_{2,\infty}\leq C_{d,\infty}\kappa_{o}\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.12)
\[\left\|X_{o}^{d,\tau\top}X_{o}^{d,\tau}-Z_{o}^{d,\tau\top}Z_{o}^{d,\tau}\right\|\leq C_{d,B}\frac{\kappa_{o}^{2}\sigma}{\max\{N_{o}^{9/2},T_{o}^{9/2}\}},\] (A.13)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{d,\tau,(m)}H_{o}^{d,\tau,(m)}-\mathcal{F}_{o}\right\|\leq 2C_{d,op2}\frac{\kappa_{o}\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\left\|X_{o}\right\|,\] (A.14)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{d,\tau,(m)}H_{o}^{d,\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}\leq 2C_{d,\infty}\kappa_{o}\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.15)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}^{d,\tau,(m)}H_{o}^{d,\tau,(m)}\right\|\leq C_{d,3}\kappa_{o}\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.16)
_where \(C_{d,F}\), \(C_{d,op1}\), \(C_{d,op2}\), \(C_{d,\infty}\), \(C_{d,3}\), \(C_{d,B}>0\) are absolute constants, provided that \(\eta_{o}\asymp\frac{1}{\max\{N_{o}^{5},T_{o}^{5}\}\kappa_{o}^{2}\psi_{\max,o}}\) and that \(\bar{\tau}=\max\{N_{o}^{23},T_{o}^{23}\}\)._

Additionally, the following lemma is exploited in Part 1 of the proof of Proposition A.1 to bound a residual term.

**Lemma A.6**.: _Suppose that Assumptions A.1 - A.4 hold. With probability at least \(1-O(\min\{N_{o}^{-10},T_{o}^{-10}\})\), the iterates \(\{(X_{o}^{d,\tau},Z_{o}^{d,\tau})\}_{0\leq\tau\leq\bar{\tau}}\) and \(\{(X_{o}^{d,\tau,(m)},Z_{o}^{d,\tau,(m)})\}_{0\leq\tau\leq\bar{\tau}}\) satisfy_
\[\max_{1\leq m\leq N_{o}}\left\|e_{m}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left[\bar{Z}_{o}^{d,\tau,(m)}\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}-Z_{o}\left(Z_{o}^{\top}Z_{o}\right)^{-1}\right]\right\|_{2}\lesssim\frac{\sigma\sqrt{r}}{\sqrt{\psi_{\min,o}}}\frac{\sigma}{\psi_{\min,o}}\sqrt{\kappa_{o}^{3}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}.\]

Proof of Lemma A.5.: First, by (A.25) of Lemma A.8 and the identity \((X_{o}^{\tau\top}X_{o}^{\tau})^{-1}-(Z_{o}^{\tau\top}Z_{o}^{\tau})^{-1}=(X_{o}^{\tau\top}X_{o}^{\tau})^{-1}(Z_{o}^{\tau\top}Z_{o}^{\tau}-X_{o}^{\tau\top}X_{o}^{\tau})(Z_{o}^{\tau\top}Z_{o}^{\tau})^{-1}\), the balancing error satisfies
\[\left\|\Delta_{balance}^{\tau}\right\|\leq\lambda_{o}\left\|(X_{o}^{\tau\top}X_{o}^{\tau})^{-1}\right\|\left\|X_{o}^{\tau\top}X_{o}^{\tau}-Z_{o}^{\tau\top}Z_{o}^{\tau}\right\|\left\|(Z_{o}^{\tau\top}Z_{o}^{\tau})^{-1}\right\|\lesssim\frac{\lambda_{o}}{\max\{N_{o}^{5},T_{o}^{5}\}}\frac{\kappa_{o}}{\psi_{\min,o}}.\] (A.17)
In addition, by Lemma 13 of Chen et al. (2019b), we have
\[\left\|(I_{r}+\lambda_{o}(X_{o}^{\tau\top}X_{o}^{\tau})^{-1})^{\frac{1}{2}}-I_{r}\right\|\lesssim\frac{\lambda_{o}}{\psi_{\min,o}}\lesssim\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}.\]
Hence, we have (A.9) from the above bounds.
Similarly, we can derive
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}\right\|_{F}\leq\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|_{F}\lesssim\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}||X_{o}||_{F},\]
which is (A.11). For (A.10), note that
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}\right\|\leq\left\|\mathcal{F}_{o}^{d,\tau}\right\|\left\|H_{o}^{d,\tau}-H_{o}^{\tau}\right\|+\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|.\]
Then, by using Lemma 36 of Ma et al. (2020), we have
\[\left\|H_{o}^{d,\tau}-H_{o}^{\tau}\right\|\lesssim\frac{1}{\psi_{\min,o}}\left\|\mathcal{F}_{o}^{d,\tau}-\mathcal{F}_{o}^{\tau}\right\|\left\|\mathcal{F}_{o}\right\|\lesssim\kappa_{o}\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}},\]
which gives (A.10). In addition, by an argument similar to that for Lemma A.8, we can also derive (A.12). For (A.13), notice that
\[\left\|X_{o}^{d,\tau\top}X_{o}^{d,\tau}-Z_{o}^{d,\tau\top}Z_{o}^{d,\tau}\right\|\leq\left\|(I_{r}+\lambda_{o}(X_{o}^{\tau\top}X_{o}^{\tau})^{-1})^{\frac{1}{2}}\right\|\left\|X_{o}^{\tau\top}X_{o}^{\tau}-Z_{o}^{\tau\top}Z_{o}^{\tau}\right\|\left\|(I_{r}+\lambda_{o}(X_{o}^{\tau\top}X_{o}^{\tau})^{-1})^{\frac{1}{2}}\right\|+\left\|(I_{r}+\lambda_{o}(Z_{o}^{\tau\top}Z_{o}^{\tau})^{-1})^{\frac{1}{2}}\right\|\left\|Z_{o}^{\tau\top}Z_{o}^{\tau}\right\|\left\|\Delta_{balance}^{\tau}\right\|.\]
Then, by the above bounds, we can derive (A.13). Using methods similar to those for (A.10) and (A.12), we can derive (A.14) and (A.15) as well. Lastly, we show (A.16). Set \(\mathcal{F}_{0}=\mathcal{F}_{o}\), \(\mathcal{F}_{1}=\mathcal{F}_{o}^{d,\tau}H_{o}^{\tau}\) and \(\mathcal{F}_{2}=\mathcal{F}_{o}^{d,\tau,(m)}Q_{o}^{\tau,(m)}\). Then, the assumptions of Lemma D.23 are satisfied, as noted in Section I of Chen et al. (2019b). So, we can apply Lemma D.23 to obtain
\[\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{d,\tau}-\mathcal{F}_{o}^{d,\tau,(m)}H_{o}^{d,\tau,(m)}\right\|\lesssim\kappa_{o}\left\|\mathcal{F}_{o}^{d,\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{d,\tau,(m)}Q_{o}^{\tau,(m)}\right\|\lesssim\kappa_{o}\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}\right\|\lesssim\kappa_{o}\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}.\]

Proof of Lemma A.6.: Define
\[\Delta^{\tau,(m)}\coloneqq\bar{Z}_{o}^{d,\tau,(m)}\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}-Z_{o}\left(Z_{o}^{\top}Z_{o}\right)^{-1}\]
where \(\bar{Z}_{o}^{d,\tau,(m)}=Z_{o}^{d,\tau,(m)}H_{o}^{d,\tau,(m)}\). Then,
\[\left\|e_{m}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\left[\bar{Z}_{o}^{d,\tau,(m)}\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}-Z_{o}\left(Z_{o}^{\top}Z_{o}\right)^{-1}\right]\right\|_{2}=\left\|\sum_{t=1}^{T_{o}}\omega_{mt}\epsilon_{mt}\Delta_{t,\cdot}^{\tau,(m)}\right\|_{2}.\]
Note that \(\mathbb{E}[\omega_{mt}\epsilon_{mt}\Delta_{t,\cdot}^{\tau,(m)}|\Delta_{t,\cdot}^{\tau,(m)}]=0\) and \(\{\epsilon_{mt}\}_{t\leq T_{o}}\) are independent across \(t\) conditioning on \(\{\Delta_{t,\cdot}^{\tau,(m)}\}_{t\leq T_{o}}\).
Hence, we have by the matrix Bernstein inequality with Claim A.7 that
\[\left\|\sum_{t=1}^{T_{o}}\omega_{mt}\epsilon_{mt}\Delta_{t,\cdot}^{\tau,(m)}\right\|_{2}\lesssim\sqrt{\sigma^{2}||\Delta^{\tau,(m)}||_{F}^{2}\max\{\log N_{o},\log T_{o}\}}+\sigma||\Delta^{\tau,(m)}||_{2,\infty}\max\{\log^{2}N_{o},\log^{2}T_{o}\}\lesssim\sigma\frac{\sqrt{r}}{\sqrt{\psi_{\min,o}}}\frac{\sigma}{\psi_{\min,o}}\sqrt{\kappa_{o}^{3}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}.\]

**Claim A.7**.: _With probability at least \(1-O(\min\{N_{o}^{-10},T_{o}^{-10}\})\), we have for all \(0\leq\tau\leq\bar{\tau}\) and \(1\leq m\leq N_{o}\),_
\[||\Delta^{\tau,(m)}||\lesssim\frac{1}{\sqrt{\psi_{\min,o}}}\frac{\sigma}{\psi_{\min,o}}\sqrt{\kappa_{o}^{3}\max\{N_{o},T_{o}\}},\qquad||\Delta^{\tau,(m)}||_{2,\infty}\lesssim\frac{1}{\sqrt{\psi_{\min,o}}}\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{5}\mu_{o}r\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o},T_{o}\}}}.\]

The proof for the part
\[\left\|e_{m}^{\top}\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})^{\top}\left[\bar{X}_{o}^{d,\tau,(N_{o}+m)}\left(\bar{X}_{o}^{d,\tau,(N_{o}+m)\top}\bar{X}_{o}^{d,\tau,(N_{o}+m)}\right)^{-1}-X_{o}\left(X_{o}^{\top}X_{o}\right)^{-1}\right]\right\|_{2}\]
is similar and therefore omitted for brevity.

Proof of Claim A.7.: By Lemma 12 of Chen et al. (2019b) with Lemma A.5, we have
\[||\Delta^{\tau,(m)}||\lesssim\max\Bigl\{\left\|Z_{o}(Z_{o}^{\top}Z_{o})^{-1}\right\|^{2},\left\|\bar{Z}_{o}^{d,\tau,(m)}\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}\right\|^{2}\Bigr\}\left\|\bar{Z}_{o}^{d,\tau,(m)}-Z_{o}\right\|\lesssim\frac{1}{\psi_{\min,o}}\frac{\kappa_{o}\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\left\|X_{o}\right\|\lesssim\frac{1}{\sqrt{\psi_{\min,o}}}\frac{\kappa_{o}^{\frac{3}{2}}\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}.\]
In addition, because
\[\left\|\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}-(Z_{o}^{\top}Z_{o})^{-1}\right\|\leq\left\|\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}\right\|\left\|\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}-Z_{o}^{\top}Z_{o}\right\|\left\|(Z_{o}^{\top}Z_{o})^{-1}\right\|\lesssim\frac{1}{\psi_{\min,o}}\kappa_{o}^{2}\frac{\sigma}{\psi_{\min,o}}\sqrt{\max\{N_{o},T_{o}\}},\]
we have from Lemma A.5 that
\[\left\|\Delta^{\tau,(m)}\right\|_{2,\infty}\leq\left\|\bar{Z}_{o}^{d,\tau,(m)}\right\|_{2,\infty}\left\|\left(\bar{Z}_{o}^{d,\tau,(m)\top}\bar{Z}_{o}^{d,\tau,(m)}\right)^{-1}-(Z_{o}^{\top}Z_{o})^{-1}\right\|+\left\|\bar{Z}_{o}^{d,\tau,(m)}-Z_{o}\right\|_{2,\infty}\left\|(Z_{o}^{\top}Z_{o})^{-1}\right\|\lesssim\frac{1}{\sqrt{\psi_{\min,o}}}\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\kappa_{o}^{5}\mu_{o}r\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}{\min\{N_{o},T_{o}\}}}.\]

### Technical lemmas: Statistical properties of the nuclear norm penalized estimators and the corresponding non-convex estimator

Lastly, we present the statistical properties of the non-convex estimator \((\breve{X}_{o},\breve{Z}_{o})\). Since this estimator is very close to the nuclear norm penalized estimator \(\widetilde{M}_{o}\), as we will see in Lemma A.9, we can derive the convergence rates of the nuclear norm penalized estimator from this result. Besides, the statistical properties of the debiased estimators in the previous section are largely based on the results for the non-convex estimator in this section. Basically, the results in this section are a modification of Chen et al.
(2020b) for the case where missingness is not at random and occurs only in one column. To save space, we omit the proofs of some lemmas when the proof is a simple modification of that in Chen et al. (2020b); we are willing to provide the full proofs upon request. First, the following lemma gives the statistical properties of the nonconvex estimator that are used in the proofs of the previous sections. Recall that
\[\mathcal{F}_{o}^{\tau}\coloneqq\begin{bmatrix}X_{o}^{\tau}\\ Z_{o}^{\tau}\end{bmatrix}\in\mathbb{R}^{(N_{o}+T_{o})\times r},\quad\mathcal{F}_{o}^{\tau,(m)}\coloneqq\begin{bmatrix}X_{o}^{\tau,(m)}\\ Z_{o}^{\tau,(m)}\end{bmatrix}\in\mathbb{R}^{(N_{o}+T_{o})\times r},\quad\mathcal{F}_{o}\coloneqq\begin{bmatrix}X_{o}\\ Z_{o}\end{bmatrix}\in\mathbb{R}^{(N_{o}+T_{o})\times r}.\]

**Lemma A.8**.: _Suppose that Assumptions A.1 - A.4 hold. With probability at least \(1-O(\min\{N_{o}^{-11},T_{o}^{-11}\})\), the iterates \(\{\mathcal{F}_{o}^{\tau}\}_{0\leq\tau\leq\bar{\tau}}\) and \(\{\mathcal{F}_{o}^{\tau,(m)}\}_{0\leq\tau\leq\bar{\tau}}\) satisfy_
\[\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|_{F}\leq C_{F}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|_{F},\] (A.18)
\[\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|\leq C_{op}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|,\] (A.19)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}\right\|_{F}\leq C_{3}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.20)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\left(\mathcal{F}_{o}^{\tau,(m)}H_{o}^{\tau,(m)}-\mathcal{F}_{o}\right)_{m,\cdot}\right\|_{2}\leq C_{4}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.21)
\[\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|_{2,\infty}\leq C_{\infty}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.22)
\[\left\|X_{o}^{\tau+1}H_{o}^{\tau+1}-X_{o}\right\|_{2,\infty}\leq C_{\infty,X}r^{1/2}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|_{2,\infty},\] (A.23)
\[\left\|Z_{o}^{\tau+1}H_{o}^{\tau+1}-Z_{o}\right\|_{2,\infty}\leq C_{\infty,Z}r^{1/2}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|Z_{o}\right\|_{2,\infty},\] (A.24)
\[\left\|X_{o}^{\tau\top}X_{o}^{\tau}-Z_{o}^{\tau\top}Z_{o}^{\tau}\right\|_{F}\leq C_{B}\kappa_{o}\eta_{o}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\sqrt{\tau}\psi_{\max,o}^{2}\leq C_{B}\frac{\psi_{\max,o}}{\max\{N_{o}^{5},T_{o}^{5}\}},\] (A.25)
\[f(X_{o}^{\tau},Z_{o}^{\tau})\leq f(X_{o}^{\tau-1},Z_{o}^{\tau-1})-\frac{\eta_{o}}{2}\left\|\nabla f(X_{o}^{\tau-1},Z_{o}^{\tau-1})\right\|_{F}^{2},\] (A.26)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau,(m)}H_{o}^{\tau,(m)}\right\|_{F}\leq 5C_{3}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.27)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{\tau,(m)}H_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|\leq 2C_{op}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|,\] (A.28)
\[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}\leq(C_{\infty}\kappa_{o}+C_{3})\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] (A.29)
_where \(C_{F}\), \(C_{op}\), \(C_{3}\), \(C_{4}\), \(C_{\infty}\), \(C_{\infty,X}\), \(C_{\infty,Z}\), \(C_{B}>0\) are absolute constants, provided that \(\eta_{o}\asymp\frac{1}{\max\{N_{o}^{5},T_{o}^{5}\}\kappa_{o}^{2}\psi_{\max,o}}\) and that \(\bar{\tau}=\max\{N_{o}^{23},T_{o}^{23}\}\)._

Proof.: Because the initial estimators, \((X_{o}^{0},Z_{o}^{0})\) and \((X_{o}^{0,(m)},Z_{o}^{0,(m)})\), are set to \((X_{o},Z_{o})\), (A.18) - (A.25) are satisfied when \(\tau=0\). Then, by mathematical induction, Lemmas D.16 - D.20, together with Lemmas D.8 and D.14, show that the iterates \(\{\mathcal{F}_{o}^{\tau}\}_{0\leq\tau\leq\bar{\tau}}\) and \(\{\mathcal{F}_{o}^{\tau,(m)}\}_{0\leq\tau\leq\bar{\tau}}\) satisfy (A.18) - (A.25) with probability at least \(1-O(\min\{N_{o}^{-11},T_{o}^{-11}\})\). In addition, (A.26) - (A.29) are derived from Lemmas D.21 and D.22. The technical lemmas used in this proof are relegated to Section D.

The following lemma shows the proximity between the non-convex estimator and the nuclear norm penalized estimator.

**Lemma A.9**.: _Let \(\tau_{o}^{*}=\operatorname*{arg\,min}_{0\leq\tau\leq\bar{\tau}}||\nabla f(X_{o}^{\tau},Z_{o}^{\tau})||_{F}\). Suppose that Assumptions A.1 - A.4 hold. Then, with probability at least \(1-O(\min\{N_{o}^{-11},T_{o}^{-11}\})\), we have_
\[||\nabla f(X_{o}^{\tau_{o}^{*}},Z_{o}^{\tau_{o}^{*}})||_{F}\leq C_{gr}\frac{1}{\max\{N_{o}^{5},T_{o}^{5}\}}\lambda_{o}\sqrt{\psi_{\min,o}},\] (A.30)
\[\max\Bigl\{\left\|X_{o}^{\tau_{o}^{*}}Z_{o}^{\tau_{o}^{*}\top}-\widetilde{M}_{o}\right\|_{F},\left\|X_{o}^{\tau_{o}^{*}}Z_{o}^{\tau_{o}^{*}\top}-\mathcal{P}_{r}(\widetilde{M}_{o})\right\|_{F}\Bigr\}\leq 4C_{cvx}C_{gr}\frac{\lambda_{o}}{\max\{N_{o}^{5},T_{o}^{5}\}},\] (A.31)
_where \(C_{cvx},C_{gr}>0\) are absolute constants._

Proof.: The inequality (A.30) comes from Lemma D.15. In addition, we have
\[\left\|\mathcal{P}_{r}(\widetilde{M}_{o})-\widetilde{M}_{o}\right\|_{F}\leq\left\|X_{o}^{\tau_{o}^{*}}Z_{o}^{\tau_{o}^{*}\top}-\widetilde{M}_{o}\right\|_{F}\leq 2C_{cvx}C_{gr}\frac{\lambda_{o}}{\max\{N_{o}^{5},T_{o}^{5}\}}\]
from Lemma D.3 with Lemmas A.8, D.4 and D.5, and the inequality (A.30), by setting \((\breve{X}_{o},\breve{Z}_{o})=(X_{o}^{\tau_{o}^{*}}H_{o}^{\tau_{o}^{*}},Z_{o}^{\tau_{o}^{*}}H_{o}^{\tau_{o}^{*}})\). Besides, the inequality (A.31) follows from this inequality.

## Appendix B Proofs of theorems and corollaries in the main text

Using the tools from the previous section, we shall now prove the theorems and corollaries in the main text.

### Proofs for Section 2

**Proof of Theorem 2.1**.
Note that
\[\widetilde{M}-M=(\widetilde{M}-\breve{X}\breve{Z}^{\top})+(\breve{X}\breve{Z}^{\top}-XZ^{\top}).\]
Here, \((\breve{X},\breve{Z})\) is the nonconvex estimator introduced in Section A.1 and \((X,Z)=(UD^{\frac{1}{2}},VD^{\frac{1}{2}})\), where \(UDV^{\top}\) is the SVD of \(M\). Note that Assumptions A.1 - A.4 are satisfied since the number of missing entries \(\vartheta_{o}\) is \(|\Omega^{c}|\) in this case. Then, we have from Lemmas A.8 and A.9 that
\[\left\|\widetilde{M}-M\right\|_{\infty}\leq\left\|\widetilde{M}-\breve{X}\breve{Z}^{\top}\right\|_{\infty}+\left\|\breve{X}\breve{H}-X\right\|_{2,\infty}\left\|\breve{H}^{\top}\breve{Z}^{\top}\right\|_{2,\infty}+\left\|X\right\|_{2,\infty}\left\|\breve{Z}\breve{H}-Z\right\|_{2,\infty}\lesssim\frac{\lambda}{\max\{N^{5},T^{5}\}}+\frac{\sigma\mu r^{\frac{3}{2}}\kappa^{2}\sqrt{\max\{\log N,\log T\}}}{\sqrt{\min\{N,T\}}}\lesssim\frac{\sigma\mu r^{\frac{3}{2}}\kappa^{2}\sqrt{\max\{\log N,\log T\}}}{\sqrt{\min\{N,T\}}},\]
where \(\lambda=C_{\lambda}\sigma\sqrt{\max\{N,T\}}\) for some constant \(C_{\lambda}>0\), since we have by Lemma A.8
\[\left\|X\right\|_{2,\infty}\left\|\breve{Z}\breve{H}-Z\right\|_{2,\infty},\ \left\|\breve{X}\breve{H}-X\right\|_{2,\infty}\left\|\breve{H}^{\top}\breve{Z}^{\top}\right\|_{2,\infty}\lesssim\sqrt{r}\kappa\left(\frac{\sigma\sqrt{\max\{N\log N,T\log T\}}}{\psi_{\min}}+\frac{\lambda}{\psi_{\min}}\right)\left\|X\right\|_{2,\infty}\left\|Z\right\|_{2,\infty}\lesssim\frac{\sigma\mu r^{\frac{3}{2}}\kappa^{2}\sqrt{\max\{\log N,\log T\}}}{\sqrt{\min\{N,T\}}}.\ \ \Box\]

**Proofs of Corollaries 2.2 and 2.3**. First, we prove Corollary 2.2. By Assumption (iii), we know \(N_{0}\leq N_{l}=N_{0}+|\mathcal{G}_{l}|\leq 2N_{0}\). Then, we have by Assumptions (iii) and (iv)
\[\lambda_{\min}\left(\frac{1}{N_{l}}\sum_{i\leq N_{l}}\left(\sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)\geq\lambda_{\min}\left(\frac{1}{N_{l}}\sum_{i\leq N_{0}}\left(\sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)-\left\|\frac{1}{N_{l}}\sum_{i\in\mathcal{G}_{l}}\left(\sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right\|\geq\frac{c}{2}-\frac{\mu r|\mathcal{G}_{l}|}{N_{l}}\geq\frac{c}{4}.\] (B.1)
Similarly, we have \(\lambda_{\max}\left(\frac{1}{N_{l}}\sum_{i\leq N_{l}}\left(\sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)\leq 4C\). Then, using Lemma B.3, we have \(\mu_{l}\lesssim\mu\kappa^{\frac{1}{2}}\), \(\kappa_{l}\lesssim\kappa\), and \(\psi_{O,\min}\asymp\psi_{l,\min}\), where \(\mu_{l}\) and \(\kappa_{l}\) are the incoherence parameter and condition number of the submatrix \(M_{l}\), and \(\psi_{l,\min}\) is the smallest nonzero singular value of \(M_{l}\). Using these relations, we can check that the submatrix \(M_{l}\) satisfies Assumptions A.1 - A.4 under the assumptions of Corollary 2.2. Then, we can derive the bound of \(\left\|\widetilde{M_{l}}-M_{l}\right\|_{\infty}\) in the same way as in the proof of Theorem 2.1 from Lemmas A.8 and A.9. In addition, we replace \(\mu_{l}\) and \(\kappa_{l}\) in the bound of \(\left\|\widetilde{M_{l}}-M_{l}\right\|_{\infty}\) with \(\mu\kappa^{\frac{1}{2}}\) and \(\kappa\) using the above relations from Lemma B.3, and replace \(N_{l}\) with \(N_{0}\) since \(N_{0}\leq N_{l}=N_{0}+|\mathcal{G}_{l}|\leq 2N_{0}\).
Lastly, the bound of \(\left\|\widetilde{M}-M\right\|_{\infty}\) trivially follows from that of \(\left\|\widetilde{M_{l}}-M_{l}\right\|_{\infty}\) since any entry of \(M\) is included in at least one of the \(M_{l}\). Corollary 2.3 can be proved symmetrically in the same way, so we omit its proof. \(\Box\)

**Proof of Corollary 2.4** It is a simple extension of Corollary 2.2, and the proof is the same as that of Corollary 2.2. The only difference is that the dimension of the submatrix \(M_{l}\) becomes \(N_{l}\times T_{l}\), where \(N_{l}=N_{0}+|\mathcal{G}_{l}|\) and \(T_{l}=T_{0}+1\). Here, we have from Assumption (iv) that
\[\lambda_{\min}\left(\frac{1}{T_{l}}\sum_{t\leq T_{l}}\left(\sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)\geq\lambda_{\min}\left(\frac{1}{T_{l}}\sum_{t\leq T_{0}}\left(\sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)-\left\|\frac{1}{T_{l}}\left(\sqrt{T}v_{t_{o}}\right)\left(\sqrt{T}v_{t_{o}}\right)^{\top}\right\|\geq\frac{c}{2}-\frac{\mu r}{T_{l}}\geq\frac{c}{4}\] (B.2)
and \(\lambda_{\max}\left(\frac{1}{T_{l}}\sum_{t\leq T_{l}}\left(\sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)\leq 4C\). Then, by (B.1) and (B.2), we can exploit Lemma B.3. In the bounds of \(\left\|\widetilde{M}_{l}-M_{l}\right\|_{\infty}\), we replace \(\mu_{l}\) and \(\kappa_{l}\) with \(\mu\kappa^{\frac{1}{2}}\) and \(\kappa\) using the results of Lemma B.3, and replace \(N_{l}\) and \(T_{l}\) with \(N_{0}\) and \(T_{0}\) since \(N_{0}\leq N_{l}=N_{0}+|\mathcal{G}_{l}|\leq 2N_{0}\) and \(T_{l}=T_{0}+1\). In addition, the bound of \(\left\|\widetilde{M}-M\right\|_{\infty}\) trivially follows from that of \(\left\|\widetilde{M}_{l}-M_{l}\right\|_{\infty}\). \(\square\)

**Proof of Corollary 2.5** In the case of the estimation of missing entries in the matrix \(M_{d,d^{\prime}}\), the dimension of each submatrix is \(N_{l}\times T_{l}\), where \(N_{l}=N_{d^{\prime}}+|\mathcal{G}_{l}|\) and \(T_{l}=T_{d}+1\). In a way similar to (B.1) and (B.2), we can show
\[\frac{c}{4}\leq\lambda_{\min}\left(\frac{1}{N_{l}}\sum_{i\leq N_{l}}\left(\sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)\leq\lambda_{\max}\left(\frac{1}{N_{l}}\sum_{i\leq N_{l}}\left(\sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)\leq 4C,\]
\[\frac{c}{4}\leq\lambda_{\min}\left(\frac{1}{T_{l}}\sum_{t\leq T_{l}}\left(\sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)\leq\lambda_{\max}\left(\frac{1}{T_{l}}\sum_{t\leq T_{l}}\left(\sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)\leq 4C.\]
Hence, we can exploit Lemma B.3 to replace \(\mu\kappa^{\frac{1}{2}}\), \(\kappa\), \(\psi_{\min,O_{d,d^{\prime}}}\) with \(\mu_{l}\), \(\kappa_{l}\), \(\psi_{\min,l}\), and to replace \(N_{d^{\prime}}\) and \(T_{d}\) with \(N_{l}\) and \(T_{l}\), in our conditions; we can then check that, for each \(l\), Assumptions A.1 - A.4 are satisfied. Then, we derive the bounds of \(\left\|\widetilde{M}_{l}-M_{l}\right\|_{\infty}\) in the same way as in the proof of Theorem 2.1 using Lemmas A.8 and A.9. The bound of \(\left\|\widetilde{M}_{d,d^{\prime}}-M_{d,d^{\prime}}\right\|_{\infty}\) trivially follows from that of \(\left\|\widetilde{M}_{l}-M_{l}\right\|_{\infty}\).
\(\square\) ### Proofs for Section 3 **Proof of Theorem 3.1** First of all, by using the fact from Lemma B.3 that \(\mu_{l}\lesssim\mu\kappa^{\frac{1}{2}}\), \(\kappa_{l}\lesssim\kappa\), \(\psi_{\min,l}\asymp\psi_{\min,O}\) and the relations that \(N_{0}\leq N_{l}=N_{0}+|\mathcal{G}_{l}|\leq 2N_{0}\) and \(T_{l}=T_{0}+1\) w.h.p., we can check that Assumptions A.1 - A.4 are satisfied for each submatrix. Denote by \(l(i)\) the group \(0\leq l\leq L\) where the unit \(i\) is included in. That is, \(i\in\mathcal{G}_{l(i)}\). Then, by Proposition A.4, we have the following decomposition: \[\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_ {i\in\mathcal{G}}(\widehat{m}_{it_{0}}-m_{it_{0}})\] \[=\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{| \mathcal{G}|}\sum_{i\in\mathcal{G}}X_{l(i),i}^{\top}\left(\sum_{j\leq N_{0}}X_ {l(i),j}X_{l(i),j}^{\top}\right)^{-1}\sum_{j\leq N_{0}}\epsilon_{jt_{o}}X_{l(i ),j}}_{:=A}\] \[+\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{| \mathcal{G}|}\sum_{i\in\mathcal{G}}Z_{l(i),t_{o}}^{\top}\left(\sum_{s\leq T_{0} }Z_{l(i),s}Z_{l(i),s}^{\top}\right)^{-1}\sum_{s\leq T_{0}}\epsilon_{is}Z_{l(i ),s}}_{:=B}\] \[+\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{| \mathcal{G}|}\sum_{i\in\mathcal{G}_{0}}Z_{0,t_{o}}^{\top}\left[\left(\sum_{s \leq T_{0}}Z_{0,s}Z_{0,s}^{\top}+Z_{0,t_{o}}Z_{0,t_{0}}^{\top}\right)^{-1} \left(\sum_{s\leq T_{0}}\epsilon_{is}Z_{0,s}+\epsilon_{it_{0}}Z_{0,t_{0}} \right)-\left(\sum_{s\leq T_{0}}Z_{0,s}Z_{0,s}^{\top}\right)^{-1}\sum_{s\leq T _{0}}\epsilon_{is}Z_{0,s}\right]}_{:=\mathcal{R}_{1}}\] \[+\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{| \mathcal{G}|}\sum_{i\in\mathcal{G}}\mathcal{R}_{l(i),i}^{M}}_{:=\mathcal{R}_ {2}}.\] Here, \((X_{l},Z_{l})=(U_{l}D_{l}^{\frac{1}{2}},V_{l}D_{l}^{\frac{1}{2}})\) where \(U_{l}D_{l}V_{l}^{\top}\) is the SVD of \(M_{l}\). \(X_{l,j}\) is the transpose of the row of \(X_{l}\) corresponding to the unit \(j\) and \(Z_{l,s}\) is the transpose of the row of \(Z_{l}\) corresponding to the time period \(s\). 
Because for each \(0\leq l\leq L\), there is an invertible matrix \(H_{l}\) such that \(u_{j}=H_{l}X_{l,j}\), we have
\[A=\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}u_{i}^{\top}\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\sum_{j\leq N_{0}}\epsilon_{jt_{0}}u_{j}.\]
Similarly, we can show that
\[B=\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}v_{t_{o}}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}\sum_{s\leq T_{0}}\epsilon_{is}v_{s}.\]
Define \(a_{j}\coloneqq\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}u_{i}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top}\right)^{-1}u_{j}\), and note that
\[\|a_{j}\|\leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\max_{i}\|u_{i}\|^{2}\psi_{\min}^{-1}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top}\right)\leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\frac{\mu r}{N_{0}}.\]
Hence, we have
\[\left\|\sum_{j\leq N_{0}}\mathbb{E}[a_{j}^{4}\epsilon_{jt_{0}}^{4}]\right\|=\left\|\sum_{j\leq N_{0}}\mathbb{E}[\epsilon_{jt_{0}}^{4}]a_{j}^{4}\right\|\leq\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\frac{\mu^{4}r^{4}}{N_{0}^{3}}.\]
Then, for any \(q>0\), we have by the Cauchy-Schwarz and Markov inequalities that
\[\mathrm{Var}(A)^{-1}\sum_{j\leq N_{0}}\mathbb{E}[(a_{j}\epsilon_{jt_{0}})^{2}1_{\{|a_{j}\epsilon_{jt_{0}}|>q\mathrm{Var}(A)^{1/2}\}}]\leq\frac{1}{\mathrm{Var}(A)q}\sqrt{\sum_{j\leq N_{0}}\mathbb{E}[(a_{j}\epsilon_{jt_{0}})^{4}]}\lesssim\frac{\mu^{2}r^{2}}{N_{0}^{\frac{1}{2}}}=o_{p}(1)\]
since
\[\mathrm{Var}(A)=\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}\geq c\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}N_{0}^{-1}\]
for some constant \(c>0\). Then, we have by the Lindeberg theorem that
\[\mathrm{Var}(A)^{-1/2}A\stackrel{D}{\longrightarrow}\mathcal{N}(0,1).\]
In the same way, we can derive
\[\mathrm{Var}(B)^{-1/2}B\stackrel{D}{\longrightarrow}\mathcal{N}(0,1)\]
where \(\mathrm{Var}(B)=\mathcal{V}_{\mathcal{G}}^{-1}\frac{\sigma^{2}}{|\mathcal{G}|}v_{t_{0}}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}v_{t_{0}}\). Then, because \(A\) and \(B\) are independent, by an argument similar to the proof of Theorem 3 of Bai (2003), we have
\[A+B=\mathrm{Var}(A)^{1/2}(\mathrm{Var}(A)^{-1/2}A)+\mathrm{Var}(B)^{1/2}(\mathrm{Var}(B)^{-1/2}B)\stackrel{D}{\longrightarrow}\mathcal{N}\left(0,1\right)\]
since \(\mathrm{Var}(A)+\mathrm{Var}(B)=1\). In addition, note that the difference between \(\sum_{s\leq T_{0}}Z_{0,s}Z_{0,s}^{\top}+Z_{0,t_{0}}Z_{0,t_{0}}^{\top}\) and \(\sum_{s\leq T_{0}}Z_{0,s}Z_{0,s}^{\top}\) is the single term \(Z_{0,t_{0}}Z_{0,t_{0}}^{\top}\), and that between \(\sum_{s\leq T_{0}}\epsilon_{is}Z_{0,s}+\epsilon_{it_{0}}Z_{0,t_{0}}\) and \(\sum_{s\leq T_{0}}\epsilon_{is}Z_{0,s}\) is just \(\epsilon_{it_{0}}Z_{0,t_{0}}\). Hence, it is straightforward to show that \(\|\mathcal{R}_{1}\|=o_{p}(1)\).
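As a quick numerical sanity check on the asymptotic normality of the leading term \(A\), the following Monte Carlo sketch simulates \(\mathrm{Var}(A)^{-1/2}A\) for one stylized draw of factors \(u_{j}\) and a hypothetical treated group; the factor scaling, group, and noise level below are illustrative assumptions, not quantities from the paper. The standardized statistic should have mean near \(0\) and variance near \(1\).

```python
import numpy as np

rng = np.random.default_rng(1)
N0, r, sigma, reps = 300, 3, 1.0, 5000
U = rng.normal(size=(N0, r)) / np.sqrt(N0)   # stand-ins for the rows u_j
G = np.arange(40)                            # a hypothetical treated group
u_bar = U[G].mean(axis=0)                    # u_bar_G
Suu = U.T @ U                                # sum_j u_j u_j^T
w = U @ np.linalg.solve(Suu, u_bar)          # w_j = u_j^T Suu^{-1} u_bar_G
var_A = sigma**2 * (w @ w)                   # = sigma^2 u_bar^T Suu^{-1} u_bar
draws = (sigma * rng.normal(size=(reps, N0))) @ w / np.sqrt(var_A)
print(draws.mean(), draws.var())             # approx 0 and 1 for large N0
```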
Moreover, note that since \[\mathcal{V}_{\mathcal{G}}=\sigma^{2}\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j \leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}+\frac{\sigma^{2} }{|\mathcal{G}|}v_{t_{0}}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top} \right)^{-1}v_{t_{0}}\geq c\sigma^{2}\left(\frac{1}{N_{0}}+\frac{1}{|\mathcal{ G}|T_{0}}\right)\] for some constant \(c>0\), we have \[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\lesssim\min\{\sqrt{N_{0}},\sqrt{| \mathcal{G}|T_{0}}\}/\sigma.\] Hence, by Proposition A.4, we have with probability at least \(1-O(\min\{N_{0}^{-7},T_{0}^{-7}\})\) that \[\|\mathcal{R}_{2}\| \leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\max_{0\leq l\leq L} \max_{i\in\mathcal{G}_{l}}||\mathcal{R}_{l,i}^{M}||\] \[\leq C_{M}^{\prime}\left(\max_{0\leq l\leq L}\frac{\sigma\kappa_{l }^{5}\mu_{l}r\min\{\sqrt{N_{0}},\sqrt{|\mathcal{G}|T_{0}}\}\max\{N_{l}\log N_{ l},T_{l}\log T_{l}\}}{\psi_{\min,l}\min\{N_{l},T_{l}\}}\right.\] \[+\max_{0\leq l\leq L}\frac{\kappa_{l}^{4}\mu_{l}^{2}r^{2}\min\{\sqrt{N_{0}}, \sqrt{|\mathcal{G}|T_{0}}\}\max\{\sqrt{N_{l}\log N_{l}},\sqrt{T_{l}\log T_{l}}\}}{ \min\{N_{l}^{\frac{3}{2}},T_{l}^{\frac{3}{2}}\}}\] \[+\max_{l\in[L]}\frac{\mu_{l}^{2}r^{2}\kappa_{l}^{3}|\mathcal{G}_{ l}|\max\{\sqrt{N_{l}\log N_{l}},\sqrt{T_{l}\log T_{l}}\}}{\sqrt{N_{l}}\min\{N_{l},T_{l} \}}\bigg{)}\] for an absolute constant \(C_{M}^{\prime}>0\). Then, by Assumptions (i), (ii), and (iii) with the relations that \(\mu_{l}\lesssim\mu\kappa^{\frac{1}{2}}\), \(\kappa_{l}\lesssim\kappa\), \(N_{0}\leq N_{l}\leq 2N_{0}\) and \(T_{l}=T_{0}+1\) w.h.p., we have \(\|\mathcal{R}_{2}\|=o_{p}(1)\). Therefore, \[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\frac{1}{|\mathcal{G}|}\sum_{i\in \mathcal{G}}(\widehat{m}_{it_{0}}-m_{it_{0}})\stackrel{{ D}}{{ \longrightarrow}}\mathcal{N}(0,1).\ \ \Box\] **Proof of Theorem 3.2** By using the fact from Lemma B.3 that \(\mu_{l}\lesssim\mu\kappa^{\frac{1}{2}}\), \(\kappa_{l}\lesssim\kappa\), \(\psi_{\min,l}\asymp\psi_{\min,O_{d_{l}}}\), and the relations that \(N_{0}\leq N_{l}=N_{0}+|\mathcal{G}_{l}|\leq 2N_{0}\) and \(T_{l}=T_{d_{l}}+1\) w.h.p., we can check that Assumptions A.1 - A.4 are satisfied for each \(N_{l}\times T_{l}\) submatrix \(M_{l}\). 
Then, by Proposition A.4, we have the following decomposition:
\[\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}(\widehat{m}_{it_{o}}-m_{it_{o}})=\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}X_{l(i),i}^{\top}\left(\sum_{j\leq N_{0}}X_{l(i),j}X_{l(i),j}^{\top}\right)^{-1}\sum_{j\leq N_{0}}\epsilon_{jt_{o}}X_{l(i),j}}_{:=A}+\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}Z_{l(i),t_{o}}^{\top}\left(\sum_{s\leq T_{d_{l(i)}}}Z_{l(i),s}Z_{l(i),s}^{\top}\right)^{-1}\sum_{s\leq T_{d_{l(i)}}}\epsilon_{is}Z_{l(i),s}}_{:=B}+\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}\mathcal{R}_{l(i),i}^{M}=\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}u_{i}^{\top}\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\sum_{j\leq N_{0}}\epsilon_{jt_{o}}u_{j}}_{=A}+\underbrace{\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}v_{t_{o}}^{\top}\left(\sum_{s\leq T_{d_{l(i)}}}v_{s}v_{s}^{\top}\right)^{-1}\sum_{s\leq T_{d_{l(i)}}}\epsilon_{is}v_{s}}_{=B}+\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}\mathcal{R}_{l(i),i}^{M},\]
with the convention that \(d_{0}=0\). Then, we can represent \(A+B\) as
\[A+B=\sum_{j\leq N}\sum_{s\leq T}\underbrace{\left(P_{j}1_{\{j\leq N_{0},s=t_{o}\}}+\sum_{0\leq l\leq L}Q_{l,s}1_{\{j\in\mathcal{G}_{l},s\leq T_{d_{l}}\}}\right)\epsilon_{js}}_{\coloneqq\mathcal{Y}_{js}},\]
where
\[P_{j}=\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}u_{i}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top}\right)^{-1}u_{j},\quad Q_{l,s}=\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}v_{t_{o}}^{\top}\left(\sum_{s^{\prime}\leq T_{d_{l}}}v_{s^{\prime}}v_{s^{\prime}}^{\top}\right)^{-1}v_{s}.\]
Because \(\{\epsilon_{js}\}_{j\leq N,s\leq T}\) are independent across \(j\) and \(s\), \(A+B\) is a sum of independent random variables, and so we can use the Lindeberg CLT. To check the Lindeberg condition, we first bound \(\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{4}]\).
Note that
\[\|P_{j}\|=\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\left\|\sum_{i\in\mathcal{G}}u_{i}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top}\right)^{-1}u_{j}\right\|\leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\max_{i}\|u_{i}\|^{2}\psi_{\min}^{-1}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top}\right)\lesssim\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\frac{\mu r}{N_{0}}.\]
Hence, we have
\[\left\|\sum_{j\leq N}\sum_{s\leq T}\mathbb{E}[P_{j}^{4}1_{\{j\leq N_{0},s=t_{o}\}}\epsilon_{js}^{4}]\right\|=\left\|\sum_{j\leq N_{0}}\mathbb{E}[\epsilon_{jt_{o}}^{4}]P_{j}^{4}\right\|\lesssim\sigma^{4}N_{0}\max_{j}\|P_{j}\|^{4}\leq\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\frac{\mu^{4}r^{4}}{N_{0}^{3}}.\]
In addition, because \(1_{\{j\in\mathcal{G}_{l^{\prime}},s\leq T_{d_{l^{\prime}}}\}}1_{\{j\in\mathcal{G}_{l},s\leq T_{d_{l}}\}}=0\) when \(l\neq l^{\prime}\), we have
\[\sum_{j\leq N}\sum_{s\leq T}\mathbb{E}\left[\left(\sum_{0\leq l\leq L}Q_{l,s}1_{\{j\in\mathcal{G}_{l},s\leq T_{d_{l}}\}}\right)^{4}\epsilon_{js}^{4}\right]=\sum_{0\leq l\leq L}\sum_{j\leq N}\sum_{s\leq T}Q_{l,s}^{4}1_{\{j\in\mathcal{G}_{l},s\leq T_{d_{l}}\}}\mathbb{E}\left[\epsilon_{js}^{4}\right].\]
For each \(l\), we have
\[\sum_{j\in\mathcal{G}_{l}}\sum_{s\leq T_{d_{l}}}Q_{l,s}^{4}\mathbb{E}\left[\epsilon_{js}^{4}\right]\lesssim\frac{|\mathcal{G}_{l}|}{|\mathcal{G}|}\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\frac{1}{|\mathcal{G}|^{3}}\frac{\mu^{4}r^{4}}{T_{d_{l}}^{3}}\leq\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\frac{1}{(L+1)^{3}}\frac{\mu^{4}r^{4}}{T_{d_{l}}^{3}}\]
because
\[\max_{s}\|Q_{l,s}\|\leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\frac{1}{|\mathcal{G}|}\max_{t}\|v_{t}\|^{2}\psi_{\min}^{-1}\left(\sum_{s\leq T_{d_{l}}}v_{s}v_{s}^{\top}\right)\leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\frac{1}{|\mathcal{G}|}\frac{\mu r}{T_{d_{l}}}.\]
Then, we have
\[\sum_{0\leq l\leq L}\sum_{j\in\mathcal{G}_{l}}\sum_{s\leq T_{d_{l}}}Q_{l,s}^{4}\mathbb{E}\left[\epsilon_{js}^{4}\right]\lesssim\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\mu^{4}r^{4}\frac{1}{(L+1)^{3}}\sum_{0\leq l\leq L}\frac{1}{T_{d_{l}}^{3}}\leq\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\mu^{4}r^{4}\left(\frac{1}{L+1}\sum_{0\leq l\leq L}\frac{1}{T_{d_{l}}}\right)^{3}\lesssim\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\mu^{4}r^{4}\bar{T}^{-3}\]
where \(\bar{T}^{-1}\coloneqq\frac{1}{L+1}\sum_{0\leq l\leq L}T_{d_{l}}^{-1}\). Therefore, we obtain
\[\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{4}]\lesssim\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\frac{\mu^{4}r^{4}}{N_{0}^{3}}+\mathcal{V}_{\mathcal{G}}^{-2}\sigma^{4}\mu^{4}r^{4}\bar{T}^{-3}.\]
Then, for any \(q>0\), we have by the Cauchy-Schwarz and Markov inequalities with Claim B.1,
\[\mathrm{Var}(A+B)^{-1}\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{2}1_{\{|\mathcal{Y}_{js}|>q\mathrm{Var}(A+B)^{1/2}\}}]\leq\frac{1}{\mathrm{Var}(A+B)q}\sqrt{\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{4}]}\lesssim\frac{\mu^{2}r^{2}}{N_{0}^{\frac{1}{2}}}+\frac{\mu^{2}r^{2}N_{0}}{\bar{T}^{\frac{3}{2}}}.\]
Because \(\bar{T}\geq T_{\min}\coloneqq\min_{l}T_{d_{l}}\), we have \(\frac{\mu^{2}r^{2}}{N_{0}^{1/2}}+\frac{\mu^{2}r^{2}N_{0}}{\bar{T}^{3/2}}=o_{p}(1)\) by Assumption (ii). Hence, the Lindeberg condition is satisfied.
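For a rough sense of the rate requirement hidden in this bound, one can tabulate the two terms \(\mu^{2}r^{2}/N_{0}^{1/2}\) and \(\mu^{2}r^{2}N_{0}/\bar{T}^{3/2}\) for a few hypothetical choices of \(\mu\), \(r\), \(N_{0}\), and \(\bar{T}\) (the values below are purely illustrative); the second term forces \(\bar{T}^{3/2}\) to grow faster than \(N_{0}\):

```python
import numpy as np

mu, r = 2.0, 3  # hypothetical incoherence parameter and rank
for N0, T_bar in [(100, 200), (1_000, 2_000), (10_000, 20_000)]:
    term1 = mu**2 * r**2 / np.sqrt(N0)          # first Lindeberg term
    term2 = mu**2 * r**2 * N0 / T_bar**1.5      # second Lindeberg term
    print(f"N0={N0:>6}, T_bar={T_bar:>6}: {term1:.3f} + {term2:.3f}")
```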
**Claim B.1**.: _(i) \(\mathcal{V}_{\mathcal{G}}^{-1}\lesssim\frac{N_{0}}{\sigma^{2}}\) and (ii) \(\mathrm{Var}(A+B)\stackrel{\mathbb{P}}{\longrightarrow}1\)._ Therefore, applying the Lindeberg CLT together with Claim B.1 (ii), we have \(A+B\stackrel{D}{\longrightarrow}\mathcal{N}(0,1)\). Next, we show that \(\left\|\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i \in\mathcal{G}}\mathcal{R}_{l(i),i}^{M}\right\|=o_{p}(1)\). Since \(\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\lesssim\frac{\sqrt{N_{0}}}{\sigma} \asymp\frac{\sqrt{N_{l}}}{\sigma}\) for all \(l\), we have by Proposition A.4, with probability at least \(1-O(\min\{N_{0}^{-7},T_{\min}^{-7}\})\), that \[\left\|\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G }|}\sum_{i\in\mathcal{G}}\mathcal{R}_{l(i),i}^{M}\right\| \leq\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\max_{0\leq l\leq L} \max_{i\in\mathcal{G}_{l}}||\mathcal{R}_{l,i}^{M}||\] \[\leq C_{M}^{\prime}\left(\max_{0\leq l\leq L}\frac{\sigma\kappa_{ l}^{5}\mu_{l}r\sqrt{N_{l}}\max\{N_{l}\log N_{l},T_{l}\log T_{l}\}}{\psi_{\min,l} \min\{N_{l},T_{l}\}}\right.\] \[\qquad\qquad+\max_{0\leq l\leq L}\frac{\kappa_{l}^{4}\mu_{l}^{2}r ^{2}\sqrt{N_{l}}\max\{\sqrt{N_{l}\log N_{l}},\sqrt{T_{l}\log T_{l}}\}}{\min\{N _{l}^{\frac{3}{2}},T_{l}^{\frac{3}{2}}\}}\] \[\qquad\qquad+\left.\max_{1\leq l\leq L}\frac{\mu_{l}^{2}r^{2}\kappa_{ l}^{3}|\mathcal{G}_{l}|\max\{\sqrt{N_{l}\log N_{l}},\sqrt{T_{l}\log T_{l}}\}}{\sqrt{N_{l}} \min\{N_{l},T_{l}\}}\right)\] for an absolute constant \(C_{M}^{\prime}>0\). Then, by Assumptions (i), (ii), and (iii) with the relations \(\mu_{l}\lesssim\mu\kappa^{\frac{1}{2}}\), \(\kappa_{l}\lesssim\kappa\), \(N_{0}\leq N_{l}\leq 2N_{0}\), and \(T_{l}=T_{d_{l}}+1\), we have \(\left\|\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i \in\mathcal{G}}\mathcal{R}_{l(i),i}^{M}\right\|=o_{p}(1)\). Therefore, \[\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}\frac{1}{|\mathcal{G}|}\sum_{i\in \mathcal{G}}(\widehat{m}_{it_{o}}-m_{it_{o}})\stackrel{D}{\longrightarrow}\mathcal{N}(0,1).\ \ \Box\] Proof of Claim B.1.: (i) We have \(\mathcal{V}_{\mathcal{G}}^{-1}\lesssim\frac{N_{0}}{\sigma^{2}}\), because for some constant \(c>0\), \[\mathcal{V}_{\mathcal{G}}\geq\sigma^{2}\bar{u}_{\mathcal{G}}^{\top}\left(\sum_ {j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}\geq\sigma^{2} \left\|\bar{u}_{\mathcal{G}}\right\|^{2}\psi_{\min}\left(\left(\sum_{j\leq N_{0 }}u_{j}u_{j}^{\top}\right)^{-1}\right)\geq c\frac{\sigma^{2}}{N_{0}}.\] (ii) A simple calculation shows that \[\text{Var}(A)=\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}\bar{u}_{\mathcal{G}}^{\top }\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}, \qquad\text{Var}(B)=\mathcal{V}_{\mathcal{G}}^{-1}\frac{\sigma^{2}}{|\mathcal{G}|} \sum_{0\leq l\leq L}\alpha_{l}v_{t_{o}}^{\top}\left(\sum_{s\leq T_{d_{l}}}v_{s} v_{s}^{\top}\right)^{-1}v_{t_{o}},\] where \(\alpha_{l}=\frac{|\mathcal{G}_{l}|}{|\mathcal{G}|}\). Hence, we have \(\text{Var}(A)+\text{Var}(B)=1\). 
In addition, note that \(\text{Cov}(A,B)=\text{Cov}(A,B^{(t_{o})})\) where \[B^{(t_{o})}\coloneqq\frac{\mathcal{V}_{\mathcal{G}}^{-\frac{1}{2}}}{|\mathcal{ G}|}\sum_{j\in\mathcal{G}_{0}}v_{t_{o}}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{ \top}\right)^{-1}\epsilon_{jt_{o}}v_{t_{o}}.\] Then, we have \[\left\|\text{Cov}(A,B^{(t_{o})})\right\| =\left\|\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}v_{t_{o}}^{\top} \left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}v_{t_{o}}\bar{u}_{\mathcal{G}}^{ \top}\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\frac{1}{|\mathcal{G }|}\sum_{j\in\mathcal{G}_{0}}u_{j}\right\|\] \[\leq\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}\max_{s}\|v_{s}\|^{2} \max_{j}\|u_{j}\|^{2}\left\|\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{ -1}\right\|\left\|\left(\sum_{j\leq N_{0}}u_{j}u_{j}^{\top}\right)^{-1}\right\|\] \[\leq\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}\frac{\mu^{2}r^{2}}{N _{0}T_{0}}\stackrel{\mathbb{P}}{\longrightarrow}0.\] Hence, we have \(\text{Var}(A+B)=\text{Var}(A)+\text{Var}(B)+2\text{Cov}(A,B)\stackrel{\mathbb{P}}{\longrightarrow}1\). **Proof of Corollary 3.3.** From the proof of Claim B.1 (ii), we know that \(\mathcal{V}_{\mathcal{G}}=\text{Var}(\tilde{A})+\text{Var}(\tilde{B})\) where \(\widetilde{A}=\mathcal{V}_{\mathcal{G}}^{\frac{1}{2}}A\) and \(\widetilde{B}=\mathcal{V}_{\mathcal{G}}^{\frac{1}{2}}B\). Note that \[\widetilde{A}=\sum_{j\leq N_{0}}\epsilon_{jt_{o}}\left(\frac{1}{|\mathcal{G}|} \sum_{i\in\mathcal{G}}u_{i}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top} \right)^{-1}u_{j}\right)=\sum_{j\leq N_{0}}\epsilon_{jt_{o}}\left(\sum_{0\leq l \leq L}\frac{|\mathcal{G}_{l}|}{|\mathcal{G}|}\frac{1}{|\mathcal{G}_{l}|}\sum_ {i\in\mathcal{G}_{l}}u_{i}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{\top} \right)^{-1}u_{j}\right).\] Hence, we have \[\text{Var}(\tilde{A})=\sigma^{2}\sum_{j\leq N_{0}}\left(\sum_{0\leq l\leq L} \alpha_{l}\bar{u}_{\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{ \top}\right)^{-1}u_{j}\right)^{2}=\sigma^{2}\sum_{j\leq N_{0}}\left(\sum_{0\leq l \leq L}\alpha_{l}\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}X_{ l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right)^{2},\] where \(\bar{X}_{l,\mathcal{G}_{l}}=\frac{1}{|\mathcal{G}_{l}|}\sum_{i\in\mathcal{G}_{ l}}X_{l,i}\). 
In addition, as noted in the proof of Claim B.1 (ii), we have \[\text{Var}(\tilde{B})=\frac{\sigma^{2}}{|\mathcal{G}|}\sum_{0\leq l\leq L} \alpha_{l}Z_{l,t_{o}}^{\top}\left(\sum_{s\leq T_{d_{l}}}Z_{l,s}Z_{l,s}^{\top} \right)^{-1}Z_{l,t_{o}}.\] First, we show that \[\mathcal{V}_{\mathcal{G}}^{-1}\left\|\widehat{\operatorname{Var}}(\tilde{A})- \operatorname{Var}(\tilde{A})\right\|=o_{p}(1)\] where \[\widehat{\operatorname{Var}}(\tilde{A})=\widehat{\sigma}^{2}\sum_{j\leq N_{ 0}}\left(\sum_{0\leq l\leq L}\alpha_{l}\widehat{\bar{X}}_{l,\mathcal{G}_{l}}^{\top} \left(\sum_{k\leq N_{0}}\widehat{X}_{l,k}\widehat{X}_{l,k}^{\top}\right)^{-1} \widehat{X}_{l,j}\right)^{2}.\] Note that \[\left\|\widehat{\operatorname{Var}}(\tilde{A})-\operatorname{ Var}(\tilde{A})\right\| \lesssim\left|\widehat{\sigma}^{2}-\sigma^{2}\right|\sum_{j\leq N_{ 0}}\left(\sum_{0\leq l\leq L}\alpha_{l}\bar{X}_{l,\mathcal{G}_{l}}^{\top} \left(\sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right)^{2}\] \[\quad+\sigma^{2}\sum_{j\leq N_{0}}\left\|\sum_{0\leq l\leq L} \alpha_{l}\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}X_{l,k}X_ {l,k}^{\top}\right)^{-1}X_{l,j}\right\|\] \[\quad\quad\times\left\|\sum_{0\leq l\leq L}\alpha_{l}\left( \widehat{\bar{X}}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}\widehat{X }_{l,k}\widehat{X}_{l,k}^{\top}\right)^{-1}\widehat{X}_{l,j}-\bar{X}_{l,\mathcal{G }_{l}}^{\top}\left(\sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j} \right)\right\|.\] Because \[\left\|\sum_{0\leq l\leq L}\alpha_{l}\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left( \sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right\|\leq\max_{l} \left\|\bar{u}_{\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}u_{k}u_{k}^{ \top}\right)^{-1}u_{j}\right\|\leq\frac{\mu r}{N_{0}},\] we know by Claims B.1 and B.2 that \[\mathcal{V}_{\mathcal{G}}^{-1}\left|\widehat{\sigma}^{2}-\sigma ^{2}\right|\sum_{j\leq N_{0}}\left(\sum_{0\leq l\leq L}\alpha_{l}\bar{X}_{l, \mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top}\right)^{ -1}X_{l,j}\right)^{2}\lesssim\frac{\kappa^{5/2}\mu^{3}r^{3}\max\{\sqrt{N_{0}\log N _{0}},\sqrt{T_{0}\log T_{0}}\}}{\min\{N_{0},T_{0}\}}=o_{p}(1).\] **Claim B.2**.: \(|\widehat{\sigma}^{2}-\sigma^{2}|\lesssim\sigma^{2}\frac{\kappa^{5/2}\mu r\max \{\sqrt{N_{0}\log N_{0}},\sqrt{T_{0}\log T_{0}}\}}{\min\{N_{0},T_{0}\}}\)_._ Next, we want to bound the following term: \[\left\|\sum_{0\leq l\leq L}\alpha_{l}\left(\widehat{\bar{X}}_{l,\mathcal{ G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}\widehat{X}_{l,k}\widehat{X}_{l,k}^{ \top}\right)^{-1}\widehat{X}_{l,j}-\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left( \sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right)\right\| \leq\max_{l}\left\|\widehat{\bar{X}}_{l,\mathcal{G}_{l}}^{ \top}\left(\sum_{k\leq N_{0}}\widehat{X}_{l,k}\widehat{X}_{l,k}^{\top}\right)^{ -1}\widehat{X}_{l,j}-\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0} }X_{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right\|.\] Decomposing the difference via the triangle inequality, the leading term is bounded by \[\max_{l}\left\|\widehat{X}_{l}\widehat{H}_{l}-X_{l} \right\|_{2,\infty}\left\|\left(\sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top}\right)^ {-1}X_{l,j}\right\|,\] and bounding the remaining terms in the same manner yields \[\left\|\sum_{0\leq l\leq L}\alpha_{l}\left(\widehat{\bar{X}}_{l,\mathcal{G}_{l}}^{ \top}\left(\sum_{k\leq N_{0}}\widehat{X}_{l,k}\widehat{X}_{l,k}^{\top}\right)^{- 1}\widehat{X}_{l,j}-\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}X _{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right)\right\|\lesssim\max_{0\leq l\leq L}\sigma\frac{\kappa_{l}^{3}\mu_{l}^{2}r^{2}N_{0}\max \{\sqrt{N_{l}\log N_{l}},\sqrt{T_{l}\log T_{l}}\}}{N_{l}\min\{N_{l},T_{l}\}}.\] 
Then, we have \[\mathcal{V}_{\mathcal{G}}^{-1}\sigma^{2}\sum_{j\leq N_{0}}\left\|\sum_{0\leq l \leq L}\alpha_{l}\bar{X}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}X _{l,k}X_{l,k}^{\top}\right)^{-1}X_{l,j}\right\|\left\|\sum_{0\leq l\leq L} \alpha_{l}\left(\widehat{\bar{X}}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N _{0}}\widehat{X}_{l,k}\widehat{X}_{l,k}^{\top}\right)^{-1}\widehat{X}_{l,j}- \bar{X}_{l,\mathcal{G}_{l}}^{\top}\left(\sum_{k\leq N_{0}}X_{l,k}X_{l,k}^{\top} \right)^{-1}X_{l,j}\right)\right\|=o_{p}(1),\] so that \(\mathcal{V}_{\mathcal{G}}^{-1}\left\|\widehat{\operatorname{Var}}(\tilde{A})-\operatorname{Var}(\tilde{A})\right\|=o_{p}(1)\); the term \(\widehat{\operatorname{Var}}(\tilde{B})\) is handled analogously, which gives the desired result. Proof of Claim B.2.: Note that \[\left|\frac{1}{N_{0}T_{0}}\sum_{i\leq N_{0},t\leq T_{0}}\epsilon_{it}^{2}- \sigma^{2}\right|\lesssim\sigma^{2}\left(N_{0}T_{0}\right)^{-\frac{1}{2}}\log^{\frac{1}{2}}(N_{0 }T_{0}).\] Since the first term (the bound displayed in the statement of Claim B.2, driven by the estimation error of \(\widehat{m}_{it}\)) dominates the second term using the relations \(\mu_{0}\lesssim\mu\kappa^{\frac{1}{2}}\) and \(\kappa_{0}\lesssim\kappa\), we have the desired result. ### Relations between the eigenvalues and eigenvectors of the full matrix and its submatrices Lastly, we present a lemma that relates the eigenvalues and eigenvectors of the full matrix to those of its submatrices. **Lemma B.3**.: _(i) Let \(M=(m_{it})_{1\leq i\leq N,1\leq t\leq T}\) be an \(N\times T\) matrix of rank \(r\) and \(M_{o}=(m_{it})_{i\in\mathcal{I}_{o},t\in\mathcal{T}_{o}}\) be a submatrix of \(M\) where \(|\mathcal{I}_{o}|=N_{o}\) and \(|\mathcal{T}_{o}|=T_{o}\). The SVD of \(M\) is \(UDV^{\top}\), and the \(i\)-th row of \(U\) is \(u_{i}^{\top}\) and the \(t\)-th row of \(V\) is \(v_{t}^{\top}\). In addition, \(\mu\), \(\kappa\) denote the incoherence parameter and the condition number of \(M\), and \(\mu_{o},\kappa_{o}\) denote those of \(M_{o}\). If there are constants \(C,c>0\) such that_ \[c \leq\psi_{r}\left(\frac{1}{N_{o}}\sum_{i\in\mathcal{I}_{o}}\left( \sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)\leq\psi_{1}\left( \frac{1}{N_{o}}\sum_{i\in\mathcal{I}_{o}}\left(\sqrt{N}u_{i}\right)\left(\sqrt {N}u_{i}\right)^{\top}\right)\leq C,\] \[c \leq\psi_{r}\left(\frac{1}{T_{o}}\sum_{t\in\mathcal{T}_{o}}\left( \sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)\leq\psi_{1}\left( \frac{1}{T_{o}}\sum_{t\in\mathcal{T}_{o}}\left(\sqrt{T}v_{t}\right)\left(\sqrt {T}v_{t}\right)^{\top}\right)\leq C,\] _we have \(\mu_{o}\lesssim\mu\kappa^{1/2}\) and \(\kappa_{o}\lesssim\kappa\). (ii) Let \(M_{1}=(m_{it})_{i\in\mathcal{I}_{1},t\in\mathcal{T}_{1}}\) and \(M_{2}=(m_{it})_{i\in\mathcal{I}_{2},t\in\mathcal{T}_{2}}\) be submatrices of \(M\) where \(|\mathcal{I}_{1}|=N_{1}\), \(|\mathcal{I}_{2}|=N_{2}\), \(|\mathcal{T}_{1}|=T_{1}\), and \(|\mathcal{T}_{2}|=T_{2}\). If there are constants \(C,c>0\) such that for all \(l\in\{1,2\}\),_ \[c \leq\psi_{r}\left(\frac{1}{N_{l}}\sum_{i\in\mathcal{I}_{l}}\left( \sqrt{N}u_{i}\right)\left(\sqrt{N}u_{i}\right)^{\top}\right)\leq\psi_{1}\left( \frac{1}{N_{l}}\sum_{i\in\mathcal{I}_{l}}\left(\sqrt{N}u_{i}\right)\left(\sqrt {N}u_{i}\right)^{\top}\right)\leq C,\] \[c \leq\psi_{r}\left(\frac{1}{T_{l}}\sum_{t\in\mathcal{T}_{l}}\left( \sqrt{T}v_{t}\right)\left(\sqrt{T}v_{t}\right)^{\top}\right)\leq\psi_{1}\left( \frac{1}{T_{l}}\sum_{t\in\mathcal{T}_{l}}\left(\sqrt{T}v_{t}\right)\left(\sqrt {T}v_{t}\right)^{\top}\right)\leq C,\] _we have \(\frac{\sqrt{N_{1}T_{1}}}{\psi_{1,\min}}\asymp\frac{\sqrt{N_{2}T_{2}}}{\psi_{2,\min}}\) where \(\psi_{l,\min}\) is the smallest singular value of \(M_{l}\)._ Proof of Lemma B.3.: (i) Without loss of generality, assume that \(\mathcal{I}_{o}=\{1,\cdots,N_{o}\}\) and \(\mathcal{T}_{o}=\{1,\cdots,T_{o}\}\). Let the SVD of \(M_{o}\) be \(U_{o}D_{o}V_{o}^{\top}\). Then, we can write \[M_{it}=u_{i}^{\top}Dv_{t}=u_{o,i}^{\top}D_{o}v_{o,t}\] for \(i\leq N_{o}\) and \(t\leq T_{o}\). 
In addition, let \(B_{sub}=U_{sub}D^{1/2}\) where \(U_{sub}=[u_{1},\ldots,u_{N_{o}}]^{\top}\) and \(F_{sub}=V_{sub}D^{1/2}\) where \(V_{sub}=[v_{1},\ldots,v_{T_{o}}]^{\top}\). Then, we have \(M_{o}=B_{sub}F_{sub}^{\top}\). Define \[L^{*} =\left(B_{sub}^{\top}B_{sub}\right)^{1/2}\left(F_{sub}^{\top}F_{ sub}\right)\left(B_{sub}^{\top}B_{sub}\right)^{1/2}\] \[=D^{1/4}\left(U_{sub}^{\top}U_{sub}\right)^{1/2}D^{1/4}D^{1/2} \left(V_{sub}^{\top}V_{sub}\right)D^{1/2}D^{1/4}\left(U_{sub}^{\top}U_{sub} \right)^{1/2}D^{1/4}.\] Let \(G_{L^{*}}\) be an \(r\times r\) matrix whose columns are the eigenvectors of \(L^{*}\) such that \(\Lambda_{L^{*}}=G_{L^{*}}^{\top}L^{*}G_{L^{*}}\) is a descending-order diagonal matrix of the eigenvalues of \(L^{*}\). Define \[H_{u}=\left(B_{sub}^{\top}B_{sub}\right)^{-1/2}G_{L^{*}}=D^{-1/4}\left(U_{sub}^{ \top}U_{sub}\right)^{-1/2}D^{-1/4}G_{L^{*}}.\] Note that \[\left(B_{sub}F_{sub}^{\top}F_{sub}B_{sub}^{\top}\right)B_{sub}H_{u} =B_{sub}\left(B_{sub}^{\top}B_{sub}\right)^{-1/2}\left(B_{sub}^{ \top}B_{sub}\right)^{1/2}\left(F_{sub}^{\top}F_{sub}\right)\left(B_{sub}^{ \top}B_{sub}\right)^{1/2}\left(B_{sub}^{\top}B_{sub}\right)^{1/2}H_{u}\] \[=B_{sub}\left(B_{sub}^{\top}B_{sub}\right)^{-1/2}L^{*}G_{L^{*}}\] \[=B_{sub}\left(B_{sub}^{\top}B_{sub}\right)^{-1/2}G_{L^{*}} \Lambda_{L^{*}}\] \[=B_{sub}H_{u}\Lambda_{L^{*}}.\] In addition, we have \[(B_{sub}H_{u})^{\top}B_{sub}H_{u}=H_{u}^{\top}B_{sub}^{\top}B_{sub}H_{u}=G_{L ^{*}}^{\top}\left(B_{sub}^{\top}B_{sub}\right)^{-1/2}B_{sub}^{\top}B_{sub} \left(B_{sub}^{\top}B_{sub}\right)^{-1/2}G_{L^{*}}=I_{r}.\] Hence, the columns of \(B_{sub}H_{u}\) are the eigenvectors of \(\left(B_{sub}F_{sub}^{\top}F_{sub}B_{sub}^{\top}\right)=M_{o}M_{o}^{\top}\) corresponding to the eigenvalues \(\Lambda_{L^{*}}\); that is, \(B_{sub}H_{u}\) collects the left singular vectors of \(M_{o}\), namely \(U_{o}\). Then, since \[U_{o}=B_{sub}H_{u}=U_{sub}D^{1/2}D^{-1/4}\left(U_{sub}^{\top}U_{sub}\right)^{ -1/2}D^{-1/4}G_{L^{*}}=U_{sub}D^{1/4}\left(U_{sub}^{\top}U_{sub}\right)^{-1/2} D^{-1/4}G_{L^{*}},\] (B.3) we have the following incoherence condition for the submatrix: \[\max_{i}\left\|u_{o,i}\right\|=\max_{i}\left\|e_{i}^{\top}U_{o}\right\|\leq \max_{i}\left\|e_{i}^{\top}U_{sub}\right\|\left\|D^{1/4}\right\|\left\|D^{-1/4 }\right\|\left\|\left(U_{sub}^{\top}U_{sub}\right)^{-1/2}\right\|\leq\frac{ \mu_{o}^{1/2}r^{1/2}}{\sqrt{N_{o}}}\] where \(\mu_{o}=C\mu\kappa^{1/2}\) for some constant \(C>0\). Similarly, we obtain \(\max_{t}\left\|v_{o,t}\right\|\leq\frac{\mu_{o}^{1/2}r^{1/2}}{\sqrt{T_{o}}}\) where \(\mu_{o}=C\mu\kappa^{1/2}\) for some constant \(C>0\). Hence, the incoherence parameter for the submatrix \(M_{o}\) is \(C\mu\kappa^{1/2}\) for some constant \(C>0\). Note that \[M_{o}=U_{o}D_{o}V_{o}^{\top}=U_{sub}DV_{sub}^{\top}\Longrightarrow D_{o}=U_{ o}^{\top}U_{sub}DV_{sub}^{\top}V_{o}.\] Then, by using the relation (B.3), we have \[D_{o} =U_{o}^{\top}(U_{o}G_{L^{*}}^{\top}D^{1/4}\left(U_{sub}^{\top}U_{ sub}\right)^{1/2}D^{-1/4})D(D^{-1/4}\left(V_{sub}^{\top}V_{sub}\right)^{1/2}D^{1/4}G_{ R^{*}}V_{o}^{\top})V_{o}\] \[=G_{L^{*}}^{\top}D^{1/4}\left(U_{sub}^{\top}U_{sub}\right)^{1/2} D^{1/2}\left(V_{sub}^{\top}V_{sub}\right)^{1/2}D^{1/4}G_{R^{*}},\] where \(G_{R^{*}}\) is an \(r\times r\) eigenvector matrix of \(R^{*}=\left(F_{sub}^{\top}F_{sub}\right)^{1/2}\left(B_{sub}^{\top}B_{sub}\right) \left(F_{sub}^{\top}F_{sub}\right)^{1/2}\). 
Then, we have \[\psi_{1}(D_{o})\leq\left\|D^{1/4}\right\|^{2}\left\|D^{1/2}\right\| \left\|\left(U_{sub}^{\top}U_{sub}\right)^{1/2}\right\|\left\|\left(V_{sub}^{ \top}V_{sub}\right)^{1/2}\right\|\lesssim\psi_{1}(D)\frac{\sqrt{N_{o}T_{o}}}{ \sqrt{NT}},\] (B.4) \[\psi_{r}(D_{o})\geq\lambda_{\min}^{2}(D^{1/4})\lambda_{\min}(D^{1 /2})\lambda_{\min}\left(\left(U_{sub}^{\top}U_{sub}\right)^{1/2}\right) \lambda_{\min}\left(\left(V_{sub}^{\top}V_{sub}\right)^{1/2}\right)\gtrsim \psi_{r}(D)\frac{\sqrt{N_{o}T_{o}}}{\sqrt{NT}}.\] So, the condition number of the submatrix can be bounded as \(\kappa_{o}=\frac{\psi_{1}(D_{o})}{\psi_{r}(D_{o})}\lesssim\frac{\psi_{1}(D)} {\psi_{r}(D)}=\kappa\). (ii) By using the relation (B.4) with the fact that \(\kappa_{2}\lesssim\kappa\), we know \[\psi_{1,\min}^{-1}\lesssim\psi_{\min}^{-1}\frac{\sqrt{NT}}{\sqrt{N_{1}T_{1}}} =\kappa^{-1}\psi_{\max}^{-1}\frac{\sqrt{NT}}{\sqrt{N_{1}T_{1}}}\lesssim\kappa ^{-1}\psi_{2,\max}^{-1}\frac{\sqrt{N_{2}T_{2}}}{\sqrt{N_{1}T_{1}}}\lesssim \kappa_{2}^{-1}\psi_{2,\max}^{-1}\frac{\sqrt{N_{2}T_{2}}}{\sqrt{N_{1}T_{1}}}= \psi_{2,\min}^{-1}\frac{\sqrt{N_{2}T_{2}}}{\sqrt{N_{1}T_{1}}}.\] Similarly, we can show \(\psi_{2,\min}^{-1}\lesssim\psi_{1,\min}^{-1}\frac{\sqrt{N_{1}T_{1}}}{\sqrt{N_{ 2}T_{2}}}\). Hence, we have that \(\frac{\sqrt{N_{1}T_{1}}}{\psi_{1,\min}}\asymp\frac{\sqrt{N_{2}T_{2}}}{\psi_{2, \min}}\). ## Appendix C Formal inferential theory for the treatment effect estimation in Section 4 This section provides the formal inferential theory for the group averaged treatment effects \(\mu_{t_{0}}^{(d)}\) and \(\theta_{t_{0}}^{(d)}\) defined in Section 4. The assumption on the noise is the same as that in Section 2, and the singular vectors of \(M\) are incoherent in that there is a \(\mu\geq 1\) such that \(||U||_{2,\infty}\leq\sqrt{\mu r/N}\), \(||V||_{2,\infty}\leq\sqrt{\mu r/(T+3T_{1})}\). Denote by \(M_{O_{(d)}}=(m_{it}^{(0)})_{i\in\mathcal{I}_{d},t\leq T_{0}}\), and denote its smallest nonzero singular value by \(\psi_{\min,O_{(d)}}\). In addition, denote by \(\{\mathcal{G}_{(d),l}\}_{0\leq l\leq L_{d}}\) the subgroups of \(\mathcal{G}\) used for the estimation of \(\{m_{it_{0}}^{(d)}\}_{i\in\mathcal{G}}\). Then, we have the following asymptotic normality of the group averaged estimator. **Theorem C.1**.: _Assume that for any \(0\leq d\leq 3\) and \(l=1,\cdots,L_{d}\),_ (i) \(\sigma\kappa^{\frac{23}{4}}\mu^{\frac{3}{2}}r^{\frac{3}{2}}\sqrt{N_{d}}\max\{N _{d}\sqrt{\log N_{d}},T_{0}\sqrt{\log T_{0}}\}=o_{p}\left(\psi_{\min,O_{(d)}} \min\{N_{d},T_{0}\}\right)\)_;_ (ii) \(\kappa^{\frac{11}{2}}\mu^{3}r^{3}\sqrt{N_{d}}\max\{\sqrt{N_{d}\log^{3}N_{d}}, \sqrt{T_{0}\log^{3}T_{0}}\}=o_{p}\left(\min\{N_{d}^{\frac{3}{2}},T_{0}^{\frac{ 3}{2}}\}\right)\)_;_ (iii) 
\(|\mathcal{G}_{(d),l}|\kappa^{\frac{17}{4}}\mu^{\frac{5}{2}}r^{\frac{5}{2}}\max \{\sqrt{N_{d}\log N_{d}},\sqrt{T_{0}\log T_{0}}\}=o_{p}\left(\sqrt{N_{d}}\min \{N_{d},T_{0}\}\right)\)_;_ (iv) _there are constants_ \(C,c>0\) _such that_ \[c\leq\lambda_{\min}\left(\frac{N}{N_{d}}\sum_{i\in\mathcal{I}_{d}}u_ {i}u_{i}^{\top}\right)\leq\lambda_{\max}\left(\frac{N}{N_{d}}\sum_{i\in \mathcal{I}_{d}}u_{i}u_{i}^{\top}\right)\leq C,\] \[c\leq\lambda_{\min}\left(\frac{T_{M}}{T_{0}}\sum_{t\leq T_{0}}v_ {t}v_{t}^{\top}\right)\leq\lambda_{\max}\left(\frac{T_{M}}{T_{0}}\sum_{t\leq T _{0}}v_{t}v_{t}^{\top}\right)\leq C,\] _where_ \(T_{M}=T+3T_{1}\) _is the number of columns of_ \(M\)_;_ (v) \(\sqrt{N}\left\|\bar{u}_{\mathcal{G}}\right\|\geq c\) _for some constant_ \(c>0\)_, where_ \(\bar{u}_{\mathcal{G}}=|\mathcal{G}|^{-1}\sum_{i\in\mathcal{G}}u_{i}\)_._ _Then, we have_ \[\mathcal{V}_{\mu}^{-\frac{1}{2}}\left(\widehat{\mu}_{t_{0}}^{(d)}-\mu_{t_{0}}^ {(d)}\right)\stackrel{D}{\longrightarrow}\mathcal{N}(0,1),\qquad \mathcal{V}_{\theta}^{-\frac{1}{2}}\left(\widehat{\theta}_{t_{0}}^{(d)}- \theta_{t_{0}}^{(d)}\right)\stackrel{D}{\longrightarrow} \mathcal{N}(0,1),\] _where_ \(\mathcal{V}_{\mu}=\mathcal{V}_{\mathcal{G}}(d,0)\) _and \(\mathcal{V}_{\theta}=\mathcal{V}_{\mathcal{G}}(d,d-1)\) with_ \[\mathcal{V}_{\mathcal{G}}(d,d^{\prime})= \sigma^{2}\bar{u}_{\mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I }_{d}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{G}}+\sigma^{2}\bar{u}_{ \mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I}_{d^{\prime}}}u_{j}u_{j}^{\top}\right)^{- 1}\bar{u}_{\mathcal{G}}\] \[+\frac{\sigma^{2}}{|\mathcal{G}|}\left(v_{(d\cdot T_{1}+t_{0})}-v _{(d^{\prime}\cdot T_{1}+t_{0})}\right)^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{ s}^{\top}\right)^{-1}\left(v_{(d\cdot T_{1}+t_{0})}-v_{(d^{\prime}\cdot T_{1}+t_{0}) }\right).\] For completeness, we provide the variance estimator. For each \(0\leq d\leq 3\) and \(0\leq l\leq L_{d}\), denote by \(\left(\widehat{X}_{l}^{(d)},\widehat{Z}_{l}^{(d)}\right)\) the debiased estimators derived from \(\tilde{Y}_{l}^{(d)}\), which is the submatrix of \(\tilde{Y}^{(d)}\) constructed for the estimation of \(\{m_{it_{0}}^{(d)}\}_{i\in\mathcal{G}_{(d),l}}\). In addition, \(\widehat{X}_{l,j}^{(d)}\) denotes the row of \(\widehat{X}_{l}^{(d)}\) that corresponds to unit \(j\), and \(\widehat{Z}_{l,s}^{(d)}\) denotes the row of \(\widehat{Z}_{l}^{(d)}\) that corresponds to the \(s\)-th column of \(M\). **Corollary C.2** (Feasible CLT of Theorem C.1).: _Suppose the assumptions in Theorem C.1 hold. In addition, assume that for all \(0\leq d\leq 3\), \(\frac{\sigma}{\psi_{\min,O_{(d)}}}\frac{\kappa^{5}\mu^{4}r^{4}N_{d}\max\{\sqrt {N_{d}}\log N_{d},\sqrt{T_{0}}\log T_{0}\}}{\min\{N_{d},T_{0}\}}\stackrel{ \mathbb{P}}{\longrightarrow}0\). 
Then,_ \[\widehat{\mathcal{V}}_{\mu}^{-\frac{1}{2}}\left(\widehat{\mu}_{t_{0}}^{(d)}- \mu_{t_{0}}^{(d)}\right)\stackrel{D}{\longrightarrow} \mathcal{N}(0,1),\qquad \widehat{\mathcal{V}}_{\theta}^{-\frac{1}{2}}\left(\widehat{\theta}_{t_{0} }^{(d)}-\theta_{t_{0}}^{(d)}\right)\stackrel{D}{ \longrightarrow}\mathcal{N}(0,1),\] _where_ \(\widehat{\mathcal{V}}_{\mu}=\widehat{\mathcal{V}}_{\mathcal{G}}(d,0)\) _and \(\widehat{\mathcal{V}}_{\theta}=\widehat{\mathcal{V}}_{\mathcal{G}}(d,d-1)\) with_ \[\widehat{\mathcal{V}}_{\mathcal{G}}(d,d^{\prime})= \sum_{\delta\in\{d,d^{\prime}\}}\widehat{\sigma}^{2}\sum_{i\in \mathcal{I}_{\delta}}\left(\sum_{0\leq l\leq L_{\delta}}\alpha_{l}^{(\delta)} \widehat{X}_{\mathcal{G}_{(\delta),l}}^{\top}\left(\sum_{j\in\mathcal{I}_{ \delta}}\widehat{X}_{l,j}^{(\delta)}\widehat{X}_{l,j}^{(\delta)\top}\right)^{- 1}\widehat{X}_{l,i}^{(\delta)}\right)^{2}\] \[+\sum_{\delta\in\{d,d^{\prime}\}}\frac{\widehat{\sigma}^{2}}{| \mathcal{G}|}\widehat{Z}_{0,(\delta\cdot T_{1}+t_{o})}^{(\delta)\top}\left( \sum_{s\leq T_{0}}\widehat{Z}_{0,s}^{(\delta)}\widehat{Z}_{0,s}^{(\delta) \top}\right)^{-1}\widehat{Z}_{0,(\delta\cdot T_{1}+t_{o})}^{(\delta)}\] \[-2\frac{\widehat{\sigma}^{2}}{|\mathcal{G}|}\sum_{s\leq T_{0}}\left(\widehat{Z}^{(d) \top}_{0,(d\cdot T_{1}+t_{o})}\left(\sum_{s\leq T_{0}}\widehat{Z}^{(d)}_{0,s} \widehat{Z}^{(d)\top}_{0,s}\right)^{-1}\widehat{Z}^{(d)}_{0,s}\right)\left( \widehat{Z}^{(d^{\prime})\top}_{0,s}\left(\sum_{s\leq T_{0}}\widehat{Z}^{(d^{ \prime})}_{0,s}\widehat{Z}^{(d^{\prime})\top}_{0,s}\right)^{-1}\widehat{Z}^{(d ^{\prime})}_{0,(d^{\prime}\cdot T_{1}+t_{o})}\right),\] \(\alpha_{l}^{(d)}=\frac{|\mathcal{G}_{(d),l}|}{|\mathcal{G}|}\), \(\widehat{\sigma}^{2}=\frac{1}{NT_{0}}\sum_{i\leq N,t\leq T_{0}}\widehat{\epsilon}^{2}_ {it}\), and \(\widehat{\epsilon}_{it}=y_{it}-x_{it}^{\top}\beta-\widehat{m}_{it}^{(0)}\). In addition, \(\widehat{X}_{\mathcal{G}_{(d),l}}=\frac{1}{|\mathcal{G}_{(d),l}|}\sum_{i\in \mathcal{G}_{(d),l}}\widehat{X}^{(d)}_{l,i}\). ### Proof of Theorem C.1 (i) Case 1 (\(\widehat{\mu}_{t_{0}}^{(d)}\)): Following the proof of Theorem 3.1, we have the decomposition: \[\frac{\mathcal{V}_{\mu}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in \mathcal{G}}\left(\widehat{m}_{it_{o}}^{(d)}-m_{it_{o}}^{(d)}\right)-\frac{ \mathcal{V}_{\mu}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}\left( \widehat{m}_{it_{o}}^{(0)}-m_{it_{o}}^{(0)}\right)\] \[=\underbrace{\mathcal{V}_{\mu}^{-\frac{1}{2}}\bar{u}_{\mathcal{G }}^{\top}\left(\sum_{j\in\mathcal{I}_{d}}u_{j}u_{j}^{\top}\right)^{-1}\sum_{j \in\mathcal{I}_{d}}u_{j}\epsilon_{jt_{o}}}_{\coloneqq A^{(d)}}+\underbrace{ \frac{\mathcal{V}_{\mu}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}v_ {(d\cdot T_{1}+t_{o})}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}\sum_{s \leq T_{0}}v_{s}\epsilon_{is}}_{\coloneqq B^{(d)}}\] \[\quad-\underbrace{\mathcal{V}_{\mu}^{-\frac{1}{2}}\bar{u}_{\mathcal{G }}^{\top}\left(\sum_{j\in\mathcal{I}_{0}}u_{j}u_{j}^{\top}\right)^{-1}\sum_{j \in\mathcal{I}_{0}}u_{j}\epsilon_{jt_{o}}}_{\coloneqq A^{(0)}}-\underbrace{ \frac{\mathcal{V}_{\mu}^{-\frac{1}{2}}}{|\mathcal{G}|}\sum_{i\in\mathcal{G}}v_{t_{o}}^{\top}\left(\sum_{s \leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}\sum_{s\leq T_{0}}v_{s}\epsilon_{is} }_{\coloneqq B^{(0)}}+\mathcal{V}_{\mu}^{-\frac{1}{2}}\mathcal{R}\] (C.1) where \(\mathcal{R}\) is a residual term. First, we verify the Lindeberg condition. 
Note that \[A^{(d)}+B^{(d)}=\sum_{j\leq N}\sum_{s\leq T}\underbrace{\left(P1 _{\{j\in\mathcal{I}_{d},s=t_{o}\}}+\sum_{0\leq l\leq L_{d}}Q_{l}1_{\{j\in \mathcal{G}_{(d),l},s\leq T_{0}\}}\right)\epsilon_{js}}_{\coloneqq\mathcal{Y}_ {js}^{(d)}},\] \[\text{where}\quad P=\mathcal{V}_{\mu}^{-\frac{1}{2}}\bar{u}_{ \mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I}_{d}}u_{j}u_{j}^{\top}\right)^{- 1}u_{j},\ \ Q_{l}=\frac{\mathcal{V}_{\mu}^{-\frac{1}{2}}}{|\mathcal{G}|}v_{(d \cdot T_{1}+t_{o})}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top} \right)^{-1}v_{s},\] again with \(P\) depending on \(j\) and \(Q_{l}\) on \(s\). In the same way as in the proof of Theorem 3.2, we have \(\|P\|\leq\mathcal{V}_{\mu}^{-\frac{1}{2}}\frac{\mu r}{N_{d}}\) and \(\|Q_{l}\|\leq\frac{\mathcal{V}_{\mu}^{-\frac{1}{2}}}{|\mathcal{G}|}\frac{\mu r }{T_{0}}\). Then, as in the proof of Theorem 3.2, we have \[\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{(d)4}]\lesssim\mathcal{V}_{\mu}^{-2} \sigma^{4}\frac{\mu^{4}r^{4}}{N_{d}^{3}}+\mathcal{V}_{\mu}^{-2}\sigma^{4}\mu^{ 4}r^{4}T_{0}^{-3}.\] Similarly, we have \[\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{(0)4}]\lesssim\mathcal{V}_{\mu}^{-2} \sigma^{4}\frac{\mu^{4}r^{4}}{N_{0}^{3}}+\mathcal{V}_{\mu}^{-2}\sigma^{4}\mu^{ 4}r^{4}T_{0}^{-3}\] where \(A^{(0)}+B^{(0)}=\sum_{j\leq N}\sum_{s\leq T}\mathcal{Y}_{js}^{(0)}\). Then, for any \(q>0\), we have by the Cauchy-Schwarz and Markov inequalities with Claim C.3, \[\mathrm{Var}(A+B)^{-1}\sum_{j,s}\mathbb{E}[\mathcal{Y}_{js}^{2}1_{ \{|\mathcal{Y}_{js}|>q\mathrm{Var}(A+B)^{1/2}\}}]\] \[\leq 2\mathrm{Var}(A+B)^{-1}\left(\sum_{j,s}\mathbb{E}[\mathcal{Y} _{js}^{(d)2}1_{\{|\mathcal{Y}_{js}|>q\mathrm{Var}(A+B)^{1/2}\}}]+\sum_{j,s} \mathbb{E}[\mathcal{Y}_{js}^{(0)2}1_{\{|\mathcal{Y}_{js}|>q\mathrm{Var}(A+B)^{ 1/2}\}}]\right)\] \[\leq\frac{2}{\mathrm{Var}(A+B)q}\sqrt{\sum_{j,s}\mathbb{E}[ \mathcal{Y}_{js}^{(d)4}]}+\frac{2}{\mathrm{Var}(A+B)q}\sqrt{\sum_{j,s}\mathbb{ E}[\mathcal{Y}_{js}^{(0)4}]}\] \[\lesssim\frac{\mu^{3}r^{3}}{N_{d}^{\frac{1}{2}}}+\frac{\mu^{3}r^ {3}N_{d}}{T_{0}^{\frac{3}{2}}}+\frac{\mu^{3}r^{3}}{N_{0}^{\frac{1}{2}}}+\frac {\mu^{3}r^{3}N_{0}}{T_{0}^{\frac{3}{2}}},\] where \(A=A^{(d)}-A^{(0)}\), \(B=B^{(d)}-B^{(0)}\), and \(\mathcal{Y}_{js}=\mathcal{Y}_{js}^{(d)}-\mathcal{Y}_{js}^{(0)}\). Because the right-hand side is \(o_{p}(1)\), the Lindeberg condition is satisfied. **Claim C.3**.: _(i) \(\mathrm{Var}(A+B)=1\) and (ii) \(\mathcal{V}_{\mu}^{-1}\lesssim\frac{\mu r\min\{N_{0},N_{d}\}}{\sigma^{2}}\)._ Therefore, by the Lindeberg CLT, we have \(A+B\overset{D}{\longrightarrow}\mathcal{N}(0,1)\). In addition, by the same token as in the proof of Theorem 3.1, we can show \(\left\|\mathcal{V}_{\mu}^{-\frac{1}{2}}\mathcal{R}\right\|=o_{p}(1)\). Therefore, \[\mathcal{V}_{\mu}^{-\frac{1}{2}}\left(\frac{1}{|\mathcal{G}|}\sum_{i\in \mathcal{G}}(\widehat{m}_{it_{o}}^{(d)}-\widehat{m}_{it_{o}}^{(0)})-\frac{1}{ |\mathcal{G}|}\sum_{i\in\mathcal{G}}(m_{it_{o}}^{(d)}-m_{it_{o}}^{(0)})\right) \overset{D}{\longrightarrow}\mathcal{N}(0,1).\] (ii) Case 2 (\(\widehat{\theta}_{t_{0}}^{(d)}\)): The proof is the same as that of Case 1 if we change \(A^{(0)},B^{(0)}\) to \(A^{(d-1)},B^{(d-1)}\). Since it is a simple extension of the proof of Case 1, we omit it. \(\square\) 
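For concreteness, the following small numerical sketch (illustrative only; toy factor matrices and our own variable names) evaluates the variance \(\mathcal{V}_{\mathcal{G}}(d,d^{\prime})\) of Theorem C.1 from simulated factors, mirroring its three terms \(\mathrm{Var}(A^{(d)})\), \(\mathrm{Var}(A^{(d^{\prime})})\), and \(\mathrm{Var}(B)\).

```python
# Illustrative evaluation of V_G(d, d') from Theorem C.1 on toy factors.
# All sizes, group splits, and names are our own assumptions.
import numpy as np

rng = np.random.default_rng(4)
N, T0, T1, r, sigma = 300, 80, 20, 3, 0.5
U = rng.normal(size=(N, r)) / np.sqrt(N)                       # rows u_i
V = rng.normal(size=(T0 + 3 * T1, r)) / np.sqrt(T0 + 3 * T1)   # rows v_t

# disjoint treatment groups I_0, ..., I_3 and the averaged group G
I = np.array_split(rng.permutation(N), 4)
d, d_prime, t_o = 2, 0, 10
G = I[d][:15]                                                  # G subset of I_d

def quad(vec, rows):
    # vec' (sum of outer products of the given rows)^{-1} vec
    Gram = rows.T @ rows
    return vec @ np.linalg.solve(Gram, vec)

u_bar = U[G].mean(axis=0)
dv = V[d * T1 + t_o] - V[d_prime * T1 + t_o]
VG = (sigma**2 * quad(u_bar, U[I[d]])          # Var(A^{(d)}) term
      + sigma**2 * quad(u_bar, U[I[d_prime]])  # Var(A^{(d')}) term
      + sigma**2 / len(G) * quad(dv, V[:T0]))  # Var(B) term
print(VG)
```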
Proof of Claim C.3.: (i) Since \(t_{o}>T_{0}\) and \(\mathcal{I}_{d}\) is disjoint with \(\mathcal{I}_{0}\), we have \[\mathrm{Var}(A+B)=\mathrm{Var}(A^{(d)})+\mathrm{Var}(A^{(0)})+\mathrm{Var}(B).\] Simple calculations show that \[\mathrm{Var}(A^{(d)})=\mathcal{V}_{\mu}^{-1}\sigma^{2}\bar{u}_{ \mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I}_{d}}u_{j}u_{j}^{\top}\right)^{-1 }\bar{u}_{\mathcal{G}},\ \ \mathrm{Var}(A^{(0)})=\mathcal{V}_{\mu}^{-1}\sigma^{2}\bar{u}_{ \mathcal{G}}^{\top}\left(\sum_{j\in\mathcal{I}_{0}}u_{j}u_{j}^{\top}\right)^{ -1}\bar{u}_{\mathcal{G}},\] \[\mathrm{Var}(B)=\mathcal{V}_{\mu}^{-1}\frac{\sigma^{2}}{|\mathcal{ G}|}\left(v_{(d\cdot T_{1}+t_{o})}-v_{t_{o}}\right)^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{ \top}\right)^{-1}\left(v_{(d\cdot T_{1}+t_{o})}-v_{t_{o}}\right).\] Hence, we have \(\mathrm{Var}(A+B)=1\). (ii) Note that \[\mathcal{V}_{\mu}=\mathcal{V}_{\mu}\mathrm{Var}(A)+\mathcal{V}_{\mu}\mathrm{ Var}(B)\geq\mathcal{V}_{\mu}\mathrm{Var}(A)\geq\max\{\mathcal{V}_{\mu}\mathrm{ Var}(A^{(d)}),\mathcal{V}_{\mu}\mathrm{Var}(A^{(0)})\},\] since \(\mathrm{Var}(A)=\mathrm{Var}(A^{(d)})+\mathrm{Var}(A^{(0)})\). In addition, we have \[\mathcal{V}_{\mu}\mathrm{Var}(A^{(d)})=\sigma^{2}\bar{u}_{\mathcal{G}}^{\top} \left(\sum_{j\in\mathcal{I}_{d}}u_{j}u_{j}^{\top}\right)^{-1}\bar{u}_{\mathcal{ G}}\geq\sigma^{2}\left\|\bar{u}_{\mathcal{G}}\right\|^{2}\lambda_{\min}\left( \left(\sum_{j\in\mathcal{I}_{d}}u_{j}u_{j}^{\top}\right)^{-1}\right)\geq c\frac {\sigma^{2}}{\mu rN_{d}}\] for some constant \(c>0\). Similarly, we have \(\mathcal{V}_{\mu}\mathrm{Var}(A^{(0)})\geq c\frac{\sigma^{2}}{\mu rN_{0}}\) for some constant \(c>0\). Therefore, we reach \(\mathcal{V}_{\mu}^{-1}\leq C\frac{\mu r\min\{N_{0},N_{d}\}}{\sigma^{2}}\). **Proof of Corollary C.2.** (i) Case 1 (\(\widehat{\mu}_{t_{0}}^{(d)}\)): From the proof of Claim C.3, we know \[\mathcal{V}_{\mu}=\mathrm{Var}(\tilde{A}^{(d)})+\mathrm{Var}(\tilde{A}^{(0)}) +\mathrm{Var}(\tilde{B}^{(d)})+\mathrm{Var}(\tilde{B}^{(0)})-2\mathrm{Cov}( \tilde{B}^{(d)},\tilde{B}^{(0)})\] where \(\tilde{A}^{(\delta)}=\mathcal{V}_{\mu}^{\frac{1}{2}}A^{(\delta)}\) and \(\tilde{B}^{(\delta)}=\mathcal{V}_{\mu}^{\frac{1}{2}}B^{(\delta)}\). Following a similar argument to the proof of Corollary 3.3 with the definitions in (C.1), we have \[\mathrm{Var}(\tilde{A}^{(d)})=\sigma^{2}\sum_{i\in\mathcal{I}_{d}}\left(\sum_ {0\leq l\leq L_{d}}\alpha_{l}^{(d)}\bar{X}_{\mathcal{G}_{(d),l}}^{\top}\left( \sum_{j\in\mathcal{I}_{d}}X_{l,j}^{(d)}X_{l,j}^{(d)\top}\right)^{-1}X_{l,i}^{( d)}\right)^{2},\] where \(\alpha_{l}^{(d)}=\frac{|\mathcal{G}_{(d),l}|}{|\mathcal{G}|}\), \(\bar{X}_{\mathcal{G}_{(d),l}}=\frac{1}{|\mathcal{G}_{(d),l}|}\sum_{i\in \mathcal{G}_{(d),l}}X_{l,i}^{(d)}\), and \(X_{l}^{(d)}=U_{l}^{(d)}D_{l}^{(d)\frac{1}{2}}\). Here, \(U_{l}^{(d)}D_{l}^{(d)}V_{l}^{(d)\top}\) is the SVD of \(\tilde{M}_{l}^{(d)}\), which is the submatrix of \(\tilde{M}^{(d)}\) constructed for the estimation of \(\{m_{it_{0}}^{(d)}\}_{i\in\mathcal{G}_{(d),l}}\). In addition, we have \[\mathrm{Var}(\tilde{B}^{(d)})=\frac{\sigma^{2}}{|\mathcal{G}|}Z_{0,(d\cdot T_{1}+ t_{o})}^{(d)\top}\left(\sum_{s\leq T_{0}}Z_{0,s}^{(d)}Z_{0,s}^{(d)\top} \right)^{-1}Z_{0,(d\cdot T_{1}+t_{o})}^{(d)},\] where \(Z_{l}^{(d)}=V_{l}^{(d)}D_{l}^{(d)\frac{1}{2}}\). Note that for all \(\delta\in\{0,d\}\), \(\mathcal{V}_{\mu}^{-1}\lesssim\frac{\mu rN_{\delta}}{\sigma^{2}}\) by Claim C.3. 
Then, in the same way as the proof of Corollary 3.3, we have \[\mathcal{V}_{\mu}^{-1}\left\|\widehat{\mathrm{Var}}(\tilde{A}^{(\delta)})- \mathrm{Var}(\tilde{A}^{(\delta)})\right\|=o_{p}(1),\ \ \mathcal{V}_{\mu}^{-1}\left\|\widehat{\mathrm{Var}}(\tilde{B}^{(\delta)})- \mathrm{Var}(\tilde{B}^{(\delta)})\right\|=o_{p}(1).\] Similarly, we can show that \[\mathcal{V}_{\mu}^{-1}\left\|\widehat{\mathrm{Cov}}(\tilde{B}^{(d)},\tilde{B} ^{(0)})-\mathrm{Cov}(\tilde{B}^{(d)},\tilde{B}^{(0)})\right\|=o_{p}(1)\] where \[\mathrm{Cov}(\tilde{B}^{(d)},\tilde{B}^{(0)})=\frac{\sigma^{2}}{|\mathcal{G}|}v _{(d\cdot T_{1}+t_{o})}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^ {-1}v_{t_{o}}\] \[=\frac{\sigma^{2}}{|\mathcal{G}|}\sum_{s\leq T_{0}}v_{(d\cdot T_{1}+t_{ o})}^{\top}\left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}v_{s}v_{s}^{\top} \left(\sum_{s\leq T_{0}}v_{s}v_{s}^{\top}\right)^{-1}v_{t_{o}}\] \[=\frac{\sigma^{2}}{|\mathcal{G}|}\sum_{s\leq T_{0}}Z_{0,(d\cdot T _{1}+t_{o})}^{(d)\top}\left(\sum_{s\leq T_{0}}Z_{0,s}^{(d)}Z_{0,s}^{(d)\top} \right)^{-1}Z_{0,s}^{(d)}Z_{0,s}^{(0)\top}\left(\sum_{s\leq T_{0}}Z_{0,s}^{(0) }Z_{0,s}^{(0)\top}\right)^{-1}Z_{0,t_{o}}^{(0)}.\] Hence, we have \(\frac{\widehat{\mathcal{V}}_{\mu}-\mathcal{V}_{\mu}}{\mathcal{V}_{\mu}}=o_{p}(1)\), which implies that \(\frac{\mathcal{V}_{\mu}}{\widehat{\mathcal{V}}_{\mu}}\xrightarrow{\mathbb{P}}1\). Then, by Slutsky's theorem together with Theorem C.1, we have the desired result. (ii) Case 2 (\(\widehat{\theta}_{t_{0}}^{(d)}\)): The proof is the same as that of Case 1 if we change \(A^{(0)},B^{(0)}\) to \(A^{(d-1)},B^{(d-1)}\). Since it is a simple extension of the proof of Case 1, we omit it. \(\square\) ## Appendix D Modification of results from Chen et al. (2020b) Finally, we present technical tools used for proving Lemmas A.8 and A.9 in Section A.5. These results modify similar results from Chen et al. (2020b), which were developed under random missingness, to our observation pattern. Indeed, the overall architecture of our proofs is the same as in Chen et al. (2020b), and for brevity we shall omit proofs of lemmas that are straightforward adaptations of theirs. ### Proximity between the nonconvex estimator and the nuclear norm penalized estimator We begin by introducing further notation. For any matrix \(G\), we denote by \(G_{l,\cdot}\) (resp. \(G_{\cdot,l}\)) the \(l\)-th row (resp. column) of \(G\). Let \(G\) be a \(N_{o}\times T_{o}\) matrix with rank \(r\) and \(L\Sigma R^{\top}\) be the SVD of \(G\). Then the tangent space of \(G\), denoted by \(T(G)\), is defined as \[T(G)=\{D\in\mathbb{R}^{N_{o}\times T_{o}}\,|\,D=AR^{\top}+LB^{\top}\text{ for some }A\in\mathbb{R}^{N_{o}\times r}\text{ and }B\in\mathbb{R}^{T_{o}\times r}\}.\] Let \(\mathcal{P}_{T(G)}\) be the orthogonal projection onto \(T(G)\), that is, \[\mathcal{P}_{T(G)}(E)=LL^{\top}E+ERR^{\top}-LL^{\top}ERR^{\top}\] for any \(E\in\mathbb{R}^{N_{o}\times T_{o}}\). When there is no risk of confusion, we simply write \(T\) instead of \(T(G)\). Let \(T^{\perp}\) be the orthogonal complement of \(T\) and \(\mathcal{P}_{T^{\perp}}\) be the projection onto \(T^{\perp}\). Note that \(\mathcal{P}_{T^{\perp}}(E)=(I-LL^{\top})E(I-RR^{\top})\) and \(\mathcal{P}_{T}(E)+\mathcal{P}_{T^{\perp}}(E)=E\). Lastly, we define \(\mathcal{P}_{\Omega_{o}}^{\text{diff}}(G)=\mathcal{P}_{\Omega_{o}}(G)-G\) for all \(G\in\mathbb{R}^{N_{o}\times T_{o}}\). 
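The following minimal sketch (our own toy construction; sizes and names are illustrative) implements the tangent-space projections just defined and checks numerically that \(\mathcal{P}_{T}+\mathcal{P}_{T^{\perp}}\) recovers the identity, that the two components are orthogonal, and that \(\mathcal{P}_{T}\) is idempotent.

```python
# Sketch of P_T(E) = L L'E + E R R' - L L'E R R' and its complement.
# Toy sizes; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
No, To, r = 40, 30, 4
G = rng.normal(size=(No, r)) @ rng.normal(size=(r, To))  # rank-r matrix
L, _, Rt = np.linalg.svd(G, full_matrices=False)
L, R = L[:, :r], Rt[:r].T                                # top-r singular spaces

def P_T(E):
    return L @ L.T @ E + E @ R @ R.T - L @ L.T @ E @ R @ R.T

def P_Tperp(E):
    return (np.eye(No) - L @ L.T) @ E @ (np.eye(To) - R @ R.T)

E = rng.normal(size=(No, To))
assert np.allclose(P_T(E) + P_Tperp(E), E)            # decomposition
assert np.allclose(np.sum(P_T(E) * P_Tperp(E)), 0.0)  # orthogonality
assert np.allclose(P_T(P_T(E)), P_T(E))               # idempotence
```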
The following lemma plays a key role in showing the proximity between the nonconvex estimator \((\breve{X}_{o},\breve{Z}_{o})\) and the nuclear norm penalized estimator \(\widetilde{M}_{o}\). We will eventually set \((\breve{X}_{o},\breve{Z}_{o})=(X_{o}^{\tau_{o}^{*}}H_{o}^{\tau_{o}^{*}},Z_{o}^{\tau_{o}^{*}}H_{o}^{\tau_{o}^{*}})\). **Condition D.1** (Regularization parameter).: _The regularization parameter \(\lambda_{o}\) satisfies (i) \(\|\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\|<\frac{7}{8}\lambda_{o}\) and (ii) \(\left\|\mathcal{P}_{\Omega_{o}}(\breve{X}_{o}\breve{Z}_{o}^{\top}-M_{o})- \left(\breve{X}_{o}\breve{Z}_{o}^{\top}-M_{o}\right)\right\|<\frac{1}{80}\lambda_{o}\)._ **Condition D.2** (Injectivity).: _Let \(T\) be the tangent space of \(\breve{X}_{o}\breve{Z}_{o}^{\top}\). There is a quantity \(c_{\text{inj},o}>0\) such that \(\left\|\mathcal{P}_{\Omega_{o}}(H)\right\|_{F}^{2}\geq c_{\text{inj},o}\left\| H\right\|_{F}^{2}\) for all \(H\in T\)._ **Lemma D.3**.: _Suppose that \((\breve{X}_{o},\breve{Z}_{o})\) satisfies_ \[\left\|\nabla f(\breve{X}_{o},\breve{Z}_{o})\right\|_{F}\leq c\frac{\sqrt{c_{ \text{inj},o}}}{\kappa_{o}}\lambda_{o}\sqrt{\psi_{\min,o}}\] (D.1) _for some sufficiently small constant \(c>0\). Additionally, assume that every nonzero singular value of \(\breve{X}_{o}\) and \(\breve{Z}_{o}\) lies in the interval \([\sqrt{\frac{\psi_{\min,o}}{2}},\sqrt{2\psi_{\max,o}}]\). Then, under Conditions D.1 and D.2, \(\widetilde{M}_{o}\) satisfies_ \[\left\|\breve{X}_{o}\breve{Z}_{o}^{\top}-\widetilde{M}_{o}\right\|_{F}\leq C_ {cvx}\frac{\kappa_{o}}{c_{\text{inj},o}}\frac{1}{\sqrt{\psi_{\min,o}}}\left\| \nabla f(\breve{X}_{o},\breve{Z}_{o})\right\|_{F}\] _where \(C_{cvx}>0\) is an absolute constant._ Proof.: This lemma is a simple modification of Lemma 2 of Chen et al. (2020b). Following their proof with \(p=1\) while carefully accounting for our observation pattern yields the result. To save space, we omit the details. The following lemmas are used to show that our nonconvex estimator satisfies Conditions D.1 and D.2. Lemma D.4 shows that Condition D.1 (i) is satisfied when \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for a sufficiently large constant \(C_{\lambda}>0\). In addition, Lemma D.5 is used to show that Condition D.1 (ii) and Condition D.2 are satisfied in the case \((\breve{X}_{o},\breve{Z}_{o})=(X_{o}^{\tau_{o}^{*}}H_{o}^{\tau_{o}^{*}},Z_{o}^{\tau_{o}^{*}}H_{o}^{\tau_{o}^{*}})\). **Lemma D.4**.: _With probability at least \(1-O(\min\{N_{o}^{-101},T_{o}^{-101}\})\), we have (i) \(\left\|\mathcal{P}_{\Omega_{o}}(\mathbf{1}\mathbf{1}^{\top})-\mathbf{1} \mathbf{1}^{\top}\right\|\lesssim\sqrt{\max\{N_{o},T_{o}\}}\), (ii) \(\left\|\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\right\|\lesssim\sigma\sqrt{ \max\{N_{o},T_{o}\}}\)._ Proof.: (i) All elements of \(\mathcal{P}_{\Omega_{o}}(\mathbf{1}\mathbf{1}^{\top})-\mathbf{1}\mathbf{1}^{\top}\) other than those indexed by \(\{(i,t_{o})\}_{i\in\mathcal{Q}_{o}}\) are \(0\). Because the elements indexed by \(\{(i,t_{o})\}_{i\in\mathcal{Q}_{o}}\) are \(-1\) and \(|\mathcal{Q}_{o}|\leq N_{o}\), the claim is immediate. (ii) Denote by \(\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})^{(-t_{o})}\) the \(N_{o}\times(T_{o}-1)\) matrix obtained by excluding the \(t_{o}\)-th column of \(\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\). 
By Theorem 5.39 of Vershynin (2010), we have \[||\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})^{(-t_{o})}||=||\mathcal{E}_{o}^{(- t_{o})}||\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}}\] where \(\mathcal{E}_{o}^{(-t_{o})}\) is the \(N_{o}\times(T_{o}-1)\) matrix excluding the \(t_{o}\)-th column of \(\mathcal{E}_{o}\). In addition, it is trivial that \[||\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})_{\cdot,t_{o}}||\leq||(\mathcal{E}_{ o})_{\cdot,t_{o}}||\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}}\] where \(\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})_{\cdot,t_{o}}\) and \((\mathcal{E}_{o})_{\cdot,t_{o}}\) are the \(t_{o}\)-th columns of \(\mathcal{P}_{\Omega_{o}}(\mathcal{E}_{o})\) and \(\mathcal{E}_{o}\), respectively. **Lemma D.5**.: _Suppose that_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2} \}}{\min\{N_{o},T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_ {o},\log T_{o}\}}},\ \ \vartheta_{o}\mu_{o}r\ll\min\{N_{o},T_{o}\},\] \[\min\{N_{o}^{2},T_{o}^{2}\}\gg\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max \{N_{o}\log N_{o},T_{o}\log T_{o}\}.\] _Assume that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\). Further, let \(T\) denote the tangent space of \(\ddot{X}_{o}\ddot{Z}_{o}^{\top}\). Then, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\),_ \[\left\|\mathcal{P}_{\Omega_{o}}(\ddot{X}_{o}\ddot{Z}_{o}^{\top}- M_{o})-\left(\ddot{X}_{o}\ddot{Z}_{o}^{\top}-M_{o}\right)\right\|<\frac{1}{80}\lambda_{o}\] _(Condition D.1 (ii))_ \[\left\|\mathcal{P}_{\Omega_{o}}(H)\right\|_{F}^{2}\geq\frac{1}{32 \kappa_{o}}\left\|H\right\|_{F}^{2}\quad\text{for all $H\in T$ \quad(Condition D.2 with $c_{\text{inj},o}=1/(32\kappa_{o})$)}\] _hold uniformly for all \((\ddot{X}_{o},\ddot{Z}_{o})\) satisfying_ \[\max\Big{\{}\left\|\ddot{X}_{o}-X_{o}\right\|_{2,\infty},\left\| \ddot{Z}_{o}-Z_{o}\right\|_{2,\infty}\Big{\}}\] \[\quad\leq C\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right) \max\Big{\{}\left\|X_{o}\right\|_{2,\infty},\left\|Z_{o}\right\|_{2,\infty} \Big{\}}\] (D.2) _for some constant \(C>0\)._ Proof.: It follows immediately from Lemmas D.6 and D.7. **Lemma D.6**.: _Assume that \(\min\{N_{o},T_{o}\}\gg\mu_{o}r\max\{\log N_{o},\log T_{o}\}\) and \(\vartheta_{o}\mu_{o}r\ll\min\{N_{o},T_{o}\}\). Let \(T\) denote the tangent space of \(\ddot{X}_{o}\ddot{Z}_{o}^{\top}\). Then, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\),_ \[\left\|\mathcal{P}_{\Omega_{o}}(H)\right\|_{F}^{2}\geq\frac{1}{32\kappa_{o}} \left\|H\right\|_{F}^{2}\quad\text{for all $H\in T$ \quad(Condition D.2 with $c_{\text{inj}}=1/(32\kappa_{o})$)}\] _holds uniformly for all \((\ddot{X}_{o},\ddot{Z}_{o})\) such that_ \[\max\Big{\{}\left\|\ddot{X}_{o}-X_{o}\right\|_{2,\infty},\left\|\ddot{Z}_{o}-Z _{o}\right\|_{2,\infty}\Big{\}}\leq\frac{c}{\kappa_{o}\sqrt{\max\{N_{o},T_{o} \}}}\left\|X_{o}\right\|\] _where \(c>0\) is some sufficiently small constant._ Proof.: This lemma is a simple modification of Lemma 7 of Chen et al. (2020b). Following their proof while carefully accounting for our observation pattern yields the result. Importantly, we use Lemma D.12, which is the modified version of Corollary 4.3 of Candes and Recht (2009), in the places where Chen et al. (2020b) use Corollary 4.3 of Candes and Recht (2009). To save space, we omit the proof. 
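To make the injectivity condition of Lemma D.6 concrete, the following toy sketch (entirely our own construction, with illustrative sizes) uses the observation pattern considered here, in which everything is observed except the entries \(\{(i,t_{o}):i\in\mathcal{Q}_{o}\}\) of one column, samples random elements \(H\) of the tangent space \(T\), and reports how much Frobenius mass \(\mathcal{P}_{\Omega_{o}}\) retains.

```python
# Empirical check that ||P_Omega(H)||_F^2 / ||H||_F^2 stays bounded away
# from 0 over the tangent space T, for the one-column missing pattern.
# Toy sizes; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
No, To, r = 60, 50, 3
M = rng.normal(size=(No, r)) @ rng.normal(size=(r, To))
L, _, Rt = np.linalg.svd(M, full_matrices=False)
L, R = L[:, :r], Rt[:r].T

mask = np.ones((No, To))
t_o, Q = To - 1, rng.choice(No, size=10, replace=False)
mask[Q, t_o] = 0.0                       # the missing entries {(i, t_o)}

ratios = []
for _ in range(200):
    A = rng.normal(size=(No, r))
    B = rng.normal(size=(To, r))
    H = A @ R.T + L @ B.T                # a random element of T
    ratios.append(np.sum((mask * H) ** 2) / np.sum(H ** 2))
print(min(ratios))                       # bounded away from 0
```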
**Lemma D.7**.: _Assume that_ \[\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}\ll \frac{1}{\kappa_{o}},\ \ \min\{N_{o}^{2},T_{o}^{2}\}\gg\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_ {o},T_{o}\log T_{o}\}.\] _Let \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\). Then, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\),_ \[\left\|\mathcal{P}_{\Omega_{o}}(\ddot{X}_{o}\ddot{Z}_{o}^{\top}-M_{o})-\left(\ddot{ X}_{o}\ddot{Z}_{o}^{\top}-M_{o}\right)\right\|\lesssim\sigma\sqrt{\max\{N_{o},T_{o}\}} \sqrt{\frac{\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o }\}}{\min\{N_{o}^{2},T_{o}^{2}\}}}<\frac{1}{80}\lambda_{o}\] _holds uniformly for all \((\ddot{X}_{o},\ddot{Z}_{o})\) satisfying (D.2)._ Proof.: This lemma is a simple modification of Lemma 8 of Chen et al. (2020b). To save space, we omit the proof. ### Quality of non-convex estimates Before we proceed, we introduce some notation. Define an augmented loss function \(f_{\text{aug}}(A,B)\) to be \[f_{\text{aug}}(A,B)\coloneqq\frac{1}{2}\left\|\mathcal{P}_{\Omega_{o}}(AB^{\top}-Y _{o})\right\|_{F}^{2}+\frac{\lambda_{o}}{2}\left\|A\right\|_{F}^{2}+\frac{\lambda_{o}}{2}\left\|B \right\|_{F}^{2}+\frac{1}{8}\left\|A^{\top}A-B^{\top}B\right\|_{F}^{2}.\] Then, the gradient of \(f_{\text{aug}}(\cdot,\cdot)\) is given by \[\nabla_{X}f_{\text{aug}}(A,B)=\mathcal{P}_{\Omega_{o}}(AB^{\top}- Y_{o})B+\lambda_{o}A+\frac{1}{2}A(A^{\top}A-B^{\top}B),\] \[\nabla_{Z}f_{\text{aug}}(A,B)=\left(\mathcal{P}_{\Omega_{o}}(AB^{\top} -Y_{o})\right)^{\top}A+\lambda_{o}B+\frac{1}{2}B(B^{\top}B-A^{\top}A).\] The differences between the gradients \(\nabla f(A,B)\) and \(\nabla f_{\text{aug}}(A,B)\) are \[\nabla_{X}f_{\text{diff}}(A,B)=-\frac{1}{2}A(A^{\top}A-B^{\top}B),\ \ \nabla_{Z}f_{\text{diff}}(A,B)=-\frac{1}{2}B(B^{\top}B-A^{\top}A).\] In addition, note that we have the following properties of \(\mathcal{F}_{o}\): \[\psi_{1}(\mathcal{F}_{o})=\left\|\mathcal{F}_{o}\right\|=\sqrt{2\psi_{\max,o}},\ \ \psi_{r}(\mathcal{F}_{o})=\sqrt{2\psi_{\min,o}},\ \ \left\|\mathcal{F}_{o}\right\|_{2,\infty}\leq\sqrt{\frac{\mu_{o}r\psi_{\max,o}}{ \min\{N_{o},T_{o}\}}}.\] The following lemma is one of the main places where the condition \(\min\{N_{o},T_{o}\}\gg\vartheta_{o}\kappa_{o}^{2}\mu_{o}r\) is required. While it is a modified version of Lemma 12 in Chen et al. (2020b), its proof is quite different from theirs; hence, we provide the full proof. 
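As a concrete cross-check of the displayed gradient formulas, here is a minimal sketch (toy sizes and our own names; the \(\lambda_{o}/2\) normalization matches the display above) that implements \(f_{\text{aug}}\) and \(\nabla_{X}f_{\text{aug}}\) and compares one coordinate against a finite difference. \(\mathcal{P}_{\Omega_{o}}\) is the same entrywise mask as before.

```python
# Finite-difference sanity check of grad_X f_aug for the augmented loss.
# Toy sizes; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
No, To, r, lam = 20, 15, 2, 0.1
Y = rng.normal(size=(No, To))
mask = np.ones((No, To))
mask[rng.choice(No, 5, replace=False), To - 1] = 0.0   # missing entries

def f_aug(A, B):
    res = mask * (A @ B.T - Y)
    return (0.5 * np.sum(res**2)
            + 0.5 * lam * (np.sum(A**2) + np.sum(B**2))
            + 0.125 * np.sum((A.T @ A - B.T @ B) ** 2))

def grad_X(A, B):
    return (mask * (A @ B.T - Y)) @ B + lam * A + 0.5 * A @ (A.T @ A - B.T @ B)

A = rng.normal(size=(No, r))
B = rng.normal(size=(To, r))
G, eps = grad_X(A, B), 1e-6
i, j = 3, 1                                            # one coordinate
Ap = A.copy(); Ap[i, j] += eps
num = (f_aug(Ap, B) - f_aug(A, B)) / eps
print(G[i, j], num)                                    # agree to ~1e-4
```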
**Lemma D.8**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\), \(0<\eta_{o}\ll 1/(\kappa_{o}^{2}\psi_{\max,o}\min\{N_{o},T_{o}\})\), \(\min\{N_{o},T_{o}\}\gg\vartheta_{o}\kappa_{o}^{2}\mu_{o}r\), and_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2}\}}{\min\{N_ {o},T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{ o}\}}},\] \[\min\{N_{o},T_{o}\}\gg\kappa_{o}\mu_{o}r\max\{\log^{3}N_{o},\log^{3}T_{o}\}.\] _Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration. Then, with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\})\), we have_ \[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}- \mathcal{F}_{o}^{\tau+1,(m)}Q_{o}^{\tau+1,(m)}\right\|_{F}\leq C_{3}\left( \frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+ \frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty}\] _where \(C_{3}\) is some sufficiently large constant._ Proof.: Fix \(1\leq m\leq N_{o}+T_{o}\). The definition of \(Q_{o}^{\tau+1,(m)}\) and the unitary invariance of the Frobenius norm yield \[\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o}^{\tau+1,(m)}Q_{ o}^{\tau+1,(m)}\right\|_{F}\leq\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau}- \mathcal{F}_{o}^{\tau+1,(m)}Q_{o}^{\tau,(m)}\right\|_{F}.\] By the gradient update rules (A.3) and (A.4), we obtain \[\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau+1,(m) }Q_{o}^{\tau,(m)}\] \[=(\mathcal{F}_{o}^{\tau}-\eta_{o}\nabla f(\mathcal{F}_{o}^{\tau} ))\,H_{o}^{\tau}-\left(\mathcal{F}_{o}^{\tau,(m)}-\eta_{o}\nabla f^{(m)}( \mathcal{F}_{o}^{\tau,(m)})\right)Q_{o}^{\tau,(m)}\] \[=\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\eta_{o}\nabla f(\mathcal{F}_ {o}^{\tau}H_{o}^{\tau})-\left(\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}- \eta_{o}\nabla f^{(m)}(\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)})\right)\] \[=\underbrace{\left(\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F }_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}\right)-\eta_{o}\left(\nabla f_{\text{aug}}( \mathcal{F}_{o}^{\tau}H_{o}^{\tau})-\nabla f_{\text{aug}}(\mathcal{F}_{o}^{ \tau,(m)}Q_{o}^{\tau,(m)})\right)}_{\coloneqq A_{1}}\] \[\quad-\underbrace{\eta_{o}\left(\nabla f_{\text{diff}}(\mathcal{F}_{ o}^{\tau}H_{o}^{\tau})-\nabla f_{\text{diff}}(\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{ \tau,(m)})\right)}_{\coloneqq A_{2}}+\underbrace{\eta_{o}\left(\nabla f^{(m)}( \mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)})-\nabla f(\mathcal{F}_{o}^{\tau,(m )}Q_{o}^{\tau,(m)})\right)}_{\coloneqq A_{3}}.\] Here, we use the facts that \(\nabla f(\mathcal{A})O=\nabla f(\mathcal{A}O)\) and \(\nabla f^{(m)}(\mathcal{A})O=\nabla f^{(m)}(\mathcal{A}O)\) for any \((N_{o}+T_{o})\times r\) matrix \(\mathcal{A}\) and any orthonormal matrix \(O\in\mathcal{O}^{r\times r}\). In what follows, we control \(A_{1},A_{2}\) and \(A_{3}\) separately. The ways of bounding \(A_{1}\) and \(A_{2}\) are the same as in the proof of Lemma 12 in Chen et al. (2020b), while the way of bounding \(A_{3}\) is quite different. 1. The first term \(A_{1}\) can be bounded using the same derivation as \(\alpha_{1}\) in the proof of Lemma 10 of Chen et al. (2020b): \[\left\|A_{1}\right\|_{F}\leq\left(1-\frac{\psi_{\min,o}}{20}\eta_{o}\right) \left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau, (m)}\right\|_{\mathrm{F}}\] with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\). 
Here, we use the assumptions \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2}\}}{\min\{N_{o },T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{o} \}}},\] \[\min\{N_{o},T_{o}\}\gg\kappa_{o}\mu_{o}r\max\{\log^{2}N_{o},\log^{2}T_{o}\},\] and \(0\leq\eta_{o}\ll 1/(\kappa_{o}^{2}\psi_{\max,o}\min\{N_{o},T_{o}\})\). 2. Regarding \(A_{2}\), the triangle inequality gives us, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), \[\left\|A_{2}\right\|_{F}\leq\eta_{o}\left\|\nabla f_{\mathrm{diff}}(\mathcal{ F}_{o}^{\tau}H_{o}^{\tau})\right\|_{F}+\eta_{o}\left\|\nabla f_{\mathrm{diff}}( \mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)})\right\|_{F}.\] Following the bound of \(\alpha_{2}\) in the proof of Lemma 10 of Chen et al. (2020b), we obtain \[\eta_{o}\left\|\nabla f_{\mathrm{diff}}(\mathcal{F}_{o}^{\tau}H_{o}^{\tau}) \right\|_{F}\leq 2C_{B}\kappa_{o}\eta_{o}^{2}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\sqrt{r} \psi_{\max,o}^{2}\left\|X_{o}\right\|.\] Additionally, Lemma D.20 and the argument for bounding \(\alpha_{2}\) in the proof of Lemma 10 of Chen et al. (2020b) together give us \[\eta_{o}\left\|\nabla f_{\mathrm{diff}}(\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau, (m)})\right\|_{F}\leq 2C_{B}\kappa_{o}\eta_{o}^{2}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\sqrt{r} \psi_{\max,o}^{2}\left\|X_{o}\right\|.\] The three inequalities together allow us to have \[\left\|A_{2}\right\|_{F} \leq 4C_{B}\kappa_{o}\eta_{o}^{2}\left(\frac{\sigma\sqrt{\max\{N_{ o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\sqrt{r} \psi_{\max,o}^{2}\left\|X_{o}\right\|\] \[\leq\eta_{o}\left(\sigma\sqrt{\max\{N_{o},T_{o}\}}+\lambda_{o} \right)\left\|\mathcal{F}_{o}\right\|_{2,\infty},\] with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), where the last inequality follows from the assumption \(\eta_{o}\ll\frac{1}{\min\{N_{o},T_{o}\}\kappa_{o}^{2}\psi_{\max,o}}\). 3. For bounding \(A_{3}\), observe that for \(m\leq N_{o}\), \[A_{3}=\eta_{o}\left[\begin{array}{l}\underbrace{\left(\mathcal{P}_{m,\cdot}(X _{o}^{\tau,(m)}Z_{o}^{\tau,(m)\top}-M_{o})-\mathcal{P}_{\Omega_{m,\cdot}}(X_{ o}^{\tau,(m)}Z_{o}^{\tau,(m)\top}-M_{o})\right)Z_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}}_{\coloneqq B_{1}}+\underbrace{\mathcal{P}_{\Omega_{m,\cdot}}(\mathcal{E}_{o})Z_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}}_{\coloneqq C_{1}}\\ \underbrace{\left(\mathcal{P}_{m,\cdot}(X_{o}^{\tau,(m)}Z_{o}^{\tau,(m)\top}-M_{o})- \mathcal{P}_{\Omega_{m,\cdot}}(X_{o}^{\tau,(m)}Z_{o}^{\tau,(m)\top}-M_{o}) \right)^{\top}X_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}}_{\coloneqq B_{2}}+\underbrace{\left(\mathcal{P}_{\Omega_{m,\cdot}}(\mathcal{E}_{o})\right)^{\top}X_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}}_{\coloneqq C_{2}}\end{array}\right],\] and analogously with \(\mathcal{P}_{\Omega_{\cdot,(m-N_{o})}}\) and \(\mathcal{P}_{\cdot,(m-N_{o})}\) in place of \(\mathcal{P}_{\Omega_{m,\cdot}}\) and \(\mathcal{P}_{m,\cdot}\) for \(m>N_{o}\). The four terms are controlled by the following claims, whose proofs are deferred. **Claim D.9**.: \(\left\|B_{1}\right\|_{F}\lesssim\frac{\sqrt{\vartheta_{o}}\mu_{o}r}{\min\{N_{o},T_{o}\}}\psi_{\max,o}\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}\)_._ **Claim D.10**.: \(\left\|B_{2}\right\|_{F}\lesssim\frac{\vartheta_{o}\mu_{o}r}{\min\{N_{o},T_{o}\}}\psi_{\max,o}\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}\)_._ **Claim D.11**.: _With probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), \(\max\{\left\|C_{1}\right\|_{F},\left\|C_{2}\right\|_{F}\}\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}\)._ Combining Claims D.9-D.11 with the induction hypotheses on \(\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}\), we obtain \[\left\|A_{3}\right\|_{F}\leq\eta_{o}\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}} \left\|\mathcal{F}_{o}\right\|_{2,\infty}+\eta_{o}\frac{\vartheta_{o}\mu_{o}r}{\min\{N_{o},T_{o}\}}\psi_{\max,o} (C_{\infty}\kappa_{o}+C_{3})\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o} \log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\| \mathcal{F}_{o}\right\|_{2,\infty}.\] Here, the last inequality follows from Lemma D.22 (i). 
Combining the bounds on \(A_{i}\), \(i=1,2,3\), we reach, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), \[\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o}^{ \tau+1,(m)}Q_{o}^{\tau+1,(m)}\right\|_{F}\leq\left\|A_{1}\right\|_{F}+\left\|A_{2}\right\|_{F}+\left\|A_{3 }\right\|_{F}\] \[\leq\left(1-\frac{\psi_{\min,o}}{20}\eta_{o}\right)\left\| \mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m) }\right\|_{F}+\widetilde{C}\eta_{o}\left(\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}+\lambda_{o}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty}\] \[\leq\left(1-\frac{\psi_{\min,o}}{20}\eta_{o}\right)C_{3}\left( \frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+ \frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty}+\widetilde{C}\eta_{o}\left(\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}+\lambda_{o}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty}\] \[\leq C_{3}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T _{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\| \mathcal{F}_{o}\right\|_{2,\infty}\] for some constant \(\widetilde{C}>0\), where the second inequality collects the contributions of \(A_{2}\) and \(A_{3}\). The penultimate inequality uses the induction hypothesis (A.20), and the last inequality holds provided that \(C_{3}\) is sufficiently large and \(\min\{N_{o},T_{o}\}\gg\vartheta_{o}\kappa_{o}^{2}\mu_{o}r\). Therefore, with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\})\), we have \[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}- \mathcal{F}_{o}^{\tau+1,(m)}Q_{o}^{\tau+1,(m)}\right\|_{F}\leq C_{3}\left( \frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+ \frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2, \infty}.\] Proof of Claim D.9.: Assume that \(m\leq N_{o}\) and define \(C\coloneqq X_{o}^{\tau,(m)}Z_{o}^{\tau,(m)\top}-X_{o}Z_{o}^{\top}\) and \(\mathcal{X}\coloneqq\mathcal{P}_{\Omega_{m,\cdot}}(C)-\mathcal{P}_{m,\cdot}(C)\). Using the unitary invariance of the Frobenius norm, we have \(\left\|B_{1}\right\|_{F}=\left\|\mathcal{X}Z_{o}^{\tau,(m)}\right\|_{F}\). First of all, if \(m\notin\mathcal{Q}_{o}\), then \(\mathcal{X}=0\), and hence \(\left\|B_{1}\right\|_{F}=0\). If \(m\in\mathcal{Q}_{o}\), \(\mathcal{X}\) has only one nonzero element, \(-C_{m,t_{o}}\). 
So, we have \[\left\|B_{1}\right\|_{F}=\left\|\mathcal{X}Z_{o}^{\tau,(m)}\right\|_{F}\leq \left\|C_{m,t_{o}}Z_{o,t_{o},\cdot}^{\tau,(m)}\right\|_{2}\leq\left\|C\right\|_{\infty} \left\|Z_{o}^{\tau,(m)}\right\|_{2,\infty}\leq 2\left\|C\right\|_{\infty}\left\|Z_{o} \right\|_{2,\infty}\] where \(\left\|\cdot\right\|_{\infty}\) is the entrywise max norm, and the last inequality follows from Lemma D.22 (iv) provided that \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\sqrt{\kappa_{o}^ {2}\max\{\log N_{o},\log T_{o}\}}}.\] Additionally, observe that Lemma D.22 (iv) gives \[\left\|C\right\|_{\infty} =\left\|X_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}\left(Z_{o}^{\tau,(m)}Q_{o }^{\tau,(m)}\right)^{\top}-X_{o}Z_{o}^{\top}\right\|_{\infty}\] \[\leq\left\|X_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-X_{o}\right\|_{2, \infty}\left\|Z_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}\right\|_{2,\infty}+\left\|X_{o} \right\|_{2,\infty}\left\|Z_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-Z_{o}\right\|_{2,\infty}\] \[\leq 3\left\|\mathcal{F}_{o}\right\|_{2,\infty}\left\|\mathcal{F} _{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}.\] Finally, we have \[\left\|B_{1}\right\|_{F}\lesssim\left\|C\right\|_{\infty}\left\|Z_{o}\right\| _{2,\infty}\lesssim\left\|\mathcal{F}_{o}\right\|_{2,\infty}^{2}\left\| \mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty} \lesssim\frac{\mu_{o}r}{\min\{N_{o},T_{o}\}}\psi_{\max,o}\left\|\mathcal{F}_ {o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2,\infty}.\] Now, assume that \(N_{o}+1\leq m\leq N_{o}+T_{o}\) and define \(\breve{\mathcal{X}}\coloneqq\mathcal{P}_{\Omega_{\cdot,(m-N_{o})}}(C)- \mathcal{P}_{\cdot,(m-N_{o})}(C)\). First, if \(m\neq N_{o}+t_{o}\), then \(\breve{\mathcal{X}}=0\). If \(m=N_{o}+t_{o}\), we have \[\left\|B_{1}\right\|_{F}=\left\|\breve{\mathcal{X}}Z_{o}^{\tau,(m)}\right\|_{ F}=\left\|\left[\begin{array}{c}(\omega_{1t_{o}}-1)C_{1,t_{o}}\\ \vdots\\ (\omega_{N_{o}t_{o}}-1)C_{N_{o},t_{o}}\end{array}\right]Z_{o,t_{o},\cdot}^{ \tau,(m)}\right\|_{F}\leq\left\|\left[\begin{array}{c}(\omega_{1t_{o}}-1)C_ {1,t_{o}}\\ \vdots\\ (\omega_{N_{o}t_{o}}-1)C_{N_{o},t_{o}}\end{array}\right]\right\|_{2}\left\|Z_ {o,t_{o},\cdot}^{\tau,(m)}\right\|_{2}.\] Then, since \[\left\|\left[\begin{array}{c}(\omega_{1t_{o}}-1)C_{1,t_{o}}\\ \vdots\\ (\omega_{N_{o}t_{o}}-1)C_{N_{o},t_{o}}\end{array}\right]\right\|_{2}\left\|Z_ {o,t_{o},\cdot}^{\tau,(m)}\right\|_{2}\leq\sqrt{\sum_{i\in\mathcal{Q}_{o}}C_ {i,t_{o}}^{2}}\left\|Z_{o,t_{o},\cdot}^{\tau,(m)}\right\|_{2}\leq\sqrt{ \vartheta_{o}}\left\|C\right\|_{\infty}\left\|Z_{o}\right\|_{2,\infty},\] we obtain \[\left\|B_{1}\right\|_{F}\lesssim\frac{\sqrt{\vartheta_{o}}\mu_{o}r}{\min\{N_{ o},T_{o}\}}\psi_{\max,o}\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}- \mathcal{F}_{o}\right\|_{2,\infty}.\] Proof of Claim D.10.: First, assume that \(m\leq N_{o}\). We follow the notation in the proof of Claim D.9. When \(m\notin\mathcal{Q}_{o}\), \(\mathcal{X}=0\). If \(m\in\mathcal{Q}_{o}\), we have \[\left\|B_{2}\right\|_{F}=\left\|\mathcal{X}^{\top}X_{o}^{\tau,(m)} \right\|_{F}=\left\|\left[\begin{array}{c}0\\ \vdots\\ -C_{m,t_{o}}\\ \vdots\\ 0\end{array}\right]X_{o,m,\cdot}^{\tau,(m)}\right\|_{F}\leq 2\left\|C\right\|_{ \infty}\left\|X_{o}\right\|_{2,\infty}.\] So, we have \[\left\|B_{2}\right\|_{F}\lesssim\frac{\mu_{o}r}{\min\{N_{o},T_{o}\}}\psi_{\max,o} \left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2, \infty}.\] Assume now that \(N_{o}+1\leq m\leq N_{o}+T_{o}\). 
Using the unitary invariance of the Frobenius norm, we have \(\left\|B_{2}\right\|_{F}=\left\|\breve{\mathcal{X}}^{\top}X_{o}^{\tau,(m)} \right\|_{F}\). If \(m\neq N_{o}+t_{o}\), then \(\breve{\mathcal{X}}=0\). In addition, if \(m=N_{o}+t_{o}\), we obtain \[\left\|B_{2}\right\|_{F}=\left\|\breve{\mathcal{X}}^{\top}X_{o}^{\tau,(m)} \right\|_{F}=\left\|\sum_{i=1}^{N_{o}}\breve{\mathcal{X}}_{i,\cdot}^{\top}X_{o,i,\cdot}^{\tau,(m)}\right\|_{F}=\left\|\sum_{i\in\mathcal{Q}_{o}}-C_{i,t_{o}}X _{o,i,\cdot}^{\tau,(m)}\right\|_{2}\leq 2\vartheta_{o}\left\|C\right\|_{ \infty}\left\|X_{o}\right\|_{2,\infty}.\] Therefore, we have \[\left\|B_{2}\right\|_{F}\lesssim\frac{\vartheta_{o}\mu_{o}r}{\min\{N_{o},T_{o} \}}\psi_{\max,o}\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F} _{o}\right\|_{2,\infty}.\] Proof of Claim D.11.: First, we bound \(C_{1}\). Assume that \(m\leq N_{o}\). Since the Frobenius norm is unitary invariant, we have \[\left\|C_{1}\right\|_{F}=\left\|\mathcal{P}_{\Omega_{m,\cdot}}(\mathcal{E}_{o })Z_{o}^{\tau,(m)}\right\|_{F}=\left\|\sum_{t=1}^{T_{o}}\underbrace{\omega_{ mt}\epsilon_{mt}Z_{o,t,\cdot}^{\tau,(m)}}_{u_{mt}}\right\|_{2}.\] By construction of the leave-one-out estimates, \(\{\epsilon_{mt}\}_{1\leq t\leq T_{o}}\) are independent of \(Z_{o}^{\tau,(m)}\). Therefore, we have \(\mathbb{E}\left[\epsilon_{mt}\left|Z_{o}^{\tau,(m)}\right.\right]=\mathbb{E} \left[\epsilon_{mt}\right]=0\), and conditioning on \(Z_{o}^{\tau,(m)}\), \(\{\epsilon_{mt}\}_{1\leq t\leq T_{o}}\) are independent across \(t\). Hence, conditioning on \(Z_{o}^{\tau,(m)}\), we can exploit the matrix Bernstein inequality (Koltchinskii et al., 2011, Proposition 2). Note that \[\left\|\left\|u_{mt}\right\|_{2}\right\|_{\text{subE}}\leq\left\|Z_{o}^{\tau,( m)}\right\|_{2,\infty}\left\|\omega_{mt}\epsilon_{mt}\right\|_{\text{subE}} \lesssim\sigma\left\|Z_{o}^{\tau,(m)}\right\|_{2,\infty},\] where \(\left\|\cdot\right\|_{\text{subE}}\) denotes the sub-exponential norm; see Koltchinskii et al. (2011); Tropp et al. (2015). Further, we can see that \[\left\|\sum_{t=1}^{T_{o}}\omega_{mt}^{2}\mathbb{E}\left[\epsilon_{mt}^{2}\left| Z_{o}^{\tau,(m)}\right.\right]Z_{o,t,\cdot}^{\tau,(m)}Z_{o,t,\cdot}^{\tau,(m) \top}\right\|\lesssim\sigma^{2}\left\|\sum_{t=1}^{T_{o}}Z_{o,t,\cdot}^{\tau,( m)}Z_{o,t,\cdot}^{\tau,(m)\top}\right\|\leq\sigma^{2}\left\|Z_{o}^{\tau,(m)} \right\|_{F}^{2}.\] Then, the matrix Bernstein inequality reveals that, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), \[\left\|\sum_{t=1}^{T_{o}}u_{mt}\right\|_{2} \lesssim\sqrt{\sigma^{2}\left\|Z_{o}^{\tau,(m)}\right\|_{F}^{2} \max\{\log N_{o},\log T_{o}\}}+\sigma\left\|Z_{o}^{\tau,(m)}\right\|_{2,\infty }\max\{\log^{2}N_{o},\log^{2}T_{o}\}\] \[\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}\left\| Z_{o}^{\tau,(m)}\right\|_{2,\infty},\] where the last relation uses the assumption \(\max\{N_{o},T_{o}\}\gg\max\{\log^{3}N_{o},\log^{3}T_{o}\}\). Applying Lemma D.22 (iv) with the assumption \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{ \sqrt{\kappa_{o}^{2}\max\{\log N_{o},\log T_{o}\}}},\] we reach, with probability at least \(1-O\left(\min\{N_{o}^{-100},T_{o}^{-100}\}\right)\), \[\left\|C_{1}\right\|_{F}\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_ {o}\}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}.\] Now, we consider the case of \(m\geq N_{o}+1\). 
Since Frobenius norm is unitary invariant and only the \((m-N_{o})\)-th column of the matrix \(\mathcal{P}_{\Omega_{\cdot,(m-N_{o})}}(\mathcal{E}_{o})\) has nonzero elements, \[\left\|C_{1}\right\|_{F} =\left\|\mathcal{P}_{\Omega_{\cdot,(m-N_{o})}}(\mathcal{E}_{o}) Z_{o}^{\tau,(m)}\right\|_{F}=\left\|\left[\begin{matrix}\omega_{1,(m-N_{o})} \epsilon_{1,(m-N_{o})}\\ \vdots\\ \omega_{N_{o},(m-N_{o})}\epsilon_{N_{o},(m-N_{o})}\end{matrix}\right]Z_{o,(m- N_{o}),\cdot}^{\tau,(m)}\right\|_{F}\] \[\leq\left\|\sum_{i=1}^{N_{o}}\underbrace{e_{i}\omega_{i,(m-N_{o}) }\epsilon_{i,(m-N_{o})}Z_{o,(m-N_{o}),\cdot}^{\tau,(m)}}_{:=u_{i,(m-N_{o})}} \right\|_{F}.\] Similarly, conditioning on \(\{Z_{o}^{\tau,(m)}\}\), we can exploit the matrix Bernstein inequality (Koltchinskii et al., 2011, Proposition 2). Note that \[\left\|\left\|u_{i,(m-N_{o})}\right\|_{F}\right\|_{\text{subE}} \leq\left\|Z_{o}^{\tau,(m)}\right\|_{2,\infty}\left\|\epsilon_{i,(m-N_{o})} \right\|_{\text{subE}}\lesssim\sigma\left\|Z_{o}^{\tau,(m)}\right\|_{2, \infty}\text{ and }\] \[\left\|\sum_{i=1}^{N_{o}}\omega_{i,(m-N_{o})}\mathbb{E}\left[\epsilon_{i,(m-N_ {o})}^{2}\left|Z_{o}^{\tau,(m)}\right.\right]e_{i}Z_{o,(m-N_{o}),\cdot}^{\tau, (m)\top}Z_{o,(m-N_{o}),\cdot}^{\tau,(m)\top}e_{i}^{\top}\right\|\lesssim N_{o }\sigma^{2}\left\|Z_{o}^{\tau,(m)}\right\|_{2,\infty}^{2}.\] Then, the matrix Bernstein inequality reveals that, with probability at least \(1-O(\min\{N_{o}^{-101},T_{o}^{-101}\})\), \[\left\|\sum_{i=1}^{N_{o}}u_{i,(m-N_{o})}\right\|_{F} \lesssim\sqrt{\sigma^{2}\left\|Z_{o}^{\tau,(m)}\right\|_{2, \infty}^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}+\sigma\left\|Z_{o}^{\tau,( m)}\right\|_{2,\infty}\max\{\log^{2}N_{o},\log^{2}T_{o}\}\] \[\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}} \left\|Z_{o}^{\tau,(m)}\right\|_{2,\infty},\] where the last relation uses the assumption \(\max\{N_{o},T_{o}\}\gg\max\{\log^{3}N_{o},\log^{3}T_{o}\}\). Applying Lemma D.22 (iv) with the assumption \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1} {\sqrt{\kappa_{o}^{2}\max\{\log N_{o},\log T_{o}\}}},\] we reach, with probability at least \(1-O\left(\min\{N_{o}^{-100},T_{o}^{-100}\}\right)\), \[\left\|C_{1}\right\|_{F}\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o }\}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}.\] We turn to \(C_{2}\). Assume \(m\leq N_{o}\). Since Frobenius norm is unitary invariant, we have \[\left\|C_{2}\right\|_{F}=\left\|\left(\mathcal{P}_{\Omega_{m,}}(\mathcal{E}_{ o})\right)^{\top}X_{o}^{\tau,(m)}\right\|_{F}=\left\|\begin{bmatrix}\omega_{m1} \epsilon_{m1}\\ \vdots\\ \omega_{mt}\epsilon_{mt}\end{bmatrix}X_{o,m,\cdot}^{\tau,(m)}\right\|_{F}= \left\|\sum_{t=1}^{T_{o}}\underbrace{e_{t}\omega_{mt}\epsilon_{mt}X_{o,m, \cdot}^{\tau,(m)}}_{:=u_{mt}}\right\|_{F}.\] Similarly, conditioning on \(X_{o}^{\tau,(m)}\), we can exploit the matrix Bernstein inequality. 
Note that \(\left\|\left|u_{mt}\right|_{F}\right\|_{\mathrm{subE}}\lesssim\sigma\left\| X_{o}^{\tau,(m)}\right\|_{2,\infty}\) and \[\left\|\sum_{t=1}^{T_{o}}\omega_{mt}\mathbb{E}\left[\epsilon_{mt}^{2}\left|X_ {o}^{\tau,(m)}\right|e_{t}X_{o,m,\cdot}^{\tau,(m)}X_{o,m,\cdot}^{\tau,(m)\top }e_{t}^{\top}\right\|\lesssim\sigma^{2}\left\|\sum_{t=1}^{T_{o}}X_{o,m,\cdot} ^{\tau,(m)}X_{o,m,\cdot}^{\tau,(m)\top}\right\|\leq T_{o}\sigma^{2}\left\|X_{o }^{\tau,(m)}\right\|_{2,\infty}^{2}.\] Then, the matrix Bernstein inequality reveals that, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), \[\left\|\sum_{t=1}^{T_{o}}u_{t}\right\|_{F} \lesssim\sqrt{\sigma^{2}\left\|X_{o}^{\tau,(m)}\right\|_{2, \infty}^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}+\sigma\left\|X_{o}^{\tau, (m)}\right\|_{2,\infty}\max\{\log^{2}N_{o},\log^{2}T_{o}\}\] \[\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}} \left\|X_{o}^{\tau,(m)}\right\|_{2,\infty},\] where the last relation uses the assumption \(\max\{N_{o},T_{o}\}\gg\max\{\log^{3}N_{o},\log^{3}T_{o}\}\). Applying Lemma D.22 (iv) with the assumption \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\sqrt{\kappa _{o}^{2}\max\{\log N_{o},\log T_{o}\}}},\] we reach, with probability at least \(1-O\left(\min\{N_{o}^{-100},T_{o}^{-100}\}\right)\), \[\left\|C_{2}\right\|_{F}\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T _{o}\}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}.\] Now, assume that \(m\geq N_{o}+1\). Since Frobenius norm is unitary invariant and only \((m-N_{o})\)-th column of the matrix \(\mathcal{P}_{\Omega_{\cdot,(m-N_{o})}}(\mathcal{E}_{o})\) has nonzero elements, \[\left\|C_{2}\right\|_{F}=\left\|\left(\mathcal{P}_{\Omega_{\cdot,(m-N_{o})}}( \mathcal{E}_{o})\right)^{\top}X_{o}^{\tau,(m)}\right\|_{F}=\left\|\sum_{i=1}^{ N_{o}}\underbrace{\omega_{i,(m-N_{o})}\epsilon_{i,(m-N_{o})}X_{o,i,\cdot}^{\tau,(m)}}_{:=u_{ i,(m-N_{o})}}\right\|_{2}.\] Conditioning on \(X_{o}^{\tau,(m)}\), the matrix Bernstein inequality reveals that, with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\})\), \[\left\|\sum_{i=1}^{N_{o}}u_{i,(m-N_{o})}\right\|_{2} \lesssim\sqrt{\sigma^{2}\left\|X_{o}^{\tau,(m)}\right\|_{F}^{2} \max\{\log N_{o},\log T_{o}\}}+\sigma\left\|X_{o}^{\tau,(m)}\right\|_{2,\infty} \max\{\log^{2}N_{o},\log^{2}T_{o}\}\] \[\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}\left\| X_{o}^{\tau,(m)}\right\|_{2,\infty},\] where the last relation uses the assumption \(\max\{N_{o},T_{o}\}\gg\max\{\log^{3}N_{o},\log^{3}T_{o}\}\). Applying Lemma D.22 (iv) with the assumption \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\sqrt{\kappa _{o}^{2}\max\{\log N_{o},\log T_{o}\}}},\] we reach, with probability at least \(1-O\left(\min\{N_{o}^{-100},T_{o}^{-100}\}\right)\), \[\left\|C_{2}\right\|_{F}\lesssim\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_ {o}\}}\left\|\mathcal{F}_{o}\right\|_{2,\infty}.\] The following two lemmas are the modifications of Section 4.2 of Candes and Recht (2009) for our missing pattern. The way of proof is different from that of Candes and Recht (2009) since we assume missing not at random. These lemmas are used in many parts of proofs. **Lemma D.12**.: _Define \(\mathcal{P}_{T^{*}}(A)=U_{o}U_{o}^{\top}A+AV_{o}V_{o}^{\top}-U_{o}U_{o}^{\top} AV_{o}V_{o}^{\top}\). Assume that \(\frac{\vartheta_{o}\mu_{o}r}{\min\{N_{o},T_{o}\}}\ll 1\). 
Then, we have_ \[\sqrt{\frac{99}{100}}\left\|\mathcal{P}_{T^{*}}(A)\right\|_{F}\leq\left\| \mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}(A)\right\|_{F}\leq\sqrt{\frac{10 1}{100}}\left\|\mathcal{P}_{T^{*}}(A)\right\|_{F}.\] (D.3) Proof.: We have by Lemma D.13 \[\left|\left\|\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}(A)\right\|_{F}^{2}- \left\|\mathcal{P}_{T^{*}}(A)\right\|_{F}^{2}\right| =\left|\left\langle\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}( A),\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}(A)\right\rangle-\left\langle \mathcal{P}_{T^{*}}(A),\mathcal{P}_{T^{*}}(A)\right\rangle\right|\] \[=\left|\left\langle\left(\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^ {*}}-\mathcal{P}_{T^{*}}\right)(A),\mathcal{P}_{T^{*}}(A)\right\rangle\right|\] \[\leq\left\|\left(\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}} \mathcal{P}_{T^{*}}-\mathcal{P}_{T^{*}}\right)(A)\right\|_{F}\left\|\mathcal{ P}_{T^{*}}(A)\right\|_{F}\] \[\leq\left\|\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P} _{T^{*}}-\mathcal{P}_{T^{*}}\right\|\left\|\mathcal{P}_{T^{*}}(A)\right\|_{F}^ {2}\] \[\leq 0.01\left\|\mathcal{P}_{T^{*}}(A)\right\|_{F}^{2}.\] **Lemma D.13**.: _Under the incoherence assumption, we have_ \[\left\|\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}-\mathcal{P}_ {T^{*}}\right\|\leq\frac{2\vartheta_{o}\mu_{o}r}{\min\{N_{o},T_{o}\}}.\] Proof.: Let \((e_{i}^{N_{o}})_{i\in[N_{o}]},(e_{t}^{T_{o}})_{t\in[T_{c}]}\) be the standard basis vectors for \(\mathbb{R}^{N_{o}}\) and \(\mathbb{R}^{T_{o}}\), respectively. Then \(A\in\mathbb{R}^{N_{o}\times T_{o}}\) can be written as \(A=\sum_{(i,t)\in[N_{o}]\times[T_{o}]}\langle A,e_{i}^{N_{o}}e_{t}^{T_{o}\top} \rangle e_{i}^{N_{o}}e_{t}^{T_{o}\top}\). Further, we can readily obtain \[\mathcal{P}_{T^{*}}(A)=\sum_{i,t}\langle\mathcal{P}_{T^{*}}(A),e_ {i}^{N_{o}}e_{t}^{T_{o}\top}\rangle e_{i}^{N_{o}}e_{t}^{T_{o}\top}=\sum_{i,t} \langle A,\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top})\rangle e_{i}^{N _{o}}e_{t}^{T_{o}\top},\] \[\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}(A)=\sum_{i,t}\omega_{ it}\langle A,\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top})\rangle e_{i}^{N_{o}}e_{t}^{T_{o}\top}, \mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}(A)=\sum_{i,t} \omega_{it}\langle A,\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top}) \rangle\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top}).\] By defining an outer product \(\otimes\) as \((A\otimes B)(C)=\langle B,C\rangle A\), we also have \[\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}=\sum_{i,t} \omega_{it}\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top})\otimes\mathcal{ P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top})\] and \(\mathcal{P}_{T^{*}}=\sum_{i,t}\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o} \top})\otimes\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top})\). 
Hence, we have \[\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}-\mathcal{P}_{T^ {*}}=\sum_{i,t}(\omega_{it}-1)\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o} \top})\otimes\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t}^{T_{o}\top})=\sum_{i\in \mathcal{Q}_{o}}\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\otimes \mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top}).\] By the definition of \(\mathcal{P}_{T^{*}}\), \[\left\|\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\right\|_{F}^{2} =\langle\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top}),e_{i}^{N_{o}}e _{t_{o}}^{T_{o}\top}\rangle=\left\|U_{o}U_{o}^{\top}e_{i}^{N_{o}}\right\|^{2}+ \left\|V_{o}V_{o}^{\top}e_{t_{o}}^{T_{o}\top}\right\|^{2}-\left\|U_{o}U_{o}^{ \top}e_{i}^{N_{o}}\right\|^{2}\left\|V_{o}V_{o}^{\top}e_{t_{o}}^{T_{o}\top} \right\|^{2}.\] Due to the incoherence condition, \(\left\|U_{o}U_{o}^{\top}e_{i}^{N_{o}}\right\|^{2}\leq\mu_{o}r/N_{o}\) and \(\left\|V_{o}V_{o}^{\top}e_{t_{o}}^{T_{o}}\right\|^{2}\leq\mu_{o}r/T_{o}\). Then, we have \[\left\|\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\right\|_{F}^{2} \leq 2\mu_{o}r/\min\{N_{o},T_{o}\}.\] Note that \[\left\|\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\otimes\mathcal{ P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\right\|=\sup\langle B_{1}, \mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\rangle\langle\mathcal{ P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top}),B_{2}\rangle\] where the supremum is taken over a countable collection of matrices \(B_{1}\) and \(B_{2}\) such that \(\left\|B_{1}\right\|_{F}\leq 1\) and \(\left\|B_{2}\right\|_{F}\leq 1\). Then, for all \(i\in\mathcal{Q}_{o}\), we have \[\left\|\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top}) \otimes\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\right\| \leq|\langle B_{1},\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o} \top})\rangle||\langle\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top}),B_ {2}\rangle|\] \[\leq\left\|\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top}) \right\|_{F}^{2}\] \[\leq\frac{2\mu_{o}r}{\min\{N_{o},T_{o}\}}.\] Hence, we have \[\left\|\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*} }-\mathcal{P}_{T^{*}}\right\| \leq\sum_{i\in\mathcal{Q}_{o}}\left\|\mathcal{P}_{T^{*}}(e_{i}^{N_{ o}e_{t_{o}}^{T_{o}\top}})\otimes\mathcal{P}_{T^{*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o} \top})\right\|\] \[\leq\vartheta_{o}\max_{i\in\mathcal{Q}_{o}}\left\|\mathcal{P}_{T^ {*}}(e_{i}^{N_{o}}e_{t_{o}}^{T_{o}\top})\otimes\mathcal{P}_{T^{*}}(e_{i}^{N_{o }}e_{t_{o}}^{T_{o}\top})\right\|\] \[\leq\frac{2\vartheta_{o}\mu_{o}r}{\min\{N_{o},T_{o}\}}.\] The following lemma is a simple modification of Lemma D.19. Using this lemma, we can change \(\left\|\mathcal{F}_{o}\right\|_{2,\infty}\) with \(\left\|X_{o}\right\|_{2,\infty}\) and \(\left\|Z_{o}\right\|_{2,\infty}\) at the cost of having an additional term \(\sqrt{r}\). 
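Because Lemmas D.12 and D.13 are invoked throughout the proofs, a small numerical sanity check may help fix ideas. The Python sketch below is illustrative only: the dimensions, the missing column \(t_{o}\), the row set \(\mathcal{Q}_{o}\), and the random subspaces are made-up inputs, and the incoherence parameter \(\mu_{o}\) is computed from the sampled \(U_{o},V_{o}\). It builds the operator \(\mathcal{P}_{T^{*}}\mathcal{P}_{\Omega_{o}}\mathcal{P}_{T^{*}}-\mathcal{P}_{T^{*}}\) explicitly on vectorized matrices and compares its operator norm against the bound of Lemma D.13.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, r = 30, 25, 2                    # illustrative sizes (N_o, T_o, r)
t0 = 4                                 # the single treated/missing column t_o
Q = np.array([0, 7, 11])               # missing rows Q_o at column t_o

U, _ = np.linalg.qr(rng.normal(size=(N, r)))   # orthonormal basis playing U_o
V, _ = np.linalg.qr(rng.normal(size=(T, r)))   # orthonormal basis playing V_o

def P_T(A):
    """Tangent-space projection P_{T*}(A) = UU'A + AVV' - UU'AVV'."""
    return U @ U.T @ A + A @ V @ V.T - U @ U.T @ A @ V @ V.T

def P_Omega(A):
    """Keep observed entries: zero out the missing block {(i, t0) : i in Q}."""
    B = A.copy()
    B[Q, t0] = 0.0
    return B

# Matrix representation of P_T P_Omega P_T - P_T acting on vec(A).
M = np.zeros((N * T, N * T))
for k in range(N * T):
    A = np.zeros(N * T)
    A[k] = 1.0
    A = A.reshape(N, T)
    M[:, k] = (P_T(P_Omega(P_T(A))) - P_T(A)).ravel()

op_norm = np.linalg.norm(M, 2)
# Incoherence mu_o computed from the sampled subspaces, as in Lemma D.13.
mu = max(N * (U * U).sum(axis=1).max(), T * (V * V).sum(axis=1).max()) / r
bound = 2 * len(Q) * mu * r / min(N, T)   # 2 * theta_o * mu_o * r / min{N,T}
print(f"operator norm {op_norm:.4f} <= bound {bound:.4f}")
```

Because the missing pattern consists of a single column block, the operator has rank at most \(\vartheta_{o}\), which is why the bound scales linearly in \(\vartheta_{o}\).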
**Lemma D.14**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\), \(0<\eta_{o}\ll 1/(\kappa_{o}^{2}\psi_{\max,o}\min\{N_{o},T_{o}\})\), \(\min\{N_{o},T_{o}\}\gg\vartheta_{o}\kappa_{o}^{2}\mu_{o}r\), and_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\max\{N_{o}^{2},T_{o}^{2}\}}\ll\frac{1}{ \sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{o}\}}},\ \ \min\{N_{o},T_{o}\}\gg\kappa_{o}\mu_{o}r\max\{\log^{3}N,\log^{3}T\}.\] _Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\})\), we have_ \[\left\|X_{o}^{\tau+1}H_{o}^{\tau+1}-X_{o}\right\|_{2,\infty}\leq C _{\infty,\mathcal{X}}r^{1/2}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N _{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}} \right)\left\|X_{o}\right\|_{2,\infty},\] \[\left\|Z_{o}^{\tau+1}H_{o}^{\tau+1}-Z_{o}\right\|_{2,\infty}\leq C _{\infty,Z}r^{1/2}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o }\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\| Z_{o}\right\|_{2,\infty},\] _where \(C_{\infty,\mathcal{X}}\) and \(C_{\infty,Z}\) are some sufficiently large constants._ Proof.: By some modification of Lemma D.8, we can have \[\max_{1\leq m\leq N_{o}}\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}- \mathcal{F}_{o}^{\tau+1,(m)}Q_{o}^{\tau+1,(m)}\right\|_{F}\leq C_{3,X}\sqrt{r }\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o }}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|_{2,\infty},\] \[\max_{N_{o}+1\leq m\leq T_{o}}\left\|\mathcal{F}_{o}^{\tau+1}H_{o }^{\tau+1}-\mathcal{F}_{o}^{\tau+1,(m)}Q_{o}^{\tau+1,(m)}\right\|_{F}\] \[\leq C_{3,Z}\sqrt{r}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right) \left\|Z_{o}\right\|_{2,\infty}.\] In addition, by some modification of Lemma D.18, we have \[\max_{1\leq m\leq N_{o}}\left\|\left(\mathcal{F}_{o}^{\tau+1,(m)}H_{o}^{\tau+ 1,(m)}-\mathcal{F}_{o}\right)_{m,\cdot}\right\|_{2}\leq C_{4,X}\kappa_{o}\left( \frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+ \frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|_{2,\infty},\] \[\max_{N_{o}+1\leq m\leq T_{o}}\left\|\left(\mathcal{F}_{o}^{\tau+1,(m)}H_{o}^{ \tau+1,(m)}-\mathcal{F}_{o}\right)_{m,\cdot}\right\|_{2}\leq C_{4,2^{K}\kappa_{o }}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o }}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|Z_{o}\right\|_{2,\infty}.\] Then, when \(1\leq m\leq N_{o}\), we have with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\})\) \[\left\|\left(X_{o}^{\tau+1}H_{o}^{\tau+1}-X_{o}\right)_{m,\cdot} \right\|_{2} \leq\left\|\left(\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{ F}_{o}\right)_{m,\cdot}\right\|_{2}\] \[\leq\left\|\left(\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{ F}_{o}^{\tau+1,(m)}H_{o}^{\tau+1,(m)}\right)_{m,\cdot}\right\|_{2}+\left\| \left(\mathcal{F}_{o}^{\tau+1,(m)}H_{o}^{\tau+1,(m)}-\mathcal{F}_{o}\right)_{ m,\cdot}\right\|_{2}\] \[\leq\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o }^{\tau+1,(m)}H_{o}^{\tau+1,(m)}\right\|_{F}\] \[\quad+C_{4,\infty}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o} \log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o} }\right)\left\|X_{o}\right\|_{2,\infty},\] (D.4) For the first term, use Lemma D.22 to have, with probability at least 
\(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\})\) \[\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o}^{ \tau+1,(m)}H_{o}^{\tau+1,(m)}\right\|_{F} \leq 5\kappa_{o}\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}- \mathcal{F}_{o}^{\tau+1,(m)}Q_{o}^{\tau+1,(m)}\right\|_{F}\] \[\leq 5\kappa_{o}C_{3,X}\sqrt{r}\left(\frac{\sigma\sqrt{\max\{N_{o} \log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o} }\right)\left\|X_{o}\right\|_{2,\infty}\] (D.5) Then, (D.4) and (D.5) collectively reveal that, with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\})\), \[\left\|\left(X_{o}^{\tau+1}H_{o}^{\tau+1}-X_{o}\right)_{m,\cdot} \right\|_{2}\leq C_{\infty,X}\sqrt{r}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{ N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o} }\right)\left\|X_{o}\right\|_{2,\infty}\] under the assumption that \(C_{\infty,X}\geq 5C_{3,X}+C_{4,X}\). Similarly, we can show the bound for \(\left\|\left(Z_{o}^{\tau+1}H_{o}^{\tau+1}-Z_{o}\right)_{m,\cdot}\right\|_{2}\). The following lemmas are the simple modified versions of the lemmas in Chen et al. (2020b). With the aids of Lemmas D.12 and D.13, if we follow their proofs by setting \(p=1\) while considering our observation pattern cautiously, we can get the results. To save space, we omit the proofs. However, we are willing to provide the full proofs upon request. **Lemma D.15**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\), \(\bar{\tau}=\max\{N_{o}^{23},T_{o}^{23}\}\) and \(\eta_{o}\stackrel{{ c}}{{\asymp}}1/\max\{N_{o}^{6},T_{o}^{6}\} \kappa_{o}^{3}\psi_{\max,o}\). Suppose also that_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2}\}}{\min\{N_{o},T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{o}\} }},\ \min\{N_{o},T_{o}\}\gg\mu_{o}r\kappa_{o}\max\{\log^{2}N_{o},\log^{2}T_{o}\},\] _and the induction hypotheses (A.18)-(A.25) hold for all \(0\leq\tau\leq\bar{\tau}\) and (A.26) holds for all \(1\leq\tau\leq\bar{\tau}\). Then there is a constant \(C_{gr}>0\) such that_ \[\min_{0\leq\tau<\bar{\tau}}\left\|\nabla f(X_{o}^{\tau},Z_{o}^{\tau})\right\|_{F} \leq C_{gr}\frac{1}{\max\{N_{o}^{5},T_{o}^{5}\}}\lambda_{o}\sqrt{\psi_{\min,o}}.\] **Lemma D.16**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\),_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2}\}}{\min\{N_{ o},T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{o} \}}},\ \min\{N_{o},T_{o}\}\gg\mu_{o}r\kappa_{o}\max\{\log^{2}N_{o},\log^{2}T_{o}\}\] _and \(0<\eta_{o}\ll 1/(\kappa_{o}^{5/2}\psi_{\max,o})\). 
Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\}),\)_ \[\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o}\right\|_{F} \leq C_{F}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{ \lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|_{F},\] _where \(C_{F}>0\) is large enough._ **Lemma D.17**.: _Suppose \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\),_ \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\sqrt{ \kappa_{o}^{4}\max\{\log N_{o},\log T_{o}\}}},\ \min\{N_{o}^{2},T_{o}^{2}\}\gg\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_ {o},T_{o}\log T_{o}\},\] _and \(0<\eta_{o}\ll 1/(\kappa_{o}^{3}\psi_{\max,o}\sqrt{r})\). Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\}),\)_ \[\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o}\right\|\leq C_{ op}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{ \lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|\] _provided that \(C_{op}\) is sufficiently large._ **Lemma D.18**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\),_ \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\sqrt{ \kappa_{o}^{2}\max\{\log N_{o},\log T_{o}\}}},\ \min\{N_{o}^{2},T_{o}^{2}\}\gg\kappa_{o}^{2}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_ {o},T_{o}\log T_{o}\},\] _and \(0<\eta_{o}\ll 1/(\kappa_{o}^{2}\sqrt{r}\psi_{\max,o})\). Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\}),\)_ \[\max_{1\leq m\leq N_{o}+T_{o}}\left\|\left(\mathcal{F}_{o}^{\tau+1,(m)}H_{o}^{ \tau+1,(m)}-\mathcal{F}_{o}\right)_{m,}\right\|_{2}\leq C_{4}\kappa_{o}\left( \frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+ \frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|\mathcal{F}_{o}\right\|_{2,\infty }.\] **Lemma D.19**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\),_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2}\}}{\min\{N_{ o},T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{o} \}}},\] \[\min\{N_{o}^{2},T_{o}^{2}\}\gg\kappa_{o}^{4}\mu_{o}^{2}r^{2}\max\{N_{o}\log^{2}N_{o },T_{o}\log^{2}T_{o}\}.\] _Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-98},T_{o}^{-98}\}),\)_ \[\left\|\mathcal{F}_{o}^{\tau+1}H_{o}^{\tau+1}-\mathcal{F}_{o}\right\|_{2, \infty}\leq C_{\infty}\kappa_{o}\left(\frac{\sigma\sqrt{\max\{N_{o}\log N_{o},T _{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\| \mathcal{F}_{o}\right\|_{2,\infty}.\] _holds as long as \(C_{\infty}\geq 5C_{3}+C_{4}\)._ **Lemma D.20**.: _Suppose \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\),_ \[\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\sqrt{\kappa _{o}^{2}\max\{\log N_{o},\log T_{o}\}}},\,\,\,\min\{N_{o}^{2},T_{o}^{2}\}\gg \kappa_{o}^{2}\mu_{o}^{2}r^{2}\max\{N_{o}\log N_{o},T_{o}\log T_{o}\},\] _and \(0<\eta_{o}<1/\psi_{\min,o}\). 
Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-100},T_{o}^{-100}\}),\)_ \[\left\|X_{o}^{\tau+1\top}X_{o}^{\tau+1}-Z_{o}^{\tau+1\top}Z_{o}^{\tau+1} \right\|_{F}\leq C_{B}\kappa_{o}\eta_{o}\left(\frac{\sigma\sqrt{\max\{N_{o},T _{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right)\sqrt{r}\psi_{ \max,o}^{2}\] \[\max_{1\leq m\leq N_{o}+T}\left\|X_{o}^{\tau+1,(m)\top}X_{o}^{\tau+1,(m)}-Z_{o }^{\tau+1,(m)\top}Z_{o}^{\tau+1,(m)}\right\|_{F}\leq C_{B}\kappa_{o}\eta_{o} \left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{ o}}{\psi_{\min,o}}\right)\sqrt{r}\psi_{\max,o}^{2}\] _holds true as long as \(C_{B}\gg C_{op}^{2}\)._ **Lemma D.21**.: _Suppose that \(\lambda_{o}=C_{\lambda}\sigma\sqrt{\max\{N_{o},T_{o}\}}\) for some large constant \(C_{\lambda}>0\),_ \[\frac{\sigma}{\psi_{\min,o}}\sqrt{\frac{\max\{N_{o}^{2},T_{o}^{2}\}}{\min\{N_ {o},T_{o}\}}}\ll\frac{1}{\sqrt{\kappa_{o}^{4}\mu_{o}r\max\{\log N_{o},\log T_{ o}\}}},\] _and \(0<\eta_{o}\ll 1/(q\psi_{\max,o}\max\{N_{o},T_{o}\})\). Suppose also that the iterates satisfy (A.18)-(A.25) at the \(\tau\)-th iteration, then with probability at least \(1-O(\min\{N_{o}^{-99},T_{o}^{-99}\}),\)_ \[f(X_{o}^{\tau+1},Z_{o}^{\tau+1})\leq f(X_{o}^{\tau},Z_{o}^{\tau})-\frac{\eta _{o}}{2}\left\|\nabla f(X_{o}^{\tau},Z_{o}^{\tau})\right\|_{F}^{2}.\] **Lemma D.22**.: _Throughout the set of results, we assume that the \(\tau\)-th iterates satisfy the induction hypotheses (A.18)-(A.25)._ * _Suppose that_ \(\min\{N_{o},T_{o}\}\gg\mu_{o}r\max\{\log N_{o},\log T_{o}\}\)_. Then, we obtain_ \[\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o}\right\|_{2, \infty}\leq(C_{\infty}\kappa_{o}+C_{3})\left(\frac{\sigma\sqrt{\max\{N_{o}\log N _{o},T_{o}\log T_{o}\}}}{\psi_{\min,o}}+\frac{\lambda_{o}}{\psi_{\min,o}}\right) \left\|\mathcal{F}_{o}\right\|_{2,\infty},\] \[\left\|\mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}-\mathcal{F}_{o} \right\|\leq 2C_{op}\left(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}} +\frac{\lambda_{o}}{\psi_{\min,o}}\right)\left\|X_{o}\right\|.\] _._ 2. _Suppose that_ \(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\kappa_{o}\sqrt{ \max\{\log N_{o},\log T_{o}\}}}\)_. Then, we have_ \[\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}\right\|\leq\left\|X_{o }\right\|,\quad\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o} \right\|_{F}\leq\left\|X_{o}\right\|_{F},\quad\left\|\mathcal{F}_{o}^{\tau}H_ {o}^{\tau}-\mathcal{F}_{o}\right\|_{2,\infty}\leq\left\|\mathcal{F}_{o}\right\| _{2,\infty},\] (D.6) \[\left\|\mathcal{F}_{o}^{\tau}\right\|\leq 2\left\|X_{o}\right\|,\quad \left\|\mathcal{F}_{o}^{\tau}\right\|_{F}\leq 2\left\|X_{o}\right\|_{F},\quad \left\|\mathcal{F}_{o}^{\tau}\right\|_{2,\infty}\leq 2\left\|\mathcal{F}_{o}\right\| _{2,\infty}.\] (D.7) 3. _Suppose that_ \(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\kappa_{o} \sqrt{\max\{\log N_{o},\log T_{o}\}}}\) _and_ \(\sqrt{\frac{\mu_{o}r}{\min\{N_{o},T_{o}\}}}\ll 1\)_. Then, we have_ \[\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}-\mathcal{F}_{o}^{\tau,(m)}H_{o}^{ \tau,(m)}\right\|_{F}\leq 5\kappa_{o}\left\|\mathcal{F}_{o}^{\tau}H_{o}^{\tau}- \mathcal{F}_{o}^{\tau,(m)}Q_{o}^{\tau,(m)}\right\|_{F}.\] 4. _Suppose that_ \(\frac{\sigma\sqrt{\max\{N_{o},T_{o}\}}}{\psi_{\min,o}}\ll\frac{1}{\kappa_{o} \sqrt{\max\{\log N_{o},\log T_{o}\}}}\) _and_ \(\min\{N_{o},T_{o}\}\geq\kappa_{o}\mu_{o}\)_. 
Then (D.6), (D.7) also hold for_ \(\mathcal{F}_{o}^{\tau,(m)}H_{o}^{\tau,(m)}\)_. Additionally, we have_ \[\psi_{\min,o}/2\leq\psi_{\min}\left((Z_{o}^{\tau,(m)}H_{o}^{\tau,(m)})^{\top}Z_{o}^{\tau,(m)}H_{o}^{\tau,(m)}\right)\leq\psi_{\max}\left((Z_{o}^{\tau,(m)}H_{o}^{\tau,(m)})^{\top}Z_{o}^{\tau,(m)}H_{o}^{\tau,(m)}\right)\leq 2\psi_{\max,o}.\] **Lemma D.23**.: _Suppose \(\mathcal{F}_{1},\mathcal{F}_{2},\mathcal{F}_{0}\in\mathbb{R}^{(N_{o}+T_{o})\times r}\) are three matrices such that \(\left\|\mathcal{F}_{1}-\mathcal{F}_{0}\right\|\left\|\mathcal{F}_{0}\right\|\leq\psi_{\min}^{2}(\mathcal{F}_{0})/2\) and \(\left\|\mathcal{F}_{1}-\mathcal{F}_{2}\right\|\left\|\mathcal{F}_{0}\right\|\leq\psi_{\min}^{2}(\mathcal{F}_{0})/4\). Denote_ \[R_{1}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{1}R-\mathcal{F}_{0}\right\|_{F}\quad\text{and}\quad R_{2}\coloneqq\operatorname*{arg\,min}_{R\in\mathcal{O}^{r\times r}}\left\|\mathcal{F}_{2}R-\mathcal{F}_{0}\right\|_{F},\] _where \(\mathcal{O}^{r\times r}\) denotes the set of \(r\times r\) orthonormal matrices. Then we have_ \[\left\|\mathcal{F}_{1}R_{1}-\mathcal{F}_{2}R_{2}\right\|\leq 5\frac{\psi_{\max}^{2}(\mathcal{F}_{0})}{\psi_{\min}^{2}(\mathcal{F}_{0})}\left\|\mathcal{F}_{1}-\mathcal{F}_{2}\right\|\quad\text{and}\quad\left\|\mathcal{F}_{1}R_{1}-\mathcal{F}_{2}R_{2}\right\|_{F}\leq 5\frac{\psi_{\max}^{2}(\mathcal{F}_{0})}{\psi_{\min}^{2}(\mathcal{F}_{0})}\left\|\mathcal{F}_{1}-\mathcal{F}_{2}\right\|_{F}.\] Proof.: This is the same as Lemma 37 in Ma et al. (2020).

## Appendix E Additional empirical findings: comparison with the two-way fixed effect model in Chung et al. (2020)

Finally, we provide further details of the comparison between our model and the two-way fixed effect model in Chung et al. (2020), which is omitted in the main text to save space. Denote the quote, trade, and trade-at-rule dummy variables by \(\mathcal{Q}_{i}\), \(\mathcal{T}_{i}\), and \(\mathcal{TA}_{i}\), respectively, and the pilot period dummy variable by \(Pilot_{t}\). Chung et al. (2020) consider the following two-way fixed effect model: \[y_{it}=(\mathcal{Q}_{i}\times Pilot_{t})\theta^{(1)}+(\mathcal{T}_{i}\times Pilot_{t})\theta^{(2)}+(\mathcal{TA}_{i}\times Pilot_{t})\theta^{(3)}+x_{it}^{\top}\beta+\alpha_{i}+\delta_{t}+\epsilon_{it},\] where \(x_{it}\) is the set of \((\mathcal{Q}_{i}\times Pilot_{t}\times TBC_{it})\), \((Pilot_{t}\times TBC_{it})\), and other control variables like stock prices and trading volumes. Since \(y_{it}=\sum_{0\leq d\leq 3}\Upsilon_{it}^{(d)}y_{it}^{(d)}\), where \(\Upsilon_{it}^{(d)}=1\) if and only if unit \(i\) receives treatment \(d\) at time \(t\), and zero otherwise, with the convention that treatment \(0\) is the control, this model can be represented as Model (4.1). On the other hand, our model can be represented as: \[y_{it}=(\mathcal{Q}_{i}\times Pilot_{t})\theta_{it}^{(1)}+(\mathcal{T}_{i}\times Pilot_{t})\theta_{it}^{(2)}+(\mathcal{TA}_{i}\times Pilot_{t})\theta_{it}^{(3)}+x_{it}^{\top}\beta+\zeta_{i}^{\top}\eta_{t}^{(0)}+\epsilon_{it}.\] As noted in the main text, the two-way fixed effect model is nested in our model and is highly likely to be misspecified. Table E.1 provides estimates for both models. \(\beta_{1}\) and \(\beta_{2}\) are the coefficients for \((\mathcal{Q}_{i}\times Pilot_{t}\times TBC_{it})\) and \((Pilot_{t}\times TBC_{it})\), respectively. Note that the positive \(\beta_{1}\) means that a larger TBC results in a larger treatment effect of the Q rule: as the minimum quoted spread increases from 1 cent to 5 cents under the Q rule, the effective spread increases, and this effect is stronger when the new tick size ($0.05) is a more binding constraint on quoted spreads.
It is worth noting that the treatment effect of the Q rule is \(\theta_{it}^{(1)}+\beta_{1}\cdot TBC_{it}\) since \[\mathbb{E}[y_{it}|\mathcal{Q}_{i}=1,Pilot_{t}=1]-\mathbb{E}[y_{it}|\mathcal{Q}_{i}=0,Pilot_{t}=1]=\theta_{it}^{(1)}+\beta_{1}\cdot TBC_{it},\] while those of the T rule and the TA rule are \(\theta_{it}^{(2)}\) and \(\theta_{it}^{(3)}\), respectively. Figure E.1 shows the dynamics of the cross-sectional average of the treatment effects of the Q rule. Note also that the sign of the treatment effect of the Q rule is determined by the magnitudes of the positive effect of TBC and the negative effect of \(\theta_{it}^{(1)}\), the effect of coarser quotable prices. To see why the Q rule results in coarser quotable prices, suppose, for example, that the quoted spread is 17 cents without the Q rule. It may change to 15 cents or 20 cents under the Q rule. This effect is different from the effect related to the minimum quoted spread captured by \(TBC\); \(\theta_{it}^{(1)}\) can capture the effect of coarser quotable prices. Most of the time, the positive effect of TBC is greater than the negative effect of \(\theta_{it}^{(1)}\), and therefore the treatment effect of the Q rule is positive. In particular, as time passes, the negative effect of \(\theta_{it}^{(1)}\) becomes weaker, and the treatment effect of the Q rule becomes more positive.

\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
 & \(\beta_{1}\) & \(\beta_{2}\) & \(\theta^{(1)}\) & \(\theta^{(2)}\) & \(\theta^{(3)}\) & \(R^{2}\) \\ \hline
\multirow{2}{*}{Our model} & 2.20 *** & -1.46 *** & \(\hat{\theta}_{it}^{(1)}\) & \(\hat{\theta}_{it}^{(2)}\) & \(\hat{\theta}_{it}^{(3)}\) & 0.79 \\
 & (0.10) & (0.07) & [mean: -0.40] & [mean: 0.86] & [mean: -0.98] & \\ \hline
\multirow{2}{*}{Two-way} & 3.68 *** & -0.75 *** & -0.27 *** & 0.28 *** & -0.99 *** & 0.67 \\
 & (0.09) & (0.06) & (0.05) & (0.05) & (0.05) & \\ \hline \hline
\end{tabular}
\end{table}
Table E.1: Estimation results: ‘Two-way’ denotes the two-way fixed effect model. Numbers in the parenthesis ( ) are standard errors. ‘mean’ denotes the average of \(\hat{\theta}_{it}^{(d)}\) over all treated stocks in the pilot periods.

Figure E.1: The dynamics of the cross-sectional average of the effect of the Q rule. For the confidence band, we use the 95% uniform critical value, \(\Phi^{-1}(1-0.025/53)\). The dots denote the weekly average.
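To make the contrast between the two specifications concrete, the following self-contained Python sketch simulates a balanced panel from the two-way fixed effect model above and recovers \((\theta^{(1)},\beta_{1})\) with the standard within estimator. Everything here is hypothetical: the panel dimensions, the data-generating process, and the coefficient values are illustrative stand-ins, not the data or code behind Table E.1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 60                                  # hypothetical panel dimensions

pilot = (np.arange(T) >= T // 2).astype(float)  # Pilot_t dummy
Qd = (rng.random(N) < 0.25).astype(float)       # Q-rule treatment group
tbc = rng.random((N, T))                        # stand-in for TBC_it
alpha = rng.normal(size=N)                      # unit fixed effects alpha_i
delta = rng.normal(size=T)                      # time fixed effects delta_t

theta1, beta1 = -0.3, 2.0                       # "true" coefficients (made up)
D = Qd[:, None] * pilot[None, :]                # Q_i x Pilot_t
y = (theta1 * D + beta1 * D * tbc
     + alpha[:, None] + delta[None, :]
     + 0.1 * rng.normal(size=(N, T)))

def within(x):
    """Two-way within transformation: removes alpha_i and delta_t exactly
    in a balanced panel."""
    return x - x.mean(axis=0) - x.mean(axis=1, keepdims=True) + x.mean()

X = np.column_stack([within(D).ravel(), within(D * tbc).ravel()])
coef, *_ = np.linalg.lstsq(X, within(y).ravel(), rcond=None)
print(f"theta^(1) estimate: {coef[0]:+.3f}, beta_1 estimate: {coef[1]:+.3f}")
```

Replacing the additive effects \(\alpha_{i}+\delta_{t}\) with an interactive structure \(\zeta_{i}^{\top}\eta_{t}^{(0)}\) breaks the within transformation, which is the econometric content of the misspecification discussed above.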
2301.11175
Quantitative Safety and Liveness
Safety and liveness are elementary concepts of computation, and the foundation of many verification paradigms. The safety-liveness classification of boolean properties characterizes whether a given property can be falsified by observing a finite prefix of an infinite computation trace (always for safety, never for liveness). In quantitative specification and verification, properties assign not truth values, but quantitative values to infinite traces (e.g., a cost, or the distance to a boolean property). We introduce quantitative safety and liveness, and we prove that our definitions induce conservative quantitative generalizations of both (1)~the safety-progress hierarchy of boolean properties and (2)~the safety-liveness decomposition of boolean properties. In particular, we show that every quantitative property can be written as the pointwise minimum of a quantitative safety property and a quantitative liveness property. Consequently, like boolean properties, also quantitative properties can be $\min$-decomposed into safety and liveness parts, or alternatively, $\max$-decomposed into co-safety and co-liveness parts. Moreover, quantitative properties can be approximated naturally. We prove that every quantitative property that has both safe and co-safe approximations can be monitored arbitrarily precisely by a monitor that uses only a finite number of states.
Thomas A. Henzinger, Nicolas Mazzocchi, N. Ege Saraç
2023-01-26T15:30:18Z
http://arxiv.org/abs/2301.11175v2
# Quantitative Safety and Liveness ###### Abstract Safety and liveness are elementary concepts of computation, and the foundation of many verification paradigms. The safety-liveness classification of boolean properties characterizes whether a given property can be falsified by observing a finite prefix of an infinite computation trace (always for safety, never for liveness). In quantitative specification and verification, properties assign not truth values, but quantitative values to infinite traces (e.g., a cost, or the distance to a boolean property). We introduce quantitative safety and liveness, and we prove that our definitions induce conservative quantitative generalizations of both (1) the safety-progress hierarchy of boolean properties and (2) the safety-liveness decomposition of boolean properties. In particular, we show that every quantitative property can be written as the pointwise minimum of a quantitative safety property and a quantitative liveness property. Consequently, like boolean properties, also quantitative properties can be min-decomposed into safety and liveness parts, or alternatively, max-decomposed into co-safety and co-liveness parts. Moreover, quantitative properties can be approximated naturally. We prove that every quantitative property that has both safe and co-safe approximations can be monitored arbitrarily precisely by a monitor that uses only a finite number of states. ## 1 Introduction Safety and liveness are elementary concepts in the semantics of computation [39]. They can be explained through the thought experiment of a _ghost monitor_--an imaginary device that watches an infinite computation trace at runtime, one observation at a time, and always maintains the set of _possible prediction values_ to reflect the satisfaction of a given property. Let \(\Phi\) be a boolean property, meaning that \(\Phi\) divides all infinite traces into those that satisfy \(\Phi\), and those that violate \(\Phi\). After any finite number of observations, True is a possible prediction value for \(\Phi\) if the observations seen so far are consistent with an infinite trace that satisfies \(\Phi\), and False is a possible prediction value for \(\Phi\) if the observations seen so far are consistent with an infinite trace that violates \(\Phi\). When True is no possible prediction value, the ghost monitor can reject the hypothesis that \(\Phi\) is satisfied. The property \(\Phi\) is _safe_ if and only if the ghost monitor can always reject the hypothesis \(\Phi\) after a finite number of observations: if the infinite trace that is being monitored violates \(\Phi\), then after some finite number of observations, True is no possible prediction value for \(\Phi\). Orthogonally, the property \(\Phi\) is _live_ if and only if the ghost monitor can never reject the hypothesis \(\Phi\) after a finite number of observations: for all infinite traces, after every finite number of observations, True remains a possible prediction value for \(\Phi\). The safety-liveness classification of properties is fundamental in verification. In the natural topology on infinite traces--the "Cantor topology"--the safety properties are the closed sets, and the liveness properties are the dense sets [4]. For every property \(\Phi\), the location of \(\Phi\) within the Borel hierarchy that is induced by the Cantor topology--the so-called "safety-progress hierarchy" [17]--indicates the level of difficulty encountered when verifying \(\Phi\). 
On the first level, we find the safety and co-safety properties, the latter being the complements of safety properties, i.e., the properties whose falsehood (rather than truth) can always be rejected after a finite number of observations by the ghost monitor. More sophisticated verification techniques are needed for second-level properties, which are the countable boolean combinations of first-level properties--the so-called "response" and "persistence" properties [17]. Moreover, the orthogonality of safety and liveness leads to the following celebrated fact: _every_ property can be written as the intersection of a safety property and a liveness property [4]. This means that every property \(\Phi\) can be decomposed into two parts: a safety part--which is amenable to simple verification techniques, such as invariants--and a liveness part--which requires heavier verification paradigms, such as ranking functions. Dually, there is always a disjunctive decomposition of \(\Phi\) into co-safety and co-liveness. So far, we have retold the well-known story of safety and liveness for _boolean_ properties. A boolean property \(\Phi\) is formalized mathematically as the _set_ of infinite computation traces that satisfy \(\Phi\), or equivalently, the characteristic _function_ that maps each infinite trace to a truth value. Quantitative generalizations of the boolean setting allow us to capture not only correctness properties, but also performance properties [31]. In this paper we reveal the story of safety and liveness for such _quantitative_ properties, which are functions from infinite traces to an arbitrary set \(\mathbb{D}\) of _values_. In order to compare values, we equip the value domain \(\mathbb{D}\) with a partial order \(<\), and we require \((\mathbb{D},<)\) to be a complete lattice. The membership problem [18] for an infinite trace \(f\) and a quantitative property \(\Phi\) asks whether \(\Phi(f)\geq v\) for a given threshold value \(v\in\mathbb{D}\). Correspondingly, in our thought experiment, the ghost monitor attempts to reject hypotheses of the form \(\Phi(f)\geq v\), which cannot be rejected as long as all observations seen so far are consistent with an infinite trace \(f\) with \(\Phi(f)\geq v\). We will define \(\Phi\) to be a _quantitative safety_ property if and only if every hypothesis of the form \(\Phi(f)\geq v\) can always be rejected by the ghost monitor after a finite number of observations, and we will define \(\Phi\) to be a _quantitative liveness_ property if and only if some hypothesis of the form \(\Phi(f)\geq v\) can never be rejected by the ghost monitor after any finite number of observations. We note that in the quantitative case, after every finite number of observations, the set of possible prediction values for \(\Phi\) maintained by the ghost monitor may be finite or infinite, and in the latter case, it may not contain a minimal or maximal element. Let us give a few examples. Suppose we have four observations: observation rq for "request a resource," observation gr for "grant the resource," observation tk for "clock tick," and observation oo for "other." The boolean property Resp requires that every occurrence of rq in an infinite trace is followed eventually by an occurrence of gr. The boolean property NoDoubleReq requires that no occurrence of rq is followed by another rq without some gr in between. 
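Before turning to the quantitative examples, the rejection behavior of the ghost monitor can be made tangible with a minimal two-state monitor for NoDoubleReq, sketched below in Python (an illustrative sketch using the observation names above; the code is not part of the original development):

```python
# A minimal monitor for the boolean safety property NoDoubleReq: it rejects
# exactly when a finite prefix contains rq ... rq with no gr in between.
# States: 0 = no pending request, 1 = pending request.
def no_double_req_monitor(prefix):
    state = 0
    for obs in prefix:
        if obs == "rq":
            if state == 1:
                return "rejected"   # bad prefix: True is no longer possible
            state = 1
        elif obs == "gr":
            state = 0
    return "undecided"              # consistent with some satisfying trace

print(no_double_req_monitor(["rq", "tk", "gr", "rq"]))   # undecided
print(no_double_req_monitor(["rq", "tk", "rq"]))         # rejected
```

Rejection happens on exactly the bad prefixes, which is the defining feature of a safety property.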
The quantitative property MinRespTime maps every infinite trace to the largest number \(k\) such that there are at least \(k\) occurrences of tk between each rq and the closest subsequent gr. The quantitative property MaxRespTime maps every infinite trace to the smallest number \(k\) such that there are at most \(k\) occurrences of tk between each rq and the closest subsequent gr. The quantitative property AvgRespTime maps every infinite trace to the lower limit value \(\liminf\) of the infinite sequence \((v_{i})_{i\geq 1}\), where \(v_{i}\) is, for the first \(i\) occurrences of tk, the average number of occurrences of tk between rq and the closest subsequent gr. Note that the values of AvgRespTime can be \(\infty\) for some computations, including those for which the value of Resp is True. This highlights that boolean properties are not embedded in the limit behavior of quantitative properties. The boolean property Resp is live because every finite observation sequence can be extended with an occurrence of gr. In fact, Resp is a second-level liveness property (namely, a response property), because it can be written as a countable intersection of co-safety properties. The boolean property NoDoubleReq is safe because if it is violated, it will be rejected by the ghost monitor after a finite number of observations, namely, as soon as the ghost monitor sees a rq followed by another occurrence of rq without an intervening gr. According to our quantitative generalization of safety, MinRespTime is a safety property. The ghost monitor always maintains the minimal number \(k\) of occurrences of tk between any past rq and the closest subsequent gr seen so far; the set of possible prediction values for MinRespTime is always \(\{0,1,\ldots,k\}\). Every hypothesis of the form "the MinRespTime-value is at least \(v\)" is rejected by the ghost monitor as soon as \(k<v\); if such a hypothesis is violated, this will happen after some finite number of observations. Symmetrically, the quantitative property MaxRespTime is co-safe, because every wrong hypothesis of the form "the MaxRespTime-value is at most \(v\)" will be rejected by the ghost monitor as soon as the smallest possible prediction value for MaxRespTime, which is the maximal number of occurrences of tk between any past rq and the closest subsequent gr seen so far, goes above \(v\). By contrast, the quantitative property AvgRespTime is both live and co-live because no hypothesis of the form "the AvgRespTime-value is at least \(v\)," nor of the form "the AvgRespTime-value is at most \(v\)," can ever be rejected by the ghost monitor after a finite number of observations. All nonnegative real numbers and \(\infty\) always remain possible prediction values for AvgRespTime. Note that a ghost monitor that attempts to reject hypotheses of the form \(\Phi(f)\geq v\) does not need to maintain the entire set of possible prediction values, but only the sup of the set of possible prediction values, and whether or not the sup is contained in the set. Dually, updating inf (and whether it is contained) suffices to reject hypotheses of the form \(\Phi(f)\leq v\). By defining quantitative safety and liveness via ghost monitors, we not only obtain a conservative and quantitative generalization of the boolean story, but also open up attractive frontiers for quantitative semantics, monitoring, and verification. 
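The ghost monitor for MinRespTime can likewise be sketched in a few lines of Python. The sketch below is illustrative and simplifies by tracking at most one outstanding request at a time; it maintains, after every finite prefix, the supremum of possible prediction values, which for this property is the minimum response time among the requests granted so far:

```python
import math

def sup_prediction_min_resp(prefix):
    """Supremum of possible prediction values for MinRespTime after `prefix`:
    the minimum tick-count over granted requests so far (inf if none), since
    future behavior can only lower the eventual minimum, never raise it."""
    sup, pending = math.inf, None   # pending = ticks since the open request
    for obs in prefix:
        if obs == "rq" and pending is None:
            pending = 0
        elif obs == "tk" and pending is not None:
            pending += 1
        elif obs == "gr" and pending is not None:
            sup, pending = min(sup, pending), None
    return sup

trace = ["rq", "tk", "tk", "gr", "oo", "rq", "tk", "gr"]
for i in range(1, len(trace) + 1):
    print(trace[:i], "->", sup_prediction_min_resp(trace[:i]))
# The hypothesis "MinRespTime(f) >= v" is rejected as soon as the printed
# supremum drops below v; for a violating trace this happens after finitely
# many observations, which is exactly why MinRespTime is safe.
```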
For example, while the approximation of boolean properties reduces to adding and removing traces to and from a set, the approximation of quantitative properties offers a rich landscape of possibilities. In fact, we can approximate the notion of safety itself. Given an error bound \(\alpha\), the quantitative property \(\Phi\) is \(\alpha\)_-safe_ if and only if for every value \(v\) and every infinite trace \(f\) whose value \(\Phi(f)\) is less than \(v\), all possible prediction values for \(\Phi\) are less than \(v+\alpha\) after some finite prefix of \(f\). This means that, for an \(\alpha\)-safe property \(\Phi\), the ghost monitor may not reject wrong hypotheses of the form \(\Phi(f)\geq v\) after a finite number of observations, once the violation is below the error bound. We show that every quantitative property that is both \(\alpha\)-safe and \(\beta\)-co-safe, for any finite \(\alpha\) and \(\beta\), can be monitored arbitrarily precisely by a monitor that uses only a finite number of states. We are not the first to define quantitative (or multi-valued) definitions of safety and liveness [41, 27]. While the previously proposed quantitative generalizations of safety share strong similarities with our definition (without coinciding completely), our quantitative generalization of liveness is entirely new. The definitions of [27] do not support any safety-liveness decomposition, because their notion of safety is too permissive, and their liveness too restrictive. While the definitions of [41] admit a safety-liveness decomposition, our definition of liveness captures strictly fewer properties. Consequently, our definitions offer a stronger safety-liveness decomposition theorem. Our definitions also fit naturally with the definitions of emptiness, equivalence, and inclusion for quantitative languages [18]. #### 2.0.1 Overview. In Section 2, we introduce quantitative properties. In Section 3, we define quantitative safety as well as safety closure, namely, the property that increases the value of each trace as little as possible to achieve safety. Then, we prove that our definitions preserve classical boolean facts. In particular, we show that a quantitative property \(\Phi\) is safe if and only if \(\Phi\) equals its safety closure if and only if \(\Phi\) is upper semicontinuous. In Section 4, we generalize the safety-progress hierarchy to quantitative properties. We first define limit properties. For \(\ell\in\{\inf,\sup,\liminf,\limsup\}\), the class of \(\ell\)-properties captures those for which the value of each infinite trace can be derived by applying the limit function \(\ell\) to the infinite sequence of values of finite prefixes. We prove that inf-properties coincide with safety, \(\sup\)-properties with co-safety, \(\liminf\)-properties are suprema of countably many safety properties, and \(\limsup\)-properties infima of countably many co-safety properties. The \(\liminf\)-properties generalize the boolean persistence properties of [17]; the \(\limsup\)-properties generalize their response properties. For example, \(\mathsf{AvgRespTime}\) is a \(\liminf\)-property. In Section 5, we introduce quantitative liveness and co-liveness. We prove that our definitions preserve the classical boolean facts, and show that there is a unique property which is both safe and live. As main result, we provide a safety-liveness decomposition that holds for every quantitative property. In Section 6, we define approximate safety and co-safety. 
We generalize the well-known unfolding approximation of discounted properties for approximate safety and co-safety properties over the extended reals. This allows us to provide a finite-state approximate monitor for these properties. In Section 7, we conclude with future research directions. Related Work.The notions of safety and liveness for boolean properties appeared first in [39] and were later formalized in [4], where safety properties were characterized as closed sets of the Cantor topology on infinite traces, and liveness properties as dense sets. As a consequence, the seminal decomposition theorem followed: every boolean property is an intersection of a safety property and a liveness property. A benefit of such a decomposition lies in the difference between the mathematical arguments used in their verification. While safety properties enable simpler methods such as invariants, liveness properties require more complex approaches such as well-foundedness [42, 5]. These classes were characterized in terms of Buchi automata in [5] and in terms of linear temporal logic in [46]. The safety-progress classification of boolean properties [17] proposes an orthogonal view: rather than partitioning the set of properties, it provides a hierarchy of properties starting from safety. This yields a more fine-grained view of nonsafety properties which distinguishes whether a "good thing" happens at least once (co-safety or "guarantee"), infinitely many times (response), or eventually always (persistence). This classification follows the Borel hierarchy that is induced by the Cantor topology on infinite traces, and has corresponding projections within properties that are definable by finite automata and by formulas of linear temporal logic. Runtime verification, or monitoring, is a lightweight, dynamic verification technique [6], where a monitor watches a system during its execution and tries to decide, after each finite sequence of observations, whether the observed finite computation trace or its unknown infinite extension satisfies a desired property. The safety-liveness dichotomy has profound implications for runtime verification as well: safety is easy to monitor [28], while liveness is not. An early definition of boolean monitorability was equivalent to safety with recursively enumerable sets of bad prefixes [35]. The monitoring of infinite-state boolean safety properties was later studied in [26]. A more popular definition of boolean monitorability [44, 8] accounts for both truth and falsehood, establishing the set of monitorable properties as a strict superset of finite boolean combinations of safety and co-safety [23]. Boolean monitors that use the set possible prediction values can be found in [7]. The notion of boolean monitorability was investigated through the safety-liveness lens in [43] and through the safety-progress lens in [23]. Quantitative properties (a.k.a. "quantitative languages") [18] extend their boolean counterparts by moving from the two-valued truth domain to richer domains such as real numbers. Such properties have been extensively studied from a static verification perspective in the past decade, e.g., in the context of model-checking probabilistic properties [38, 37], games with quantitative objectives [10, 15], specifying quantitative properties [11, 1], measuring distances between systems [2, 16, 22, 29], best-effort synthesis and repair [9, 20], and quantitative analysis of transition systems [47, 14, 21, 19]. 
More recently, quantitative properties have also been studied from a runtime verification perspective, e.g., for limit monitoring of statistical indicators of infinite traces [25] and for analyzing resource-precision trade-offs in the design of quantitative monitors [33, 30]. To the best of our knowledge, previous definitions of (approximate) safety and liveness in nonboolean domains make implicit assumptions about the specification language [48, 34, 24, 45]. We identify two notable exceptions. In [27], the authors generalize the framework of [43] to nonboolean value domains. They provide neither a safety-liveness decomposition of quantitative properties, nor a fine-grained classification of nonsafety properties. In [41], the authors present a safety-liveness decomposition and some levels of the safety-progress hierarchy on multi-valued truth domains, which are bounded distributive lattices. Their motivation is to provide algorithms for model-checking properties on multi-valued truth domains. We present the relationships between their definitions and ours in the relevant sections below.

## 2 Quantitative Properties

Let \(\Sigma=\{a,b,\ldots\}\) be a finite alphabet of observations. A _trace_ is an infinite sequence of observations, denoted by \(f,g,h\in\Sigma^{\omega}\), and a _finite trace_ is a finite sequence of observations, denoted by \(s,r,t\in\Sigma^{*}\). Given \(s\in\Sigma^{*}\) and \(w\in\Sigma^{*}\cup\Sigma^{\omega}\), we denote by \(s\prec w\) (resp. \(s\preceq w\)) that \(s\) is a strict (resp. nonstrict) prefix of \(w\). Furthermore, we denote by \(|w|\) the length of \(w\) and, given \(a\in\Sigma\), by \(|w|_{a}\) the number of occurrences of \(a\) in \(w\). A _value domain_ \(\mathbb{D}\) is a poset. Unless otherwise stated, we assume that \(\mathbb{D}\) is a nontrivial (i.e., \(\bot\neq\top\)) complete lattice and, whenever appropriate, we write \(0,1,-\infty,\infty\) instead of \(\bot\) and \(\top\) for the least and the greatest elements. We respectively use the terms minimum and maximum for the greatest lower bound and the least upper bound of finitely many elements. Definition 1 (Property): A _quantitative property_ (or simply _property_) is a function \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) from the set of all traces to a value domain. A boolean property \(P\subseteq\Sigma^{\omega}\) is defined as a set of traces. We use the boolean domain \(\mathbb{B}=\{0,1\}\) with \(0<1\) and, in place of \(P\), its _characteristic property_ \(\Phi_{P}:\Sigma^{\omega}\to\mathbb{B}\), which is defined by \(\Phi_{P}(f)=1\) if \(f\in P\), and \(\Phi_{P}(f)=0\) if \(f\notin P\). For all properties \(\Phi_{1},\Phi_{2}\) on a domain \(\mathbb{D}\) and all traces \(f\in\Sigma^{\omega}\), we let \(\min(\Phi_{1},\Phi_{2})(f)=\min(\Phi_{1}(f),\Phi_{2}(f))\) and \(\max(\Phi_{1},\Phi_{2})(f)=\max(\Phi_{1}(f),\Phi_{2}(f))\). For a domain \(\mathbb{D}\), the _inverse_ of \(\mathbb{D}\) is the domain \(\overline{\mathbb{D}}\) that contains the same elements as \(\mathbb{D}\) but with the ordering reversed. For a property \(\Phi\), we define its _complement_ \(\overline{\Phi}:\Sigma^{\omega}\to\overline{\mathbb{D}}\) by \(\overline{\Phi}(f)=\Phi(f)\) for all \(f\in\Sigma^{\omega}\). Some properties can be defined as limits of value sequences. A _finitary property_ \(\pi\colon\Sigma^{*}\to\mathbb{D}\) associates a value with each finite trace. A _value function_ \(\ell\colon\mathbb{D}^{\omega}\to\mathbb{D}\) condenses an infinite sequence of values to a single value.
Given a finitary property \(\pi\), a value function \(\ell\), and a trace \(f\in\Sigma^{\omega}\), we write \(\ell_{s\prec f}\pi(s)\) instead of \(\ell(\pi(s_{0})\pi(s_{1})\ldots)\), where each \(s_{i}\) fulfills \(s_{i}\prec f\) and \(|s_{i}|=i\). ## 3 Quantitative Safety Given a property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\), a trace \(f\in\Sigma^{\omega}\), and a value \(v\in\mathbb{D}\), the quantitative membership problem [18] asks whether \(\Phi(f)\geq v\). We define quantitative safety as follows: the property \(\Phi\) is safe iff every wrong hypothesis of the form \(\Phi(f)\geq v\) has a finite witness \(s\prec f\). Definition 2 (Safety): A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is _safe_ iff for every \(f\in\Sigma^{\omega}\) and value \(v\in\mathbb{D}\) with \(\Phi(f)\not\geq v\), there is a prefix \(s\prec f\) such that \(\sup_{g\in\Sigma^{\omega}}\Phi(sg)\not\geq v\). Let us illustrate this definition with the _minimal response-time_ property. Example 3: Let \(\Sigma=\{\mathtt{rq},\mathtt{gr},\mathtt{tk},\mathtt{oo}\}\) and \(\mathbb{D}=\mathbb{N}\cup\{\infty\}\). We define the minimal response-time property \(\Phi_{\min}\) through an auxiliary finitary property \(\pi_{\min}\) that computes the minimum response time so far. In a finite or infinite trace, an occurrence of \(\mathtt{rq}\) is _granted_ if it is followed, later, by a \(\mathtt{gr}\), and otherwise it is _pending_. Let \(\pi_{\rm last}(s)=\infty\) if the finite trace \(s\) contains a pending \(\mathtt{rq}\), or no \(\mathtt{rq}\), and \(\pi_{\rm last}(s)=|r|_{\mathtt{tk}}-|t|_{\mathtt{tk}}\) otherwise, where \(r\prec s\) is the longest prefix of \(s\) with a pending \(\mathtt{rq}\), and \(t\prec r\) is the longest prefix of \(r\) without pending \(\mathtt{rq}\). Intuitively, \(\pi_{\rm last}\) provides the response time for the last request when all requests are granted, and \(\infty\) when there is a pending request or no request. Given \(s\in\Sigma^{*}\), taking the minimum of the values of \(\pi_{\rm last}\) over the prefixes \(r\preceq s\) gives us the minimum response time so far. Let \(\pi_{\min}(s)=\min_{r\preceq s}\pi_{\rm last}(r)\) for all \(s\in\Sigma^{*}\), and \(\Phi_{\min}(f)=\lim_{s\prec f}\pi_{\min}(s)\) for all \(f\in\Sigma^{\omega}\). The limit always exists because the minimum is monotonically decreasing. The minimal response-time property is safe. Let \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{D}\) such that \(\Phi_{\min}(f)<v\). Then, some prefix \(s\prec f\) contains a \(\mathtt{rq}\) that is granted after \(u<v\) ticks, in which case, no matter what happens in the future, the minimal response time is guaranteed to be at most \(u\); that is, \(\sup_{g\in\Sigma^{\omega}}\Phi_{\min}(sg)\leq u<v\). If you recall from the introduction the ghost monitor that maintains the sup of possible prediction values for the minimal response-time property, that value is always \(\pi_{\min}\); that is, \(\sup_{g\in\Sigma^{\omega}}\Phi_{\min}(sg)=\pi_{\min}(s)\) for all \(s\in\Sigma^{*}\). Note that in the case of minimal response time, the sup of possible prediction values is always realizable; that is, for all \(s\in\Sigma^{*}\), there exists an \(f\in\Sigma^{\omega}\) such that \(\sup_{g\in\Sigma^{\omega}}\Phi_{\min}(sg)=\Phi_{\min}(sf)\). Remark 4: Quantitative safety generalizes boolean safety. 
For every boolean property \(P\subseteq\Sigma^{\omega}\), the following statements are equivalent: (i) \(P\) is safe according to the classical definition [4], (ii) its characteristic property \(\Phi_{P}\) is safe, and (iii) for every \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{B}\) with \(\Phi_{P}(f)<v\), there exists a prefix \(s\prec f\) such that for all \(g\in\Sigma^{\omega}\), we have \(\Phi_{P}(sg)<v\). We now generalize the notion of safety closure and present an operation that makes a property safe by increasing the value of each trace as little as possible. Definition 5 (Safety closure): The _safety closure_ of a property \(\Phi\) is the property \(\Phi^{*}\) defined by \(\Phi^{*}(f)=\inf_{s\prec f}\sup_{g\in\Sigma^{\omega}}\Phi(sg)\) for all \(f\in\Sigma^{\omega}\). We can say the following about the safety closure operation. Proposition 6: _For every property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\), the following statements hold._ 1. \(\Phi^{*}\) _is safe._ 2. \(\Phi^{*}(f)\geq\Phi(f)\) _for all_ \(f\in\Sigma^{\omega}\)_._ 3. \(\Phi^{*}(f)=\Phi^{**}(f)\) _for all_ \(f\in\Sigma^{\omega}\)_._ 4. _For every safety property_ \(\Psi:\Sigma^{\omega}\to\mathbb{D}\)_, if_ \(\Phi(f)\leq\Psi(f)\) _for all_ \(f\in\Sigma^{\omega}\)_, then_ \(\Phi^{*}(g)\leq\Psi(g)\) _for all_ \(g\in\Sigma^{\omega}\)_._

### Alternative Characterizations of Quantitative Safety

Consider a trace and its prefixes of increasing length. For a given property, the ghost monitor from the introduction maintains, for each prefix, the sup of possible prediction values, i.e., the least upper bound of the property values for all possible infinite continuations. The resulting sequence of monotonically decreasing suprema provides an upper bound on the eventual property value. Moreover, for some properties, this sequence always converges to the property value. If this is the case, then the ghost monitor can always dismiss wrong lower-bound hypotheses after finite prefixes, and vice versa. This gives us an alternative definition for the safety of quantitative properties which, inspired by the notion of Scott continuity, was called _continuity_ [33]. We now believe that _upper semicontinuity_ is a more appropriate term, as becomes clear when we consider the Cantor topology on \(\Sigma^{\omega}\) and the value domain \(\mathbb{R}\cup\{-\infty,+\infty\}\). Definition 7 (Upper semicontinuity [33]): A property \(\Phi\) is _upper semicontinuous_ iff \(\Phi(f)=\lim_{s\prec f}\sup_{g\in\Sigma^{\omega}}\Phi(sg)\) for all \(f\in\Sigma^{\omega}\). We note that the minimal response-time property is upper semicontinuous. Example 8: Recall the minimal response-time property \(\Phi_{\min}\) from Example 3. For every trace \(f\in\Sigma^{\omega}\), the \(\Phi_{\min}\) value is the limit of the \(\pi_{\min}\) values for the prefixes of \(f\). Therefore, \(\Phi_{\min}\) is upper semicontinuous. In general, a property is safe iff it maps every trace to the limit of the suprema of possible prediction values. Moreover, we can also characterize safety properties as the properties that are equal to their safety closure. Theorem 9: _For every property \(\Phi\), the following statements are equivalent: 1. \(\Phi\) is safe. 2. \(\Phi\) is upper semicontinuous. 3. \(\Phi(f)=\Phi^{*}(f)\) for all \(f\in\Sigma^{\omega}\)._

### Related Definitions of Quantitative Safety

In [41], the authors consider the model-checking problem for properties on multi-valued truth domains.
They introduce the notion of multi-safety through a closure operation that coincides with our safety closure. Formally, a property \(\Phi\) is _multi-safe_ iff \(\Phi(f)=\Phi^{*}(f)\) for every \(f\in\Sigma^{\omega}\). It is easy to see the following. Proposition 10: _For every property \(\Phi\), we have \(\Phi\) is multi-safe iff \(\Phi\) is safe._ Although the two definitions of safety are equivalent, our definition is consistent with the membership problem for quantitative automata and motivated by the monitoring of quantitative properties. In [27], the authors extend a refinement of the safety-liveness classification for monitoring [43] to richer domains. They introduce the notion of verdict-safety through dismissibility of values not less than or equal to the property value. Formally, a property \(\Phi\) is _verdict-safe_ iff for every \(f\in\Sigma^{\omega}\) and \(v\not\leq\Phi(f)\), there exists a prefix \(s\prec f\) such that for all \(g\in\Sigma^{\omega}\), we have \(\Phi(sg)\neq v\). We demonstrate that verdict-safety is weaker than safety. Moreover, we provide a condition under which the two definitions coincide. To achieve this, we reason about sets of possible prediction values: for a property \(\Phi\) and \(s\in\Sigma^{*}\), let \(P_{\Phi,s}=\{\Phi(sf)\mid f\in\Sigma^{\omega}\}\). Lemma 11: _A property \(\Phi\) is verdict-safe iff \(\Phi(f)=\sup(\lim_{s\prec f}P_{\Phi,s})\) for all \(f\in\Sigma^{\omega}\)._ Notice that \(\Phi\) is safe iff \(\Phi(f)=\lim_{s\prec f}(\sup P_{\Phi,s})\) for all \(f\in\Sigma^{\omega}\). Below we describe a property that is verdict-safe but not safe. Example 12: Let \(\Sigma=\{a,b\}\). Define \(\Phi\) by \(\Phi(f)=0\) if \(f=a^{\omega}\), and \(\Phi(f)=|s|\) otherwise, where \(s\prec f\) is the shortest prefix in which \(b\) occurs. The property \(\Phi\) is verdict-safe. First, observe that \(\mathbb{D}=\mathbb{N}\cup\{\infty\}\). Let \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{D}\) with \(v>\Phi(f)\). If \(\Phi(f)>0\), then \(f\) contains \(b\), and \(\Phi(f)=|s|\) for some \(s\prec f\) in which \(b\) occurs for the first time. After the prefix \(s\), all \(g\in\Sigma^{\omega}\) yield \(\Phi(sg)=|s|\), thus all values above \(|s|\) are rejected. If \(\Phi(f)=0\), then \(f=a^{\omega}\). Let \(v\in\mathbb{D}\) with \(v>0\), and consider the prefix \(a^{v}\prec f\). Observe that the set of possible prediction values after reading \(a^{v}\) is \(\{0,v+1,v+2,\ldots\}\), therefore \(a^{v}\) allows the ghost monitor to reject the value \(v\). However, \(\Phi\) is not safe because, although \(\Phi(a^{\omega})=0\), for every \(s\prec a^{\omega}\), we have \(\sup_{g\in\Sigma^{\omega}}\Phi(sg)=\infty\). The separation is due to the fact that, for some finite traces, the sup of possible prediction values cannot be realized by any future. Below, we present a condition that prevents such cases. Definition 13 (Supremum closedness): A property \(\Phi\) is _sup-closed_ iff for every \(s\in\Sigma^{*}\) we have \(\sup P_{\Phi,s}\in P_{\Phi,s}\). We remark that the minimal response-time property is sup-closed. Example 14: The safety property minimal response-time \(\Phi_{\min}\) from Example 3 is sup-closed. This is because, for every \(s\in\Sigma^{*}\), the continuation \(\mathsf{gr}^{\omega}\) realizes the value \(\sup_{g\in\Sigma^{\omega}}\Phi(sg)\). Recall from the introduction the ghost monitor that maintains the sup of possible prediction values. 
For monitoring sup-closed properties this suffices; otherwise the ghost monitor also needs to maintain whether or not the supremum of the possible prediction values is realizable by some future continuation. In general, we have the following for every sup-closed property. Lemma 15: _For every \(\sup\)-closed property \(\Phi\) and for all \(f\in\Sigma^{\omega}\), we have \(\lim_{s\prec f}(\sup P_{\Phi,s})=\sup(\lim_{s\prec f}P_{\Phi,s})\)._ As a consequence of the lemmas above, we get the following. Theorem 16: _A \(\sup\)-closed property \(\Phi\) is safe iff \(\Phi\) is verdict-safe._ ## 4 The Quantitative Safety-Progress Hierarchy Our quantitative extension of safety closure allows us to build a Borel hierarchy, which is a quantitative extension of the boolean safety-progress hierarchy [17]. First, we show that safety properties are closed under pairwise min and max. Proposition 17: _For every value domain \(\mathbb{D}\), the set of safety properties over \(\mathbb{D}\) is closed under \(\min\) and \(\max\)._ The boolean safety-progress classification of properties is a Borel hierarchy built from the Cantor topology of traces. Safety and co-safety properties lie on the first level, respectively corresponding to the closed sets and open sets of the topology. The second level is obtained through countable unions and intersections of properties from the first level: persistence properties are countable unions of closed sets, while response properties are countable intersections of open sets. We generalize this construction to the quantitative setting. In the boolean case, each property class is defined through an operation that takes a set \(S\subseteq\Sigma^{*}\) of finite traces and produces a set \(P\subseteq\Sigma^{\omega}\) of infinite traces. For example, to obtain a co-safety property from \(S\subseteq\Sigma^{*}\), the corresponding operation yields \(S\Sigma^{\omega}\). Similarly, we formalize each property class by a value function. For this, we define the notion of _limit property_. Definition 18 (Limit property): A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is a _limit property_ iff there exists a finitary property \(\pi:\Sigma^{*}\to\mathbb{D}\) and a value function \(\ell:\mathbb{D}^{\omega}\to\mathbb{D}\) such that \(\Phi(f)=\ell_{s\prec f}\pi(s)\) for all \(f\in\Sigma^{\omega}\). We denote this by \(\Phi=(\pi,\ell)\), and write \(\Phi(s)\) instead of \(\pi(s)\). In particular, if \(\Phi=(\pi,\ell)\), where \(\ell\in\{\inf,\sup,\liminf,\limsup\}\), then \(\Phi\) is an \(\ell\)-property. To account for the value functions that construct the first two levels of the safety-progress hierarchy, we start our investigation with inf- and sup-properties and later focus on \(\liminf\)- and \(\limsup\)- properties [18]. ### Infimum and Supremum Properties Let us start with an example by demonstrating that the minimal response-time property is an inf-property. Example 19: Recall the safety property \(\Phi_{\min}\) of minimal response time from Example 3. We can equivalently define \(\Phi_{\min}\) as a limit property by taking the finitary property \(\pi_{\mathrm{last}}\) and the value function \(\inf\). As discussed in Example 3, the function \(\pi_{\mathrm{last}}\) outputs the response time for the last request when all requests are granted, and \(\infty\) when there is a pending request or no request. Then \(\inf_{s\prec f}\pi_{\mathrm{last}}(s)=\Phi_{\min}(f)\) for all \(f\in\Sigma^{\omega}\), and therefore \(\Phi_{\min}=(\pi_{\mathrm{last}},\inf)\). 
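To make the finitary layer of these definitions concrete, the following is a minimal Python sketch (ours, not the paper's; the string encoding of observations and all function names are illustrative assumptions) of \(\pi_{\mathrm{last}}\) and \(\pi_{\min}\) from Examples 3 and 19, with \(\infty\) modeled by math.inf.

```python
# A minimal sketch (not from the paper) of the finitary properties pi_last and
# pi_min of Examples 3 and 19, over the alphabet {rq, gr, tk, oo}.  Traces are
# encoded as lists of strings; the value domain N ∪ {∞} is modeled by math.inf.
import math

def pi_last(s):
    """Response time of the last granted request; infinity if some request
    is pending or if no request occurred at all."""
    pending = None    # ticks since the first request of the current busy period
    last = math.inf   # response time of the most recently granted request
    for a in s:
        if a == "rq" and pending is None:
            pending = 0
        elif a == "tk" and pending is not None:
            pending += 1
        elif a == "gr" and pending is not None:
            last, pending = pending, None
    return math.inf if pending is not None else last

def pi_min(s):
    """pi_min(s) = min over all prefixes r of s of pi_last(r)."""
    return min(pi_last(s[:i]) for i in range(len(s) + 1))

trace = ["rq", "tk", "gr", "tk", "rq", "tk", "tk", "gr"]
print([pi_last(trace[:i]) for i in range(len(trace) + 1)])
# [inf, inf, inf, 1, 1, inf, inf, inf, 2]
print([pi_min(trace[:i]) for i in range(len(trace) + 1)])
# [inf, inf, inf, 1, 1, 1, 1, 1, 1]  -- monotonically decreasing, as required
```

The printed \(\pi_{\min}\) values are monotonically decreasing, matching the observation in Example 3 that the limit \(\Phi_{\min}(f)=\lim_{s\prec f}\pi_{\min}(s)\) always exists and that \(\pi_{\min}(s)\) equals the sup of possible prediction values after \(s\).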
In fact, the safety properties coincide with inf-properties. Theorem 20: _A property \(\Phi\) is safe iff \(\Phi\) is an inf-property._ Defining the minimal response-time property as a limit property, we observe the following relation between its behavior on finite traces and infinite traces. Example 21: Consider the property \(\Phi_{\min}=(\pi_{\mathrm{last}},\inf)\) from Example 19. Let \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{D}\). Observe that if the minimal response time of \(f\) is at least \(v\), then the last response time for each prefix \(s\prec f\) is also at least \(v\). Conversely, if the minimal response time of \(f\) is below \(v\), then there is a prefix \(s\prec f\) for which the last response time is also below \(v\). In light of this observation, we provide another characterization of safety properties, explicitly relating the specified behavior of the limit property on finite and infinite traces. Theorem 22: _A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is safe iff \(\Phi\) is a limit property such that for every \(f\in\Sigma^{\omega}\) and value \(v\in\mathbb{D}\), we have \(\Phi(f)\geq v\) iff \(\Phi(s)\geq v\) for all \(s\prec f\)._ Recall that a safety property allows rejecting wrong lower-bound hypotheses with a finite witness, by assigning a tight upper bound to each trace. We define co-safety properties symmetrically: a property \(\Phi\) is co-safe iff every wrong hypothesis of the form \(\Phi(f)\leq v\) has a finite witness \(s\prec f\). Definition 23 (Co-safety): A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is _co-safe_ iff for every \(f\in\Sigma^{\omega}\) and value \(v\in\mathbb{D}\) with \(\Phi(f)\not\leq v\), there exists a prefix \(s\prec f\) such that \(\inf_{g\in\Sigma^{\omega}}\Phi(sg)\not\leq v\). We note that our definition generalizes boolean co-safety, and thus a dual of Remark 4 holds also for co-safety. Moreover, we analogously define the notions of co-safety closure and lower semicontinuity. Definition 24 (Co-safety closure): The _co-safety closure_ of a property \(\Phi\) is the property \(\Phi_{*}\) defined by \(\Phi_{*}(f)=\sup_{s\prec f}\inf_{g\in\Sigma^{\omega}}\Phi(sg)\) for all \(f\in\Sigma^{\omega}\). Definition 25 (Lower semicontinuity [33]): A property \(\Phi\) is _lower semicontinuous_ iff \(\Phi(f)=\lim_{s\prec f}\inf_{g\in\Sigma^{\omega}}\Phi(sg)\) for all \(f\in\Sigma^{\omega}\). Now, we define and investigate the _maximal response-time_ property. In particular, we show that it is a sup-property that is co-safe and lower semicontinuous. Example 26: Let \(\Sigma=\{\texttt{rq},\texttt{gr},\texttt{tk},\texttt{oo}\}\) and \(\mathbb{D}=\mathbb{N}\cup\{\infty\}\). We define the maximal response-time property \(\Phi_{\max}\) through a finitary property that computes the current response time for each finite trace and the value function \(\sup\). In particular, for all \(s\in\Sigma^{*}\), let \(\pi_{\text{curr}}(s)=|s|_{\texttt{tk}}-|r|_{\texttt{tk}}\), where \(r\preceq s\) is the longest prefix of \(s\) without pending \(\texttt{rq}\); then \(\Phi_{\max}=(\pi_{\text{curr}},\sup)\). Note the contrast between \(\pi_{\text{curr}}\) and \(\pi_{\text{last}}\) from Example 3. While \(\pi_{\text{curr}}\) takes an optimistic view of the future and assumes that a \(\texttt{gr}\) will follow immediately, \(\pi_{\text{last}}\) takes a pessimistic view and assumes that a \(\texttt{gr}\) will never follow. Let \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{D}\). 
If the maximal response time of \(f\) is greater than \(v\), then for some prefix \(s\prec f\) the current response time is greater than \(v\) also, which means that, no matter what happens in the future, the maximal response time is greater than \(v\) after observing \(s\). Therefore, \(\Phi_{\max}\) is co-safe. By a similar reasoning, the sequence of greatest lower bounds of possible prediction values over the prefixes converges to the property value. In other words, we have \(\lim_{s\prec f}\inf_{g\in\Sigma^{\omega}}\Phi_{\max}(sg)=\Phi_{\max}(f)\) for all \(f\in\Sigma^{\omega}\). Thus \(\Phi_{\max}\) is also lower semicontinuous, and it equals its co-safety closure. Now, consider the complementary property \(\overline{\Phi_{\max}}\), which maps every trace to the same value as \(\Phi_{\max}\) on a domain where the order is reversed. It is easy to see that \(\overline{\Phi_{\max}}\) is safe. Finally, recall the ghost monitor from the introduction, which maintains the infimum of possible prediction values for the maximal response-time property. Since the maximal response-time property is inf-closed, the output of the ghost monitor after every prefix is realizable by some future continuation, and that output is \(\pi_{\max}(s)=\max_{r\preceq s}\pi_{\text{curr}}(r)\) for all \(s\in\Sigma^{*}\). Generalizing the observations in the example above, we obtain the following characterizations due to the duality between safety and co-safety. Theorem 27: _For every property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\), the following are equivalent._ 1. \(\Phi\) _is co-safe._ 2. \(\Phi\) _is lower semicontinuous._ 3. \(\Phi(f)=\Phi_{*}(f)\) _for every_ \(f\in\Sigma^{\omega}\)_._ 4. \(\Phi\) _is a_ \(\sup\)_-property._ 5. \(\Phi\) _is a limit property such that for every_ \(f\in\Sigma^{\omega}\) _and value_ \(v\in\mathbb{D}\)_, we have_ \(\Phi(f)\leq v\) _iff_ \(\Phi(s)\leq v\) _for all_ \(s\prec f\)_._ 6. \(\overline{\Phi}\) _is safe._ ### Limit Inferior and Limit Superior Properties Let us start with an observation on the minimal response-time property. Example 28: Recall once again the minimal response-time property \(\Phi_{\min}\) from Example 3. In the previous subsection, we presented an alternative definition of \(\Phi_{\min}\) to establish that it is an inf-property. Observe that there is yet another equivalent definition of \(\Phi_{\min}\) which takes the monotonically decreasing finitary property \(\pi_{\min}\) from Example 3 and pairs it with either the value function \(\liminf\), or with \(\limsup\). Hence \(\Phi_{\min}\) is both a \(\liminf\)- and a \(\limsup\)-property. Before moving on to investigating \(\liminf\)- and \(\limsup\)-properties more closely, we show that the above observation can be generalized. Theorem 29: _Every \(\ell\)-property \(\Phi\), for \(\ell\in\{\inf,\sup\}\), is both a \(\liminf\)- and a \(\limsup\)-property._ An interesting response-time property beyond safety and co-safety arises when we remove extreme values: instead of minimal response time, consider the property that maps every trace to a value that bounds from below, not all response times, but all of them from a point onward (i.e., all but finitely many). We call this property _tail-minimal response time_. Example 30: Let \(\Sigma=\{\mathtt{rq},\mathtt{gr},\mathtt{tk},\mathtt{oo}\}\) and \(\pi_{\mathrm{last}}\) be the finitary property from Example 3 that computes the last response time. We define the tail-minimal response-time property as \(\Phi_{\mathrm{tmin}}=(\pi_{\mathrm{last}},\liminf)\). 
Intuitively, it maps each trace to the least response time over all but finitely many requests. This property is interesting as a performance measure, because it focuses on the long-term performance by ignoring finitely many outliers. Consider \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{D}\). Observe that, if the tail-minimal response time of \(f\) is at least \(v\), then there is a prefix \(s\prec f\) such that for all longer prefixes \(s\preceq r\prec f\), the last response time in \(r\) is at least \(v\), and vice versa. Similarly as for \(\inf\)-properties, we characterize \(\liminf\)-properties through a relation between property behaviors on finite and infinite traces. Theorem 31: _A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is a \(\liminf\)-property iff \(\Phi\) is a limit property such that for every \(f\in\Sigma^{\omega}\) and value \(v\in\mathbb{D}\), we have \(\Phi(f)\geq v\) iff there exists \(s\prec f\) such that for all \(s\preceq r\prec f\), we have \(\Phi(r)\geq v\)._ Now, we show that the tail-minimal response-time property can be expressed as a countable supremum of \(\inf\)-properties. Example 32: Let \(i\in\mathbb{N}\) and define \(\pi_{i,\mathrm{last}}\) as a finitary property that imitates \(\pi_{\mathrm{last}}\) from Example 3, but ignores the first \(i\) observations of every finite trace. Formally, for \(s\in\Sigma^{*}\), we define \(\pi_{i,\mathrm{last}}(s)=\pi_{\mathrm{last}}(r)\) for \(s=s_{i}r\) where \(s_{i}\preceq s\) with \(|s_{i}|=i\), and \(r\in\Sigma^{*}\). Observe that an equivalent way to define \(\Phi_{\mathrm{tmin}}\) from Example 30 is \(\sup_{i\in\mathbb{N}}(\inf_{s\prec f}(\pi_{i,\mathrm{last}}(s)))\) for all \(f\in\Sigma^{\omega}\). Intuitively, for each \(i\in\mathbb{N}\), we obtain an \(\inf\)-property that computes the minimal response time of the suffixes of a given trace. Taking the supremum over these, we obtain the greatest lower bound on all but finitely many response times. We generalize this observation and show that every \(\liminf\)-property is a countable supremum of \(\inf\)-properties. Theorem 33: _Every \(\liminf\)-property is a countable supremum of \(\inf\)-properties._ We would also like to have the converse of Theorem 33, i.e., that every countable supremum of \(\inf\)-properties is a \(\liminf\)-property. Currently, we are able to show only the following. Theorem 34: _For every infinite sequence \((\Phi_{i})_{i\in\mathbb{N}}\) of \(\inf\)-properties, there is a \(\liminf\)-property \(\Phi\) such that \(\sup_{i\in\mathbb{N}}\Phi_{i}(f)\leq\Phi(f)\) for all \(f\in\Sigma^{\omega}\)._ We conjecture that some \(\liminf\)-property that satisfies Theorem 34 is also a lower bound on the countable supremum that occurs in the theorem. This, together with Theorem 34, would imply the converse of Theorem 33. Proving the converse of Theorem 33 would give us, thanks to the following duality, that the \(\liminf\)- and \(\limsup\)-properties characterize the second level of the Borel hierarchy of the topology induced by the safety closure operator. Proposition 35: _A property \(\Phi\) is a \(\liminf\)-property iff its complement \(\overline{\Phi}\) is a \(\limsup\)-property._ ## 5 Quantitative Liveness Similarly as for safety, we take the perspective of the quantitative membership problem to define liveness: a property \(\Phi\) is live iff, whenever a property value is less than \(\top\), there exists a value \(v\) for which the wrong hypothesis \(\Phi(f)\geq v\) can never be dismissed by any finite witness \(s\prec f\). 
Definition 36 (Liveness): A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is _live_ iff for all \(f\in\Sigma^{\omega}\), if \(\Phi(f)<\top\), then there exists a value \(v\in\mathbb{D}\) such that \(\Phi(f)\not\geq v\) and for all prefixes \(s\prec f\), we have \(\sup_{g\in\Sigma^{\omega}}\Phi(sg)\geq v\). An equivalent definition can be given through the safety closure. Theorem 37: _A property \(\Phi\) is live iff \(\Phi^{*}(f)>\Phi(f)\) for every \(f\in\Sigma^{\omega}\) with \(\Phi(f)<\top\)._ Our definition generalizes boolean liveness. A boolean property \(P\subseteq\Sigma^{\omega}\) is live according to the classical definition [4] iff its characteristic property \(\Phi_{P}\) is live according to our definition. Moreover, the intersection of safety and liveness contains only the single degenerate property that always outputs \(\top\). **Proposition 38**: _A property \(\Phi\) is safe and live iff \(\Phi(f)=\top\) for all \(f\in\Sigma^{\omega}\)._ We define co-liveness symmetrically, and note that the duals of the observations above also hold for co-liveness. **Definition 39** (Co-liveness): _A property \(\Phi:\Sigma^{\omega}\to\mathbb{D}\) is co-live iff for all \(f\in\Sigma^{\omega}\), if \(\Phi(f)>\bot\), then there exists a value \(v\in\mathbb{D}\) such that \(\Phi(f)\not\leq v\) and for all prefixes \(s\prec f\), we have \(\inf_{g\in\Sigma^{\omega}}\Phi(sg)\leq v\)._ Next, we present some examples of liveness and co-liveness properties. We start by showing that \(\liminf\)- and \(\limsup\)-properties can be live and co-live. Example 40: Let \(\Sigma=\{a,b\}\) be an alphabet, and let \(P=\Box\Diamond a\) and \(Q=\Diamond\Box b\) be boolean properties defined in linear temporal logic. Consider their characteristic properties \(\Phi_{P}\) and \(\Phi_{Q}\). As we pointed out earlier, our definitions generalize their boolean counterparts, therefore \(\Phi_{P}\) and \(\Phi_{Q}\) are both live and co-live. Moreover, \(\Phi_{P}\) is a \(\limsup\)-property: define \(\pi_{P}(s)=1\) if \(s\in\Sigma^{*}a\), and \(\pi_{P}(s)=0\) otherwise, and observe that \(\Phi_{P}(f)=\limsup_{s\prec f}\pi_{P}(s)\) for all \(f\in\Sigma^{\omega}\). Similarly, \(\Phi_{Q}\) is a \(\liminf\)-property. Now, we show that the maximal response-time property is live, and the minimal response time is co-live. Example 41: Recall the co-safety property \(\Phi_{\max}\) of maximal response time from Example 26. Let \(f\in\Sigma^{\omega}\) such that \(\Phi_{\max}(f)<\infty\). We can extend every prefix \(s\prec f\) with \(g=\mathtt{rq}\,\mathtt{tk}^{\omega}\), which gives us \(\Phi_{\max}(sg)=\infty>\Phi_{\max}(f)\). Equivalently, for every \(f\in\Sigma^{\omega}\) with \(\Phi_{\max}(f)<\infty\), we have \(\Phi^{*}_{\max}(f)=\infty>\Phi_{\max}(f)\). Hence \(\Phi_{\max}\) is live and, analogously, the safety property \(\Phi_{\min}\) from Example 3 is co-live. Finally, we show that the _average response-time_ property is live and co-live. Example 42: Let \(\Sigma=\{\mathtt{rq},\mathtt{gr},\mathtt{tk},\mathtt{oo}\}\). For all \(s\in\Sigma^{*}\), let \(p(s)=1\) if there is no pending \(\mathtt{rq}\) in \(s\), and \(p(s)=0\) otherwise. Define \(\pi_{\rm valid}(s)=|\{r\preceq s\mid\exists t\in\Sigma^{*}:r=t\,\mathtt{rq}\wedge p(t)=1\}|\) as the number of valid requests in \(s\), and define \(\pi_{\rm time}(s)\) as the number of \(\mathtt{tk}\) observations that occur after a valid \(\mathtt{rq}\) and before the matching \(\mathtt{gr}\). 
Then, \(\Phi_{\rm avg}=(\pi_{\rm avg},\liminf)\), where \(\pi_{\rm avg}(s)=\frac{\pi_{\rm time}(s)}{\pi_{\rm valid}(s)}\) for all \(s\in\Sigma^{*}\) with \(\pi_{\rm valid}(s)>0\), and \(\pi_{\rm avg}(s)=\infty\) otherwise. For example, \(\pi_{\rm avg}(s)=\frac{3}{2}\) for \(s=\mathtt{rq}\,\mathtt{tk}\,\mathtt{tk}\,\mathtt{gr}\,\mathtt{rq}\,\mathtt{tk}\): the first request is granted after two ticks and the second, still pending, has accumulated one tick, so three ticks are shared by two valid requests. Note that \(\Phi_{\rm avg}\) is a \(\liminf\)-property. The property \(\Phi_{\rm avg}\) is defined on the value domain \([0,\infty]\) and is both live and co-live. To see this, let \(f\in\Sigma^{\omega}\) such that \(0<\Phi_{\rm avg}(f)<\infty\) and, for every prefix \(s\prec f\), consider \(g=\mathtt{rq}\,\mathtt{tk}^{\omega}\) and \(h=\mathtt{gr}\,(\mathtt{rq}\,\mathtt{gr})^{\omega}\). Since \(sg\) has a pending request followed by infinitely many clock ticks, we have \(\Phi_{\rm avg}(sg)=\infty\). Similarly, since \(sh\) eventually has all new requests immediately granted, we get \(\Phi_{\rm avg}(sh)=0\). ### The Quantitative Safety-Liveness Decomposition A celebrated theorem states that every boolean property can be expressed as an intersection of a safety property and a liveness property [4]. In this section, we prove the analogous result for the quantitative setting. Example 43: Let \(\Sigma=\{\mathtt{rq},\mathtt{gr},\mathtt{tk},\mathtt{oo}\}\). Recall the maximal response-time property \(\Phi_{\max}\) from Example 26, and the average response-time property \(\Phi_{\mathrm{avg}}\) from Example 42. Let \(n>0\) be an integer and define a new property \(\Phi\) by \(\Phi(f)=\Phi_{\mathrm{avg}}(f)\) if \(\Phi_{\max}(f)\leq n\), and \(\Phi(f)=0\) otherwise. For the safety closure of \(\Phi\), we have \(\Phi^{*}(f)=n\) if \(\Phi_{\max}(f)\leq n\), and \(\Phi^{*}(f)=0\) otherwise. Now, we further define \(\Psi(f)=\Phi_{\mathrm{avg}}(f)\) if \(\Phi_{\max}(f)\leq n\), and \(\Psi(f)=n\) otherwise. Observe that \(\Psi\) is live, because every prefix of a trace whose value is less than \(n\) can be extended to a greater value. Finally, note that for all \(f\in\Sigma^{\omega}\), we can express \(\Phi(f)\) as the pointwise minimum of \(\Phi^{*}(f)\) and \(\Psi(f)\). Intuitively, the safety part \(\Phi^{*}\) of this decomposition checks whether the maximal response time stays below the permitted bound, and the liveness part \(\Psi\) keeps track of the average response time as long as the bound is satisfied. Following a similar construction, we show that a safety-liveness decomposition exists for every property. Theorem 44: _For every property \(\Phi\), there exists a liveness property \(\Psi\) such that \(\Phi(f)=\min(\Phi^{*}(f),\Psi(f))\) for all \(f\in\Sigma^{\omega}\)._ In particular, if the given property is safe or live, the decomposition is trivial. Remark 45: Let \(\Phi\) be a property. If \(\Phi\) is safe (resp. live), then the safety (resp. liveness) part of the decomposition is \(\Phi\) itself, and the liveness (resp. safety) part is the constant property that maps every trace to \(\top\). For co-safety and co-liveness, the duals of Theorem 44 and Remark 45 hold. In particular, every property is the pointwise maximum of its co-safety closure and a co-liveness property. ### Related Definitions of Quantitative Liveness In [41], the authors define a property \(\Phi\) as _multi-live_ iff \(\Phi^{*}(f)>\bot\) for all \(f\in\Sigma^{\omega}\). We show that our definition is more restrictive, resulting in fewer liveness properties while still allowing a safety-liveness decomposition. 
Proposition 46: _Every live property is multi-live, and the inclusion is strict._ We provide a separating example on a totally ordered domain below. Example 47: Let \(\Sigma=\{a,b,c\}\), and consider the following property: \(\Phi(f)=0\) if \(f\models\Box a\), and \(\Phi(f)=1\) if \(f\models\lozenge c\), and \(\Phi(f)=2\) otherwise (i.e., if \(f\models\lozenge b\wedge\Box\neg c\)). For all \(f\in\Sigma^{\omega}\) and prefixes \(s\prec f\), we have \(\Phi(sc^{\omega})=1\). Thus \(\Phi^{*}(f)\neq\bot\), which implies that \(\Phi\) is multi-live. However, \(\Phi\) is not live. Indeed, for every \(f\in\Sigma^{\omega}\) such that \(f\models\lozenge c\), we have \(\Phi(f)=1<\top\). Moreover, \(f\) admits some prefix \(s\) that contains an occurrence of \(c\), thus satisfying \(\sup_{g\in\Sigma^{\omega}}\Phi(sg)=1\). In [27], the authors define a property \(\Phi\) as _verdict-live_ iff for every \(f\in\Sigma^{\omega}\) and value \(v\not\leq\Phi(f)\), every prefix \(s\prec f\) satisfies \(\Phi(sg)=v\) for some \(g\in\Sigma^{\omega}\). We show that our definition is more liberal. **Proposition 48**: _Every verdict-live property is live, and the inclusion is strict._ We provide a separating example below, concluding that our definition is strictly more general even for totally ordered domains. Example 49: Let \(\Sigma=\{a,b\}\), and consider the following property: \(\Phi(f)=0\) if \(f\not\models\Diamond b\), and \(\Phi(f)=1\) if \(f\models\Diamond(b\wedge\bigcirc\Diamond b)\), and \(\Phi(f)=2^{-|s|}\) otherwise, where \(s\prec f\) is the shortest prefix in which \(b\) occurs. Consider an arbitrary \(f\in\Sigma^{\omega}\). If \(\Phi(f)=1\), then the liveness condition is vacuously satisfied. If \(\Phi(f)=0\), then \(f=a^{\omega}\), and every prefix \(s\prec f\) can be extended with \(g=ba^{\omega}\) or \(h=b^{\omega}\) to obtain \(\Phi(sg)=2^{-(|s|+1)}\) and \(\Phi(sh)=1\). If \(0<\Phi(f)<1\), then \(f\) satisfies \(\Diamond b\) but not \(\Diamond(b\wedge\bigcirc\Diamond b)\), and every prefix \(s\prec f\) can be extended with \(b^{\omega}\) to obtain \(\Phi(sb^{\omega})=1\). Hence \(\Phi\) is live. However, \(\Phi\) is not verdict-live. To see this, consider the trace \(f=a^{k}ba^{\omega}\) for some integer \(k\geq 1\) and note that \(\Phi(f)=2^{-(k+1)}\). Although all prefixes of \(f\) can be extended to reach the value \(1\), the value domain contains elements between \(\Phi(f)\) and \(1\), namely the values \(2^{-m}\) for \(1\leq m\leq k\). Each of these values can be rejected after reading a finite prefix of \(f\), because for \(n\geq m\) it is not possible to extend \(a^{n}\) to reach the value \(2^{-m}\). ## 6 Approximate Monitoring through Approximate Safety In this section, we consider properties on extended reals \(\mathbb{R}^{\pm\infty}=\mathbb{R}\cup\{-\infty,+\infty\}\). We denote by \(\mathbb{R}_{\geq 0}\) the set of nonnegative real numbers. Definition 50 (Approximate safety and co-safety): Let \(\alpha\in\mathbb{R}_{\geq 0}\). A property \(\Phi\) is \(\alpha\)-safe iff for every \(f\in\Sigma^{\omega}\) and value \(v\in\mathbb{R}^{\pm\infty}\) with \(\Phi(f)<v\), there exists a prefix \(s\prec f\) such that \(\sup_{g\in\Sigma^{\omega}}\Phi(sg)<v+\alpha\). Similarly, \(\Phi\) is \(\alpha\)-co-safe iff for every \(f\in\Sigma^{\omega}\) and \(v\in\mathbb{R}^{\pm\infty}\) with \(\Phi(f)>v\), there exists \(s\prec f\) such that \(\inf_{g\in\Sigma^{\omega}}\Phi(sg)>v-\alpha\). When \(\Phi\) is \(\alpha\)-safe (resp. 
\(\alpha\)-co-safe) for some \(\alpha\in\mathbb{R}_{\geq 0}\), we say that \(\Phi\) is _approximately safe_ (resp. _approximately co-safe_). Approximate safety can be characterized through the following relation with the safety closure. Proposition 51: _For every error bound \(\alpha\in\mathbb{R}_{\geq 0}\), a property \(\Phi\) is \(\alpha\)-safe iff \(\Phi^{*}(f)-\Phi(f)\leq\alpha\) for all \(f\in\Sigma^{\omega}\)._ An analogue of Proposition 51 holds for approximate co-safety and the co-safety closure. Moreover, approximate safety and approximate co-safety are dual notions that are connected by the complement operation, similarly to their precise counterparts (Theorem 27). ### The Intersection of Approximate Safety and Co-safety Recall the ghost monitor from the introduction. If, after a finite number of observations, all the possible prediction values are close enough, then we can simply freeze the current value and achieve a sufficiently small error. This happens for properties that are both approximately safe and approximately co-safe, generalizing the unfolding approximation of discounted properties [13]. Proposition 52: _For every limit property \(\Phi\) and all error bounds \(\alpha,\beta\in\mathbb{R}_{\geq 0}\), if \(\Phi\) is \(\alpha\)-safe and \(\beta\)-co-safe, then the set \(S_{\delta}=\{s\in\Sigma^{*}\mid\sup_{r_{1}\in\Sigma^{*}}\Phi(sr_{1})-\inf_{r_{2}\in\Sigma^{*}}\Phi(sr_{2})\geq\delta\}\) is finite for all reals \(\delta>\alpha+\beta\)._ Based on this proposition, we show that, for limit properties that are both approximately safe and approximately co-safe, the influence of the suffix on the property value is eventually negligible. Theorem 53: _For every limit property \(\Phi\) such that \(\Phi(f)\in\mathbb{R}\) for all \(f\in\Sigma^{\omega}\), and for all error bounds \(\alpha,\beta\in\mathbb{R}_{\geq 0}\), if \(\Phi\) is \(\alpha\)-safe and \(\beta\)-co-safe, then for every real \(\delta>\alpha+\beta\) and trace \(f\in\Sigma^{\omega}\), there is a prefix \(s\prec f\) such that for all continuations \(w\in\Sigma^{*}\cup\Sigma^{\omega}\), we have \(|\Phi(sw)-\Phi(s)|<\delta\)._ We illustrate this theorem with a _discounted safety_ property. Example 54: Let \(P\subseteq\Sigma^{\omega}\) be a boolean safety property. We define the finitary property \(\pi_{P}:\Sigma^{*}\to[0,1]\) as follows: \(\pi_{P}(s)=1\) if \(sf\in P\) for some \(f\in\Sigma^{\omega}\), and \(\pi_{P}(s)=1-2^{-|r|}\) otherwise, where \(r\preceq s\) is the shortest prefix with \(rf\notin P\) for all \(f\in\Sigma^{\omega}\). The limit property \(\Phi=(\pi_{P},\inf)\) is called _discounted safety_ [3]. Because \(\Phi\) is an inf-property, it is safe by Theorem 20. Now consider the finitary property \(\pi^{\prime}_{P}\) defined by \(\pi^{\prime}_{P}(s)=1-2^{-|s|}\) if \(sf\in P\) for some \(f\in\Sigma^{\omega}\), and \(\pi^{\prime}_{P}(s)=1-2^{-|r|}\) otherwise, where \(r\preceq s\) is the shortest prefix with \(rf\notin P\) for all \(f\in\Sigma^{\omega}\). Let \(\Phi^{\prime}=(\pi^{\prime}_{P},\sup)\), and note that \(\Phi(f)=\Phi^{\prime}(f)\) for all \(f\in\Sigma^{\omega}\). Hence \(\Phi\) is also co-safe, because it is a sup-property. Let \(f\in\Sigma^{\omega}\) and \(\delta>0\). For every prefix \(s\prec f\), the set of possible prediction values is either the range \([1-2^{-|s|},1]\) or the singleton \(\{1-2^{-|r|}\}\), where \(r\preceq s\) is chosen as above. In the latter case, we have \(|\Phi(sw)-\Phi(s)|=0<\delta\) for all \(w\in\Sigma^{*}\cup\Sigma^{\omega}\). 
In the former case, since the range becomes smaller as the prefix grows, there is a prefix \(s^{\prime}\prec f\) with \(2^{-|s^{\prime}|}<\delta\), which yields \(|\Phi(s^{\prime}w)-\Phi(s^{\prime})|<\delta\) for all \(w\in\Sigma^{*}\cup\Sigma^{\omega}\). ### Finite-state Approximate Monitoring Monitors with finite state spaces are particularly desirable, because finite automata enjoy a plethora of closure and decidability properties. Here, we prove that properties that are both approximately safe and approximately co-safe can be monitored approximately by a finite-state monitor. First, we recall the notion of abstract quantitative monitor from [30]. A binary relation \(\sim\) over \(\Sigma^{*}\) is an _equivalence relation_ iff it is reflexive, symmetric, and transitive. Such a relation is _right-monotonic_ iff \(s_{1}\sim s_{2}\) implies \(s_{1}r\sim s_{2}r\) for all \(s_{1},s_{2},r\in\Sigma^{*}\). For an equivalence relation \(\sim\) over \(\Sigma^{*}\) and a finite trace \(s\in\Sigma^{*}\), we write \([s]_{\sim}\) for the equivalence class of \(\sim\) to which \(s\) belongs. When \(\sim\) is clear from the context, we write \([s]\) instead. We denote by \(\Sigma^{*}/\!\!\sim\) the quotient of the relation \(\sim\). Definition 55 (Abstract monitor [30]): An _abstract monitor_ \(\mathcal{M}=(\sim,\gamma)\) is a pair consisting of a right-monotonic equivalence relation \(\sim\) on \(\Sigma^{*}\) and a function \(\gamma\colon(\Sigma^{*}/\!\sim)\to\mathbb{R}^{\pm\infty}\). The monitor \(\mathcal{M}\) is _finite-state_ iff the relation \(\sim\) has finitely many equivalence classes. Let \(\delta_{\mathrm{fin}},\delta_{\mathrm{lim}}\in\mathbb{R}^{\pm\infty}\) be error bounds. We say that \(\mathcal{M}\) is a \((\delta_{\mathrm{fin}},\delta_{\mathrm{lim}})\)-monitor for a given limit property \(\Phi=(\pi,\ell)\) iff for all \(s\in\Sigma^{*}\) and \(f\in\Sigma^{\omega}\), we have \(|\pi(s)-\gamma([s])|\leq\delta_{\mathrm{fin}}\) and \(|\ell_{s\prec f}(\pi(s))-\ell_{s\prec f}(\gamma([s]))|\leq\delta_{\mathrm{lim}}\)._ Building on Theorem 53, we identify a sufficient condition to guarantee the existence of an abstract monitor with finitely many equivalence classes. Theorem 56: _For every limit property \(\Phi\) such that \(\Phi(f)\in\mathbb{R}\) for all \(f\in\Sigma^{\omega}\), and for all error bounds \(\alpha,\beta\in\mathbb{R}_{\geq 0}\), if \(\Phi\) is \(\alpha\)-safe and \(\beta\)-co-safe, then for every real \(\delta>\alpha+\beta\), there exists a finite-state \((\delta,\delta)\)-monitor for \(\Phi\)._ Due to Theorem 56, the discounted safety property of Example 54 has a finite-state monitor for every positive error bound. We remark that Theorem 56 is proved by a construction that generalizes the unfolding approach for the approximate determinization of discounted automata [12], which unfolds an automaton until the distance constraint is satisfied. ## 7 Conclusion We presented a generalization of safety and liveness that lifts the safety-progress hierarchy to the quantitative setting of [18] while preserving major desirable features of the boolean setting, such as the safety-liveness decomposition. Monitorability identifies a boundary separating properties that can be verified or falsified from a finite number of observations, from those that cannot. Safety-liveness and co-safety-co-liveness decompositions allow us to separate, for an individual property, monitorable parts from nonmonitorable parts. 
The larger the monitorable parts of the given property, the stronger the decomposition. We provided the strongest known safety-liveness decomposition, which consists of a pointwise minimum between a safe part defined by a quantitative safety closure, and a live part which corrects for the difference. We then defined approximate safety as the relaxation of safety by a parametric error bound. This further increases the monitorability of properties and offers monitorability at a parametric cost. In fact, we showed that every property that is both approximately safe and approximately co-safe can be monitored arbitrarily precisely by a finite-state monitor. A future direction is to extend our decomposition to approximate safety, together with support for quantitative assumptions [32]. The literature contains efficient model-checking procedures that leverage the boolean safety hypothesis [36, 40]. We thus expect that quantitative safety and co-safety, and their approximations, will likewise enable efficient verification algorithms for quantitative properties. #### Acknowledgments. We thank the anonymous reviewers for their helpful comments. This work was supported in part by the ERC-2020-AdG 101020093.
2302.08174
A Direttissimo Algorithm for Equidimensional Decomposition
We describe a recursive algorithm that decomposes an algebraic set into locally closed equidimensional sets, i.e. sets which each have irreducible components of the same dimension. At the core of this algorithm, we combine ideas from the theory of triangular sets, a.k.a. regular chains, with Gr\"obner bases to encode and work with locally closed algebraic sets. Equipped with this, our algorithm avoids projections of the algebraic sets that are decomposed and certain genericity assumptions frequently made when decomposing polynomial systems, such as assumptions about Noether position. This makes it produce fine decompositions on more structured systems where ensuring genericity assumptions often destroys the structure of the system at hand. Practical experiments demonstrate its efficiency compared to state-of-the-art implementations.
Christian Eder, Pierre Lairez, Rafael Mohr, Mohab Safey El Din
2023-02-16T09:42:55Z
http://arxiv.org/abs/2302.08174v2
# A _Direttissimo_ Algorithm for Equidimensional Decomposition ###### Abstract We describe a recursive algorithm that decomposes an algebraic set into locally closed equidimensional sets, i.e. sets which each have irreducible components of the same dimension. At the core of this algorithm, we combine ideas from the theory of triangular sets, a.k.a. regular chains, with Grobner bases to encode and work with locally closed algebraic sets. Equipped with this, our algorithm avoids projections of the algebraic sets that are decomposed and certain genericity assumptions frequently made when decomposing polynomial systems, such as assumptions about Noether position. This makes it produce fine decompositions on more structured systems where ensuring genericity assumptions often destroys the structure of the system at hand. Practical experiments demonstrate its efficiency compared to state-of-the-art implementations. ## 1 Introduction Problem statement. Let \(\mathds{K}\) be an algebraically closed field, let \(R=\mathds{K}[x_{1},\ldots,x_{n}]\) be a polynomial ring and let \(f_{1},\ldots,f_{c}\in R\) be a polynomial system generating an ideal \(I\subseteq R\). The zero set \(X\) of the polynomials \(f_{1},\ldots,f_{c}\) in \(\mathds{K}^{n}\) decomposes uniquely as a union of irreducible algebraic sets such that none of them contains another. These are the _irreducible components_ of \(X\) and correspond to the _minimal associated primes_ of \(I\). The variety \(X\) is _equidimensional_ if all its irreducible components have the same dimension. It is clear that \(X\) always admits a decomposition \(X=Y_{1}\cup\cdots\cup Y_{c}\) where the \(Y_{i}\) are equidimensional algebraic sets. Given \(f_{1},\ldots,f_{c}\in R\), we aim at computing such an _equidimensional decomposition_ of \(X=V(f_{1},\ldots,f_{c})\). Moreover, if the input polynomials have coefficients in a subfield \(\mathds{L}\subseteq\mathds{K}\), we want the components of the decomposition to be defined by polynomials over \(\mathds{L}\). This problem finds natural applications in singularity analysis of sensor-based controllers or mechanism design (e.g. Garcia Fontan et al., 2022; Pascual-Escudero et al., 2021, and references therein), in algorithms of real algebraic geometry (e.g. Aubry et al., 2002; Safey El Din and Schost, 2004) and real algebra (Safey El Din et al., 2018, 2021) as well as automated theorem proving and geometry (e.g. Chen et al., 2013; W. Wu and Gao, 2007; Yang et al., 1998, 2001). Prior Works. The importance of this computational problem fostered a vast body of literature, often also as an intermediate step towards primary decomposition of ideals or prime decomposition of varieties. Algorithms for equidimensional decomposition of algebraic sets can be classified along the data structures which they employ to represent (equidimensional) algebraic sets. There are two prominent strategies for equidimensional decomposition using Grobner bases frequently implemented in computer algebra systems. The first one uses algebraic elimination techniques. It combines the knowledge of the dimension of the ideal generated by the input polynomials with the elimination theorem (Cox et al., 2015, Theorem 3.1.2) to compute a description of the projection of the algebraic set under study on a well-suited affine linear subspace to deduce how to split the corresponding ideal (Caboara et al., 1997; Decker et al., 1999; Gianni et al., 1988; Kalkbrener, 1994; Krick & Logar, 1991). 
The projection of the equidimensional component of highest dimension (frequently called the _equidimensional hull_) of the algebraic set in question will then be cut out by a hypersurface whose defining polynomial has degree equal to the degree of this equidimensional hull. As a consequence, such algorithms have the disadvantage that they need to manipulate polynomials of degree in the order of the Bezout bound of the input system. To circumvent this drawback, another set of methods, called _direct methods_, has been introduced by Eisenbud et al. (1992). They rely on homological algebra to reduce the problem of equidimensional decomposition to the computation of syzygies which are then used to split the polynomial ideal under study, while avoiding projections. These algorithms often provide an intermediate step towards primary decomposition of ideals. For this problem, modular techniques and dedicated algorithms for the case where \(\mathds{K}\) is a finite field have been designed (Ishihara, 2022; Noro & Yokoyama, 2004; Yokoyama, 2002). Another body of work uses _lazy_ representations of algebraic sets. Frequently, the core idea is to exploit the fact that any equidimensional algebraic set is locally a complete intersection, i.e. an equidimensional algebraic set of codimension \(c\) can be represented by the vanishing of \(c\) polynomials on a dense Zariski open subset of itself. Hence, equidimensional algebraic sets can be understood as the Zariski closures of locally closed sets defined by polynomial equations and inequations. Taking this perspective, one additionally enforces these \(c\) polynomial equations to form a _triangular set_. These have their origin in the Wu-Ritt characteristic sets (Chou & Gao, 1990; Gallo & Mishra, 1991; Ritt, 1950; Wang, 1993; W.-T. Wu, 1986). Triangularity is therein understood with respect to the variables of the underlying polynomial ring (i.e. in a sense analogous to the notion of triangular matrices in linear algebra). This triangular structure naturally also yields the equations of the algebraic set where the \(c\) polynomials fail to define the algebraic set at hand and thus triangular sets have a description of the previously mentioned Zariski open subset attached to them in a natural way. Because of their triangular structure they allow the reduction of certain algorithmic challenges to a univariate problem. Of particular importance, especially in the realm of equidimensional decomposition, are certain special triangular sets called _regular chains_, introduced by Kalkbrener (1993) and Lu and Jingzhong (1994). A regular chain models an _unmixed_-dimensional ideal and has good algorithmic properties with respect to the ideal it represents. A related frequently implemented algorithm was also given by Lazard (1991). Algorithms using regular chains are prominently part of the computer algebra system Maple (Chen et al., 2007; Chen & Moreno Maza, 2012). We refer to Hubert (2003) and Wang (2001) for introductions to the subject and to Aubry et al. (1999) for a theoretical account as to how certain different notions of triangular sets relate to each other. It should be noted that, as for methods based on Grobner bases combined with algebraic elimination, these triangular encodings make use of polynomials whose degrees, in the worst case, can be as high as the degree of the equidimensional components they encode. 
Nonetheless, algorithms based on triangular representations can be quite well behaved compared to Grobner basis techniques, especially on certain sparse polynomial systems. Another data structure naturally encoding equidimensional algebraic sets is that of a _geometric resolution_ developed by Lecerf (2000, 2003). A geometric resolution is a certain zero-dimensional parametrization of an algebraic set in Noether position. In our setting, these zero-dimensional parametrizations are used to encode generic points in the equidimensional components of the algebraic set under study (the numerical counterpart of this encoding is known as the notion of witness sets (Sommese et al., 2005), a notion that will be utilized in this paper as well). Under certain generically satisfied assumptions on the input, these can be combined with _straight line programs_ to obtain the best known complexity bounds for equidimensional decomposition. See also (Jeronimo & Sabia, 2002) for a related approach. To bypass the "projection-degree" problem, incremental approaches have been investigated in combination with Grobner bases algorithms. Incremental means here that they feed the decomposition algorithm with one input polynomial after another, in the same way as Lazard (1991) or Lecerf (2000), for example, to identify when some polynomial is a zero divisor in the ring of polynomials quotiented by the ideal generated by the previous polynomials. Moroz (2008) combines Grobner bases computations with representations of equidimensional algebraic sets by means of locally closed sets. In a previous work, we also investigated this approach by exploiting properties of signature-based Grobner bases algorithms to enhance the detection and exploitation of zero divisors and compute the so-called nondegenerate locus of a polynomial system (Eder et al., 2022). This Work. In this work, we again take the incremental approach previously mentioned. As in the other incremental algorithms, the foundation of our algorithm is a decomposition algorithm that, given an equidimensional algebraic set \(X\) and some \(f\in R\), determines the _locus of proper intersection_ of \(f\) on \(X\), i.e. the set of points \(p\in\mathds{K}^{n}\) such that, locally at \(p\), \(X\cap V(f)\) has dimension one less than \(X\). This is then used to iterate over the input equations \(f_{1},\ldots,f_{r}\). More precisely, one starts by decomposing \(V(f_{1},f_{2})\), then uses the output to decompose \(V(f_{1},f_{2},f_{3})\), and so on. In contrast to a lot of other algorithms for equidimensional decomposition based on Grobner bases, we borrow from the theory of triangular sets and work with locally closed sets instead of polynomial ideals, similarly to Moroz (2008). In the iterative approach outlined above this turns out to have two benefits. First, it naturally removes from the output sets of our iterative algorithm certain embedded components that appear during the decomposition. To illustrate this consider the following example: _Example 1.1_.: Let \(R:=\mathbb{Q}[x,y,z]\), \(X:=V(xy),f:=xz\). To decompose \(X\cap V(f)\) into equidimensional components one may start by decomposing \(X=V(x)\cup V(y)\). Then one intersects these two components with \(V(f)\) to obtain the equidimensional decomposition \(X\cap V(f)=V(x)\cup V(y,xz)\). The latter set has the irreducible component \(V(y,x)\) which is embedded in \(V(x)\). 
If one instead splits into a _disjoint_ union \(X=V(x)\cup[V(y)\setminus V(x)]\) and again intersects both components with \(V(f)\), one obtains \(X\cap V(f)=V(x)\cup(V(y,z)\setminus V(x))\), and the latter component no longer has the irreducible component \(V(y,x)\). Second, an iterative equidimensional decomposition algorithm may produce redundant components, which, if they are not deduplicated, may yield an exponential blow-up in the number of components: if one has decomposed \(X=\bigcup_{i}X_{i}\) with the \(X_{i}\) sharing a large number of irreducible components then decomposing each \(X_{i}\cap V(f)\) to obtain a decomposition of \(X\cap V(f)\) results in an even more redundant decomposition. Because we use locally closed sets to model our equidimensional sets, we can enforce that every time we decompose a locally closed set the resulting output sets be pairwise set-theoretically disjoint. Our experiments indicate that this seems to enforce a sufficiently strong irredundancy between our components to avoid an exponential blow-up in the number of components. In this paper we provide two methods to work with the locally closed sets appearing in our algorithm: One method models them "naively" in the sense that we encode them by storing their defining equations and inequations and use Grobner bases of their associated ideals to work with them algorithmically. The other method tries to avoid having to know a Grobner basis for the ideal associated to a locally closed set as much as possible by storing instead a Grobner basis for a _witness set_ of the locally closed set in question. Using Grobner bases here with the graded reverse lexicographical ordering has the effect that, compared to algorithms using triangular sets, we are able to * avoid computing projections of the algebraic sets to be decomposed and certain frequently made genericity assumptions such as ideals being in Noether position; * obtain descriptions of these sets with lower degree polynomials. Borrowing further from the theory of triangular sets we also adopt the heuristic that it is a good idea to decompose given algebraic sets as often and as finely as possible when working with them. This philosophy is baked into the recursive structure of our algorithms, which exists so as to decompose a given locally closed set as much as possible given generating sets for certain saturation ideals. We implemented our algorithm in the computer algebra system Oscar (The OSCAR team, 2023) using its interface to the library msolve (Berthomieu et al., 2021) for all necessary Grobner basis computations. Experimental results indicate that our algorithm is able to tackle polynomial systems which are out of reach of state-of-the-art implementations of algorithms for equidimensional decomposition which are available in leading computer algebra systems. ## 2 Algorithms ### Principles To illustrate the basic principles behind our equidimensional decomposition algorithm, consider an equidimensional variety \(X\) in the affine space \(\mathds{K}^{n}\). Let \(f\in R\). The variety \(X\) is partitioned into: 1. Points \(p\) where \(f\) is not a zero divisor locally at \(p\) (that is, in the ring \(R_{p}/I(X)R_{p}\)). The polynomial \(f\) takes nonzero values in any open neighborhood of \(p\) in \(X\). This defines an open subset \(X_{\mathrm{proper}}\) of \(X\). 2. Points \(p\) contained in an irreducible component of \(X\) on which \(f\) vanishes identically. This defines a closed subset \(X_{\mathrm{improper}}\) of \(X\). 
It is clear that \(X=X_{\mathrm{proper}}\sqcup X_{\mathrm{improper}}\) (where \(\sqcup\) denotes a disjoint union) and that \(X_{\mathrm{improper}}\subseteq V(f)\), so that \[X\cap V(f)=\big(X_{\mathrm{proper}}\cap V(f)\big)\sqcup X_{\mathrm{improper}}.\] By construction, \(X_{\mathrm{proper}}\cap V(f)\) is a _proper_ intersection: it is equidimensional of dimension \(\dim X-1\), or empty. As a union of irreducible components of \(X\), the closed set \(X_{\mathrm{improper}}\) is equidimensional, with the same dimension as \(X\), unless it is empty. So we obtain an equidimensional decomposition of \(X\cap V(f)\). Given defining equations for \(X\), this process can be applied iteratively to obtain an equidimensional decomposition of any affine algebraic variety. In our algorithm we apply the above idea without directly computing \(X_{\mathrm{proper}}\) and \(X_{\mathrm{improper}}\). Let \(I(X)\subset R\) be an ideal such that \(V(I(X))=X\). Further, we denote by \((I(X):f^{\infty})\) the saturation ideal of \(I(X)\) by \(f\). Recall that \(V((I(X):f^{\infty}))\) is the Zariski closure of \(X\setminus V(f)\) (Cox et al., 2015, Theorem 4.4-10). We look for an element \(g\in(I(X):f^{\infty})\setminus I(X)\). If there is none, this implies that \(X_{\mathrm{improper}}=\varnothing\), so \(X\cap V(f)\) is equidimensional. If there is such a \(g\), then we consider the following partition of \(X\): 1. the closed locus \(X_{1}\) of points \(p\) where \(g\) has nonzero values in any neighborhood of \(p\) in \(X\); 2. the open locus \(X_{2}\) of points \(p\) where \(g\) is zero in some neighborhood of \(p\) in \(X\). These two sets are equidimensional. By construction, \(fg\) vanishes identically on \(X\), so \(X_{1}\subseteq X_{\mathrm{improper}}\) and this gives the following decomposition of \(X\cap V(f)\): \[X\cap V(f)=X_{1}\sqcup\left(X_{2}\cap V(f)\right). \tag{1}\] The ideal of \(X_{1}\) is given by \((I(X):g^{\infty})\). The term \(X_{2}\cap V(f)\) may not be equidimensional but we may apply the above idea recursively: We again split \(X_{2}\) along an element in \((I(X_{2}):f^{\infty})\setminus I(X_{2})\) if it exists. This leads to Algorithm _split_. The set \(X_{2}\) is not closed; this raises the need to deal not only with closed sets of the affine space, but more generally locally closed sets. We do so by partitioning them into special locally closed sets, more precisely into closed sets in the complement of a hypersurface in the affine space, which we call _affine cells_. Concretely, suppose that \(I(X_{1})=(I(X):g^{\infty})\) is given by a finite generating set \(H\sqcup\{h\}\subset R\). We then recursively decompose \(X_{2}=X\setminus V(H\sqcup\{h\})\) via \[X_{2}=X\setminus V(H\cup\{h\})=(X\setminus V(h))\sqcup\big((X\setminus V(H))\cap V(h)\big).\] The intersection with \(V(h)\) is computed with _split_ to ensure equidimensionality. Algorithm _remove_ below performs these operations. Finally, we obtain an equidimensional decomposition algorithm following an incremental strategy by repeated application of _split_, see Algorithm _equidim_. The primitive operations we use to manipulate affine cells are presented next, while the proof of correctness and termination of the algorithms are in Section 2.3. ### Primitives **Definition 2.1**.: _An affine cell \(X\) is a locally closed set of \(\mathds{K}^{n}\) of the form \(Z\setminus V(g)\) where \(Z\) is an algebraic set and \(g\in R\). 
An affine cell \(X\) is equidimensional if all the irreducible components of the Zariski closure \(\overline{X}\) have the same dimension._ Regardless of the mode of representation of affine cells, we assume that we can perform the following operations on any affine cell \(X\): 1. Given \(f\in R\), compute the affine cell \(X\cap V(f)\); 2. Given \(f\in R\), compute the affine cell \(X\setminus V(f)\). As often in effective algebraic geometry, algebraic sets are defined by ideals that are not always radical, so our affine cells come with a distinguished ideal \(I(X)\subseteq R\) such that \(\overline{X}=V(I(X))\). The radical of \(I(X)\) is denoted \(\operatorname{rad}I(X)\). We assume that operations (1) and (2) satisfy \(I(X)+\langle f\rangle\subseteq I(X\cap V(f))\) and \(I(X)\subseteq I(X\setminus V(f))\). We assume further that we can perform the following operations on any affine cell \(X\): 3. Given \(f\in R\), decide if \(f\in I(X)\); 4. Compute a basis of \(I(X)\), denoted \(\mathit{basis}(X)\). For example, we may represent an affine cell \(X\) by a pair \((F,g)\), where \(F\) is a Grobner basis of \(I(X)\), for some monomial ordering, and \(g\) a polynomial such that \(X=\overline{X}\setminus V(g)\) (see Becker & Weispfenning, 1993, for an introduction to Grobner bases). We denote \(X=V(F;g)\). For a set \(F\subseteq R\) and an element \(g\in R\), let \(\mathit{sat}(F,g)\) denote a Grobner basis of the saturation ideal \((\langle F\rangle:g^{\infty})\). Recall that \[(I:g^{\infty})\stackrel{\mathrm{def}}{=}\left\{f\in R\ \middle|\ \exists k\in\mathbb{N},\,fg^{k}\in I\right\}.\] Using this primitive \(\mathit{sat}\), we can perform all four operations above: 1. \(V(F;g)\cap V(f)=V(\mathit{sat}(F\cup\{f\},g);g)\); 2. \(V(F;g)\setminus V(f)=V(\mathit{sat}(F,f);fg)\); 3. \(f\in I(X)\) if and only if the normal form of \(f\) w.r.t. \(F\) is zero; 4. \(\mathit{basis}(V(F;g))=F\). _Remark 2.1_.: In Section 3 we explain how to perform the above primitive operations on an affine cell \(X\) using a notion called _witness sets_, introduced for the purpose of equidimensional decomposition by Lecerf (2003) under the name lifting fibers. This leads to a lazier representation of \(X\), one where a Grobner basis for \(I(X)\) is not always required.

```
Input: an affine cell X, a polynomial f ∈ R
Precondition: X is equidimensional
Output: a partition of X ∩ V(f) into equidimensional affine cells
 1  function split(X, f)
 2      G ← basis(X \ V(f))
 3      if G ⊆ I(X)        [can be replaced by G ⊆ rad I(X)]
 4          return {X ∩ V(f)}
 5      else
 6          g ← any element of G \ I(X)
 7          H ← basis(X \ V(g))
 8          D ← {X ∩ V(H)}
 9          for Y ∈ remove(X ∩ V(g), H)
10              D ← D ∪ split(Y, f)
11          end for
12          return D
13      end if
14  end function
```
**Algorithm 1** Equidimensional decompositions

```
Input: an affine cell X, a finite set H ⊂ R
Precondition: X \ V(H) is equidimensional
Output: a partition of X \ V(H) into equidimensional affine cells
 1  function remove(X, H)
 2      if H = ∅
 3          return ∅
 4      else
 5          h ← any element of H
 6          D ← {X \ V(h)}
 7          for Y ∈ remove(X, H \ {h})
 8              D ← D ∪ split(Y, h)
 9          end for
10          return D
11      end if
12  end function
```
**Algorithm 2** Equidimensional decompositions

_Example 2.1_.: To illustrate Algorithm _split_ we spell out how it behaves on the input \(X:=V(xy,zw)\) and \(f:=xz\). 
Using the notation of Algorithm _split_, we find \(G=\{y,w\}\). This is not contained in \(I(X)\), so we may choose \(g:=y\) in line 6 of Algorithm _split_. Then we find \(H=\{x,zw\}\). Note that \(X\setminus V(zw)=\emptyset\), and so Algorithm _split_ returns \[X\cap V(H)\text{ and }split(remove(X\cap V(g),H),f)\] \[=V(zw,x)\text{ and }split(V(y,zw)\setminus V(x),xz).\] This second call to Algorithm _split_ finds \(G=\{y,w\}\); again this set is not contained in \(I(V(y,zw)\setminus V(x))\), and so we can choose \(g:=w\) in line 6. Then we find \(H=\{z\}\), which this time yields \[split(V(y,zw)\setminus V(x),xz)= V(y,z)\setminus V(x)\] \[\text{ and }split(V(y,w)\setminus V(xz),xz).\] The last call to _split_ simply finds the empty set, and so all in all we have obtained the decomposition \[V(xy,zw,xz)=V(x,zw)\cup\big{(}V(y,z)\setminus V(x)\big{)}.\] _Remark 2.2_.: Example 2.1 illustrates the fact that Algorithm _split_ may split an algebraic set even if it is equidimensional. Heuristically, the finer the intermediate decomposition in Algorithm _equidim_ is, the computationally easier subsequent steps will be.

### Correctness and Termination

When computing an intersection of an equidimensional affine cell \(X\) with a hypersurface \(V(f)\), we distinguish two cases, depending on whether \(V(f)\) intersects \(X\) properly or not. Lemma 2.2 deals with the first case, while Lemma 2.3 deals with the second. **Lemma 2.2**.: _Let \(X\) be an equidimensional affine cell. Let \(f\in R\), such that \((I(X):f^{\infty})\subseteq\operatorname{rad}I(X)\). Then \(X\cap V(f)\) is empty or equidimensional with dimension \(\dim X-1\)._ Proof.: Let \(I=I(X)\). Suppose that \(X\cap V(f)\) is not empty. By Krull's principal ideal theorem, any minimal prime over \(I+\langle f\rangle\) has codimension at most \(\operatorname{codim}I+1\). The condition \((I:f^{\infty})\subseteq\operatorname{rad}I\) means geometrically that \(X\subseteq\overline{X\setminus V(f)}\), so that \(f\) has nonzero values in the neighborhood of any point in \(X\). So \(f\) is not a zero divisor in \(R/I\). In particular, there is a regular sequence of length \(\operatorname{codim}I+1\) in \(I+\langle f\rangle\). Since the polynomial ring \(R\) is Cohen-Macaulay, it follows that every minimal prime over \(I+\langle f\rangle\) has at least codimension \(\operatorname{codim}I+1\). **Lemma 2.3**.: _Let \(X\) be an equidimensional affine cell. Let \(f\in R\), let \(g\in(I(X):f^{\infty})\) and let \(I_{g}=(I(X):g^{\infty})\). Let \(X_{1}=X\cap V(I_{g})\) and \(X_{2}=(X\cap V(g))\setminus V(I_{g})\). Then:_ (i) \(X=X_{1}\sqcup X_{2}\)_;_ (ii) \(X\cap V(f)=X_{1}\sqcup(X_{2}\cap V(f))\)_;_ (iii) \(X_{1}\) _is empty or equidimensional with_ \(\dim X_{1}=\dim X\)_;_ (iv) \(X_{2}\) _is empty or equidimensional with_ \(\dim X_{2}=\dim X\)_._ Proof.: Obviously \(X=X_{1}\sqcup(X\setminus V(I_{g}))\). As a set, \(X_{1}\) is the union of the components of \(X\) on which \(g\) is not identically zero. In particular, \(X\setminus V(I_{g})\) is the set of points of \(X\) in a neighborhood of which \(g\) is identically zero. Therefore \(X\setminus V(I_{g})\subseteq V(g)\), so we obtain \[X\setminus V(I_{g})=(X\cap V(g))\setminus V(I_{g}),\] which gives (i). Next, we have \(I(X_{1})=I(X)+I_{g}=(I(X):g^{\infty})\). Moreover \(f\in\operatorname{rad}I(X_{1})\). Indeed, \(gf^{k}\in I(X)\) for some \(k\geq 0\), by definition of \(g\), and therefore \(f\in\operatorname{rad}\left(I(X):g\right)\subseteq\operatorname{rad}(I(X):g^{\infty})\). So \(X_{1}\subseteq V(f)\).
It follows that \(X\cap V(f)=X_{1}\sqcup(X_{2}\cap V(f))\). This proves (ii). Since \(X\) is equidimensional, it follows that \(X_{1}\) (as a union of components of \(X\)) is also equidimensional of the same dimension, unless it is empty. This proves (iii). As for \(X_{2}\), it is open in \(X\), so it inherits the equidimensionality and the dimension of \(X\), unless it is empty. This proves (iv). We now prove correctness and termination of Algorithms _split_ and _remove_ with a mutual induction. On line 3, the test \(G\subseteq I(X)\) can be replaced by \(G\subseteq\operatorname{rad}I(X)\), or by any condition which holds when \(G\subseteq I(X)\) and does not hold when \(G\not\subseteq\operatorname{rad}I(X)\); this does not affect correctness or termination. We will use this variant in Section 3. **Theorem 2.4**.: _For any affine cell \(X\):_ (i) _If \(X\) is equidimensional, then for any \(f\in R\), the procedure_ split _terminates on input \(X\) and \(f\) and outputs a partition of \(X\cap V(f)\) into equidimensional affine cells \(Y\) with \(I(X)\subseteq I(Y)\);_ (ii) _For any finite set \(H\subset R\) such that \(X\setminus V(H)\) is equidimensional, the procedure_ remove _terminates on input \(X\) and \(H\) and outputs a partition of \(X\setminus V(H)\) into equidimensional affine cells \(Y\) with \(I(X)\subseteq I(Y)\)._ Proof.: We proceed by Noetherian induction on \(I(X)\) and assume the statement holds for any affine cell \(X^{\prime}\) with \(I(X)\subsetneq I(X^{\prime})\). We begin with _split_. Let \(f\in R\) and let \(I_{f}=(I(X):f^{\infty})\). If \(I_{f}\subseteq I(X)\), then Lemma 2.2 applies and \(X\cap V(f)\) is equidimensional. So \(\textit{split}(X,f)\) terminates and is correct in this case. Assume now that there is some \(g\in I_{f}\setminus I(X)\). Let \(I_{g}=(I(X):g^{\infty})\). Lemma 2.3 applies: an equidimensional decomposition of \(X\cap V(f)\) is given by \(X\cap V(I_{g})\) together with an equidimensional decomposition of \(\big{(}(X\cap V(g))\setminus V(I_{g})\big{)}\cap V(f)\). Moreover \((X\cap V(g))\setminus V(I_{g})\) is equidimensional. Since \(g\not\in I(X)\), we have \(I(X)\subsetneq I(X\cap V(g))\), so _remove\((X\cap V(g),H)\)_ (using the notations of Algorithm _split_, where \(H\) is a generating set of \(I_{g}\)) is correct and terminates, by the induction hypothesis. Moreover, it outputs affine cells \(Y\) such that \(I(X)\subsetneq I(X\cap V(g))\subseteq I(Y)\). So the recursive calls \(\textit{split}(Y,f)\) are correct and terminate. As for _remove_, let \(H\subset R\) be finite such that \(X\setminus V(H)\) is equidimensional. If \(H=\varnothing\), then (ii) holds trivially. As for the case \(H\neq\varnothing\), let \(h\in H\) and \(H^{\prime}=H\setminus\{h\}\). Since \(V(H)=V(h)\cap V(H^{\prime})\), we have \[X\setminus V(H)=(X\setminus V(h))\sqcup\big{(}(X\setminus V(H^{\prime}))\cap V(h)\big{)}\,. \tag{2}\] The sets \(X\setminus V(h)\) and \(X\setminus V(H^{\prime})\) are open in \(X\setminus V(H)\), hence equidimensional (or empty). By induction on the cardinality of \(H\), we may assume that _remove\((X,H^{\prime})\)_ outputs a partition of \(X\setminus V(H^{\prime})\) into equidimensional affine cells, and that every cell \(Y\) of this partition satisfies \(I(X)\subseteq I(Y)\). By (i), the calls \(\textit{split}(Y,h)\) terminate and yield a partition of \((X\setminus V(H^{\prime}))\cap V(h)\) into cells \(Y\) with \(I(X)\subseteq I(Y)\). Moreover, the affine cell \(Y=X\setminus V(h)\) also satisfies \(I(X)\subseteq I(Y)\).
By (2), _remove\((X,H)\)_ terminates too, and its output is a partition of \(X\setminus V(H)\) into cells \(Y\) with \(I(X)\subseteq I(Y)\). **Corollary 2.5**.: _Algorithm_ equidim _is correct and terminates._

## 3 Implementation and experimental results

### Implementation Details

In this section we give some implementation details and alternatives. In particular, we show a lazier data structure for affine cells which is able to delay some Gröbner basis computations at the cost of a Monte Carlo randomization. We have implemented both the method described in Section 2 and the method described in this section. For either method, we need an algorithm that, given generators for an ideal \(I\) and an element \(p\in R\), computes generators for the saturation \((I:p^{\infty})\). Even for our lazy representation, this will still sometimes be needed to compute a Gröbner basis for the ideal \(I(X)\), where \(X\) is an affine cell. In the probabilistic setting, some saturations will be replaced by saturations of zero-dimensional ideals. In our implementation we chose the standard method of performing saturations using Gröbner bases. To compute generators for \((I:p^{\infty})\), fix a monomial order \(\leq\) on \(R[t]\), for a new variable \(t\), such that \(\leq\) eliminates \(t\). Compute a Gröbner basis \(G\) for the ideal \(I+\langle tp-1\rangle\subset R[t]\) w.r.t. \(\leq\). Then the elements in \(G\) that do not contain the variable \(t\) give a Gröbner basis of \((I:p^{\infty})\) by the elimination theorem. Other saturation methods also exist, such as the methods presented in Eder et al. (2022) or Berthomieu et al. (2022). Randomization relies on intersecting with random linear subspaces of appropriate dimension to reduce to the zero-dimensional case. This idea is well known in symbolic computation (Lecerf, 2003) and numerical algebraic geometry (Bates et al., 2013, e.g.), wherein intersections of algebraic sets with suitable random linear subspaces are known under the name _witness sets_. **Proposition 3.1**.: _Let \(X\subseteq\mathbb{K}^{n}\) be an equidimensional affine cell of dimension \(d\) and let \(f\in R\). Then, for a generic linear subspace \(L\subset\mathbb{K}^{n}\) of codimension \(d\) the following statements hold:_ 1. \(f\in\operatorname{rad}I(X)\) _if and only if_ \(f\in\operatorname{rad}I(X\cap L)\)_._ 2. \(I(X\setminus V(f))\subseteq\operatorname{rad}I(X)\) _if and only if_ \(X\cap L\cap V(f)=\varnothing\)_._ Proof.: We always have \(\operatorname{rad}I(X)\subseteq\operatorname{rad}I(X\cap L)\). Conversely, assume that \(f\not\in\operatorname{rad}I(X)\). Let \(U=\{p\in X\mid f(p)\neq 0\}\). It is an open subset of \(X\), and it is nonempty by hypothesis. Since \(X\) is equidimensional, \(U\) has dimension \(d\) and the intersection \(U\cap L\) is nonempty (because \(L\) is generic). Therefore \(f\) is nonzero on a nonempty subset of \(X\cap L\). In particular, \(f\not\in\operatorname{rad}I(X\cap L)\). This proves the first point. For the second point, note that \(I(X\setminus V(f))\subseteq\operatorname{rad}I(X)\) if and only if \(X\) and \(V(f)\) intersect properly, that is, \(X\cap V(f)\) is equidimensional of dimension \(d-1\). The intersection of \(X\cap V(f)\) with the codimension-\(d\) generic subspace \(L\) is empty if and only if the dimension of \(X\cap V(f)\) is less than \(d\). This proves the second point.
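The elimination-based saturation just described is easy to reproduce in any computer-algebra system. The following minimal sketch (ours, written in SymPy rather than the msolve/Oscar stack used by our implementation; the name `saturate` is ours) computes a Gröbner basis of \((\langle F\rangle:p^{\infty})\) and recovers the witnesses \(y,w\) for the running input \(X=V(xy,zw)\), \(f=xz\) of Example 2.1.

```python
# Minimal sketch (ours) of the elimination-based saturation described above.
from sympy import symbols, groebner

def saturate(F, p, gens):
    """Groebner basis of (<F> : p^oo), via the extra variable t."""
    t = symbols('t')
    # A lex order with t listed first eliminates t: by the elimination
    # theorem, the elements of G free of t form a Groebner basis of
    # (<F> + <t*p - 1>) ∩ K[gens], which equals (<F> : p^oo).
    G = groebner(list(F) + [t * p - 1], t, *gens, order='lex')
    return [g for g in G.exprs if t not in g.free_symbols]

x, y, z, w = symbols('x y z w')
# Running example of Example 2.1: I(X) = <xy, zw>, f = xz.
# The saturation contains y and w, detecting the improper components.
print(saturate([x*y, z*w], x*z, (x, y, z, w)))  # e.g. [y, w]
```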
In this setting, we represent an equidimensional affine cell \(X\) by a tuple \((F,G,W,d)\), where \(F\), \(G\) and \(W\) are subsets of \(R\) and \(d\) is an integer such that \(\dim X=d\), \(X=V(F)\setminus V(\prod_{g\in G}g)\), and \(W\) (which stands for _witness set_) is a Gröbner basis of \(I(X\cap L)\) for some generic linear subspace \(L\) of \(\mathbb{K}^{n}\) of codimension \(d\). We denote \(X=V(F;G,W,d)\). In practice, \(L\) will only be random, and sufficient genericity will only hold with high probability (assuming that \(\mathbb{K}\) has enough elements). Given only \(F\), \(G\) and \(d\), we can compute a suitable set \(W\) by choosing a set \(J\subseteq R\) of \(d\) random linear forms and computing a Gröbner basis of \((((\langle F\cup J\rangle:g_{1}^{\infty}):\cdots):g_{r}^{\infty})\), where \(G=\{g_{1},\ldots,g_{r}\}\). This procedure is denoted \(\mathit{witness}(F,G,d)\). The four primitive operations are performed as follows. For the intersection operation, we need some additional knowledge of the expected dimension of the output. Let \(X=V(F;G,W,d)\) be an equidimensional cell. 1. _[Proper intersection]_ Given \(f\in R\) such that \(X\) intersects \(V(f)\) properly, \[X\cap V(f)=V\big{(}F^{\prime};G,\mathit{witness}(F^{\prime},G,d-1),d-1\big{)},\] with \(F^{\prime}=F\cup\{f\}\); 2. _[Purely improper intersection]_ Given \(H\subset R\) such that \(X\cap V(H)\) is a union of components of \(X\), \[X\cap V(H)=V\big{(}F\cup H;G,gb(W\cup H),d\big{)},\] where \(gb(W\cup H)\) denotes a Gröbner basis of the ideal generated by \(W\cup H\); 3. for \(f\in R\), \(X\setminus V(f)=V(F;G\cup\{f\},\mathit{sat}(W,f),d)\); 4. \(f\in\operatorname{rad}I(X)\) if and only if \(1\in(\langle W\rangle:f^{\infty})\); 5. \(I(X)\) is computed by saturating \(\langle F\rangle\) successively by all the elements of \(G\). In the decomposition algorithm, we always know _a priori_ the kind of each intersection. The intersection on line 4 of _split_ is proper, the intersection on line 8 is purely improper. The one on line 9 is more subtle. Indeed, the decomposition algorithm may produce here a nonequidimensional cell when considering \(X\cap V(g)\). With the notations of this algorithm, the cell \(X^{\prime}=X\cap V(g)\) is only equidimensional outside of \(V(H)\) (of dimension \(\dim X\)). This nonequidimensional cell will go through only one operation among the four primitives: \(X^{\prime}\setminus V(h)\) for some \(h\in H\). This operation restores equidimensionality. So we can mostly ignore this issue and compute the intersection \(X\cap V(g)\) as a purely improper intersection, pretending that \(X\cap V(g)\) is equidimensional.

```
1:  Input: an equidimensional affine cell \(X\), an element \(f\in R\)
2:  Output: true if \(X\cap V(f)\) is a proper intersection, false otherwise
3:  function isProper(X, f)
4:      \(W\leftarrow\) the witness set of \(X\)
5:      \(W^{\prime}\leftarrow\) a Gröbner basis of \(\langle W\cup\{f\}\rangle\)
6:      return \(1\in W^{\prime}\)
```
**Algorithm** _isProper_ (proper intersection check)

In addition we obtain a fifth operation: a probabilistic algorithm to check whether \(X\cap V(f)\) is empty or equidimensional of dimension one less than \(X\) (or, equivalently, whether \((I(X):f^{\infty})\subseteq\operatorname{rad}I(X)\)). This is given by Algorithm _isProper_: by Proposition 3.1(ii), the intersection is proper exactly when no witness point of \(X\) lies on \(V(f)\), that is, when \(\langle W\cup\{f\}\rangle\) is the whole ring. Equipped with this algorithm, we can replace the if-condition in line 3 of Algorithm _split_ with _isProper_\((X,f)\). Only if this is not satisfied do we proceed to compute a Gröbner basis for \(I(X\setminus V(f))\).
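To make the witness-set primitives concrete, here is a toy version in the same SymPy setting (the names `witness` and `is_proper` are ours, not the implementation's; it reuses the `saturate` helper from the previous sketch, and the random affine-linear forms are only generic with high probability, as in Proposition 3.1).

```python
# Toy witness-set primitives (names `witness`, `is_proper` are ours).
import random
from sympy import symbols, groebner

def witness(F, G, d, gens):
    # Cut X = V(F) \ V(prod G) with d random affine-linear forms, then
    # saturate by every g in G, giving a Groebner basis of I(X ∩ L).
    J = [sum(random.randint(1, 10**6) * v for v in gens)
         + random.randint(1, 10**6) for _ in range(d)]
    W = list(F) + J
    for g in G:
        W = saturate(W, g, gens)
    return groebner(W, *gens, order='lex').exprs

def is_proper(W, f, gens):
    # Proper intersection iff no witness point lies on V(f), i.e.
    # <W> + <f> is the whole ring (Proposition 3.1(ii)).
    return groebner(list(W) + [f], *gens, order='lex').exprs == [1]

x, y, z, w = symbols('x y z w')
W = witness([y, w], [], 2, (x, y, z, w))  # X = V(y, w), dim 2 in K^4
print(is_proper(W, x*z, (x, y, z, w)))    # True: X meets V(xz) properly
print(is_proper(W, y, (x, y, z, w)))      # False: y vanishes on all of X
```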
Lastly, we want to note the following: in Algorithm _split_, on input \(X\) and \(f\), we may have to compute \(G:=\mathit{basis}(X\setminus V(f))\), but we use only one element of \(G\) in line 6 of Algorithm _split_. This situation can be improved by a simple caching mechanism: note that in line 10 of Algorithm _split_ we call \(\mathit{split}(Y,f)\) with affine cells \(Y\) satisfying \(Y\subset X\). This certainly means that \(G=\mathit{basis}(X\setminus V(f))\subseteq I(Y\setminus V(f))\). Hence we may first try to pick an element from the already computed set \(G\) in line 6 of the call \(\mathit{split}(Y,f)\) before computing \(\mathit{basis}(Y\setminus V(f))\).

#### 3.1.1 Rationale for the new Data Structure

Always knowing a Gröbner basis for the affine cells appearing in Algorithm _equidim_ puts a large penalty on the cost of our algorithms. This is actually related to well-known observations on the complexity of Gröbner bases under some regularity assumptions. Indeed, for a regular sequence in strong Noether position, the cost of the linear algebra steps needed to compute intermediate Gröbner bases in an incremental manner is higher than that of the final steps (Bardet et al., 2015). Dimension-dependent complexity bounds provide another confirmation of this behaviour (Hashemi and Seiler, 2017). Using witness sets we can potentially avoid a lot of intermediate Gröbner basis computations in our algorithms. In our experience, for a large number of cases, using witness sets greatly improves the efficiency of our algorithm, which is theoretically backed up by the previously mentioned complexity results. Furthermore, in the data structure for affine cells presented in the last subsection we store the defining inequation of our affine cells as a factorization. If one wants to saturate a polynomial ideal \(I\) by an element \(f\in R\) which is known to have a factorization \(f=\prod_{g\in G}g\), given by a finite set \(G\), then it is expected to be cheaper to saturate by the elements \(g\in G\) one by one instead of saturating by \(f\) directly using the above elimination method. This lowers the degrees of the polynomials involved.

#### 3.1.2 A Better Version of _remove_

Furthermore, we encountered the following problem when implementing Algorithm _remove_ as presented in Section 2. In this algorithm, \(H\) is a Gröbner basis, so it tends to be very redundant (that is, very far from being a minimal set of generators). So it often happens that there are two or more elements \(h_{1},h_{2}\in H\) such that for \(X_{1}:=X\setminus V(h_{1})\) and \(X_{2}:=X\setminus V(h_{2})\) we have \(I(X_{1})=I(X_{2})\). Eventually the sets \(X_{1}\) and \(X_{2}\) become disjoint, since eventually \(X_{1}\) is intersected with \(V(h_{2})\) or \(X_{2}\) is intersected with \(V(h_{1})\), but Algorithm _split_ may have to split \(X_{1}\) and \(X_{2}\) before that happens. Since splitting an affine cell with Algorithm _split_ depends only on the underlying ideals, we may then repeat the exact same operations on the level of ideals twice or more. This issue then compounds exponentially due to the recursive nature of our algorithms. We therefore modified Algorithm _remove_ to obtain disjoint equidimensional affine cells from \(X_{1}\) and \(X_{2}\) as fast as possible, resulting in Algorithm 3. Note that when we use witness sets, this algorithm avoids knowing Gröbner bases for the ideals \(I(X_{i})\) until potentially line 15.
```
1:  Input: an affine cell \(X\), a finite set \(H=\{h_{1},\ldots,h_{r}\}\subset R\)
2:  Output: a partition of \(X\setminus V(H)\) into equidimensional affine cells
3:  function remove'(X, H)
4:      \(\mathcal{D}\leftarrow\emptyset\)
5:      for \(i\) from \(1\) to \(r\)
6:          \(X_{i}\leftarrow X\setminus V(h_{i})\)
7:          \(H_{i}\leftarrow\emptyset\)
8:          for \(j\) from \(1\) to \(i-1\)
9:              if isProper\((X_{i},h_{j})\)
10:                 \(X_{i}\leftarrow X_{i}\cap V(h_{j})\)
11:             else
12:                 \(H_{i}\leftarrow H_{i}\cup\{h_{j}\}\)
13:             end if
14:         end for
15:         \(\mathcal{D}_{i}\leftarrow\) decomposition of \(X_{i}\cap V(H_{i})\)
16:             by repeated application of split
17:         \(\mathcal{D}\leftarrow\mathcal{D}\cup\mathcal{D}_{i}\)
18:     end for
19:     return \(\mathcal{D}\)
20: end function
```
**Algorithm 3** _remove'_

### Experimental Results

In this section we give some experimental results. We compare the timings of our implementation of Algorithm _equidim_ with methods for equidimensional decomposition of algebraic sets available in various computer algebra systems; the results are collected in the table below. Some of the timings are discussed in more detail in the next section. We compared against the following implementations: 1. the function Triangularize from the RegularChains library in Maple (Lemaire et al., 2005), which decomposes a polynomial system into regular chains; 2. the function equidimensional_decomposition_weak in Oscar (The OSCAR team, 2023), which is a wrapper around a corresponding Singular function (Decker et al., 2021); 3. the Magma (Bosma et al., 1997) functions EquidimensionalDecomposition (corresponding to the column "Magma" in the table) and ProbablePrimeDecomposition (corresponding to the column "Magma (prime dec.)" in the table); and 4. the numerical polynomial systems solver Bertini (Bates et al., 2013), which we ran on each system at hand by requesting a witness set decomposition into irreducible components, with the fixed precision set to Bertini's default value. The implementation of our algorithms is itself done in Oscar, which is written in the programming language Julia (Bezanson et al., 2017). Its source code is available at [https://github.com/RafaelDavidMohr/Decomp.jl](https://github.com/RafaelDavidMohr/Decomp.jl). For all necessary Gröbner basis computations we employ the library msolve (Berthomieu et al., 2021), for which Oscar offers an interface. Our suite of example systems comprises the following: 1. Cyclic\((8)\), coming from the classical Cyclic\((n)\) benchmark. 2. The systems P4L 1 to 3, coming from the perspective-four-line problem in robotics, see Garcia Fontan et al. (2022). 3. The systems C1 to C3, certain Jacobian ideals of single multivariate polynomials which define singular hypersurfaces. 4. \(\mathrm{Ps}(n)\), encoding pseudo-singularities via polynomials \[f_{1},\ldots,f_{n-1},g_{1},\ldots,g_{n-1}\] with \(f_{i}\in\mathbb{K}[x_{1},\ldots,x_{n-2},z_{1},z_{2}]\), \(g_{i}\in\mathbb{K}[y_{1},\ldots,y_{n-2},z_{1},z_{2}]\), the \(f_{i}\) being chosen as random dense quadrics, and \(g_{i}\) chosen as a copy of \(f_{i}\) in the variables \(y_{1},\ldots,y_{n-2},z_{1},z_{2}\). 5. \(\mathrm{sos}(s,n)\), encoding the critical points of the restriction of the projection on the first coordinate to a hypersurface which is a sum of squares of \(s\) random dense quadrics \(g_{i}\in\mathbb{K}[x_{1},\ldots,x_{n}]\): \[f,\frac{\partial f}{\partial x_{2}},\ldots,\frac{\partial f}{\partial x_{n}},\quad f=\sum_{i=1}^{s}g_{i}^{2}.\] 6.
\(\mathrm{sing}(n)\), encoding the critical points of the restriction of the projection on the first coordinate to a (generically singular) hypersurface which is defined by the resultant of two random dense quadrics \(A,B\) in \(\mathbb{K}[x_{1},\ldots,x_{n+1}]\): \[f,\frac{\partial f}{\partial x_{2}},\ldots,\frac{\partial f}{\partial x_{n}},\quad f=\mathrm{resultant}(A,B,x_{n+1}).\] 7. The Steiner polynomial system, coming from Breiding et al. (2020). 8. All remaining examples are part of the BPAS library (Asadi et al., 2021). The BPAS library offers an alternative to the RegularChains library in Maple, with special emphasis on parallelism, and it will be interesting to compare it to our algorithm in the future. To obtain the timings in the table below we almost exclusively used the witness-set-based data structure for affine cells. Every polynomial system was computed in characteristic 65521, with the exception of Bertini which, as a numerical piece of software, computes over the complex numbers. Due to this difference, a comparison between Bertini's timings and ours needs to be considered carefully. We tried to indicate this in the table below by coloring the Bertini column in grey. All computations except for Magma were done on a single core of an Intel Xeon Gold 6244 CPU @ 3.60GHz. All Magma computations were done on a single core of an Intel Xeon E5-2690 @ 2.90GHz. We let every algorithm run for at least an hour, or 50 times the time it took for the fastest algorithm to complete the system in question, whichever was bigger. Using the witness sets of our output we also did the following to compare with Bertini: we ran our algorithm in a large random prime characteristic. We then removed the embedded irreducible components from each of our output components and computed the degrees of the output components. This gives us the degree in each dimension of the algebraic set defined by the input. Whenever Bertini reports different degrees, we marked it in the respective column. Due to the randomly chosen large characteristic, these degrees should be the same as the ones obtained when considering the algebraic set in question over the complex numbers. In the second column of this table, we additionally provide the number of affine cells into which Algorithm _equidim_ decomposed the respective system. All timings in this table are given in seconds. Due to the way we measured the timings of Bertini, we can only report them without any decimal places, rounded up.

### Discussion of Experimental Results

We provide here some further information about some of the examples and the behaviour of the different implementations on them. Our algorithm, i.e. Algorithm _equidim_, seems to behave best in comparison with the other implementations when the input system is dense, in the sense that each of the input equations of the system in question involves most, or all, of the variables. This is the case for Cyclic(8), the class of the \(\mathrm{Ps}(\bullet)\) systems, the class of the \(\mathrm{sing}(\bullet)\) systems, the class of the \(\mathrm{sos}(\bullet,\bullet)\) systems, and the Steiner polynomial system. On certain polynomial systems, where each input equation involves only a small subset of the variables, we were able to improve our timings by foregoing the witness-set-based data structure and instead running a deterministic version of our algorithm akin to the version in Section 2.
The improvement we thus obtained can be explained by the fact that intersecting very sparse systems with random hyperplanes can "destroy their sparsity" and make certain Gröbner basis computations much harder. This was the case for the example Leykin-1: here, running the deterministic version improved our timing to 2.6 seconds. The Gonnet and dgp6 polynomial systems demonstrate that our algorithm is highly sensitive to the ordering of the input equations: by default, we ran our implementation by iterating over the input equations degree by degree in Algorithm _equidim_. With this ordering, our algorithm did not terminate within several hours of computation. When we changed this ordering on these two examples and sorted the input equations instead by length of support, our algorithm terminated in less than one second on both. The system sys2874 can be attacked both by changing the order of the input equations so that they are sorted by length of support and by using the deterministic version of our algorithm: doing this, the timing improved by several orders of magnitude, to 0.26 seconds. We also remark that Oscar's timings improved significantly on the examples sys2449, sys2297 and Leykin-1 (each to less than one second) if one decomposes the radicals of these systems instead of the systems themselves. For the examples KdV and sys2882 we seem to be bottlenecked by very difficult Gröbner basis computations and less by the inherent structure of our algorithm. Informal experiments in which we tried to compute just a Gröbner basis for these systems using msolve suggest that even this is a highly non-trivial computation. For these two systems, techniques involving regular chains seem to be vastly superior to anything that involves Gröbner basis computations. All in all, these experiments illustrate that, on a wide range of examples, our algorithm performs on average better than state-of-the-art implementations and can tackle some problems which were previously unreachable.

## Acknowledgements

The authors wish to thank Marc Moreno Maza for providing feedback on the BPAS library and its benchmark examples. This work has been supported by the Agence nationale de la recherche (ANR), grant agreements ANR-18-CE33-0011 (SESAME) and ANR-19-CE40-0018 (De Rerum Natura); by the joint ANR-Austrian Science Fund FWF grant agreements ANR-19-CE48-0015 (ECARP) and ANR-22-CE91-0007 (EAGLES); by the EOARD-AFOSR grant agreement FA8665-20-1-7029; by the DFG Sonderforschungsbereich TRR 195; by the Forschungsinitiative Rheinland-Pfalz; and by the European Research Council (ERC) under the European Union's Horizon Europe research and innovation programme, grant agreement 101040794 (10000 DIGITS). Notes for the timings table: timings are in seconds, except where otherwise indicated; the ratio with respect to the best time is given when the latter is over 1 second. * We made some minor preparation of the input (like reordering the input equations, or disabling the probabilistic representation of affine cells) to improve the timing. * Bertini terminated the computation with an error. * The result given by Bertini is not consistent with our result in terms of degree/dimension.
2302.07247
Holographic Aspects of Non-minimal $RF^{(a)}_{\mu\alpha}F^{(a)\mu\alpha}$ Black Brane
In this paper, we consider Einstein-Hilbert gravity in the presence of cosmological constant and an electric field of Yang-Mills type, which is minimally coupled to gravity. We couple the Ricci scalar to the Yang-Mills invariant to obtain a modified theory of gravity. The black brane solution of this model is introduced up to the first order of the $RF^{(a)}_{\mu \alpha }F^{(a)\mu \alpha} $ term. Then, the color non-abelian direct current (DC) conductivity and the ratio of shear viscosity to entropy density are calculated for this solution. Our results recover the Yang-Mills Schwarzschild AdS black brane in the limit of $q_2 \to 0$.
Mehdi Sadeghi
2023-02-14T18:43:29Z
http://arxiv.org/abs/2302.07247v2
# Holographic Aspects of Non-minimal \(RF^{(a)}_{\mu\alpha}F^{(a)\mu\alpha}\) Black Brane ###### Abstract In this paper, we consider Einstein-Hilbert gravity in the presence of a cosmological constant and an electric field of Yang-Mills type which is minimally coupled to gravity. We couple the Ricci scalar to the Yang-Mills invariant to obtain a modified theory of gravity. The black brane solution of this model is introduced up to first order in the \(RF^{(a)}_{\mu\alpha}F^{(a)\mu\alpha}\) term, and then the color non-abelian direct current (DC) conductivity and the ratio of shear viscosity to entropy density are calculated for this solution. Our results recover the Yang-Mills Schwarzschild AdS black brane in the limit of \(q_{2}\to 0\). PACS numbers: 11.10.Jj, 11.10.Wx, 11.15.Pg, 11.25.Tq **Keywords:** AdS/CFT duality, DC Conductivity, Black brane, Shear viscosity to entropy density ## 1 Introduction Black holes are solutions of the Einstein equations that possess event horizon(s). The event horizon is a null hypersurface with no causal relation between its two sides. The fact that nothing can escape from within the event horizon is the classical description of a black hole, and the key question is then how to study its interior. The radiation of black holes discovered by Hawking and Page [[1]] provides their quantum description and can help us study the interior of a black hole. The quantum aspects of black holes are therefore a key question that we are interested in studying. The cosmological constant that describes the expansion of the Universe is positive in cosmology, and the solution of Einstein's equations with a positive cosmological constant is known as de Sitter (dS) spacetime. Gravity with a negative cosmological constant is known as Anti-de Sitter (AdS) spacetime, and it has a dual theory on the boundary. This proposal, introduced by Maldacena, is called the AdS/CFT duality [[2]]-[[5]]. It is a powerful tool for describing strongly coupled field theories. In the low-energy limit this duality is called the fluid-gravity duality [6]-[11], meaning that the theory on the boundary is described by hydrodynamics. Hydrodynamics is an effective description of field theory in the low-energy limit. Determining the transport coefficients helps us characterize the fluid theory. In this paper, we use the Green-Kubo formula [6] for the calculation of the transport coefficients; the color non-abelian DC conductivity and the ratio of shear viscosity to entropy density are the quantities investigated. The motivation for non-minimal theories, which are defined by coupling the gravitational field to other fields, is to introduce an alternative theory of gravity. Non-minimal theories come in five classes, and in this paper we are interested in coupling the Ricci scalar to the Yang-Mills invariant [[12],[13]]. Modified Maxwell-\(F(R)\) gravity, as a non-minimal theory, can explain inflation and the late-time acceleration of the universe [14]. Power-law inflation can be understood via the non-minimal gravitational coupling of the electromagnetic field [14]. Therefore, dark matter and dark energy, as unknown parts of the universe, may be realized in a non-minimal model. The conductivity and the ratio of shear viscosity to entropy density are bounded by universal values. The Kovtun-Son-Starinets (KSS) bound states that \(\frac{\eta}{s}\geq\frac{1}{4\pi}\) for all quantum field theories.
This bound is saturated for Einstein-Hilbert gravity with a field-theory dual, and it is violated for higher-derivative gravities [15], massive gravity [16]-[17], hairy anti-de Sitter black hole solutions in generalized scalar-tensor gravity [18], planar hairy black hole configurations for a special subclass of the Horndeski theory [19], and degenerate-higher-order-scalar-tensor theories [20]. We mention that this value is in good agreement with the experimental data on the quark-gluon plasma (QGP), which is taken as evidence that string theory is reliable. The bound on the DC conductivity is \(\sigma\geq\frac{1}{e^{2}}=1\), where \(e\) is the unit of charge carried by the gauge field (not the unit of charge in the boundary theory). This bound is violated for massive gravity [21], theories with background fields [22], non-abelian Einstein-Born-Infeld AdS theory [23], and an AdS black brane coupled to non-abelian logarithmic gauge theory [24]. In this paper, we study non-minimal electric black branes with a cosmological constant and investigate their holographic aspects through the calculation of the conductivity and the ratio of shear viscosity to entropy density. We then ask whether these bounds are preserved for our model or not. ## 2 Non-minimal \(RF^{(a)}_{\mu\alpha}F^{(a)\mu\alpha}\) AdS Black Brane Solution The non-minimal Einstein-Yang-Mills theory with negative cosmological constant can be described in terms of the action functional below [25], \[S=\int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{\kappa}(R-2\Lambda)+\frac{q_{1}}{2}F^{(a)}_{\mu\alpha}F^{(a)\mu\alpha}+q_{2}RF^{(a)}_{\mu\alpha}F^{(a)\mu\alpha}\bigg{]}, \tag{1}\] where \(R\) is the Ricci scalar, \(\Lambda=-\frac{3}{L^{2}}\) is the cosmological constant with \(L\) the AdS radius, and \({\cal F}={\bf Tr}(F^{(a)}_{\mu\nu}F^{(a)\ \mu\nu})\) is the Yang-Mills invariant. \(F^{(a)\ \mu\nu}\) is the Yang-Mills field tensor, \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i[A_{\mu},A_{\nu}], \tag{2}\] in which the gauge coupling constant is set to 1, and the \(A_{\nu}\) are the gauge potentials. \(q_{2}\) is a dimensionless coupling constant that controls the interaction between the gauge field and the Ricci scalar [26]. Variation of the action (1) with respect to the spacetime metric \(g_{\mu\nu}\) yields the field equations, \[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=\kappa T^{\rm(eff)}_{\mu\nu} \tag{3}\] where, \[T^{\rm(eff)}_{\mu\nu}=q_{1}T^{\rm(YM)}_{\mu\nu}+q_{2}T^{(I)}_{\mu\nu} \tag{4}\] \[T^{\rm(YM)}_{\mu\nu}=\frac{1}{4}g_{\mu\nu}F^{(a)}_{\alpha\beta}F^{(a)\alpha\beta}-F_{\mu}^{\ (a)\alpha}F^{(a)}_{\nu\alpha} \tag{5}\] \[T^{(I)}_{\mu\nu}=\frac{1}{2}F^{(a)}_{\alpha\beta}F^{(a)\,\alpha\beta}g_{\mu\nu}R-R_{\mu\nu}F^{(a)}_{\alpha\beta}F^{(a)\,\alpha\beta}-2F^{(a)\,\alpha}_{\mu}F^{(a)}_{\nu\alpha}R\] \[-2F^{(a)}_{\alpha\beta}g_{\mu\nu}\nabla_{\gamma}\nabla^{\gamma}F^{(a)}_{\alpha\beta}-2g_{\mu\nu}\nabla_{\gamma}F^{(a)}_{\alpha\beta}\nabla^{\gamma}F^{(a)\,\alpha\beta}+F^{(a)\,\alpha\beta}\nabla_{\mu}\nabla_{\nu}F^{(a)}_{\alpha\beta}\] \[+2\nabla_{\mu}F^{(a)\,\alpha\beta}\nabla_{\nu}F^{(a)\,\alpha\beta}+F^{(a)\,\alpha\beta}\nabla_{\nu}\nabla_{\mu}F^{(a)}_{\alpha\beta}. \tag{6}\] Variation of the action (1) with respect to \(A_{\mu}\) yields the field equations, \[\nabla_{\mu}\Big{(}q_{1}F^{(a)\mu\nu}+2q_{2}F^{(a)\mu\nu}R\Big{)}=0.
\tag{7}\] Since the spacetime of our model is 4-dimensional with a planar-symmetric line element, we consider the following ansatz for the metric, \[ds^{2}=-e^{-2H(r)}f(r)dt^{2}+\frac{dr^{2}}{f(r)}+\frac{r^{2}}{L^{2}}(dx^{2}+dy^{2}). \tag{8}\] We apply the ansatz (8) to solve Eq. (7), where the potential 1-form is expressed by \[{\bf A}^{(a)}=\frac{i}{2}h(r)dt\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}, \tag{9}\] the gauge field being taken along the diagonal generator of the Cartan subalgebra of \(SU(2)\) [27]. Now we can write out the field equations of motion using Eqs. (7)-(9). The \(tt\)-part of the field equations of motion is as follows, \[rh^{\prime}\Big{(}h^{\prime}\left(4q_{2}f^{\prime}\left(4rH^{\prime}-3\right)+r\left(q_{1}-4q_{2}f^{\prime\prime}\right)\right)+4q_{2}rf^{\prime}h^{\prime\prime}\Big{)}\] \[-\frac{2e^{-2H}\left(r\left(f^{\prime}+\Lambda r\right)+f\right)}{k}+4q_{2}f\Bigg{(}h^{\prime 2}\left(2r\left(H^{\prime}\left(rH^{\prime}+4\right)+2rH^{\prime\prime}\right)-1\right)\] \[+2rh^{\prime}\left(h^{\prime\prime}\left(4rH^{\prime}+2\right)+rh^{(3)}(r)\right)+2r^{2}h^{\prime\prime 2}\Bigg{)}=0, \tag{10}\] for \(q_{2}=0\) we have, \[2re^{-2H}\left(f^{\prime}+\Lambda r\right)-q_{1}kr^{2}h^{\prime 2}+2fe^{-2H}=0, \tag{11}\] The \(rr\)-component of the field equations of motion is as follows, \[ke^{2H}h^{\prime}\Bigg{(}r\bigg{(}h^{\prime}\left(4q_{2}\left(f^{\prime}\left(3-4rH^{\prime}\right)+rf^{\prime\prime}\right)-q_{1}r\right)-4q_{2}rf^{\prime}h^{\prime\prime}\Bigg{)}\] \[+4q_{2}f\Bigg{(}2rh^{\prime\prime}\left(rH^{\prime}-2\right)+h^{\prime}\left(2r\left(H^{\prime}\left(2rH^{\prime}-3\right)-rH^{\prime\prime}\right)+1\right)\Bigg{)}\Bigg{)}\] \[+2r\left(f^{\prime}+\Lambda r\right)+2f\left(1-2rH^{\prime}\right)=0, \tag{12}\] in which the \(rr\)-part of the Einstein equations for \(q_{2}=0\) is the same as Eq. (11). The non-zero component of Eq. (7) is as follows, \[B_{1}(r)h^{\prime\prime}(r)+B_{2}(r)h^{\prime}(r)=0 \tag{13}\] where, \[B_{1}(r)=r\left(2q_{2}f^{\prime}\left(3rH^{\prime}-4\right)+r\left(q_{1}-2q_{2}f^{\prime\prime}\right)\right)+4q_{2}f\left(r^{2}H^{\prime\prime}-\left(rH^{\prime}-1\right)^{2}\right), \tag{14}\] \[B_{2}(r)=r\left(rH^{\prime}\left(4q_{2}f^{\prime\prime}+q_{1}\right)+2\left(-6q_{2}f^{\prime\prime}-q_{2}rf^{(3)}(r)+q_{1}\right)\right)\] \[+2q_{2}f^{\prime}\left(r\left(H^{\prime}\left(rH^{\prime}+6\right)+5rH^{\prime\prime}\right)-6\right)\] \[+4q_{2}f(r)\left(H^{\prime}\left(1-r^{2}H^{\prime\prime}\right)-r^{2}H^{\prime 3}+r\left(4H^{\prime\prime}+rH^{(3)}(r)\right)\right). \tag{15}\] The solution for \(h(r)\) is \[h(r)=C_{1}\int^{r}\frac{e^{-H(u)}}{B(u)}du+C_{2}, \tag{16}\] where \[B(u)=6q_{2}u^{2}f^{\prime}H^{\prime}-2q_{2}u^{2}f^{\prime\prime}-8q_{2}uf^{\prime}-4q_{2}u^{2}fH^{\prime 2}\] \[+4q_{2}u^{2}fH^{\prime\prime}+8q_{2}ufH^{\prime}-4q_{2}f+q_{1}u^{2}. \tag{17}\] These equations do not have analytical solutions. Therefore, we solve the metric and gauge field equations perturbatively, up to first order in \(q_{2}\), following [[28]]-[[30]]. We consider the following forms for \(f(r)\), \(H(r)\) and \(h(r)\): \[f(r)=f_{0}(r)+q_{2}f_{1}(r), \tag{18}\] \[h(r)=h_{0}(r)+q_{2}h_{1}(r), \tag{19}\] \[H(r)=H_{0}(r)+q_{2}H_{1}(r), \tag{20}\] where \(f_{0}(r)\), \(h_{0}(r)\) and \(H_{0}(r)\) are the leading-order solutions of the Einstein-Yang-Mills AdS black brane in four dimensions.
The \(h_{0}(r)\), \(f_{0}(r)\) and \(H_{0}(r)\) are found exactly as, \[h_{0}(r)=C_{2}-C_{1}\int^{r}\frac{1}{q_{1}u^{2}}du=Q(\frac{1}{r}-\frac{1}{r_{h}}), \tag{21}\] where \(C_{1}=q_{1}Q\) and \(C_{2}=-Q/r_{h}\), \[f_{0}(r)=\frac{2M}{r}-\frac{\Lambda r^{2}}{3}+\frac{kq_{1}}{2r}\int^{r}u^{2}h_{0}^{\prime 2}du=\frac{2M}{r}+\frac{r^{2}}{L^{2}}-\frac{kq_{1}Q^{2}}{2r^{2}}, \tag{22}\] \[H_{0}(r)=0. \tag{23}\] The blackening factor must vanish on the event horizon, \(f(r_{h})=0\). \(M\) is the mass of the black brane, and it is fixed by applying this condition, \[M=\frac{kq_{1}Q^{2}}{4r_{h}}-\frac{r_{h}^{3}}{2L^{2}}. \tag{24}\] By plugging Eq. (24) into Eq. (22) we have, \[f_{0}(r)=\frac{r^{2}}{L^{2}}(1-\frac{r_{h}^{3}}{r^{3}})-\frac{kq_{1}Q^{2}}{2r}(\frac{1}{r}-\frac{1}{r_{h}}). \tag{25}\] Eqs. (10) and (12) must agree up to first order in \(q_{2}\). Therefore, \(H_{1}\) is calculated as follows, \[H_{1}(r)=C_{3}+2krh_{0}^{\prime}h_{0}^{\prime\prime}-kh_{0}^{\prime 2}=-\frac{5kQ^{2}}{r^{4}}+C_{3}, \tag{26}\] where \(C_{3}\) is a dimensionless integration constant. Since the metric of our model is flat near the boundary \(r\rightarrow\infty\) [[31]], we set \(C_{3}=0\). This also guarantees that the speed of light is unity in the boundary theory. \[h_{1}(r)=C_{4}+C_{5}\int^{r}\frac{8uf_{0}^{\prime}+2u^{2}f_{0}^{\prime\prime}+4f_{0}-q_{1}u^{2}H_{1}}{q_{1}^{2}u^{4}}du=C_{4}-\frac{kC_{5}Q^{3}}{q_{1}r^{5}}-\frac{24C_{5}Q}{L^{2}q_{1}^{2}r} \tag{27}\] where \(C_{5}=q_{1}^{2}Q\). \(C_{4}\) is determined by applying the condition \(h_{1}(r_{h})=0\), \[h_{1}(r)=-kq_{1}Q^{3}(\frac{1}{r^{5}}-\frac{1}{r_{h}^{5}})-\frac{24Q}{L^{2}}(\frac{1}{r_{h}}-\frac{1}{r}), \tag{28}\] by substituting \(H_{0}(r)=0\) we have, \[f_{1}(r)=\frac{1}{r}\int^{r}D(u)du+\frac{C_{6}}{r}, \tag{29}\] where, \[D(u)=-2ku^{2}f_{0}^{\prime\prime}h_{0}^{\prime 2}+2ku^{2}f_{0}^{\prime}h_{0}^{\prime}h_{0}^{\prime\prime}-6kuf_{0}^{\prime}h_{0}^{\prime 2}+8kuf_{0}h_{0}^{\prime}h_{0}^{\prime\prime}\] \[-2kf_{0}h_{0}^{\prime 2}+2uf_{0}H_{1}^{\prime}+kq_{1}u^{2}h_{0}^{\prime}h_{1}^{\prime}+kq_{1}u^{2}H_{1}h_{0}^{\prime 2}, \tag{30}\] \[f_{1}(r)=\frac{C_{6}}{r}+\frac{2kQ^{2}}{L^{2}r^{2}}-\frac{24kQ^{2}}{L^{2}r^{3}}-\frac{7kQ^{2}}{r^{5}}(\frac{kq_{1}Q^{2}}{2r_{h}}-\frac{r_{h}^{3}}{L^{2}})+\frac{4Q^{4}q_{1}k^{2}}{r^{6}}-\frac{5q_{1}k^{2}Q^{4}}{r^{7}}, \tag{31}\] \(C_{6}\) is determined by applying the condition \(f_{1}(r_{h})=0\) as follows, \[C_{6}=\frac{24\kappa Q^{2}}{L^{2}r_{h}^{2}}-\frac{2\kappa Q^{2}}{L^{2}r_{h}}+\frac{7\kappa Q^{2}}{r_{h}^{4}}(\frac{kq_{1}Q^{2}}{2r_{h}}-\frac{r_{h}^{3}}{L^{2}})-\frac{4\kappa^{2}Q^{4}q_{1}}{r_{h}^{5}}+\frac{5\kappa^{2}Q^{4}q_{1}}{r_{h}^{6}}, \tag{32}\] by inserting Eq. (32) into Eq. (31) we have, \[f_{1}(r)=-\frac{5\kappa^{2}Q^{4}q_{1}}{r^{7}}(1-\frac{r^{6}}{r_{h}^{6}})-\frac{\kappa^{2}Q^{4}q_{1}}{2r^{6}r_{h}^{5}}(7rr_{h}^{4}+r^{5}-8r_{h}^{5})\] \[+\frac{\kappa Q^{2}}{L^{2}r^{5}r_{h}}\left(2r^{3}r_{h}-9r^{4}+7r_{h}^{4}\right)+\frac{24\kappa Q^{2}\left(r^{2}-r_{h}^{2}\right)}{L^{2}r^{3}r_{h}^{2}}. \tag{33}\] ## 3 Holographic Aspects of this Solution We calculate the color non-abelian DC conductivity and the ratio of shear viscosity to entropy density, two important transport coefficients, via fluid-gravity duality in order to describe the dual of our model. We use the Green-Kubo formula [8] to calculate the non-abelian color DC conductivity, \[\sigma^{ij}(k_{\mu})=-\lim_{\omega\to 0}\frac{1}{\omega}\Im G^{ij}(k_{\mu}). \tag{34}\] The retarded Green's function is calculated via the AdS/CFT duality.
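Before computing the transport coefficients, a quick symbolic cross-check (ours, not part of the original derivation) confirms that the zeroth-order background on which they are computed, Eqs. (21) and (25), indeed solves the \(q_{2}=0\) equation (11) with \(H=0\) and vanishes at the horizon.

```python
# Symbolic cross-check (ours) of the zeroth-order solution (21), (25).
import sympy as sp

r, rh, Q, q1, k, L = sp.symbols('r r_h Q q_1 kappa L', positive=True)
Lam = -3 / L**2                                        # cosmological constant

h0 = Q * (1/r - 1/rh)                                  # Eq. (21)
f0 = r**2/L**2 * (1 - rh**3/r**3) \
     - k*q1*Q**2/(2*r) * (1/r - 1/rh)                  # Eq. (25)

# Eq. (11) with H = 0 (so e^{-2H} = 1):
#   2 r (f0' + Lam r) - q1 k r^2 h0'^2 + 2 f0 = 0
eom = 2*r*(sp.diff(f0, r) + Lam*r) - q1*k*r**2*sp.diff(h0, r)**2 + 2*f0

print(sp.simplify(eom))             # expected: 0
print(sp.simplify(f0.subs(r, rh)))  # expected: 0 (horizon condition)
```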
First, we perturb the gauge field as \(A_{\mu}\to A_{\mu}+\tilde{A}_{\mu}\), insert it into the action, and expand the resulting action (1) up to second order in the perturbed part. Finally, the Green's function is obtained by taking two variations with respect to the boundary value of the gauge field [8], \[\sigma^{\mu\nu}(\omega)=\frac{1}{i\omega}<J^{\mu}(\omega)J^{\nu}(-\omega)>=\frac{\delta^{2}S}{\delta\tilde{A}_{\mu}^{0}\delta\tilde{A}_{\nu}^{0}}. \tag{35}\] There is an \(SO(2)\) symmetry on the boundary, which ensures that the conductivity is a scalar quantity, \[\sigma^{ij}_{ab}=\sigma_{ab}\delta^{ij} \tag{36}\] We consider the perturbed part of the gauge field as \(\tilde{A}_{x}=\tilde{A}_{x}(r)e^{-i\omega t}\), where \(\omega\) should be small since we are in the hydrodynamic regime. By inserting the perturbed part into the action Eq. (1) and keeping terms up to second order in \(\tilde{A}\) we have, \[S^{(2)} =-\int d^{4}x\frac{2\mathcal{Y}}{fr^{2}}\Bigg{[}-f^{2}\left((\partial_{r}\tilde{A}_{x}^{(1)})^{2}+(\partial_{r}\tilde{A}_{x}^{(2)})^{2}+(\partial_{r}\tilde{A}_{x}^{(3)})^{2}\right)\] \[+\left((\tilde{A}_{x}^{(1)})^{2}+(\tilde{A}_{x}^{(2)})^{2}\right)\left(\omega^{2}+h^{2}\right)+\omega^{2}(\tilde{A}_{x}^{(3)})^{2}\Bigg{]}, \tag{37}\] where, \[\mathcal{Y}=\Bigg{[}r\left(f^{\prime}\left(8q_{2}-6q_{2}rH^{\prime}\right)-r\left(q_{1}-2q_{2}f^{\prime\prime}\right)\right)+4q_{2}f\left(r^{2}H^{\prime 2}-r^{2}H^{\prime\prime}-2rH^{\prime}+1\right)\Bigg{]}. \tag{38}\] By variation of the action \(S^{(2)}\) with respect to \(\tilde{A}_{x}^{(1)}\) we have, \[f\left(f\tilde{A}_{x}^{(1)^{\prime}}\right)^{\prime}+2\tilde{A}_{x}^{(1)}\left(h^{2}+\omega^{2}\right)+\frac{q_{2}}{q_{1}r^{2}}\Big{[}E_{0}-\frac{4f}{r}(E_{1}+E_{2}+E_{3})\Big{]}=0, \tag{39}\] where, \[E_{0}=-4\tilde{A}_{x}^{(1)}\left(h^{2}+\omega^{2}\right)\left(r\left(f^{\prime}\left(4-3rH^{\prime}\right)+rf^{\prime\prime}\right)+2f\left(\left(rH^{\prime}-1\right)^{2}-r^{2}H^{\prime\prime}\right)\right) \tag{40}\] \[E_{1}=r^{2}\tilde{A}_{x}^{(1)^{\prime}}f^{\prime}\left(f^{\prime}\left(4-3rH^{\prime}\right)+rf^{\prime\prime}\right),\] (41) \[E_{2}=r^{2}\tilde{A}_{x}^{(1)^{\prime\prime}}f\left(f^{\prime}\left(4-3rH^{\prime}\right)+rf^{\prime\prime}\right)\] \[+r^{2}f\tilde{A}_{x}^{(1)^{\prime}}\left(f^{\prime\prime}\left(4-3rH^{\prime}\right)+f^{\prime}\left(4H^{\prime}\left(rH^{\prime}-2\right)-7rH^{\prime\prime}\right)+rf^{(3)}(r)\right),\] (42) \[E_{3}=2f^{2}\left(r\tilde{A}_{x}^{(1)^{\prime\prime}}\left(\left(rH^{\prime}-1\right)^{2}-r^{2}H^{\prime\prime}\right)\right)\] \[2\tilde{A}_{x}^{(1)^{\prime}}f^{2}\left(2\left(rH^{\prime}-1\right)\left(r^{2}H^{\prime\prime}+1\right)-r^{3}H^{(3)}(r)\right). \tag{43}\] The \(\tilde{A}_{x}^{(2)}\) part is the same as \(\tilde{A}_{x}^{(1)}\).
The \(\tilde{A}_{x}^{(3)}\) part is as follows, \[q_{2}\tilde{A}_{x}^{(3)}r\omega^{2}\bigg{(}2rf^{\prime}\left(3rH^{\prime}-4\right)-2r^{2}f^{\prime\prime}-4f\left(1-2rH^{\prime}+r^{2}H^{\prime 2}\right)-r^{2}H^{\prime\prime}\bigg{)}\] \[+q_{2}rf^{2}\Bigg{(}\tilde{A}_{x}^{(3)^{\prime\prime}}\Big{(}2f^{\prime}\left(-4+3rH^{\prime}\right)-2rf^{\prime\prime}\Big{)}+r^{3}q_{1}\Bigg{(}f\left(f\tilde{A}_{x}^{(3)^{\prime}}\right)^{\prime}+\omega^{2}\tilde{A}_{x}^{(3)}\Bigg{)}\] \[-2\tilde{A}_{x}^{(3)^{\prime}}r^{2}f\left(f^{\prime\prime}\left(4-3rH^{\prime}\right)+f^{\prime}\left(4H^{\prime}\left(rH^{\prime}-2\right)-7rH^{\prime\prime}\right)+rf^{(3)}\right)\] \[+4f^{2}\left(r\tilde{A}_{x}^{(3)^{\prime\prime}}\left(2rH^{\prime}-1-r^{2}H^{\prime 2}+r^{2}H^{\prime\prime}\right)+\tilde{A}_{x}^{(3)^{\prime}}\left(2r^{2}H^{\prime\prime}+2-2H^{\prime}(r+r^{3}H^{\prime\prime})+r^{3}H^{(3)}(r)\right)\right)\Bigg{)}=0 \tag{44}\] By using the near-horizon expansions \(f_{0}\simeq f_{0}^{\prime}(r_{h})(r-r_{h})\) and \(f_{1}\simeq f_{1}^{\prime}(r_{h})(r-r_{h})\), we find the solutions of Eq. (39) and Eq. (44) near the event horizon. Imposing appropriate (ingoing) boundary conditions at the event horizon, we consider \(\tilde{A}_{x}^{(a)}\) of the form \[\tilde{A}_{x}^{(a)}\sim(r-r_{h})^{z_{a}}\,,\qquad a=1,2,3 \tag{45}\] where, \[z_{1}=z_{2}=\pm i\frac{\sqrt{h(r_{h})^{2}+\omega^{2}}}{4\pi T} \tag{46}\] \[z_{3}=\pm i\frac{\omega}{4\pi T}. \tag{47}\] The Hawking temperature of the black brane, \(T\), is given by \[T=\frac{1}{2\pi}\Big{[}\frac{1}{\sqrt{g_{rr}}}\frac{d}{dr}\sqrt{-g_{tt}}\Big{]}\bigg{|}_{r=r_{h}}=\frac{e^{-H(r_{h})}f^{\prime}(r_{h})}{4\pi}. \tag{48}\] To solve for \(\tilde{A}_{x}^{(a)}\) from the event horizon to the boundary, we consider the following ansatz, \[\tilde{A}_{x}^{(1)}=\tilde{A}_{\infty}^{(1)}\Big{(}\frac{-3f}{\Lambda r^{2}}\Big{)}^{z_{1}}\Big{(}1+i\omega b_{1}(r)+\cdots\Big{)}, \tag{49}\] \[\tilde{A}_{x}^{(2)}=\tilde{A}_{\infty}^{(2)}\Big{(}\frac{-3f}{\Lambda r^{2}}\Big{)}^{z_{2}}\Big{(}1+i\omega b_{2}(r)+\cdots\Big{)}, \tag{50}\] \[\tilde{A}_{x}^{(3)}=\tilde{A}_{\infty}^{(3)}\Big{(}\frac{-3f}{\Lambda r^{2}}\Big{)}^{z_{3}}\Big{(}1+i\omega b_{3}(r)+\cdots\Big{)}, \tag{51}\] where \(\tilde{A}_{\infty}^{(a)}\) is the value of the fields on the boundary and the \(z_{i}\) are taken with the minus sign in Eq. (46) and Eq. (47). The term \(b_{3}(r)\) in Eq. (51) is as follows, \[b_{3}(r)=\int^{r}\left(\frac{2}{u}-\frac{f^{\prime}}{f}+\frac{C_{7}u^{2}}{fN}\right)du+C_{8}, \tag{52}\] where, \[N=2uq_{2}f^{\prime}\left(4-3uH^{\prime}\right)-u^{2}\left(q_{1}-2q_{2}f^{\prime\prime}\right)+4q_{2}f\left(uH^{\prime}-1\right)^{2}-4q_{2}fu^{2}H^{\prime\prime} \tag{53}\] The solution of \(b_{3}(r)\) to first order in \(q_{2}\) is as follows, \[b_{3}(r)=\int^{r}\left(\frac{2}{u}-\frac{f^{\prime}_{0}}{f_{0}}-\frac{C_{7}}{q_{1}f_{0}}\right)du\] \[+q_{2}\int^{r}\ \left(\frac{f_{1}f^{\prime}_{0}}{f_{0}^{2}}-\frac{8C_{1}f^{\prime}_{0}}{q_{1}^{2}uf_{0}}-\frac{2C_{1}f^{\prime\prime\prime}_{0}}{q_{1}^{2}f_{0}}-\frac{f^{\prime}_{1}}{f^{\prime}_{0}}+\frac{C_{1}f_{1}}{q_{1}f_{0}^{2}}-\frac{4C_{1}}{q_{1}^{2}u^{2}}\right)du, \tag{54}\] where \(C_{7}\) and \(C_{8}\) are integration constants.
We find the solution of \(b_{3}(r)\) near the event horizon as, \[b_{3}\approx\left(-1-\frac{C_{7}}{q_{1}f^{\prime}_{0}(r_{h})}+q_{2}\bigg{(}\frac{f^{\prime}_{1}(r_{h})}{f^{\prime}_{0}(r_{h})}-\frac{8C_{7}}{q_{1}}-\frac{2C_{7}f^{\prime\prime\prime}_{0}(r_{h})}{q_{1}f^{\prime}_{0}(r_{h})}+\frac{C_{7}f^{\prime}_{1}(r_{h})}{q_{1}f^{\prime\prime}_{0}(r_{h})}\bigg{)}\right)\log(r-r_{h})+\text{finite terms}. \tag{55}\] The solution should be regular on the event horizon. Therefore, \(C_{7}\) is determined by demanding the following condition, \[C_{7}=\frac{1-q_{2}\frac{f^{\prime}_{1}(r_{h})}{f^{\prime}_{0}(r_{h})}}{\frac{-1}{q_{1}f^{\prime}_{0}(r_{h})}-\frac{8q_{2}}{q_{1}}-\frac{2q_{2}f^{\prime\prime\prime}_{0}(r_{h})}{q_{1}f^{\prime}_{0}(r_{h})}+\frac{q_{2}f^{\prime}_{1}(r_{h})}{q_{1}f^{\prime\prime}_{0}(r_{h})}}, \tag{56}\] \(C_{7}\) is found up to first order in \(q_{2}\) as follows, \[C_{7}=-q_{1}f^{\prime}_{0}(r_{h})+q_{1}q_{2}\left(f^{\prime}_{0}(r_{h})^{2}\left(8-\frac{f^{\prime}_{1}(r_{h})^{2}}{f^{\prime 2}_{0}(r_{h})}\right)+2f^{\prime}_{0}(r_{h})f^{\prime\prime}_{0}(r_{h})+f^{\prime}_{1}(r_{h})\right). \tag{57}\] Considering the solution of \(\tilde{A}^{(3)}_{x}\) in Eq. (37) and varying with respect to \(\tilde{A}^{(3)}_{\infty}\), the Green's function can be read off as, \[G^{(33)}_{xx}(\omega,\vec{0})=-i\omega\frac{C_{7}}{q_{1}f^{\prime}_{0}(r_{h})}=-i\omega+i\omega q_{2}\left(f^{\prime}_{0}(r_{h})\left(8-\frac{f^{\prime}_{1}(r_{h})^{2}}{f^{\prime 2}_{0}(r_{h})}\right)+2f^{\prime\prime}_{0}(r_{h})+\frac{f^{\prime}_{1}(r_{h})}{f^{\prime}_{0}(r_{h})}\right). \tag{58}\] The conductivity is then, \[\sigma^{(33)}_{xx}=-\lim_{\omega\to 0}\frac{1}{\omega}\Im G^{ij}(k_{\mu})=1-q_{2}\left(f^{\prime}_{0}(r_{h})\left(8-\frac{f^{\prime}_{1}(r_{h})}{f^{\prime 2}_{0}(r_{h})}\right)+2f^{\prime\prime}_{0}(r_{h})+\frac{f^{\prime}_{1}(r_{h})}{f^{\prime}_{0}(r_{h})}\right) \tag{59}\] By substituting \(f_{0}(r)\) and \(f_{1}(r)\) into the above equation and using Eq. (34) we have, \[\sigma^{(33)}_{xx}=1-4q_{2}\Bigg{(}\frac{6r_{h}}{L^{2}}-\frac{\kappa Q^{2}q_{1}(r_{h}-1)}{r_{h}^{4}}\Bigg{)}. \tag{60}\] This shows that the conductivity bound is violated for the non-abelian non-minimal \(RF^{2}\) black brane theory whenever \(\left(\frac{6r_{h}}{L^{2}}-\frac{\kappa Q^{2}q_{1}(r_{h}-1)}{r_{h}^{4}}\right)>0\). In the limit of \(q_{2}\to 0\) we have, \[\sigma^{(33)}_{xx}=1. \tag{61}\] To calculate the shear viscosity to entropy density ratio, we follow the recipe in [[32]]. We perturb the metric as \(g_{\mu\nu}\to g_{\mu\nu}+\delta g_{xy}\), where \(\delta g_{xy}=\frac{r^{2}}{L^{2}}\phi(r)e^{i\omega t}\), and we consider \(\omega=0\). The ratio \(\frac{\eta}{s}\) is then read off as follows, \[\frac{\eta}{s}=\frac{1}{4\pi}\phi(r_{h})^{2}. \tag{62}\] We insert the perturbed part of the metric into the action, expand the action up to second order in \(\phi\), and by varying the resulting action with respect to \(\phi\) we obtain, \[rf^{\prime}\phi^{\prime}-rfH^{\prime}\phi^{\prime}+3f\phi^{\prime}+rf\phi^{\prime\prime}=0 \tag{63}\] The solution for \(\phi(r)\) is as follows, \[\phi(r)=C_{5}+C_{6}\int^{r}\frac{e^{2H(u)}}{u^{3}f(u)}du. \tag{64}\] By inserting Eqs. (18)-(20) into Eq. (64), we find the solution of \(\phi\) to leading order in \(q_{2}\) as follows, \[\phi(r)=\phi_{0}(r)+q_{2}\phi_{1}(r)=C_{5}+C_{6}\int^{r}\frac{du}{u^{3}f_{0}(u)}+C_{6}q_{2}\int^{r}\frac{f_{0}H_{1}-f_{1}}{u^{3}f_{0}^{2}}du.
\tag{65}\] Calculating the solution near the event horizon, we obtain, \[\phi(r)=\phi_{0}(r)+q_{2}\phi_{1}(r)=C_{5}+\frac{C_{6}}{4\pi Tf_{0}^{\prime}(r_{h})r_{h}^{3}}\log(r-r_{h})\] \[+C_{6}q_{2}\Bigg{(}\frac{2H_{1}(r_{h})}{r_{h}^{3}f_{0}^{\prime}(r_{h})}-\frac{f_{1}^{\prime}(r_{h})}{r_{h}^{3}f_{0}^{\prime}(r_{h})^{2}}\Bigg{)}\log(r-r_{h}), \tag{66}\] \(\phi(r)\) should be regular on the event horizon, so \(C_{6}=0\), and we set \(C_{5}=1\) to normalize \(\phi_{0}(r)\): \[\phi_{0}(r)=1 \tag{67}\] Up to first order in \(q_{2}\) we likewise have \[\phi_{1}(r)=0, \tag{68}\] so the solution for \(\phi(r)\) is \[\phi(r)=\phi_{0}(r)+q_{2}\phi_{1}(r)=1. \tag{69}\] Finally, the ratio of shear viscosity to entropy density up to first order in \(q_{2}\) is \[\frac{\eta}{s}=\frac{1}{4\pi}. \tag{70}\] The Kovtun-Son-Starinets (KSS) bound states that \(\frac{\eta}{s}\geq\frac{1}{4\pi}\) for all quantum field theories. This bound is saturated for Einstein-Hilbert gravity with a field-theory dual, and it is violated for higher-derivative gravities. The KSS bound is preserved for this model. ## 4 Conclusion In this paper, we introduced the non-minimal \(RF_{\mu\alpha}^{(a)}F^{(a)\mu\alpha}\) black brane solution in AdS spacetime in four dimensions. Since this model does not have an analytical solution, we solved it perturbatively in the non-minimal coupling \(q_{2}\). We investigated the dual of this model via fluid-gravity duality by calculating the color non-abelian DC conductivity and the ratio of shear viscosity to entropy density, two important transport coefficients. Our result shows that the conductivity bound is violated here for some values of \(Q^{2}\). The reason is the effect of the \(RF_{\mu\alpha}^{(a)}F^{(a)\mu\alpha}\) term; in the limit of \(q_{2}\to 0\) the conductivity bound is saturated and the result of the Yang-Mills model is recovered. The \(RF_{\mu\alpha}^{(a)}F^{(a)\mu\alpha}\) term also does not affect the value of \(\frac{\eta}{s}\) up to first order in \(q_{2}\). Since \(\frac{\eta}{s}\) is proportional to the inverse square of the field-theory coupling, this means that the dual of our model behaves like the dual of Einstein-Hilbert gravity, because the field-theory coupling is the same. ## Acknowledgment The author would like to thank Shahrokh Parvizi for useful comments and suggestions. ## Data availability statement All data that support the findings of this study are included within the article.
2301.07717
Ergodicity Breaking Under Confinement in Cold-Atom Quantum Simulators
The quantum simulation of gauge theories on synthetic quantum matter devices has gained a lot of traction in the last decade, making possible the observation of a range of exotic quantum many-body phenomena. In this work, we consider the spin-$1/2$ quantum link formulation of $1+1$D quantum electrodynamics with a topological $\theta$-angle, which can be used to tune a confinement-deconfinement transition. Exactly mapping this system onto a PXP model with mass and staggered magnetization terms, we show an intriguing interplay between confinement and the ergodicity-breaking paradigms of quantum many-body scarring and Hilbert-space fragmentation. We map out the rich dynamical phase diagram of this model, finding an ergodic phase at small values of the mass $\mu$ and confining potential $\chi$, an emergent integrable phase for large $\mu$, and a fragmented phase for large values of both parameters. We also show that the latter hosts resonances that lead to a vast array of effective models. We propose experimental probes of our findings, which can be directly accessed in current cold-atom setups.
Jean-Yves Desaules, Guo-Xian Su, Ian P. McCulloch, Bing Yang, Zlatko Papić, Jad C. Halimeh
2023-01-18T19:00:01Z
http://arxiv.org/abs/2301.07717v3
# Ergodicity Breaking Under Confinement in Cold-Atom Quantum Simulators ###### Abstract The quantum simulation of gauge theories on synthetic quantum matter devices has gained a lot of traction in the last decade, making possible the observation of a range of exotic quantum many-body phenomena. In this work, we consider the spin-1/2 quantum link formulation of 1 + 1D quantum electrodynamics with a topological \(\theta\)-angle, which can be used to tune a confinement-deconfinement transition. Exactly mapping this system onto a PXP model with mass and staggered magnetization terms, we show an intriguing interplay between confinement and the ergodicity-breaking paradigms of quantum many-body scarring and Hilbert-space fragmentation. We map out the rich dynamical phase diagram of this model, finding an ergodic phase at small values of the mass \(\mu\) and confining potential \(\chi\), an emergent integrable phase for large \(\mu\), and a fragmented phase for large values of both parameters. We also show that the latter hosts resonances that lead to a vast array of effective models. We propose experimental probes of our findings, which can be directly accessed in current cold-atom setups. ## I Introduction Gauge theories are quantum many-body models possessing gauge symmetries that dictate an intrinsic local relationship between matter and gauge fields [1; 2; 3]. The quantum simulation of gauge theories has progressed tremendously in recent years across various platforms of synthetic quantum matter [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. This offers the opportunity to probe high-energy physics on low-energy table-top platforms [18; 19; 20; 21; 22; 23; 24; 25], establishing a valuable venue complementary to dedicated high-energy experiments, such as the Large Hadron Collider, and to classical computational methods, such as Quantum Monte Carlo. As an example, a cold-atom quantum simulator has recently been successfully employed in probing gauge invariance and thermalization dynamics in a U(1) gauge theory [13; 14], with concrete proposals [26; 27] of extending it to observe the confinement-deconfinement transition [28; 29]. In particular, Ref. [26] introduced a scheme that allows the realization of a tunable topological \(\theta\)-angle term, which emerges in quantum electrodynamics [30; 31; 32] on account of the topological structure of the vacuum, and has profound consequences on quantum phases, dynamical behavior, and inherent symmetries. Tuning this angle can lead to confinement, an extremely active topic of research in gauge theories [17; 33], and thus a tunable topological \(\theta\)-angle term in a quantum simulator can allow the calculation of the time evolution of confined dynamics from first principles. The understanding of ergodicity-breaking mechanisms in gauge theories sheds light on dynamics from high-energy particle physics to condensed matter physics and to the evolution of the early universe. Gauge-theory quantum simulators offer the prospect of probing ergodicity-breaking phenomena that are relevant to condensed matter physics, such as disorder-free localization [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48], quantum many-body scars (QMBS) [6; 29; 50; 49; 51], and Hilbert-space fragmentation (HSF) [51; 52], the understanding of which can provide a deeper generic picture of far-from-equilibrium dynamics in closed quantum many-body systems [53; 54; 55]. 
QMBS, a paradigm of weak ergodicity breaking, comprise states of anomalously low bipartite entanglement entropy that are usually equally spaced in energy across the entire spectrum of the Hamiltonian. Preparing a system in their subspace and quenching it will lead to persistent oscillations in local observables, an anomalously slow growth of bipartite entanglement entropy, and a corresponding significant delay in thermalization beyond any relevant timescales [6; 29; 49; 50]. Indeed, the experiment of Bernien _et al._ [6] was the first to observe QMBS in Rydberg arrays, and Surace _et al._ [29] were able to show that the implemented quantum Ising-like model maps onto the spin-1/2 U(1) quantum link model, which is a regularization of the lattice Schwinger model where the infinite-dimensional gauge fields of the latter are represented by spin-1/2 operators [56; 57; 58]. More recently, it has been shown that QMBS are ubiquitous in gauge theories, occurring over an extensive set of initial states [59; 60], in 2+1D [61], and for spin-\(S\) representations of the gauge field when the system is prepared in highly excited vacua [62; 63]. This opens the question as to the gauge-theoretic origin of QMBS, which motivates exploring them further and extending their regimes in realistic gauge-theory quantum simulators.

On the other hand, HSF is a mechanism of ergodicity breaking emerging in models where the Hilbert space is fragmented into exponentially many invariant subspaces due to an exponentially large commutant algebra [51; 52; 64]. In the seminal case of dipole-conserving models [51; 52], HSF can be completely characterized by nonlocal integrals of motion [65]. More recently, HSF has also been shown to emerge in models with strict confinement [66; 67], including in gauge theories [68]. Experimental investigations of HSF have been carried out [69; 70], but exploring its presence in experimentally relevant parameter regimes of gauge theories would be very appealing, given recent large-scale cold-atom realizations of spin-1/2 U(1) quantum link models [13; 14]. In this vein, one wonders whether QMBS can occur in regimes of confinement in gauge theories where HSF also arises, or whether scars are a purely deconfined phenomenon. Indeed, while Ref. [29] showed a slowing of the dynamics under confinement, and Ref. [68] showed the emergence of a symmetry in the limit of strong confinement at zero mass, a full analysis of the fate of scarring with both mass and confining potential at intermediate values of experimental interest is lacking.

In this work, we explore the regime of finite mass and confining potential using exact diagonalization and matrix product state (MPS) techniques [71; 72]. We find prominent scarring regimes concomitantly with confinement, which we explain analytically and demonstrate numerically. We further map out the rich dynamical phase diagram, where we find an ergodic phase for small values of the mass and confining potential. For larger values of these parameters, emergent conserved quantities at intermediate times appear, displaying a wide range of prethermal regimes with an emergent integrable phase for large mass in the deconfined regime, in addition to a fragmented phase for large values of both the mass and the confining potential,1 as sketched in Fig. 1.

Footnote 1: We note that the emergent conserved quantities are only truly conserved for infinite values of \(\mu/\kappa\) and \(\chi/\kappa\). For smaller values of these parameters, the timescale it takes to propagate between Hilbert-space fragments can already be too long to probe in experiment. We append the prefix “prethermal” to reflect this.
Our findings are based on the study of the PXP model, which exactly corresponds to the spin-1/2 U(1) quantum link model (QLM) [29]. The latter has been demonstrated experimentally on a Bose-Hubbard quantum simulator [14]. The mapping between the different models is sketched in Fig. 1(b) and explained in detail in Sec. II. Further, we demonstrate the experimental implementation of our findings numerically.

Figure 1: (Color online). Tuning the topological \(\theta\)-angle in the Bose–Hubbard quantum simulator. (a) The U(1) quantum link model Hamiltonian (1) involves a topological \(\theta\)-angle, where charges are deconfined at \(\theta\)=\(\pi\), and Coleman’s phase transition occurs at effective mass \(\mu\)=\(0.3275\kappa\), which corresponds to the spontaneous breaking of a global \(\mathbb{Z}_{2}\) symmetry [34] that arises from the invariance of Hamiltonian (1) to transformations of parity and charge conjugation. (b) The staggered Bose–Hubbard model realizes the U(1) quantum link model with direct tunneling \(J\), on-site interaction \(U\), staggered potential \(\delta\), and a small linear tilting potential \(\Delta\). By further adding a period-4 potential \(\chi\), the system is tuned away from \(\theta\)=\(\pi\), leading to confinement. The U(1) quantum link model or equivalently the staggered Bose–Hubbard model can be further mapped onto the PXP model where spins reside on the gauge links, and doublons on the gauge links represent spin excitations. (c) Schematic representation of the rich phase diagram in terms of confinement potential \(\chi\) and effective mass \(\mu\). Fragmentation happens around \(\chi\approx 0.4\kappa\).

The rest of this paper is organized as follows: In Sec. II, we present the U(1) quantum link model with a topological \(\theta\)-term and discuss its exact mapping onto the PXP model with mass and staggered magnetization terms. In Secs. III and IV, we provide our analysis of scarring and fragmented dynamics, respectively, in the presence of confinement. In Sec. V, we discuss experimental probes in a state-of-the-art tilted Bose-Hubbard optical lattice that can facilitate the observation of our findings. We conclude and provide outlook in Sec. VI. Several Appendices and the Supplemental Material (SM) [73] contain supporting analytic and numerical results.

## II Models and mapping

We now discuss the main models considered in this work, and briefly outline the mappings between them. For further details, the reader is referred to Refs. [26; 29].

### U(1) quantum link model

The Schwinger model, which is 1+1D quantum electrodynamics, can be conveniently represented through the quantum link formulation, where the gauge and electric fields are represented by spin-\(S\) matrices [56; 57; 58], thereby facilitating experimental feasibility.
Upon restricting to \(S=1/2\), and employing a Jordan-Wigner transformation to map the fermions onto Pauli operators and a particle-hole transformation on both the matter and gauge (electric) fields [26; 74], the resulting U(1) quantum link model (QLM) Hamiltonian takes the form

\[\hat{H}_{\text{QLM}}=\sum_{\ell=1}^{L_{m}}\biggl{[}-\frac{\kappa}{2}\bigl{(}\hat{\sigma}_{\ell}^{-}\hat{s}_{\ell,\ell+1}^{+}\hat{\sigma}_{\ell+1}^{-}+\text{H.c.}\bigr{)}+\frac{\mu}{2}\hat{\sigma}_{\ell}^{z}-\chi(-1)^{\ell}\hat{s}_{\ell,\ell+1}^{z}\biggr{]}, \tag{1}\]

where now the Pauli operator \(\hat{\sigma}_{\ell}^{z}\) represents the matter field on site \(\ell\), and the spin-1/2 operator \(\hat{s}_{\ell,\ell+1}^{+(z)}\) represents the gauge (electric) field at the link between sites \(\ell\) and \(\ell+1\). We consider a lattice with \(L_{m}\) sites and \(L_{m}\) links under periodic boundary conditions (PBC). The deviation from the deconfined point \(\theta\)=\(\pi\) is quantified by the _confining potential_ \(\chi\)=\(g^{2}(\theta-\pi)/(2\pi)\) [26]. At \(\chi=0\), the U(1) QLM undergoes a second-order quantum (Coleman) phase transition at \(\mu/\kappa=0.3275\) related to the spontaneous breaking of a global \(\mathbb{Z}_{2}\) symmetry arising due to the invariance of Eq. (1) under a parity and charge (CP) transformation at \(\chi=0\) [75; 76; 34]. When \(\chi\neq 0\), the global \(\mathbb{Z}_{2}\) symmetry is explicitly broken by the last term in Eq. (1), and the Coleman phase transition is no longer present. The generator of the U(1) gauge symmetry of Hamiltonian (1) is

\[\hat{G}_{\ell}=(-1)^{\ell}\biggl{(}\hat{s}_{\ell,\ell+1}^{z}+\hat{s}_{\ell-1,\ell}^{z}+\frac{\hat{\sigma}_{\ell}^{z}+\mathds{1}}{2}\biggr{)}, \tag{2}\]

which can be construed as a discretized version of Gauss's law. Throughout this work, we will work in the physical sector of Gauss's law, defined by the set of gauge-invariant states \(\{\ket{\psi}\}\) satisfying \(\hat{G}_{\ell}\ket{\psi}=0\), \(\forall\ell\). Despite being a regularization of the Schwinger model, the U(1) QLM (1) still captures a wealth of the physics of the Schwinger model, including Coleman's phase transition at \(\chi=0\) [34], and the confinement-deconfinement transition [77].

### PXP Hamiltonian

The U(1) QLM with a topological \(\theta\)-term, described by Hamiltonian (1), can be mapped onto the PXP model with a staggered-magnetization term [62; 63; 29], described by the Hamiltonian

\[\hat{H}_{\text{PXP}}=-\kappa\,\hat{\mathcal{P}}\biggl{(}\sum_{l=1}^{L_{m}}\hat{s}_{l}^{x}\biggr{)}\hat{\mathcal{P}}-\sum_{l=1}^{L_{m}}\left[2\mu+(-1)^{l}\chi\right]\hat{s}_{l}^{z}=-\kappa\,\sum_{l=1}^{L_{m}}\hat{P}_{l-1}\hat{s}_{l}^{x}\hat{P}_{l+1}-\sum_{l=1}^{L_{m}}\left[2\mu+(-1)^{l}\chi\right]\hat{s}_{l}^{z}, \tag{3}\]

where we have assumed PBC, integrated out the degrees of freedom on the matter sites, and relabelled the gauge sites as \(\hat{s}_{l,l+1}^{\alpha}\rightarrow\hat{s}_{l}^{\alpha}\) for notational brevity.2 The projector \(\hat{\mathcal{P}}\) annihilates any configuration with neighboring up-spins, in order to only allow configurations that respect Gauss's law. The Hamiltonian also admits a local formulation using the on-site projector \(\hat{P}_{l}=\frac{1}{2}-\hat{s}_{l}^{z}\) that annihilates any component with \(s_{l}^{z}=+1/2\). This Hamiltonian corresponds to the PXP model with detuning \(\mu\) and staggered detuning \(\chi\).
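To make the constrained Hilbert-space structure concrete, the following minimal Python sketch builds \(\hat{H}_{\text{PXP}}\) of Eq. (3) by enumerating all configurations without neighboring up-spins (PBC assumed; the function name and the 0-indexed site convention are our own choices, so the sublattice sign of \(\chi\) depends on that convention):

```python
import numpy as np
from itertools import product

def pxp_hamiltonian(L, mu, chi, kappa=1.0):
    """Dense H_PXP of Eq. (3) in the constrained basis (PBC)."""
    # Constrained basis: no two adjacent up-spins (Lucas-number dimension).
    basis = [s for s in product((0, 1), repeat=L)
             if all(not (s[i] and s[(i + 1) % L]) for i in range(L))]
    index = {s: n for n, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for n, s in enumerate(basis):
        for j in range(L):  # 0-indexed sites; the paper's (-1)^l is (-1)^(j+1)
            sz = 0.5 if s[j] else -0.5
            H[n, n] += -(2 * mu + (-1) ** (j + 1) * chi) * sz
            # Kinetic term -kappa * P s^x P: flip site j only if both
            # neighbours are down; the matrix element of s^x is 1/2.
            if not s[(j - 1) % L] and not s[(j + 1) % L]:
                t = list(s); t[j] ^= 1
                H[n, index[tuple(t)]] += -kappa / 2
    return H, basis

H, basis = pxp_hamiltonian(L=12, mu=0.0, chi=0.35)
print(len(basis))  # 322 constrained states for L = 12
```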
For \(\chi=0\), the PXP model has been extensively studied in the context of QMBS both theoretically and experimentally [59; 6; 29; 50]. It has also been studied in the context of HSF in the limit of \(\chi/\kappa\gg 1\) [68].

Footnote 2: For open boundary conditions, Eqs. (1) and (3) can also be mapped onto each other, where the results of the latter are similar to those obtained for the U(1) QLM, but with \(\mu\) and \(\chi\) halved on the two boundary spins.

In the following sections, we will explore both the ergodic and fragmented regimes of the PXP model with a staggered magnetization term. The approximate values of \(\mu\) and \(\chi\) to which these correspond for \(L_{m}\approx 30\) are schematically illustrated in Fig. 1 (see Appendix A for actual data).

## III Quantum many-body scarring in the presence of confinement

For simplicity, we shall set \(\kappa=1\) as the overall energy scale and focus on the PXP model (3) in this section. The PXP Hamiltonian has been known to host QMBS linked to the Neel state \(|\circ\bullet\ldots\circ\bullet\rangle\) and anti-Neel state \(|\bullet\circ\ldots\bullet\circ\rangle\) for \(\mu=\chi=0\) [6; 50]. Scarring was also observed experimentally from the polarized state \(|\circ\ldots\circ\circ\rangle\) for \(\mu\approx\pm 0.4\) [59]. Here, for historical purposes [6], a filled (empty) circle denotes a Rydberg atom in the excited (ground) state, but one can equivalently think of it as a spin-1/2 particle in the up (down) eigenstate. The Neel and anti-Neel states correspond to the two vacua of the PXP model, i.e., its degenerate ground states at \(\mu\to\infty\), while the polarized state corresponds to the charge-proliferated state, which is the nondegenerate ground state of the PXP model at \(\mu\to-\infty\). These two realizations of QMBS occur in the ergodic regime in Fig. 1(c). This means that other initial states display the expected thermalizing behavior, and other probes of chaos, such as level-spacing statistics, are those of a chaotic system. In this section, we focus on this region and discuss the impact of confinement on QMBS. One would generally expect it to strongly modify the dynamics and lead to the disappearance of revivals from scarred initial states. While this indeed happens for the polarized state at intermediate values of \(\chi\), we will show that this is not true for the Neel state, where the confining term actually _enhances_ scarring. In order to assess the effect of the confining potential, we compute the self-fidelity \(\mathcal{F}(t){=}|\bra{\psi_{0}}e^{-i\hat{H}_{\rm PXP}t}\ket{\psi_{0}}|^{2}\), where \(|\psi_{0}\rangle\) is the initial state of the system. We will denote the first peak of the self-fidelity by \(\mathcal{F}_{1}\), and the self-fidelity at half that time by \(\mathcal{F}_{1/2}\). As QMBS are characterized by the wave function spreading into the Hilbert space before refocusing close to the initial state, their signature is \(\mathcal{F}_{1}\approx 1\) and \(\mathcal{F}_{1/2}\approx 0\). Hence, we will use the quantity \(\mathcal{F}_{1}-\mathcal{F}_{1/2}\) as a probe of QMBS. This quantity distinguishes QMBS from the trivial situation where the initial state is close to an eigenstate, in which case \(\mathcal{F}(t)\approx 1\) at all times. We show \(\mathcal{F}_{1}\) and \(\mathcal{F}_{1}-\mathcal{F}_{1/2}\) for the PXP model with \(\mu=0\) and staggered magnetization in Fig. 2(a), bottom panel, starting in the Neel state.
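As a sketch of this diagnostic (reusing the hypothetical `pxp_hamiltonian` helper from the snippet above), the self-fidelity follows directly from the eigendecomposition, \(\mathcal{F}(t)=|\sum_{k}|c_{k}|^{2}e^{-iE_{k}t}|^{2}\), with \(c_{k}\) the overlaps of \(|\psi_{0}\rangle\) with the eigenstates:

```python
import numpy as np
from scipy.signal import find_peaks

L = 12
H, basis = pxp_hamiltonian(L, mu=0.0, chi=0.35)   # helper from the sketch above
psi0 = np.zeros(len(basis))
psi0[basis.index(tuple([1, 0] * (L // 2)))] = 1.0  # Neel state

E, V = np.linalg.eigh(H)
c2 = np.abs(V.T @ psi0) ** 2                       # overlaps |c_k|^2
ts = np.linspace(0.0, 20.0, 2001)
F = np.abs(np.exp(-1j * np.outer(ts, E)) @ c2) ** 2

peaks, _ = find_peaks(F)
i1 = peaks[np.argmax(F[peaks])]                    # dominant revival peak
F1, F_half = F[i1], F[i1 // 2]                     # F at t_1 and at t_1 / 2
print(f"F1 = {F1:.3f}, F1 - F1/2 = {F1 - F_half:.3f}")
```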
We find interesting nonmonotonic behavior as a function of \(\chi\) in these quantities that we further analyze in the following. The effect of the staggered magnetization on its own in the PXP model has been studied in Ref. [68] in the limit of \(\chi\gg\kappa\). In this regime, the Hilbert space fractures and an emergent symmetry appears due to an approximate su(2) algebra. The same inexact spectrum-generating algebra has been linked to quantum many-body scars in the PXP model [78; 79; 80]. In that algebraic picture, the PXP Hamiltonian can be thought of as the global \(\hat{J}^{x}\) operator, leading to a dynamics that resembles the precession of a large spin when initialized in the Neel state. This state is in fact the highest-weight state of the effective global \(\hat{J}^{z}\) operator, hence why it would have perfect revivals when acted upon by \(\hat{J}^{x}\) if the algebra were exact. It has recently been proposed that the origin of this approximate algebra is a parent spin-1 Hamiltonian [81]. When \(\mu=0\), the PXP Hamiltonian can be obtained by projecting the global spin operators from the parent model onto the constrained Hilbert space and applying the mapping \(|-\rangle=|\uparrow\downarrow\rangle\), \(|0\rangle=|\downarrow\downarrow\rangle\), and \(|+\rangle=|\downarrow\uparrow\rangle\), where site \(j\) of the spin-1 model maps to sites \(2j-1\) and \(2j\) of PXP.3 As such, we can write

\[-\frac{1}{\sqrt{2}}\hat{\mathcal{P}}\hat{S}^{x}\hat{\mathcal{P}}-\chi\hat{\mathcal{P}}\hat{S}^{z}\hat{\mathcal{P}}=\hat{H}_{\rm PXP}\oplus 0_{\perp}, \tag{4}\]

when \(\mu=0\) and \(\kappa=1\), where \(\hat{S}^{\alpha}\) are the global spin operators in the spin-1 model and \(0_{\perp}\) is the part of the spin-1 Hilbert space that is annihilated by the PXP constraint.

Footnote 3: One can alternatively take a mapping where site \(j\) of the spin-1 model maps to sites \(2j\) and \(2j+1\). In that case the results are the same except that the minus sign in front of the \(\hat{\mathcal{P}}\hat{S}^{x}\hat{\mathcal{P}}\) term in Eq. (4) is replaced by a plus sign.

Had the projection operator commuted with the spin operators, we would have had perfect revivals in the PXP model from the Neel state with a period

\[T=\frac{2\pi}{\sqrt{\chi^{2}+\frac{1}{2}}}. \tag{5}\]

Figure 2: (Color online.) (a) Revival amplitude and period from the Néel state for the PXP model with staggered detuning and \(N=26\). For all values of \(\chi\) we see a high value of \(\mathcal{F}_{1}\), indicating strong revivals. However, as \(\chi\) becomes large, the Néel state gets closer and closer to being an eigenstate, leading to an increase of \(\mathcal{F}_{1/2}\) and so a decrease of \(\mathcal{F}_{1}-\mathcal{F}_{1/2}\). The nonmonotonicity of \(\mathcal{F}\) with \(\chi\) is likely due to the interaction with the lower band of eigenstates. (b) Overlap between the Néel state and the eigenstates for \(\chi=0\), \(\chi=0.3\) and \(\chi=1.52\). For the anti-Néel state or for \(\chi<0\), the dynamics is exactly the same but the spectrum is flipped with respect to \(E=0\).

Figure 2(a), top panel, shows that while it is not exactly the case, we still obtain results that closely resemble this picture. The agreement also gets better as \(\chi\) is increased. This is likely due to the fact that \(\hat{S}^{z}\) commutes with \(\hat{\mathcal{P}}\) while \(\hat{S}^{x}\) does not. As \(\chi\) gets larger than roughly 1.5, we see that \(\mathcal{F}_{1}-\mathcal{F}_{1/2}\) starts to decrease as the minimum fidelity between the revivals increases.
This can be understood easily in the spin-precession picture. Indeed, we can visualize this whole process on a Bloch sphere. The Neel state is at the top, along the \(Z\) axis. For \(\chi=0\), the precession axis is along the \(X\) axis, on the equator. As \(\chi\) is increased, the precession axis is moved closer and closer to the \(Z\) axis. Hence, the "opposite point" of the trajectory is no longer the anti-Neel state, but another state placed on the opposite side of the precession axis. As \(\chi\) is further increased, this opposite point gets closer and closer to the Neel state. In the limit of \(\chi\to\infty\), the precession axis becomes the \(Z\) axis and the Neel state an exact eigenstate of the system. Figure 3 shows this schematically on a Bloch sphere. The effect of this "axis tilting" can also be seen directly in the overlap between the Neel state and the corresponding PXP eigenstates at various values of \(\chi\), as shown in Fig. 2(b). The top band of scarred states is always visible. However, as \(\chi\) is increased, the eigenstates with the highest overlap shift from the middle of the spectrum to its left edge where the ground state is. Note that for the largest value of \(\chi\) plotted, the energy spectrum starts to split into bands. This will be addressed in Sec. IV. While this precession picture predicts revivals at all values of \(\chi\) and gives a good approximation of the period, it fails to capture the presence of the optimal revival point around \(\chi=0.35\); see Fig. 2(a). At that value, the Neel state is still far from the edges of the spectrum, and the level statistics are also indicative of ergodicity (see Appendix A). The revivals at that point converge very quickly in system size and are not visible for other initial states, which generically thermalize relatively fast; see Supplemental Material (SM) [73]. Focusing on the Neel state, we find that the growth of entanglement entropy is also highly suppressed at \(\chi=0.35\), as shown in Fig. 4(a). This nonmonotonicity in the post-quench behavior relative to \(\chi\) can be witnessed in local observables, making it measurable in a large-scale experiment. For a local observable, we choose the excitation density \(\langle\hat{n}_{o}\rangle\) on the odd sublattice, as this should be easily measurable. In order to avoid finite-size effects, we also consider only the times where the dynamics is converged in system size for \(L_{m}=32\). As this observable is oscillating in time, we first identify the local maxima in order to fit its envelope. The details of this procedure are explained in the SM [73], and the results are shown in Fig. 4(b) for several values of \(\chi\). Overall, we find a clear difference between \(\chi=0.3\) and the other values already after three to four revivals. The decay time \(\tau\) also shows a much larger value around \(\chi=0.3\) (see the fitted values in the legend). Finally, we see that for \(\chi=0.6\) the peaks themselves show an oscillatory behavior and are no longer well approximated by a decaying exponential.

Figure 3: (Color online). Schematic picture of the effect of the confining potential on the dynamics after a quench from the Néel (or vacuum) state. The \(X\), \(Y\) and \(Z\) labels refer to the approximate su(2) algebra.

Figure 4: (Color online). (a) Bipartite von Neumann entanglement entropy after a quench from the Néel state for \(L_{m}=26\) and various values of \(\chi\). The entropy growth is strongly suppressed around \(\chi=0.35\). (b) Excitation density on odd sites for \(L_{m}=32\) and various values of \(\chi\), together with their exponential fits. Only times for which the expectation values are converged in system size are used. Once \(\chi\geq 0.5\), oscillations are visible in the peaks and an exponential decay no longer describes their behavior in time.
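The peak-extraction-and-fit procedure just described can be sketched as follows (the helper name, the assumed input arrays, and the fitted form \(n_{\infty}+a\,e^{-t/\tau}\) are our own assumptions; the detailed procedure is in the SM [73]):

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def envelope_decay_time(ts, n_odd):
    """Fit an exponential envelope to the local maxima of <n_o>(t)."""
    peaks, _ = find_peaks(n_odd)                     # local maxima of the signal
    model = lambda t, n_inf, a, tau: n_inf + a * np.exp(-t / tau)
    (n_inf, a, tau), _ = curve_fit(model, ts[peaks], n_odd[peaks],
                                   p0=(0.25, 0.25, 10.0))
    return tau                                       # decay time of the envelope
```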
We emphasize that this oscillatory behavior of the peaks is not a finite-size effect but rather a sign that we enter the low-energy regime where only a handful of eigenstates contribute. The oscillation in the maxima is then a beating frequency linked to the mismatch in the energy spacing of these eigenstates. The above results all show that setting \(\chi\approx 0.35\) significantly enhances scarring compared to \(\chi=0\) when starting in the Neel state. This enhancement can be linked to a top band well separated from the bulk and with a more even energy spacing between the states in it. This implies that at that point the approximate su(2) structure is closer to exact than for other values of \(\chi\). This is shown in detail in Appendix B, and further investigated using the forward scattering approximation (FSA) [50]. This method allows one to build an approximate version of the \(L_{m}+1\) scarred eigenstates, thus removing any influence from the bulk of thermal states. Even in that case, the same nonmonotonic behavior in \(\chi\) is seen. Its origin in this simplified picture can then be understood as stemming from two competing factors: (i) how well it can be approximated by an \(L_{m}+1\) level system, and (ii) how close the hopping strengths are to those of a spin \(L_{m}/2\). While the former is optimal at \(\chi=0\) and \(\chi=\pm\infty\), for the latter it is at \(\chi\approx\pm 0.42\) and \(\chi=\pm\infty\). Due to these two competing factors, we find three local maxima in the fidelity, at \(\chi=0\), \(\chi\approx\pm 0.4\), and \(\chi=\pm\infty\). While the exact value of the optimal point is not exactly the same in the FSA as in the full model (likely due to the further influence of the states outside of the top band), this approach provides an explanation of the nonmonotonic behavior.

### Defects and confinement

While adding a nonzero \(\chi\) affects the algebra and the revivals from the Neel state, from a lattice gauge theory point of view it also causes confinement [26; 27; 28; 29]. Notably, each particle-antiparticle pair has an energy cost \(\propto d\chi\), where \(d\) is the distance between the particle and the antiparticle. For \(\chi=0\), this cost is zero and so they can spread ballistically in opposite directions, i.e., they are deconfined. As soon as \(\chi>0\), any separation distance is penalized, leading to their confinement. In the picture of the PXP model, neighboring unexcited sites \(|\!\circ\!\circ\!\rangle\) mean the presence of a particle or antiparticle between them (depending on whether the sites are even-odd or odd-even). Similarly, an excited site next to an unexcited site (\(|\!\circ\!\bullet\!\rangle\) or \(|\!\bullet\!\circ\!\rangle\)) means that there is no particle/antiparticle between them. So a single particle-antiparticle pair on top of the vacuum will take the form of a single "defect" on top of the Neel (or anti-Neel) state. More precisely, we will use the state \(|\!\circ\!\bullet\!\circ\!\bullet\!\cdots\!\circ\!\bullet\!\circ\!\circ\!\circ\!\bullet\!\cdots\!\circ\!\bullet\!\bullet\!\bullet\!\rangle\), which corresponds to the Neel state with an excitation removed on site \(M\) near the middle of the chain.
This can also be understood as creating two domain-walls between Neel domains \(\!\circ\!\bullet\!\circ\!\bullet\!\) and anti-Neel domains \(\!\bullet\!\circ\!\bullet\!\circ\!\bullet\!\). We quench this state and monitor the excitation occupancy on each site \(\langle\hat{n}_{j}\rangle\). We also monitor the presence of domain-walls using the quantity \(\langle(1-\hat{n}_{j})(1-\hat{n}_{j+1})\rangle=1-\langle\hat{n}_{j}\rangle-\langle\hat{n}_{j+1}\rangle\), where we have utilized that \(\langle\hat{n}_{j}\hat{n}_{j+1}\rangle=0\). Finally, we track the ZZ connected correlator \(\langle\hat{s}^{z}_{M}\hat{s}^{z}_{j}\rangle_{c}=\langle\hat{s}^{z}_{M}\hat{s}^{z}_{j}\rangle-\langle\hat{s}^{z}_{M}\rangle\langle\hat{s}^{z}_{j}\rangle\). The corresponding results at \(\chi=0\) and \(\chi=0.35\) are shown in Fig. 5, which have been obtained by matrix product state (MPS) techniques with OBC, in keeping with experimental relevance. MPS allows us to probe large system sizes where the boundary is not reached even at long times (more details about this can be found in the SM [73]). The ZZ correlator shows a clear difference between the two values of \(\chi\) used. Interestingly, for \(\chi=0.35\) at late times the effects of the defects are barely visible, whereas for \(\chi=0\) they still are. The oscillations between the two domain walls (so the anti-Neel domain) seem to synchronize with the oscillations outside of the domain walls (the Neel domain). For \(\chi=0\) the two regions seem to oscillate at the same frequency but with a \(\pi\) phase difference. While it is not clear if this is linked to the enhancement of revivals without any defect, this kind of "self-correction" of defects is also experimentally desirable. In order to quantify confinement more precisely, we compute the "root mean square spread" of the ZZ correlator as

\[\sigma(|zz|)=\sqrt{\frac{\sum_{j=1}^{L_{m}}|\langle\hat{s}^{z}_{j}\hat{s}^{z}_{M}\rangle|(j-M)^{2}}{\sum_{j=1}^{L_{m}}|\langle\hat{s}^{z}_{j}\hat{s}^{z}_{M}\rangle|}}. \tag{6}\]

We plot this quantity at all times for \(\chi=0\) and \(\chi=0.35\) in Fig. 6. There, we can see a clear difference between the two, as for \(\chi=0.35\) the spreading plateaus around \(t\approx 30\), while for \(\chi=0\) it continues at later times until it reaches the boundary.

## IV Hilbert space fragmentation induced by confinement

The mass term in Eq. (3) does not fit into the approximate algebra discussed above, and it generically destroys revivals from the Neel state, thereby destroying scarring from this state. However, the combination of nonzero mass and confining potential allows a more complex picture to emerge due to the interplay between these two terms. In the regime \(\chi,\mu\gg\kappa\), one can perform a Schrieffer-Wolff transformation. For general values of \(\mu\) and \(\chi\), one can use a procedure such as in Ref. [68], which studies the case with \(\mu=0\). We show the main results in this section, while all details are relegated to the SM [73]. We find that the odd-order terms are identically zero. The leading term is then the second-order one, which is diagonal and reads

\[\hat{H}^{(2)}_{\rm eff}=-\frac{\kappa^{2}}{2}\sum_{j=1}^{L_{m}}\frac{\hat{P}_{j-1}\hat{s}^{z}_{j}\hat{P}_{j+1}}{2\mu+(-1)^{j}\chi}. \tag{7}\]

As this term is diagonal, we need to go to the next nonzero order for dynamics to occur in the \(Z\) basis.
This happens at fourth order with an effective Hamiltonian given by

\[\hat{H}_{\text{eff}}^{(4)}=\frac{\kappa^{4}}{32}\sum_{j=1}^{L_{m}}4\frac{\hat{P}_{j-1}\hat{s}_{j}^{z}\hat{P}_{j+1}}{(2\mu+(-1)^{j}\chi)^{3}}+2\frac{4\mu-(-1)^{j}\chi}{\left(4\mu^{2}-\chi^{2}\right)^{2}}\left(\hat{P}_{j-2}\hat{P}_{j-1}\hat{s}_{j}^{z}\hat{P}_{j+1}+\hat{P}_{j-1}\hat{s}_{j}^{z}\hat{P}_{j+1}\hat{P}_{j+2}\right)+\frac{4\mu+(-1)^{j}\chi}{\left(4\mu^{2}-\chi^{2}\right)^{2}}\left(\hat{P}_{j-2}\hat{s}_{j-1}^{+}\hat{P}_{j}\hat{s}_{j+1}^{-}\hat{P}_{j+2}+\text{H.c.}\right). \tag{8}\]

We obtain a similar expression as in Ref. [68], with a purely diagonal term at second order. This is despite the addition of the mass term that "breaks" the approximate su(2) algebra. This is because additional terms due to a nonzero \(\mu\) are nonresonant at low order except in the special case of \(\mu=0\). So additional terms due to a nonzero \(\mu\) will generally appear only at higher order when away from resonances.

### \(\chi=\pm 2\mu\) Resonance

One clear feature of both Hamiltonians (7) and (8) is that some of their terms diverge if \(\chi=\pm 2\mu\) due to a resonance condition. Let us focus on the case \(\chi=-2\mu\). Then we have that \(2\mu+(-1)^{j}\chi=2\mu\left[1-(-1)^{j}\right]\) is equal to \(4\mu\) for odd \(j\) and to \(0\) for even \(j\). So only spin-flips on odd sites lead to an energy change. In the limit \(\chi,\mu\gg\kappa\), the odd sites are then effectively frozen, as the energy cost to change them is too high. Hence, the Hilbert space fractures into \(2^{L_{m}/2}\) sectors corresponding to all the possible combinations of the spins on odd sites. The sites that are frozen in the excited position also freeze their neighboring sites due to the PXP constraint. The remaining spins on even sites that have no up-spin neighbors can be excited freely and independently, and thus the effective model at leading order is that of a free spin-1/2 paramagnet. We can extend our analysis to other values of \(\mu\) and \(\chi\) around the resonance, such that \(\chi=-2\mu+\gamma\) and \(\mu,\chi\gg\kappa,\gamma\). The leading Schrieffer-Wolff term is at first order and reads

\[\hat{H}^{(1)}_{\rm eff}=-\kappa\sum_{j\in\mathcal{K}}\hat{s}^{x}_{j}-\sum_{j=0}^{M}\gamma\hat{s}^{z}_{2j}=E_{0}-\sum_{j\in\mathcal{K}}\big{(}\kappa\hat{s}^{x}_{j}+\gamma\hat{s}^{z}_{j}\big{)}, \tag{9}\]

where \(\mathcal{K}\) is the set of all even sites with both neighbors frozen in the down position and \(E_{0}\) is the (constant) contribution of all frozen even sites (so with at least one neighbour in the up position).

Figure 6: (Color online). Root mean square of the spread of the connected correlator between Pauli \(Z\) matrices at the middle site and other sites. The exact and MPS data coincide well at shorter times, when the influence of the chain boundaries is small. In both cases, the confinement induced by \(\chi\) is clear.

Figure 5: (Color online). Observable dynamics after a quench from the Néel state with a defect with \(L_{m}=61\) and OBC. The data is truncated to only focus on sites near the middle of the chain where the defect was initially localized. (a)-(c) \(\chi=0\); (d)-(f) \(\chi=0.35\). The effect of confinement is visible for the ZZ correlator in panels (c) and (f). At late times, the Néel and anti-Néel domains are still clearly distinguishable in panels (a) and (b) but not in panels (c) and (d).
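A quick numerical check of this fracturing (assuming the constrained-basis construction from the sketch in Sec. III; with our 0-indexed convention, the resonant sublattice is the even one) is to keep only the resonant, energy-cost-free flips and count connected components of the resulting hopping graph:

```python
import numpy as np
from itertools import product
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

def count_fragments(L):
    """Fragments of the leading-order dynamics at the chi = -2*mu resonance."""
    basis = [s for s in product((0, 1), repeat=L)
             if all(not (s[i] and s[(i + 1) % L]) for i in range(L))]
    index = {s: n for n, s in enumerate(basis)}
    A = lil_matrix((len(basis), len(basis)), dtype=np.int8)
    for n, s in enumerate(basis):
        for j in range(0, L, 2):   # only the resonant sublattice may flip
            if not s[(j - 1) % L] and not s[(j + 1) % L]:
                t = list(s); t[j] ^= 1
                A[n, index[tuple(t)]] = 1
    ncomp, _ = connected_components(A.tocsr(), directed=False)
    return len(basis), ncomp

print(count_fragments(12))   # (322, 64): 2^(L/2) fragments, one per frozen pattern
```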
The effective Hamiltonian (9) is then clearly integrable for all values of \(\kappa\) and \(\gamma\). We show the effects of this resonance in quenching from the Neel and polarized states in Fig. 7. While this fragmentation affects all initial states, unlike QMBS, we only show these two initial states for brevity, as they are the most relevant ones for experimental preparation. For both of them, strong revivals of the wavefunction can be seen around the resonance, already at relatively small values of \(\chi\) and \(\mu\). For the Neel state, only \(\chi=-2\mu\) leads to revivals, and not \(\chi=2\mu\). Indeed, in the latter case this state is completely frozen, as one cannot de-excite the even sites, which in turn prevents the odd ones from being excited. For the polarized state the pattern is more complicated. This might seem surprising, as this state is in the same Hilbert-space sector as the Neel state. However, the terms in the Hamiltonian connecting the sectors (changing the state of odd sites) are much less suppressed for the polarized state than for the Neel state, as for the latter two even sites must first be de-excited before such a move can be done. Hence, even for large values of \(\mu\), the odd sites cannot be considered totally frozen, and we have some weak couplings between the Hilbert-space sectors. This leads to further restrictions for revivals in order to make all sectors aligned in energy, as shown in Fig. 8. The additional resonance condition can be cast as

\[\chi=\frac{n\sqrt{1+\gamma^{2}}}{2},\ \mu=\frac{\gamma-\chi}{2} \tag{10}\]

with \(n\) an integer (see Appendix C for details). The case \(\gamma=0\) yields the condition for \(\chi\) to be half-integer or integer. Alternatively, we can get rid of \(\gamma\) and write the relation between \(\mu\) and \(\chi\) directly as

\[\frac{(2\chi)^{2}}{n^{2}}-(2\mu+\chi)^{2}=1. \tag{11}\]

We recognize here the equation for a hyperbola, and it is shown in Fig. 7(b) for various values of \(n\). Fig. 8 illustrates how all eigenstates are approximately equally spaced as long as these equations are followed.

### Other resonances

Beyond \(\chi=\pm 2\mu\), there are other resonant ratios between \(\mu\) and \(\chi\). Exciting an even site leads to a change in energy of \(\Delta E_{1}=-2\mu-\chi\), while on an odd site it costs \(\Delta E_{2}=-2\mu+\chi\). In order to have resonances, we want \(\Delta E_{1}\) and \(\Delta E_{2}\) to be commensurate. The two simplest cases are if \(\Delta E_{1}\) is an integer multiple of \(\Delta E_{2}\) or vice versa. This leads to the resonance condition

\[\mu=\pm\frac{n+1}{2(n-1)}\chi, \tag{12}\]

with \(n\) an integer (which can also be negative or equal to \(0\)). These lines are plotted as red dashed lines in Fig. 7(b). The only notable ones that lead to changes in the effective Hamiltonian at order \(4\) or below are \(\chi=0\) and \(\mu=\frac{3}{2}\chi\). Note that unlike the resonance at \(\chi=\pm 2\mu\), in these two cases we do not expect to see revivals in Fig. 7 at large values of \(\chi\) and \(\mu\), as the polarized and Neel states become eigenstates. Nonetheless, revivals can be seen at intermediate values along these resonances for the polarized state.

Figure 7: (Color online). Self-fidelity from (a) the Néel state and (b) polarized state in the PXP model with \(L_{m}=20\). The black dash-dotted line shows the resonance line \(\chi=-2\mu\), while the red hyperbolas show the optimal revival around it. The red dashed lines show the other resonances \(\mu=\pm\frac{n+1}{2(n-1)}\chi\).
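Eqs. (10) and (11) above are easy to cross-check numerically; a minimal sketch (the function name is ours):

```python
import numpy as np

def resonance_point(n, gamma):
    """(chi, mu) on the n-th resonance of Eq. (10); verifies Eq. (11)."""
    chi = n * np.sqrt(1.0 + gamma**2) / 2.0
    mu = (gamma - chi) / 2.0
    # Eq. (11): (2*chi)^2 / n^2 - (2*mu + chi)^2 = 1, since 2*mu + chi = gamma.
    assert np.isclose((2 * chi) ** 2 / n**2 - (2 * mu + chi) ** 2, 1.0)
    return chi, mu

print(resonance_point(n=5, gamma=0.3))
```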
The effective Hamiltonian around \(\chi=0\) also leads to interesting properties. If \(\mu\gg\chi\), the total number of excitations becomes conserved. The effective Hamiltonian at second order then gains a nearest-neighbor XY-type term, and a small \(\chi\) will add an additional diagonal term at first order. The effective Hamiltonian then becomes

\[\hat{H}_{\rm eff}^{(1,2)}=-\frac{\kappa^{2}}{8\mu}\sum_{j=1}^{L_{m}}\Big{(}2\hat{P}_{j-1}\hat{s}_{j}^{z}\hat{P}_{j+1}+\hat{P}_{j-1}\hat{s}_{j}^{+}\hat{s}_{j+1}^{-}\hat{P}_{j+2}+\text{H.c.}\Big{)}-\sum_{j=1}^{L_{m}}(-1)^{j}\chi\hat{s}_{j}^{z}. \tag{13}\]

There is no fragmentation for that Hamiltonian, as each U(1) sector is fully connected. Interestingly, for \(\chi=0\) this effective Hamiltonian resembles the \(\mathcal{M}_{1}\) supersymmetric model of lattice fermions introduced in Ref. [82]. The two terms that compose the Hamiltonian are the same, but their prefactors are different. Nonetheless, the same mapping to an XXZ-type model can be done, albeit with a different value of \(\Delta\). Consequently, in each of the U(1) sectors the effective Hamiltonian is integrable. However, as soon as \(\chi\neq 0\), this Hamiltonian no longer maps to an XXZ-type model, and when \(\chi\) and \(\frac{\kappa^{2}}{8\mu}\) are of comparable strength we find level-spacing statistics close to Wigner-Dyson (see Appendix A). Overall, the interplay of the mass and confining terms allows for a vast array of regimes. It should be noted that while our Schrieffer-Wolff transformations are performed assuming that \(\mu\) or \(\chi\) are much larger than \(\kappa\), we observe the onset of fragmentation already for relatively small values of these parameters, such as \(\mu=0\), \(\chi=\kappa/2\) for \(L_{m}=28\). This can be seen in the dynamics and even in the energy-level statistics (see Appendix A). Nonetheless, this is likely a finite-size effect, as larger values of \(\chi\) and \(\mu\) are needed to escape the ergodic region as \(L_{m}\) is increased.

## V Experimental probes in a Bose-Hubbard simulator

The U(1) QLM has been experimentally implemented on a Bose-Hubbard quantum simulator for the deconfined case of \(\chi=0\), where gauge invariance was directly observed [13], and thermalization dynamics was probed [14]. More recently, an experimental proposal outlines how these experiments can be feasibly updated to study confinement in this model [26; 27]. The Hamiltonian of the tilted Bose-Hubbard model employed in these works is

\[\hat{H}_{\rm BHM}=-J\sum_{j=1}^{L-1}\big{(}\hat{b}_{j}^{\dagger}\hat{b}_{j+1}+\text{H.c.}\big{)}+\frac{U}{2}\sum_{j=1}^{L}\hat{n}_{j}\big{(}\hat{n}_{j}-1\big{)}+\sum_{j=1}^{L}\bigg{[}(-1)^{j}\frac{\delta}{2}+j\Delta+\frac{\chi_{j}}{2}\bigg{]}\hat{n}_{j}, \tag{14}\]

where \(J\) is the tunneling strength, \(U\) is the on-site interaction strength, \(\hat{b}_{j}\) and \(\hat{b}_{j}^{\dagger}\) are bosonic ladder operators satisfying the canonical commutation relations \(\big{[}\hat{b}_{j},\hat{b}_{r}^{\dagger}\big{]}\)=\(\delta_{j,r}\), \(\hat{n}_{j}\)=\(\hat{b}_{j}^{\dagger}\hat{b}_{j}\) is the bosonic number operator at site \(j\), and \(\Delta\) is an overall tilt. The staggering potential \(\delta\) distinguishes between matter sites (even \(j\)) and gauge links (odd \(j\)).
Connecting this to Eq. (1), \(\ell\) corresponds to even bosonic sites \(j\), while the link between \(\ell\) and \(\ell\)+1 corresponds to odd bosonic sites \(j\), where the bosonic model (14) hosts a total of \(L\)=\(2L_{m}\) sites, with \(L_{m}\) the number of matter sites. The second staggering potential,

\[\chi_{j}=\begin{cases}0&\text{if }\,j\,\text{mod}\,2=0,\\ \chi&\text{if }\,j\,\text{mod}\,4=1,\\ -\chi&\text{if }\,j\,\text{mod}\,4=3,\end{cases} \tag{15}\]

is related to the topological \(\theta\)-term, and in the bosonic lattice distinguishes between odd and even gauge links, but has no effect on the matter sites. The mapping between the bosonic and QLM representations is such that on an even bosonic site, which represents a site of the QLM, the presence of a single boson represents matter occupation, while no bosons means matter is absent. On an odd bosonic site, which represents a link of the QLM, zero (two) bosons indicate that the local electric field points down (up).

Figure 8: (Color online). Quenches from the polarized state in the PXP model with \(L_{m}=24\) around the resonance line, with \(\chi=5\sqrt{1+\gamma^{2}}\) and \(\mu=\frac{\gamma-\chi}{2}\). (a) Self-fidelity after a quench. As \(\gamma\) is varied, the period changes but the revivals remain close to perfect. (b) Overlap between the polarized state and the eigenstates. The color indicates the occupation density of the odd sites, and so the various sectors. The red dashed lines are all spaced in energy by \(\sqrt{1+\gamma^{2}}\), while the grey dashed line is placed with energy difference \(2\chi\) from the highest-overlap eigenstate. In all cases of \(\gamma\) we see that all eigenstates show close to equal spacing.

As such, we need to enforce these local occupations in an experiment. This is achieved in the regime of \(U,\delta\gg J,\mu\), where Eq. (1) derives from Eq. (14) up to second order in perturbation theory [13; 26]. The dominating terms of Eq. (14) are then diagonal in the on-site bosonic number operator, and can be collected as

\[\hat{H}_{\rm d}=\sum_{\ell}\bigg{\{}\frac{U}{2}\big{[}\hat{n}_{\ell}\big{(}\hat{n}_{\ell}-1\big{)}+\hat{n}_{\ell,\ell+1}\big{(}\hat{n}_{\ell,\ell+1}-1\big{)}\big{]}+\Big{[}(-1)^{\ell}\frac{\chi}{2}-\delta\Big{]}\hat{n}_{\ell,\ell+1}+\Delta\big{[}2\ell\hat{n}_{\ell}+(2\ell+1)\hat{n}_{\ell,\ell+1}\big{]}\bigg{\}}, \tag{16}\]

where we have resorted to the QLM indexing, which relates to that of Eq. (14) with \(\ell\) corresponding to an even bosonic site \(j\), while the link between sites \(\ell\) and \(\ell\)+1 corresponds to \(j\)+1, the odd (i.e., gauge) site between the even (i.e., matter) sites \(j\) and \(j\)+2. We can now define a "proto-Gauss's law" with the generator

\[\hat{\mathcal{G}}_{\ell}=(-1)^{\ell}\bigg{[}\frac{1}{2}\big{(}\hat{n}_{\ell-1,\ell}+\hat{n}_{\ell,\ell+1}\big{)}+\hat{n}_{\ell}-1\bigg{]}. \tag{17}\]

Relating the configurations allowed by Gauss's law in both the bosonic and QLM representations, we find that \(\mu=\delta-U/2\), which we can insert in Eq. (16) and utilize Eq. (17) to get

\[\hat{H}_{\rm d}=\sum_{\ell}\bigg{\{}\frac{U}{2}\big{[}\hat{n}_{\ell}\big{(}\hat{n}_{\ell}-1\big{)}+\hat{n}_{\ell,\ell+1}\big{(}\hat{n}_{\ell,\ell+1}-2\big{)}\big{]}+\Big{[}(-1)^{\ell}\frac{\chi}{2}-\mu\Big{]}\hat{n}_{\ell,\ell+1}+c_{\ell}\hat{\mathcal{G}}_{\ell}\bigg{\}}, \tag{18}\]

up to an inconsequential energy constant, with \(c_{\ell}\)=2\(\Delta(-1)^{\ell}\ell\).
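As a small illustration of Eqs. (14)-(15) (the function name and the 1-indexed site convention are our own choices), the full diagonal single-particle potential of the tilted BHM reads:

```python
import numpy as np

def onsite_potential(L, delta, Delta, chi):
    """Single-particle potential of Eq. (14): staggering delta, tilt Delta,
    and the period-4 term chi_j of Eq. (15), for 1-indexed sites j."""
    j = np.arange(1, L + 1)
    chi_j = np.where(j % 2 == 0, 0.0, np.where(j % 4 == 1, chi, -chi))
    return (-1.0) ** j * delta / 2.0 + j * Delta + chi_j / 2.0
```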
The formulation (18) with the "Stark" coefficients \(c_{\ell}\) has recently been shown to suppress couplings between different gauge sectors, and hence stabilize gauge invariance, up to all numerically accessible times [83], based on the concept of _linear gauge protection_ [84]. Looking at Eq. (18), we see that a large on-site potential constrains the local bosonic configurations on sites to \(\{\ket{0}_{\ell},\ket{1}_{\ell}\}\) and on links to \(\{\ket{0}_{\ell,\ell+1},\ket{2}_{\ell,\ell+1}\}\), as desired. The latter correspond to the local eigenstates of the operators \(\hat{\sigma}_{\ell}^{z}\) and \(\hat{s}_{\ell,\ell+1}^{z}\) of Eq. (1). In this regime, Eqs. (17) and (2) become equivalent. Using degenerate perturbation theory, the parameters of Eqs. (14) and (1) can be related through

\[\kappa=2\sqrt{2}J^{2}\bigg{[}\frac{\delta}{\delta^{2}-\Delta^{2}}+\frac{U-\delta}{(U-\delta)^{2}-\Delta^{2}}\bigg{]}, \tag{19}\]

where, as mentioned, \(\mu=\delta-U/2\) is the fermionic mass. The extended BHM Hamiltonian (14) can be realized by a three-period optical superlattice formed by standing waves of lasers. The main lattice laser with wavelength \(\lambda\) forms a lattice with period \(\lambda/2\), and two additional lattices have period \(\lambda\) and \(2\lambda\), respectively. These lasers can be phase-stabilized with respect to each other to generate the desired superlattice potential, as described in Ref. [26].

Figure 9: (Color online). Proposed experimental realization of the confinement-induced scarring and fragmentation in a Bose–Hubbard quantum simulator. (a) By tuning the confinement potential \(\chi\) from 0, regimes of deconfined scarring (\(\chi=0\)), confinement-enhanced scarring (\(\chi=0.35\kappa\)), and confinement-induced fragmentation (using the resonant case \(\chi=\kappa\), \(\mu=-0.5\kappa\) as an example) can be realized. (b) and (d): Schematic and MPS simulation of the deconfined scarring, the “state transfer” between the two vacua. (c) and (f): In the fragmented regime, at resonance, the Schwinger pair creation mechanism still happens, but dynamics are localized within each building block due to confinement. (e) MPS simulation of the confined scarring at \(\chi=0.35\kappa\). The enhancement of scarring in our numerical simulation for the BHM is less clear compared to the PXP model. This is mainly due to the boundary effects, and we expect better results for larger system sizes in the experiment, which is currently beyond the capability of our numerics.

Figure 10: (Color online). The experimental probe of a particle-antiparticle excitation in the vacuum background. (a) Preparation of the particle-antiparticle excitation in the Bose-Hubbard quantum simulator. (b) Schematic of the coupling between the excitation and the background in the deconfined case. Coupling between the unit cells leads to the spreading of the excitation. (e) and (g): MPS simulation of the light-cone spreading. (e) The particle density, (g) the density-density correlation between a center matter site with the particle and the rest of the matter sites. (c) In the fragmented case, at resonance \(\chi=\kappa,\mu=-0.5\kappa\), the defect is unable to couple to the neighboring unit cells due to the staggering \(\chi\), and is hence unable to spread, resulting in decoupled dynamics between the excitation and the background. (d) and (f): Same simulation as for (e) and (g), for the fragmented case. The excitation is localized due to confinement.
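Eq. (19) is straightforward to evaluate; as a hedged back-of-the-envelope sketch (with the experimental parameters quoted below, and \(\delta\) chosen here so that \(\mu=\delta-U/2=0\)), \(\kappa\) comes out at a few tens of Hz:

```python
import numpy as np

def qlm_kappa(J, U, delta, Delta):
    """kappa of Eq. (19); mu = delta - U/2 is the fermionic mass."""
    kappa = 2 * np.sqrt(2) * J**2 * (delta / (delta**2 - Delta**2)
                                     + (U - delta) / ((U - delta)**2 - Delta**2))
    return kappa, delta - U / 2

kappa, mu = qlm_kappa(J=58, U=1368, delta=684, Delta=57)
print(f"kappa ~ {kappa:.0f} Hz, mu = {mu:.0f} Hz")   # kappa ~ 28 Hz here
```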
The two vacuum states in the gauge theory correspond to states \(\ket{00200020\ldots}\) and \(\ket{20002000\ldots}\) in the BHM. They can be prepared with site-selective addressing techniques using the spin-dependent superlattice, also described in Ref. [26]. In keeping with experimental relevance, we have set the microscopic parameters of Hamiltonian (14) to \(U=1368\) Hz, \(J=58\) Hz, and \(\Delta=57\) Hz, which are close to the values of these parameters as employed in the experiment of Ref. [14]. We first investigate the crossover from deconfined scarring to fragmentation. In the deconfined case (\(\chi=0\)), when quenching the vacuum state to \(\mu=0\), the system displays scarred dynamics in the form of "state transfer" between the two vacuum states; see Fig. 9(b). We obtain the numerical results for the BHM using the TenPy toolkit [85]. The system shows persistent many-body oscillations between the two vacuum states, as can be seen in Fig. 9(d). When the confinement potential is increased to \(\chi=0.35\kappa\), the system remains ergodic, while scarring from the Neel state persists. This regime of confined scarring is discussed in Sec. III. The transfer to the opposite vacuum state is suppressed due to confinement, but dynamics between the unit cells are still present. However, the enhancement of scarring is not as clear as in the PXP model, mainly due to boundary effects; see Fig. 9(e). We expect the enhancement effect to be better for larger system sizes, as the boundaries become less relevant, but this is currently beyond the capability of our numerics. The system becomes fragmented when the confinement potential \(\chi\) is above \(0.5\kappa\). Here, we use the resonant case with \(\chi=\kappa\) and \(\mu=-0.5\kappa\) as an example. As shown in Fig. 9(c), unit cells marked in gray shades are tuned on resonance with \(\mu=-\chi/2\), while the adjacent unit cells are out of resonance. Therefore, dynamics are restricted within the unit cells; see the numerical results in Fig. 9(f). We further propose to probe confinement and fragmentation by investigating the spread of a particle-antiparticle excitation in the vacuum background. In the Bose-Hubbard quantum simulator, this state corresponds to a \(\ket{\ldots 0101\ldots}\) impurity at the center of the \(\ket{\ldots 0020\ldots}\) background; see Fig. 10(a). This state can be prepared by a local addressing operation in the optical superlattice, as described in Ref. [26]. In the deconfined case, the excitation couples with neighboring unit cells; see Fig. 10(b). The light-cone spreading can be seen in the numerical results in Figs. 10(e) and (g), calculated using MPS. In addition to the mean density \(\langle\hat{n}_{j}\rangle\) in panel (e), in panel (g) we calculate the density-density correlation between site 18 (a matter site with the excitation) and the rest of the matter sites, \(\langle\hat{n}_{\rm M}\hat{n}_{j}\rangle\). In the fragmented case, however, the dynamics are restricted within each unit cell, as we demonstrate for the case \(\chi=\kappa\) and \(\mu=-0.5\kappa\) in Fig. 10(c). The excitation is disconnected from the rest of the system and undergoes its own dynamics [see Fig. 10(d)]; therefore, no spreading can be observed in the density-density correlation [see Fig. 10(f)].
## VI Conclusions and outlook

Using a combination of analytic and numerical methods, we have calculated the dynamical phase diagram of the U(1) quantum link model with a topological \(\theta\)-term, or equivalently, the PXP model with mass and staggered magnetization terms. By tuning the topological \(\theta\)-angle on a quantum simulator as recently proposed in Ref. [26], we can controllably induce confinement in this model. Accordingly, we map out various ergodicity-breaking phases that can be observed experimentally by tuning the confining potential \(\chi\) and the effective mass \(\mu\) in a cold-atom setup. Starting from the ergodic phase at small values of the mass and confining potential, we study the crossover to the fragmented phase at large \(\chi\), and to the integrable phase at large \(\mu\). Additionally, we have identified various resonant processes in the \((\chi,\mu)\) dynamical phase diagram, which we have analyzed in detail. We further uncovered regimes of robust quantum many-body scarring in the presence of confinement. Our results are readily accessible in modern cold-atom quantum simulators [13; 14] by adding a third staggering potential in the associated tilted Bose-Hubbard optical superlattice. Our findings highlight a very interesting phenomenological and technological aspect. They show that confinement and quantum many-body scarring, two paradigmatic phenomena of gauge theories, can arise concomitantly. Importantly, our results also show that this behavior can be captured on current quantum simulators, which is encouraging given the current drive of probing high-energy physics on table-top quantum devices. Our work also further sheds light on the question of the gauge-theoretic origin of quantum many-body scarring. Previously, scarring and confinement were thought to arise in completely different regimes, but here we show that they can coexist and give rise to interesting dynamics. Confinement is a fundamentally gauge-theoretic phenomenon, so it is interesting to further investigate the nature of confined scarring in more generic gauge theories such as \(\mathbb{Z}_{2}\) lattice gauge theories [86] or quantum link models in higher spatial dimensions [61].

###### Acknowledgements.

J.C.H. acknowledges stimulating discussions with Pablo Sala. The authors are grateful to Aiden Daniel, Andrew Hallam, Philipp Haule, Ana Hudomal, Jian-Wei Pan, and Zhen-Sheng Yuan for discussions and works on related topics. J.C.H. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 948141) -- ERC Starting Grant SimUeQuam, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2111 - 390814868. I.P.M. acknowledges support from the Australian Research Council (ARC) Discovery Project Grants No. DP190101515 and DP200103760. J.-Y.D. and Z.P. acknowledge support by EPSRC grant EP/R513258/1 and by the Leverhulme Trust Research Leadership Award RL-2019-015. Statement of compliance with EPSRC policy framework on research data: This publication is theoretical work that does not require supporting research data.

## Appendix A Level statistics with \(\mu\) and \(\chi\)

In order to identify non-ergodic regimes, we can resort to computing level-spacing statistics, and in particular the \(\langle r\rangle\) value. This is only meaningful once symmetries have been resolved, and we consider this quantity only in the most symmetric sector.
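A minimal sketch of the \(\langle r\rangle\) diagnostic (applied after symmetry resolution, discussed next; the trimming fraction is our choice):

```python
import numpy as np

def mean_r(energies, trim=0.1):
    """Mean level-spacing ratio <r>: ~0.53 for Wigner-Dyson (GOE) statistics,
    ~0.39 for Poisson statistics."""
    E = np.sort(np.asarray(energies))
    k = int(trim * len(E))                 # drop the spectral edges
    s = np.diff(E[k:len(E) - k])
    s = s[s > 1e-12]                       # guard against exact degeneracies
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()
```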
The relevant symmetries and the sector will change depending on whether \(\mu\) and \(\chi\) are zero or not. Let us denote the translation operator by \(\hat{\mathcal{T}}\) and the spatial inversion operator by \(\hat{\mathcal{L}}\). When \(\chi=0\), the Hamiltonian commutes with both \(\hat{\mathcal{T}}\) and \(\hat{\mathcal{L}}\), and we consider the symmetry sector with momentum \(k=0\) and spatial inversion eigenvalue \(p=+1\). When \(\chi\neq 0\), the Hamiltonian no longer commutes with either \(\hat{\mathcal{T}}\) or \(\hat{\mathcal{L}}\) (as we consider even system sizes). However, it does commute with \(\hat{\mathcal{T}}^{2}\) and \(\hat{\mathcal{T}}\hat{\mathcal{L}}\). Just as \(\hat{\mathcal{T}}\) and \(\hat{\mathcal{L}}\) commute in the zero-momentum sector, \(\hat{\mathcal{T}}^{2}\) and \(\hat{\mathcal{T}}\hat{\mathcal{L}}\) commute when the momentum \(k_{2}\) (with respect to \(\hat{\mathcal{T}}^{2}\)) is \(0\). So in these cases we focus on the symmetry sector \(k_{2}=0\) and \(p_{T}=+1\). Finally, as shown in the SM [73], when \(\mu=0\) the Hamiltonian has a particle-hole symmetry and consequently a large number of zero modes. So we always truncate the spectrum to remove these modes. We also remove the edges of the spectrum. Once these symmetries have been resolved, for small values of \(\mu\) and \(\chi\) we expect the system to consist of a single connected component with a non-integrable Hamiltonian. But when fragmentation sets in, we have several disconnected sectors that do not have level repulsion between them. We have also seen that away from resonances the effective Hamiltonian at second order is non-interacting and so trivially integrable. As such, we expect a quick departure from Wigner-Dyson level statistics as soon as fragmentation starts to occur. Figure 11 shows that a value of \(\chi\) of \(\sim 0.5\kappa\) is enough to break ergodicity for 26 sites. We note that for such values of \(\chi/\kappa\), the energy spacing between sectors is not many orders of magnitude larger than the intra-sector spacing. Following a quench, we see the wave function spreading into other sectors after a finite time that grows with \(\chi/\kappa\). Hence we dub this regime "prethermal fragmented." This is confirmed in Fig. 12, which shows the scaling of the \(r\) ratio for various values of \(\chi\) and \(L_{m}\). As expected, a larger perturbation is needed to induce fragmentation as \(L_{m}\) is increased. Going away from \(\mu=0\), we see a non-monotonic behavior of \(\langle r\rangle\) as \(\chi\) is increased. This is manifested by the lighter "rays" emanating from region I and into region IV of Fig. 11. These seemingly special ergodic lines are also visible in the top panel of Fig. 13, which shows the influence of \(\chi\) for various fixed \(\mu\) (with a finer \(\chi\) resolution than Fig. 11). In order to show that these regions are not ergodic either, we will concentrate on the case with \(\mu=0.8\) and \(\chi=0.385\), which corresponds to a local maximum of \(\langle r\rangle\) for that value of \(\mu\). We compute the integrated density of states

\[G(\epsilon)=\frac{1}{\mathcal{D}}\sum_{E}\Theta(\epsilon-E), \tag{A1}\]

which simply counts what fraction of energy levels lies below energy \(\epsilon\).

Figure 11: (Color online). Level spacing statistics in the PXP model with PBC with \(L_{m}=28\). All values below \(0.39\) are set to this value and all values above \(0.53\) are set to \(0.53\). The red lines show the approximate limits between the different regimes.
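The integrated density of states is simply the normalized counting function of the sorted spectrum; as a one-line sketch (function name ours):

```python
import numpy as np

def integrated_dos(energies):
    """G(eps) evaluated at each level: fraction of levels below eps."""
    E = np.sort(np.asarray(energies))
    return E, np.arange(1, len(E) + 1) / len(E)
```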
For a fully chaotic system in which symmetries have been resolved, we expect \(G(\epsilon)\) to be smooth. However, plotting \(G\) for \(\mu=0.8\) and \(\chi=0.385\) as in Fig. 14(a) shows clear plateaus, indicating gaps where no levels are found. If we now consider each set of eigenstates between the flat plateaus as independent spectra, we can unfold them and plot the distribution of level spacings as in Fig. 14(b). They show good agreement with the Wigner surmise, and consequently the \(\langle r\rangle\) value is close to \(0.53\). However, due to this splitting into bands, the full system is clearly not ergodic. Thus, we understand the non-uniformity of region IV in Fig. 11 as the result of the interplay between disconnected components. Indeed, the first few orders of the effective Hamiltonian for generic \(\mu\) and \(\chi\) do not connect the entire Hilbert space together. So for large enough values of these parameters, we will end up with an extensive number of disconnected components that can be diagonalized independently. Each of these will have energies centered around some energy \(\overline{E}\), which depends on \(\mu\) and \(\chi\). Around resonances, several disconnected components can end up close in energy. This can happen without them becoming connected in the low-order terms of the effective Hamiltonian. In that case, their energy levels will mix, thus impacting the energy spacing and the average \(\langle r\rangle\) value. This will lead to a lower \(\langle r\rangle\) value, as in the case of unresolved symmetries.

Figure 14: (Color online). Level spacing statistics in the PXP model with PBC with \(L_{m}=28\) for \(\mu=0.8\) and \(\chi=0.385\). The staircase-like behavior of \(G(E)\) in (a) is emblematic of the presence of sectors with many energy levels separated by empty intervals where \(G(E)\) does not grow. The red dashed lines in the inset indicate the fraction of eigenstates chosen as limits between sectors. In each sector the spectrum was then unfolded and the probability distribution of level spacings \(s\) was plotted in (b).

Figure 12: (Color online). Level spacing statistics in the PXP model with PBC and \(\mu=0\). Each colored curve corresponds to a different system size. The red dashed lines indicate the expected values for Poisson and Wigner-Dyson distributions. The inset shows the system-size scaling at the optimal value of \(\chi\) to maximize revivals from the Néel state.

Figure 13: (Color online). Level spacing statistics in the PXP model with PBC with \(L_{m}=30\). Top: Each curve corresponds to a fixed value of \(\mu\). The values for \(\chi=0\) are not included as they have different symmetries. As \(\mu\) is increased, a lower value of \(\chi\) is needed to deviate from \(0.53\). For \(\mu\geq 0.5\), one can see peaks in the \(r\) value. However, further investigation shows that these regimes are not truly ergodic either. Bottom: Results for various values of \(\mu\) with fixed \(\chi\). Note that both cases have different spatial symmetries, which are resolved.

For \(\mu\) the picture is different, as even for \(\mu=1.2\) and \(\chi=0.1\) the \(\langle r\rangle\) value is close to \(0.53\). However, further investigation shows that this does not indicate the Hamiltonian is fully ergodic. Indeed, for a small \(\chi\) and a large \(\mu\) the dominant Hamiltonian is the one in Eq. (S19), which is non-integrable for any nonzero \(\chi\).
We still have \(L_{m}/2+1\) U(1) sectors (corresponding to the total excitation number), but they are very far apart in energy due to \(\mu\) being large. So their energies do not overlap and as such do not create any accidental degeneracies (or near-degeneracies). The level spacing statistics is dominated by the spacings _within_ each sector, which all obey the Wigner-Dyson distribution. Hence starting in a sector will lead to prethermalization within that sector for an exponentially long time depending on \(\mu/\kappa\), before the system then starts to explore the other ones. This means that at short and intermediate times, for the system sizes investigated, the dynamics will only explore a part of the system and so it is non-ergodic. If instead \(\chi=0\), then the Hamiltonian in each U(1) sector is integrable and we see much faster convergence towards Poissonian statistics with \(\mu\) (see bottom panel of Fig. 13). Nonetheless, as we have noted above, for \(\mu\gg\chi,\kappa\) there is an emergent U(1) conservation law at short and intermediate times but no fragmentation within each U(1) sector.

## Appendix B Algebra enhancement for \(\mu=0\)

In this section, we show how the confining potential \(\chi\) leads to an enhancement of the su(2) algebra linked to QMBS from the Néel state. In order to understand the origin of the non-monotonicity of scarring with \(\chi\), we can look at the properties of the top band of states. To do that we algorithmically identify them for various values of \(\chi\) between \(0\) and \(0.5\) based on their energies and overlap with the Néel state. In this regime we still see relatively clear towers of states throughout the spectrum, with a state at the top that is well separated from the rest. In Fig. 15, we show that the total overlap between these states and the Néel state has a local maximum around \(\chi=0.34\). At the same time, the standard deviation of the level spacings between consecutive eigenstates in the top band has a minimum around \(\chi=0.28\). Both of these phenomena are expected to lead to better revivals, and would also generally indicate that the approximate su(2) algebra obeyed by the scarred states is "closer" to being exact.

In order to show this in a more quantitative way, we turn to the forward scattering approximation (FSA) [50]. This method allows us to construct approximations to the \(L_{m}+1\) states in the top band. This completely removes the problem of eigenstate hybridization and the risk of error in the identification of the top band states. We decompose the Hamiltonian into three parts as \[\hat{H}=\hat{H}^{+}+\hat{H}^{-}+\hat{H}^{z}, \tag{20}\] where \(\hat{H}^{+}=\sum_{j=1}^{L_{m}/2}\left(\hat{s}_{2j-1}^{+}+\hat{s}_{2j}^{-}\right)\), \(\hat{H}^{-}=\left(\hat{H}^{+}\right)^{\dagger}\), and \(\hat{H}^{z}=\sum_{l}(-1)^{l}\chi\hat{s}_{l}^{z}\) is the diagonal part.

We form the FSA basis states using \(\hat{H}^{+}\), and as such these are the same for any value of \(\chi\). All states in the FSA basis are eigenstates of \(\hat{H}^{z}\), and so this operator is perfectly captured by this approximation. These two facts mean that the energy subspace variance of the FSA does not vary with \(\chi\), and so it cannot be used as a metric for the algebra being correct. Concretely, we form the FSA basis states by starting from the Néel state and applying \(\hat{H}^{+}\). This means that the basis state \(|F_{k}\rangle\propto\left(\hat{H}^{+}\right)^{k}|Z_{2}\rangle\). The procedure terminates at \(|F_{L_{m}}\rangle\), as applying \(\hat{H}^{+}\) to this state annihilates it.
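In practice the FSA amounts to a short Lanczos-like iteration. A minimal sketch, assuming dense real NumPy matrices `Hp` and `Hz` for \(\hat{H}^{+}\) and \(\hat{H}^{z}\) in the constrained basis and a vector `neel` for \(|Z_{2}\rangle\) (the function name is illustrative):

```python
import numpy as np

def fsa_hamiltonian(Hp, Hz, neel):
    # FSA basis |F_k> ~ (H^+)^k |Z2>.  Each application of H^+ changes the
    # Hamming distance from the Néel state by one, so the |F_k> are
    # orthogonal without any extra orthogonalization step.
    F = [neel / np.linalg.norm(neel)]
    beta = []                                  # off-diagonal couplings beta_n
    v = Hp @ F[-1]
    while np.linalg.norm(v) > 1e-10:           # terminates at |F_{L_m}>
        beta.append(np.linalg.norm(v))
        F.append(v / beta[-1])
        v = Hp @ F[-1]
    alpha = [f @ (Hz @ f) for f in F]          # diagonal entries alpha_n
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

The result is the tridiagonal matrix discussed below, in which only the diagonal depends on \(\chi\).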
As noted above, this construction produces the same basis states for any value of \(\chi\), including \(\chi=0\), and since all \(|F_{k}\rangle\) are eigenstates of \(\hat{H}^{z}\), this operator only adds a diagonal term to the FSA Hamiltonian. This gives us a relatively simple picture, with a tridiagonal Hamiltonian where only the diagonal depends on \(\chi\). Nonetheless, we can still compute the revivals and the standard deviation of the eigenstate spacing in the FSA subspace up to \(L_{m}=100\). This is shown in Fig. 16 and is very similar to the results in the full system.

The simpler form of the FSA Hamiltonian (a tridiagonal matrix) also allows us to get a clearer picture of how the algebra is improved by the \(\chi\) term. Indeed, if the algebra were exact, the \(\beta_{n}\) on the off-diagonal should obey \(\beta_{n}=\lambda\sqrt{(n+1)(N-n)}\), where \(\lambda\) is an overall strength factor. We can match the off-diagonal couplings of PXP for \(n=1,2,N-2\) and \(N-1\) by setting \(\lambda=1/\sqrt{2}\). However, the middle couplings are too low in strength. As stated before, adding a nonzero \(\chi\) does not modify these off-diagonal couplings but adds diagonal terms \(\alpha_{n}=-\chi(N-n)\).

In order to visualize the effect of \(\chi\) on the algebra, we can get rid of the diagonal entries through a change of basis. Indeed, if the algebra were exact, the FSA matrix would simply be \(\frac{1}{\sqrt{2}}\hat{S}^{x}+\chi\hat{S}^{z}\), and applying a rotation of angle \(\theta=\tan^{-1}(\sqrt{2}\chi)\) around the \(Y\) axis would change it to \(\sqrt{1/2+\chi^{2}}\hat{S}^{x}\). As the algebra is not exact, this is no longer the case. However, we find that there is an optimal angle \(\theta^{\star}\) that still leads to a matrix that has large entries on the two off-diagonals and small values everywhere else. For each \(\chi\), we find the optimal \(\theta^{\star}\) such that the norm of the diagonal is minimized. The values of the off-diagonal couplings \(\beta_{n}\) will then vary with \(\chi\), and we can compare them to those of the spin matrix \(\hat{S}^{x}\) that would lead to perfect revivals. Figure 17 shows this for \(L_{m}=100\) and for a large range of \(\chi\). For small values of \(\chi\), the middle couplings rise faster than the edge ones, and thus "correct" the former, which are lower than the exact su(2) ones, before overshooting. Once \(\chi\) becomes large, the edge couplings start to catch up, and will match the su(2) ones in the limit \(\chi\to\infty\). The other point where the \(\beta_{n}\) couplings resemble the su(2) ones the most is around \(\chi=0.4\); this can be seen more clearly in Fig. 18. In order to make sure that the matrix obtained after the change of basis only has nonzero matrix elements on the upper and lower diagonals, we can look at the ratio of its norm with and without these elements. This is shown in Fig. 18, and the results are of the order of \(1\%\) for all \(\chi\). Importantly, these undesired couplings are minimal for \(\chi=0\) and \(\chi=\infty\).

Figure 15: (Color online). Properties of the top band of \(N+1\) scarred eigenstates in the PXP model with \(L_{m}=20\). Top: total overlap between the scarred states and the Néel state. Bottom: standard deviation of the spacing between consecutive scarred eigenstates. As the scarred eigenstates were detected algorithmically using the energy and overlap with the Néel state, the identification might not be totally accurate for all values of \(\chi\).
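The rotation described above can be carried out numerically in a few lines. A minimal sketch, assuming the tridiagonal matrix `H_fsa` from the previous sketch (the function names are ours):

```python
import numpy as np
from scipy.linalg import expm

def spin_sy(N):
    # S^y for a spin N/2 in the (N+1)-dimensional basis, built from S^+
    # (the sign convention only flips the direction of the rotation).
    n = np.arange(N)
    sp = np.diag(np.sqrt((n + 1) * (N - n)), -1)
    return (sp - sp.T) / 2j

def rotate(H_fsa, theta):
    # Rotate the FSA Hamiltonian by angle theta about the Y axis.
    N = H_fsa.shape[0] - 1
    R = expm(-1j * theta * spin_sy(N))         # real orthogonal matrix
    return np.real(R @ H_fsa @ R.conj().T)

def optimal_angle(H_fsa, thetas=np.linspace(0.0, np.pi / 2, 2001)):
    # theta* minimizes the norm of the diagonal of the rotated matrix.
    norms = [np.linalg.norm(np.diag(rotate(H_fsa, t))) for t in thetas]
    return thetas[int(np.argmin(norms))]
```

The off-diagonals of `rotate(H_fsa, optimal_angle(H_fsa))` can then be compared against \(\lambda\sqrt{(n+1)(N-n)}\), and the norm of the residual entries gives the \(\sim 1\%\) figure quoted above.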
The competition of these various minima creates the picture that we see in the full model, with revivals peaking at \(\chi=0\), \(\chi\approx 0.35\), and \(\chi\to\infty\). We note that other, more complex approximation schemes can also be used to study the nonmonotonic behavior in \(\chi\). In the SM [73], we use the symmetric subspace approximation [87], which captures more states than the FSA. Another advantage of this approximation is that it allows us to obtain the classical limit for the dynamics from the Néel state. Even in this case, the same non-monotonic behavior with respect to \(\chi\) can be witnessed, showing the strong robustness of this effect.

Figure 16: (Color online). Properties of the FSA eigenstates for various system sizes. Both the best revival fidelity and the most equal energy spacing show optimal points between \(0.3\) and \(0.5\).

Figure 17: (Color online). Upper and lower diagonal couplings of the FSA Hamiltonian after rotation for \(L_{m}=100\). The pink curve in the lower panel corresponds to the couplings for an exact su(2) algebra.

Figure 18: (Color online). Properties of the FSA Hamiltonian after rotation for \(L_{m}=100\). Top panel: ratio between the norm of the matrix without the upper- and lower-diagonal, and with it. This quantity is low for all values of \(\chi\), showing that the matrix is close to a tridiagonal one.

## Appendix C Polarized state at \(\chi\approx\pm 2\mu\)

At the resonance point \(\chi\approx-2\mu\), the Hilbert space fractures when \(\chi,\ \mu\gg\kappa\), as creating an excitation on an odd site is extremely costly in energy. However, when \(1<\chi/\kappa<10\) such moves are still possible from the polarized states. As a consequence, the dynamics is not restricted to the Hilbert space sector with no excitations on the even sites; the sector with one excitation on these sites also has some nonzero contribution. For a detuning \(\gamma=2\mu+\chi\), the Hamiltonian in each fragment is given by Eq. (9) in the main text. It is straightforward to see that the energy spacing between eigenstates in that system is \(\sqrt{1+\gamma^{2}}\) for \(\kappa=1\). Meanwhile, the energy spacing between the "centers" of fragments with respectively \(0\) and \(1\) excitations on odd sites is largely unaffected by \(\gamma\) and is approximately \(2\chi\). As a consequence, in order for all energies to be spaced by a regular amount we require one of these quantities to be a multiple of the other. As \(\chi\gg\gamma\), it must be that \(2\chi>\sqrt{1+\gamma^{2}}\), and so the condition can be expressed as \[\chi=\frac{n\sqrt{1+\gamma^{2}}}{2},\ \mu=\frac{\gamma-\chi}{2} \tag{10}\] with \(n\) an integer. Alternatively, one can combine both equations to remove \(\gamma\) and get \[\frac{\left(2\chi\right)^{2}}{n^{2}}-\left(2\mu+\chi\right)^{2}=1, \tag{11}\] which parametrizes a hyperbola. There is such a hyperbola for any \(n\), although the agreement gets better as \(n\) increases, as the condition \(\mu,\chi\gg\gamma,\kappa\) is better satisfied. At the same time, points between hyperbolas also get better revivals as \(\mu\) and \(\chi\) increase. This is best seen on the resonance line \(\gamma=0\). In that case, the additional condition for energies to align is simply to require \(\chi\) to be a half-integer or integer multiple of \(\kappa\), as the spacing within each sector is simply \(\kappa\). The effect of this can be seen in Fig. 19 for \(\kappa=1\). When the condition is met, the polarized state revives with period \(2\pi\).
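The family of resonance curves in Eqs. (10) and (11) is easy to tabulate by parametrizing each hyperbola with the detuning \(\gamma\). A minimal sketch for \(\kappa=1\) (the function name is ours):

```python
import numpy as np

def resonance_curve(n, gamma):
    # n-th hyperbola, parametrized by the detuning gamma (kappa = 1):
    # chi = n*sqrt(1 + gamma^2)/2  and  mu = (gamma - chi)/2, cf. Eq. (10).
    gamma = np.asarray(gamma, dtype=float)
    chi = n * np.sqrt(1.0 + gamma**2) / 2.0
    mu = (gamma - chi) / 2.0
    return mu, chi

# Sanity check: every point satisfies (2 chi)^2 / n^2 - (2 mu + chi)^2 = 1.
mu, chi = resonance_curve(3, np.linspace(-0.5, 0.5, 11))
assert np.allclose((2 * chi) ** 2 / 9 - (2 * mu + chi) ** 2, 1.0)
```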
One can monitor that period, as well as the fidelity at the first revival, along the resonance line \(\chi=-2\mu\) as \(\chi\) is increased. Figure 20 shows this for the Néel and polarized states. As expected, the fidelity gets better as \(\chi\) is increased, because other Hilbert space sectors are more suppressed.

Figure 19: (Color online). Quenches from the polarized state in the PXP model with \(L_{m}=24\) along the resonance line \(\chi=-2\mu\). (a) Self-fidelity after a quench. When \(\chi\) is an integer or half-integer, the first peak is significantly higher. (b) Overlap between the polarized state and the eigenstates. The color indicates the number of excitations on odd sites and so differentiates states in different sectors. The red dashed lines are all equally spaced by one unit of energy. When \(\chi\) is an integer or half-integer, the spacing between sectors is a multiple of the energy spacing within each of them and all eigenstates are approximately equally spaced.

Figure 20: (Color online). Self-fidelity and period of revival in the PXP model with \(L_{m}=22\) along the resonance line \(\chi=-2\mu\) for the polarized and Néel states. As \(\chi\) gets large, contributions from sectors with excitations on odd sites become smaller and the further resonance condition that \(\chi\) be an integer or half-integer matters less.
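The revival traces shown in Figs. 19 and 20 follow directly from the spectral decomposition of the initial state, since \(\langle\psi_{0}|e^{-i\hat{H}t}|\psi_{0}\rangle=\sum_{n}|c_{n}|^{2}e^{-iE_{n}t}\). A minimal sketch, assuming a dense Hamiltonian `H` and an initial state `psi0` as NumPy arrays:

```python
import numpy as np
from scipy.linalg import eigh

def self_fidelity(H, psi0, times):
    # F(t) = |<psi0| exp(-iHt) |psi0>|^2 in the eigenbasis of H.
    E, V = eigh(H)
    w = np.abs(V.conj().T @ psi0) ** 2        # overlaps |c_n|^2
    return np.array([np.abs(np.sum(w * np.exp(-1j * E * t))) ** 2
                     for t in times])
```

When the dominant overlaps sit on equally spaced energies, \(F(t)\) returns close to one at multiples of the corresponding period, which is exactly the alignment mechanism discussed above.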
2302.06159
MeV Gamma-Ray Constraints for Light Dark Matter from Semi-Annihilation
Light dark matter (DM), with a mass in the range of 1 MeV to 1 GeV, is a fascinating topic in dark matter research, both theoretically and experimentally. We assume that the light dark matter is composed of complex scalars and is produced through semi-annihilation, whose products lie close to the energy scale of MeV gamma-ray satellites, allowing us to explore the implications of this hypothesis. The experimental data we use to constrain the scenario come from five different sources: COMPTEL, EGRET, INTEGRAL, the Fermi Gamma-ray Space Telescope, and the e-ASTROGAM future reach. We use analytical formulas for the gamma-ray spectra, allowing us to determine annihilation cross-section bounds from $10^{-28}\mathrm{cm}^3/\mathrm{s}$ to $10^{-22}\mathrm{cm}^3/\mathrm{s}$ for different combinations of dark matter and mediator masses. We find that MeV gamma rays provide valuable insight into the structure of semi-annihilating dark matter: EGRET contributes the most stringent existing constraint on semi-annihilation, and the e-ASTROGAM future reach could probe the whole parameter space of the model.
Jun Guo, Lei Wu, Bin Zhu
2023-02-13T07:44:41Z
http://arxiv.org/abs/2302.06159v1
# MeV Gamma-Ray Constraints for Light Dark Matter from Semi-Annihilation

###### Abstract

Light dark matter (DM), with a mass in the range of 1 MeV to 1 GeV, is a fascinating topic in dark matter research, both theoretically and experimentally. We assume that the light dark matter is composed of complex scalars and is produced through semi-annihilation, whose products lie close to the energy scale of MeV gamma-ray satellites, allowing us to explore the implications of this hypothesis. The experimental data we use to constrain the scenario come from five different sources: COMPTEL, EGRET, INTEGRAL, the Fermi Gamma-ray Space Telescope, and the e-ASTROGAM future reach. We use analytical formulas for the gamma-ray spectra, allowing us to determine annihilation cross-section bounds from \(10^{-28}\)cm\({}^{3}\)/s to \(10^{-22}\)cm\({}^{3}\)/s for different combinations of dark matter and mediator masses. We find that MeV gamma rays provide valuable insight into the structure of semi-annihilating dark matter: EGRET contributes the most stringent existing constraint on semi-annihilation, and the e-ASTROGAM future reach could probe the whole parameter space of the model.

## I Introduction

Dark matter is a hypothetical form of matter that cannot be seen but must exist because of the gravitational effects it has on our universe and the formation of galaxies. It is widely believed that dark matter is composed of particles that do not emit light or other radiation; it accounts for about 27% of the total mass and energy in the universe and is difficult to detect by conventional techniques. Since we are unsure of its composition and behavior, dark matter ranks among the most significant riddles of present-day physics. There are several possible candidates for dark matter, but the most theoretically appealing ones are WIMPs (Weakly Interacting Massive Particles) [1], since their production mechanisms are naturally related to thermal equilibrium. The relic density of dark matter is determined by the freeze-out process, occurring when dark matter particles become so rare that their number no longer changes significantly with time evolution. By solving the Boltzmann equation describing the evolution of the dark matter number density, we can determine the amount of dark matter present in the universe today. These particles are massive and interact only weakly with the visible sector, making them very difficult to detect. There are several approaches to searching for them, including direct detection [2] and indirect detection [3], both of which provide robust constraints on WIMPs [4]. Direct detection experiments search for the scattering of WIMPs off nuclei in detectors, and indirect detection experiments search for the products of WIMP annihilations, such as gamma rays, positrons, and anti-protons. The parameter space that typically governs dark matter self-annihilation also dictates the dark matter-nucleon scattering cross-section. However, the dark matter-nucleon scattering cross-section is now severely restricted by direct detection experiments, which rules out most of the available parameter space of WIMP annihilation [5]. One easy way to weaken the direct detection bounds is to decrease the dark matter mass [6; 7], as the direct detection constraint is not applicable when the recoil energy is smaller than the detector threshold.
This is because the conventional detection approach relies on detecting the tiny amounts of energy deposited by DM via nuclear recoils, which is rendered useless for DM considerably lighter than a typical nucleus. Therefore, efforts are being made to create novel detection techniques, such as the application of new targets [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] or novel processes [19; 20; 21], most of which belong to direct detection, while only a few pieces of research [22; 23; 24; 25; 26] focus on the indirect detection of light dark matter.

Another strategy is to disentangle direct detection observables from the dark matter relic density by dissolving the link between the dark matter annihilation and scattering cross-sections. The link is based on the widely held belief that dark matter is stabilized by a \(Z_{2}\) parity. However, this situation is not generic and should not be a rule of thumb. Each scenario other than \(Z_{2}\) parity should be evaluated on its own merits, among which the \(Z_{3}\) parity is a natural and minimal extension. Processes that involve an odd number of dark matter fields can then be found without leading to DM decay. If we restrict ourselves to \(2\to 2\) annihilations, there is only one such process: semi-annihilation [27], which has been probed in the context of indirect detection signatures for GeV-scale DM particles [28; 29]. To our best knowledge, there is no research on light dark matter produced by semi-annihilation in the literature.

The main idea of this paper is to fill that gap: we use MeV gamma rays as a probe of the properties of light dark matter in semi-annihilation. The two main and novel ingredients are: (i) the dark matter belongs to the sub-GeV regime, which has never been considered in semi-annihilation. Over the years, most of the experiments conducted in particle physics have focused on either Weakly Interacting Massive Particles (WIMPs) or axions, leaving out other potential alternatives which are equally valid and justified. While these two particles have been the primary focus of research, it is crucial to recognize the value of exploring other avenues of inquiry. (ii) The semi-annihilation mechanism lacks a definite signature in MeV gamma-ray searches in the literature, since experiments such as Fermi-LAT lose sensitivity in this energy range. Indeed, research has already demonstrated the efficacy of utilizing such data to constrain Light Dark Matter (LDM) models. As a result, scientists have been able to narrow down the range of possible LDM candidates, and thus gain a better understanding of the nature of dark matter, which has been a step in the effort to learn more about the Universe. We will show that such an approach generates the most severe constraints on semi-annihilating light dark matter. We mention that we do not consider semi-annihilation processes involving SM particles in the final state: since the dark matter is sub-GeV, the Z and Higgs boson final states are kinematically forbidden. The canonical realization is to include a scalar or vector mediator, which can couple to the standard model particles and dark matter simultaneously. In this paper, we choose a scalar Higgs-portal mediator and scalar dark matter as a representative framework, while other spin assignments for the dark matter and mediator are straightforward generalizations.
## II Semi-annihilation dark matter: models, relic density and direct detection constraints

### \(Z_{3}\) dark matter model

Our dark matter (DM) model is motivated by \(Z_{3}\) symmetry and its semi-annihilation mechanism. Even though it has been introduced several times before [30; 31; 32; 27], our work focuses on MeV-scale DM, so we first give a brief introduction to our \(Z_{3}\) DM model. The minimal form of the well-known \(Z_{3}\) DM model contains only one complex scalar \(S\). In this work we focus on MeV-scale DM, and to make the DM annihilate efficiently, we introduce another scalar field \(\Phi\), which is a \(Z_{3}\) singlet and gauge singlet. Under these assumptions, the Lagrangian of our model contains the following terms: \[-\mathcal{L}_{Z_{3}}\supset M_{s}^{2}SS^{*}+\lambda_{sh}|S|^{2}|H|^{2}+\lambda_{s\phi}|S|^{2}|\Phi|^{2}+\left(\frac{A_{s}S^{3}}{3}+\lambda_{h\phi}|H|^{2}|\Phi|^{2}+c.c\right) \tag{1}\]

The DM candidate receives its mass from the bare mass term \(M_{s}^{2}SS^{*}\), the Higgs portal term \(\lambda_{sh}|S|^{2}|H|^{2}\), and the \(\Phi\) coupling term \(\lambda_{s\phi}|S|^{2}|\Phi|^{2}\). Because of the strong limits on invisible Higgs decays, we suppress the Higgs-portal coupling term by hand, so the squared mass of the DM is \[m_{S}^{2}\simeq M_{s}^{2}+\lambda_{s\phi}v_{\phi}^{2}, \tag{2}\] where we have written \(\Phi=\phi+v_{\phi}\). The \(Z_{3}\) singlet \(\phi\) couples with the SM Higgs through the gauge-invariant term \(\lambda_{h\phi}|H|^{2}|\Phi|^{2}\), resulting in \(\phi\) mixing with the Higgs. After diagonalizing the \(\Phi\)/\(H\) mixing matrix, we can replace \(\phi\) and \(h\) with the mass eigenstates in the form: \[h\to h\cos\theta-\phi\sin\theta\hskip 28.452756pt\phi\to h\sin\theta+\phi\cos\theta \tag{3}\] with mixing angle \(\theta\). This results in couplings between \(\phi\) and SM particles of the form: \[-\mathcal{L}_{\phi}\supset+\sin\theta\sum_{f}\frac{y_{f}}{\sqrt{2}}\phi\bar{f}f+3\sin\theta\frac{\alpha_{EM}}{4\pi}\frac{\phi}{\Lambda}F_{\mu\nu}F^{\mu\nu}-\frac{5}{6}\sin\theta\frac{\alpha_{s}}{4\pi}\frac{\phi}{\Lambda}G^{a}_{\mu\nu}G^{a\mu\nu} \tag{4}\] The last two terms arise from integrating out heavy SM particles [33], where \(\Lambda\) is the cut-off scale of the theory, which usually equals \(v_{h}=246\) GeV.

The interaction terms in Eq. 4 enable the decay of \(\phi\). Since our model focuses on MeV-scale DM phenomenology, the mass of \(\phi\) is \(\sim\mathcal{O}(100)\) MeV, which means \(\phi\) can only decay into light particles (such as photons, electrons, muons, and pions), and \(m_{\phi}\) has a direct effect on DM indirect detection. The semi-annihilation channel we concentrate on is \(SS\to S^{*}\phi\), generated by the Lagrangian term \(\lambda_{s\phi}|S|^{2}|\Phi|^{2}\) together with the cubic \(A_{s}S^{3}\) term; to open such a channel, the mass relation \(m_{\phi}\leq m_{S}\) should be satisfied, and the cross-section is proportional to \(A_{s}^{2}\lambda_{s\phi}^{2}v_{\phi}^{2}\). At the same time, the \(SS^{*}\rightarrow\phi\phi\) channel, with cross-section proportional to \(\lambda_{s\phi}^{2}\), will open. Since we only concentrate on semi-annihilation in this work, it is necessary to turn off the Higgs-portal coupling and suppress \(\lambda_{s\phi}\). Meanwhile, a large enough \(A_{s}\) will allow us to get the correct relic density without opening the Higgs-portal sector or the double-\(\phi\) final-state annihilation channel.
Compared with the usual Higgs/\(\phi\)-portal DM model [34], in which the annihilation processes produce pairs of Higgs bosons or \(\phi\), the semi-annihilation feature of our model affects both the DM relic density and indirect-detection gamma-ray experiments: semi-annihilation contributes only half to the DM effective annihilation cross-section, and because the final-state masses differ, the boost of the mediator differs from the usual case, which results in a different \(\phi\)-decay photon spectrum. The free parameters of our semi-annihilation DM model are the Higgs mixing angle \(\sin\theta\), the DM mass \(m_{S}\), the mediator mass \(m_{\phi}\), the \(Z_{3}\)-term coupling \(A_{s}\), and the DM-mediator coupling \(\lambda_{s\phi}\). For simplicity, we treat \(A_{s}v_{\phi}\) as a single parameter \(g_{s\phi}\), since the semi-annihilation cross-section is directly proportional to \(\lambda_{s\phi}^{2}g_{s\phi}^{2}\); \(\lambda_{s\phi}\) still needs to be considered separately in order to suppress the \(SS^{*}\rightarrow\phi\phi\) cross-section.

### Relic Density

We take the \(Z_{3}\)-symmetric theory as a thermal freeze-out target, with a single dark matter candidate \(S\) as usual. The novel aspect is the semi-annihilation process entering the Boltzmann equation that describes the evolution of the dark matter number density, \[\frac{dY}{dt}=-s\langle\sigma v\rangle\left(Y^{2}-rY\bar{Y}-(1-r)\bar{Y}^{2}\right) \tag{5}\] where the yield \(Y=n/s\) is the ratio between the number density and the entropy density \(s\), and \(\langle\sigma v\rangle\) is the combination of the thermally averaged cross-sections for the direct annihilation \(SS^{*}\rightarrow\phi\phi\) and semi-annihilation \(SS\to S^{*}\phi\) processes, \[\langle\sigma v\rangle\equiv\left\langle\sigma^{SS^{*}\rightarrow\phi\phi}v\right\rangle+\frac{1}{2}\left\langle\sigma^{SS\to S^{*}\phi}v\right\rangle \tag{6}\] with the fraction \(r\) being \[r=\frac{1/2\left\langle\sigma^{SS\to S^{*}\phi}v\right\rangle}{\langle\sigma v\rangle} \tag{7}\] Here \(r=1\) corresponds to the pure semi-annihilation process. In our case, the semi-annihilation cross-section times the DM relative velocity is given as: \[\sigma^{SS\to S^{*}\phi}v=\frac{1}{64\pi}\frac{|\vec{p}_{\phi}|A_{s}^{2}(\lambda_{s\phi}v_{\phi})^{2}}{9m_{S}^{2}} \tag{8}\] where \(|\vec{p}_{\phi}|\) is the momentum of the final-state \(\phi\) in the center-of-mass frame, which we may represent in the form: \[|\vec{p}_{\phi}|\simeq m_{S}\lambda(1,m_{S}^{2}/4m_{S}^{2},m_{\phi}^{2}/4m_{S}^{2}) \tag{9}\] with \(\lambda(1,x,y)=\sqrt{(1-x-y)^{2}-4xy}\), which usually results in a moderate phase-space suppression.
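Equation (5) can also be integrated directly for orientation. The following minimal sketch is not a substitute for the micrOMEGAs computation used below: it assumes a constant \(g_{*}\approx 10.75\), \(g=2\) internal degrees of freedom, and an equilibrium initial condition at \(x=m_{S}/T=10\); the function name is ours:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL, GSTAR = 1.22e19, 10.75     # Planck mass [GeV]; constant g* assumed

def relic_yield(m_S, sigma_v, r=1.0, x_span=(10.0, 1000.0)):
    # Integrate Eq. (5) in x = m_S/T for a (semi-)annihilating species.
    def y_eq(x):                 # non-relativistic equilibrium yield, g = 2
        return 0.145 * (2.0 / GSTAR) * x**1.5 * np.exp(-x)

    def rhs(x, Y):
        s = (2 * np.pi**2 / 45) * GSTAR * m_S**3 / x**3      # entropy density
        H = 1.66 * np.sqrt(GSTAR) * m_S**2 / (M_PL * x**2)   # Hubble rate
        Ye = y_eq(x)
        return [-(s * sigma_v / (H * x))
                * (Y[0]**2 - r * Y[0] * Ye - (1 - r) * Ye**2)]

    sol = solve_ivp(rhs, x_span, [y_eq(x_span[0])], method="Radau",
                    rtol=1e-8, atol=1e-30)
    return sol.y[0, -1]          # relic yield; Omega h^2 scales with m_S * Y
```

Setting \(r=1\) reproduces the pure semi-annihilation limit considered in this work.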
At the same time, the interaction term \(\lambda_{s\phi}|S|^{2}|\Phi|^{2}\) contributes the annihilation process \(SS^{*}\rightarrow\phi\phi\) with: \[\sigma^{SS^{*}\rightarrow\phi\phi}v=\frac{\lambda_{s\phi}^{2}}{32\pi m_{S}^{2}}\frac{|\vec{p}_{\phi}|}{m_{S}}\simeq\frac{\lambda_{s\phi}^{2}}{32\pi m_{S}^{2}}\sqrt{1-m_{\phi}^{2}/m_{S}^{2}}\simeq\frac{\lambda_{s\phi}^{2}}{32\pi}\frac{1}{0.3^{2}}\left(\frac{0.3}{m_{S}}\right)^{2}\simeq 0.1\times\lambda_{s\phi}^{2}~{}~{}{\rm GeV}^{-2} \tag{10}\] Since we only concentrate on semi-annihilation with \(r\simeq 1\) in this work, the main channel contributing to the correct relic density is \(SS\to S^{*}\phi\), which means: \[\sigma^{SS^{*}\rightarrow\phi\phi}\ll 1\times 10^{-8}~{}~{}{\rm GeV}^{-2},\qquad\sigma^{SS\to S^{*}\phi}\simeq 1\times 10^{-8}~{}~{}{\rm GeV}^{-2} \tag{11}\] From Eq. 10, we may set \(\lambda_{s\phi}\simeq 10^{-5}\), and according to Eqs. 8 and 9, we can estimate the thermally averaged semi-annihilation cross-section as: \[\sigma^{SS\to S^{*}\phi}v\simeq\frac{1}{64\pi}\frac{2\times 10^{-8}}{9\times 0.3^{6}}\left(\frac{0.3}{m_{S}}\right)^{6}\left(\frac{A_{s}\lambda_{s\phi}v_{\phi}}{2\times 10^{-4}}\right)^{2}\simeq 1.5\times 10^{-8}\left(\frac{0.3}{m_{S}}\right)^{6}\left(\frac{\lambda_{s\phi}g_{s\phi}}{2\times 10^{-4}}\right)^{2}~{}~{}{\rm GeV}^{-2} \tag{12}\] which means \(\lambda_{s\phi}g_{s\phi}\simeq 10^{-4}\), resulting in \(A_{s}v_{\phi}\) of order \(\mathcal{O}(10)\) GeV\({}^{2}\).

We display the numerical results obtained using micrOMEGAs [35]. To learn how well the model fits the relic density requirement \(0.094\leq\Omega h^{2}\leq 0.129\) [27], we scan the parameter space with the Metropolis-Hastings algorithm [36]. We study two extreme cases: the mass-degenerate case with \(k=m_{S}/m_{\phi}=1.1\), and the light-mediator case with \(k=m_{S}/m_{\phi}=10\). Although \(\theta\) does not affect the DM relic density, according to [45] SN1987A excludes \(1.0\times 10^{-7}\lesssim\sin\theta\lesssim 3.0\times 10^{-5}\) for scalar masses up to \(219\) MeV. We therefore set the mixing angle in the range \(\sin\theta\in[5\times 10^{-5},10^{-4}]\), for which there still exists a sizeable non-excluded area according to the right panel of Fig. 5 in [45]. In Table 2, we show the range and step size of the parameters. The numerical result is shown in Fig. 1.

\begin{tabular}{|c|c|c|} \hline parameter & range & step size \\ \hline \(\lambda_{s\phi}\) & \(1\times 10^{-5}\) & \(0\) \\ \hline \(\log_{10}g_{s\phi}\) & \([-2,2]\) & \(0.3\) \\ \hline \(m_{S}\) & \([50,800]\) MeV & \(300\) MeV \\ \hline \(k=m_{S}/m_{\phi}\) & \(1.1\), \(10\) & \(0\) \\ \hline \(\sin\theta\) & \([5\times 10^{-5},10^{-4}]\) & \(2\times 10^{-5}\) \\ \hline \end{tabular}

## III Precision calculation on MeV gamma-ray

The gamma-ray photon intensity is given in the following form: \[\frac{d\Phi}{dE}=\frac{\langle\sigma v\rangle}{8\pi f_{\rm DM}m_{S}^{2}}\frac{dN}{dE}J \tag{13}\] It shows that the gamma-ray flux generated by DM annihilation inside the Galactic halo depends on the thermally averaged annihilation cross-section \(\langle\sigma v\rangle\), the DM mass \(m_{S}\), the gamma-ray spectrum \(\frac{dN}{dE}\), and the J-factor, where \(f_{\rm DM}=2\) since \(S\) is not self-conjugate. The gamma-ray spectrum per annihilation depends on the DM mass, the mediator mass \(m_{\phi}\), and the \(\phi\)-SM couplings. All of the signal photon flux comes from the decay of the mediator generated in the semi-annihilation.
According to the interaction terms in Eq. 4, all the \(\phi\)-SM couplings are proportional to \(\sin\theta\); this means the Higgs mixing will not affect the final spectrum, since the decay branching fractions do not change with \(\sin\theta\). The mediator mass \(m_{\phi}\) determines the available decay channels of \(\phi\). For a light enough \(\phi\) with \(m_{\phi}<2m_{\mu}\), it can only decay into electrons with final-state radiation. When \(2m_{\mu}<m_{\phi}<2m_{\pi}\), since the coupling between \(\phi\) and \(\mu\) is proportional to \(m_{\mu}\), the \(\mu^{+}\mu^{-}\) final state dominates the decay products of \(\phi\), which suppresses the hard photons from \(e^{+}e^{-}\) final-state radiation. When \(m_{\phi}>2m_{\pi^{0(\pm)}}\), the decay spectrum of pions dominates the high-energy region of the spectrum, which includes a so-called box spectrum, as shown in Fig. 2.

In the usual Higgs/\(\phi\)-portal case, DM annihilates into a pair of mediators \(\phi\), but in our model only one \(\phi\) is produced, which means semi-annihilation generates a smaller photon signal, resulting in a weaker limit from indirect detection. At the same time, in the usual model the energy of the final-state \(\phi\) is \(E_{\phi}\simeq m_{S}\), but in our model, because of the mass splitting between the DM and the mediator, the energy of \(\phi\) is \[E_{\phi}=\frac{E_{cm}^{2}-m_{S}^{2}+m_{\phi}^{2}}{2E_{cm}}\simeq m_{S}-\frac{m_{S}^{2}-m_{\phi}^{2}}{4m_{S}} \tag{14}\] This means that the larger the mass splitting, the less boosted \(\phi\) is, resulting in a less energetic gamma-ray spectrum. In Fig. 3, we can see that the usual Higgs/\(\phi\)-portal model generates almost twice the number of signal photons that our \(Z_{3}\) model produces; moreover, because of the different boost of \(\phi\), compared with the single-\(\phi\) spectrum of the usual Higgs/\(\phi\)-portal model, our model generates more photons below 1 MeV, which means less of the photon signal is concentrated in the high-energy region.

The J-factor contains information about the dark matter density distribution in the Galactic halo, integrated over the observed line of sight \(s\) and solid angle \(\Omega\): \[J=\iint_{r.o.i}d\Omega ds\rho(r(s,l,b))^{2} \tag{15}\] where the DM density \(\rho\) is a function of the radial distance \(r\) from the Galactic center, while \(r\) is given as a function of the line of sight \(s\), the Galactic coordinates \((l,b)\), and the distance of the Sun to the Galactic center \(R=8.5\) kpc: \[r=\sqrt{s^{2}+R^{2}-2sR\cos l\cos b} \tag{16}\] For the DM density distribution \(\rho\), we consider the Navarro-Frenk-White (NFW) profile [37] \[\rho_{\rm NFW}(r)=\frac{\rho_{s}}{(r/r_{s})(r/r_{s}+1)^{2}} \tag{17}\] and the isothermal profile [46; 47] \[\rho_{\rm Iso}(r)=\frac{\rho_{s}}{1+(r/r_{s})^{2}} \tag{18}\] Following [44], we set the scale factor \(r_{s}=24.42\) kpc and \(\rho_{s}=0.184\) GeV cm\({}^{-3}\) in the NFW profile. In the isothermal profile, we set \(r_{s}=4.38\) kpc and \(\rho_{s}=1.387\) GeV cm\({}^{-3}\). We use **Hazma** [38] to set limits on the DM annihilation cross-section.

Figure 1: All points satisfy the strict relic density constraint \(0.094\leq\Omega h^{2}\leq 0.129\). We can see that the value of \(g_{s\phi}=A_{s}v_{\phi}\) is in our estimated range.

Figure 2: Gamma-ray spectra from DM semi-annihilation, generated by Hazma. In all cases the DM mass is \(m_{S}=500\) MeV. We find that the spectrum generated in the case \(m_{\phi}=200\) MeV \(<2m_{\mu}\) yields less high-energy photon signal. The spectrum is highly \(m_{\phi}\) dependent.
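The line-of-sight part of Eq. (15) reduces to a one-dimensional quadrature. A minimal sketch with the NFW parameters above (the solid-angle average over the r.o.i., the cutoff `s_max`, and the regularization of the central cusp are simplifying assumptions, and the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

R_SUN, KPC_TO_CM = 8.5, 3.086e21        # kpc; cm per kpc

def rho_nfw(r, rho_s=0.184, r_s=24.42):  # GeV/cm^3 with r in kpc, Eq. (17)
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def los_j(l, b, rho=rho_nfw, s_max=100.0):
    # Integral of rho^2 along the line of sight toward Galactic coordinates
    # (l, b) in radians, i.e. the integrand of Eq. (15) before the
    # solid-angle average; the result is in GeV^2 cm^-5.
    def integrand(s):
        r = np.sqrt(s**2 + R_SUN**2
                    - 2.0 * s * R_SUN * np.cos(l) * np.cos(b))   # Eq. (16)
        return rho(max(r, 1e-3)) ** 2      # soften the r -> 0 cusp
    val, _ = quad(integrand, 0.0, s_max, limit=200)
    return val * KPC_TO_CM
```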
In **Hazma**, two kinds of gamma-ray experiments are implemented: existing ones and upcoming ones. For the existing experiments, we choose EGRET [39] and Fermi-LAT [40], while for the upcoming experiment, we choose e-ASTROGAM [41]. EGRET mainly focuses on gamma rays in the energy range 27 MeV - 8.6 GeV, and **Hazma** chooses the r.o.i. as \(20^{\circ}<|b|<60^{\circ}\) and \(|l|<180^{\circ}\). Fermi-LAT focuses on gamma rays in the energy range 150 MeV - 95 GeV, and **Hazma** chooses the r.o.i. as \(8^{\circ}<|b|<90^{\circ}\) and \(|l|<180^{\circ}\). For the upcoming e-ASTROGAM, the detecting energy range is 0.3 MeV - 3 GeV, which makes it much more sensitive than EGRET and Fermi-LAT; the r.o.i. chosen by **Hazma** is \(|b|<10^{\circ}\) and \(|l|<10^{\circ}\). The averaged J-factor values \(\bar{J}\) in each r.o.i. are shown in Table 3, for both the NFW and isothermal cases.

In the limit-setting process, **Hazma** uses a binned method for the existing experiments: it requires that the flux generated by the model in any single bin not exceed the observed value plus twice the error bar. For the upcoming experiment, **Hazma** uses an unbinned procedure, which requires a background model; **Hazma** implements a power-law background. The total number of photons generated by the DM (background) model from \(E_{min}\) to \(E_{max}\) obeys a Poisson distribution with mean value: \[\mu\simeq T_{obs}\int_{E_{min}}^{E_{max}}dEA_{\rm eff}(E)\frac{d\Phi}{dE} \tag{19}\] where \(A_{\rm eff}\) is the detector effective area, and \(\frac{d\Phi}{dE}\) is the photon spectrum generated by the DM annihilation or background model. The unbinned method sets the limit by requiring the signal-to-noise ratio to stay below the \(5\sigma\) level, i.e., \(N_{\rm DM}/\sqrt{N_{\rm BG}}<5\).

We study the exclusion limits in two cases, \(k=1.1\) and \(k=10\), for both the NFW profile and the isothermal profile. The result is shown in Fig. 4; to display the limits clearly, we project the points of Fig. 1 onto the two panels. For the isothermal profile, the corresponding limits on the thermal cross-section change only mildly and even become slightly weaker compared with the NFW profile. For Fermi-LAT, the constraint is weak in the low-mass region \(m_{S}<200\) MeV in both cases. This is because the Fermi-LAT energy range is 150 MeV - 95 GeV, and the mediator mass is \(m_{\phi}<m_{S}<200\) MeV, meaning the energy of the gamma rays produced by \(\phi\) decay is almost entirely outside the Fermi-LAT energy range. At the same time, in the mass region \(m_{S}>200\) MeV of the \(k=10\) case, the constraint is still quite weak; this is because the mediator mass is then \(m_{\phi}<100\) MeV, resulting in limited \(\phi\) decay channels and a blunt \(\phi\) decay spectrum. But in the \(k=1.1\) case in the mass region \(m_{S}>200\) MeV, the result is quite different: a bulge appears on the exclusion line around the mass region \(m_{S}\in[200,300]\) MeV, meaning the limit is relaxed. The reason is that the decay channel \(\phi\rightarrow\mu^{+}\mu^{-}\) opens and dominates the decay branching fractions, which suppresses the high-energy final-state radiation coming from the \(\phi\to e^{+}e^{-}\) decay channel, as explained in Fig. 2. When \(m_{S}>300\) MeV (\(m_{\phi}\) is also in this mass region for the highly degenerate case), the newly opened \(\phi\to\pi^{+}\pi^{-}/\pi^{0}\pi^{0}\) channels dominate and produce a large number of high-energy photons, making the constraint strict again.
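Since the flux in Eq. (13) scales linearly with \(\langle\sigma v\rangle\), the unbinned criterion translates into a one-line bound. A minimal sketch (the function names and arguments are ours; Hazma's actual interface differs):

```python
import numpy as np
from scipy.integrate import quad

def expected_counts(dphi_de, a_eff, t_obs, e_min, e_max):
    # mu = T_obs * Int dE A_eff(E) dPhi/dE, cf. Eq. (19).
    val, _ = quad(lambda e: a_eff(e) * dphi_de(e), e_min, e_max)
    return t_obs * val

def sigma_v_limit(signal_per_sv, bg_dphi_de, a_eff, t_obs, e_min, e_max):
    # Largest <sigma v> compatible with N_DM / sqrt(N_BG) < 5, where
    # signal_per_sv(E) is the DM flux per unit <sigma v>.
    n_dm_unit = expected_counts(signal_per_sv, a_eff, t_obs, e_min, e_max)
    n_bg = expected_counts(bg_dphi_de, a_eff, t_obs, e_min, e_max)
    return 5.0 * np.sqrt(n_bg) / n_dm_unit
```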
For the result given by EGRET, the exclusion is similar to that of Fermi-LAT, except that in the low-mass region \(m_{S}<200\) MeV it is much more stringent, mainly because EGRET concentrates on the energy range 27 MeV - 8.6 GeV. From Fig. 4, we can see that the phenomenological points survive the EGRET and Fermi-LAT exclusions in the light-\(m_{\phi}\) case (\(k=10\)) over the whole \(m_{S}\) region, while for the case \(k=1.1\), all points with \(m_{S}>300\) MeV are excluded and EGRET excludes part of the low-\(m_{S}\) region. Unfortunately, all points of the two cases would be excluded by the e-ASTROGAM future reach, mainly because e-ASTROGAM is highly sensitive to low-energy gamma-ray signals.

Figure 3: Comparison of the spectra from our model, from the usual \(\phi\)-portal \(\phi\phi\) final state, and from a single one of the two \(\phi\)'s of the \(\phi\)-portal \(\phi\phi\) final state. All models are set with parameters \(m_{S}=800\) MeV and \(m_{\phi}=300\) MeV.

## IV Conclusion

The traditional WIMP is growing more unrealistic due to the increasingly strict constraints from DM direct detection. We consider light DM with \(Z_{3}\) symmetry, which easily evades the DM direct detection constraints. Additionally, our \(Z_{3}\) DM annihilates in a different manner: semi-annihilation. In this work, our \(Z_{3}\) DM model contains one complex scalar with \(Z_{3}\) symmetry as the DM candidate and one extra scalar mediator \(\phi\), which mixes with the SM Higgs after the spontaneous symmetry breaking of the original \(\Phi\); the Yukawa couplings of \(\phi\) to the SM are quite SM-like. The scalar mediator acts as a bridge connecting the DM and SM sectors. We only consider DM in the low-mass region with \(m_{S}\simeq\mathcal{O}(100)\) MeV. To isolate the specific features of semi-annihilation, we consider the pure semi-annihilation situation by turning off the Higgs-portal part and suppressing the channel \(SS^{*}\rightarrow\phi\phi\). We consider two cases of the mass ratio \(k=m_{S}/m_{\phi}\): the highly degenerate case \(k=1.1\) and the extremely light mediator case \(k=10\). In both cases we obtain the correct relic density in the narrow band \(0.094\leq\Omega h^{2}\leq 0.129\).

Using Hazma, we also discuss the MeV gamma rays produced by present-day DM annihilation in the center of the galaxy, which serve as an indirect detection signal. The MeV gamma-ray signal comes from the decay of the \(\phi\) produced in the annihilation and is highly \(m_{\phi}\) dependent. This is mainly because the available decay channels are strictly related to \(m_{\phi}\): when \(m_{\phi}<2m_{\mu}\simeq 211\) MeV, the dominant source of the \(\phi\) decay spectrum is \(e^{\pm}\) final-state radiation, but when \(m_{\phi}>2m_{\mu}\), because of the large decay branching ratio of \(\phi\to\mu^{+}\mu^{-}\), the number of hard photons coming from the electron-pair final-state radiation is suppressed, which relaxes the indirect-detection constraint. Since our model is a semi-annihilating one, compared with traditional models such as the Higgs/\(\phi\)-portal case, in which DM annihilates into a pair of mediators \(\phi\), the DM in our model generates only one mediator \(\phi\) together with another dark matter particle \(S^{*}\), suppressing the indirect detection signal. Moreover, because of the mass difference between the annihilation products, the \(\phi\) is less boosted, which is another difference compared with the traditional Higgs-portal DM model.
We also obtain the exclusion limits from the existing Fermi-LAT and EGRET data: all phenomenological points with the correct relic density survive in the \(k=10\) case, but in the highly degenerate case \(k=1.1\), our model receives stringent limits in the region \(m_{S}>300\) MeV. Given the high sensitivity of the upcoming e-ASTROGAM, our model would be excluded if no signal were observed by e-ASTROGAM.

###### Acknowledgements.

This work was supported by the National Natural Science Foundation of China under grants No. 12275134, 12275232, and 12005180, by the Natural Science Foundation of Shandong Province under Grant No. ZR2020QA083, and by the Project of Shandong Province Higher Educational Science and Technology Program under Grant No. 2019KJJ007.

Figure 4: We project the points of Fig. 1 onto two panels; the meaning of the points is the same as in Fig. 1. We choose four representative experiments to set limits on our semi-annihilating DM. INTEGRAL [42] gives almost no limit on our model, and because of the low sensitivity of COMPTEL [43], we ignore its result. Given the high sensitivity of e-ASTROGAM, all the phenomenological points with the correct relic density would be excluded. We can see that the DM profile difference has a minor impact on the results.
2308.13269
Heterogeneous Decentralized Machine Unlearning with Seed Model Distillation
As recent information security legislation has endowed users with unconditional rights to be forgotten by any trained machine learning model, personalized IoT service providers have to take unlearning functionality into consideration. The most straightforward method to unlearn users' contributions is to retrain the model from the initial state, which is not realistic in high-throughput applications with frequent unlearning requests. Though some machine unlearning frameworks have been proposed to speed up the retraining process, they fail to match decentralized learning scenarios. In this paper, we design a decentralized unlearning framework called HDUS, which uses distilled seed models to construct erasable ensembles for all clients. Moreover, the framework is compatible with heterogeneous on-device models, representing stronger scalability in real-world applications. Extensive experiments on three real-world datasets show that our HDUS achieves state-of-the-art performance.
Guanhua Ye, Tong Chen, Quoc Viet Hung Nguyen, Hongzhi Yin
2023-08-25T09:42:54Z
http://arxiv.org/abs/2308.13269v2
# Heterogeneous Decentralized Machine Unlearning with Seed Model Distillation

###### Abstract

As recent information security legislation has endowed users with unconditional rights to be forgotten by any trained machine learning model, personalized IoT service providers have to take unlearning functionality into consideration. The most straightforward method to unlearn users' contributions is to retrain the model from the initial state, which is not realistic in high-throughput applications with frequent unlearning requests. Though some machine unlearning frameworks have been proposed to speed up the retraining process, they fail to match decentralized learning scenarios. In this paper, we design a decentralized unlearning framework called HDUS, which uses distilled seed models to construct erasable ensembles for all clients. Moreover, the framework is compatible with heterogeneous on-device models, representing stronger scalability in real-world applications. Extensive experiments on three real-world datasets show that our HDUS achieves state-of-the-art performance.

Machine Unlearning, Decentralized Learning, Heterogeneous Collaboration, Knowledge Distillation.

## I Introduction

The surge of edge computing and big data brings people personalized services in various domains like product recommendation [1] and personalized healthcare analysis [2]. In those services, users' edge devices (e.g., smartphones and smartwatches) play an important role in collecting user data and generating analytical feedback [3; 4; 5], while providing a security and timeliness advantage compared with conventional centralized services. With an irreversible trend of decentralization where user data is treated as a non-shareable asset, enhancing collaborations between on-device machine learning models becomes the key to a high-performance, flexible, and privacy-preserving distributed learning framework.

Federated learning (FL) [6] and fully decentralized learning (FDL) [7] are two representative distributed learning paradigms. A typical FL architecture is composed of multiple user clients and a central server, where the clients individually perform model updates based on their local data and the server gathers all local updates (e.g., by taking the average of submitted gradients) and then synchronizes all client models [8]. Compared with depending on one centralized model, the use of local client models shortens the response time and lowers the risk of sensitive data leakage [9]. However, FL still relies on a trusted central server to coordinate all clients throughout model training, which is not always guaranteed in practice. In contrast to FL, FDL frameworks train local models by allowing clients to directly exchange knowledge with neighbors in the communication network [10], which bypasses the need for a central server. In both FL and FDL, device-wise collaborations are enabled by sharing local model parameters (e.g., weights or gradients) with the server or neighbors. This requires all clients to maintain a homogeneous model structure, such that knowledge can be shared across clients via equidimensional aggregation operations over different models' parameters [11; 12; 13; 14]. However, in real-world applications, users are more likely to possess devices with various hardware configurations, hence requiring different model structures for optimized performance [11; 15; 16].
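To make the homogeneity requirement concrete, the parameter-averaging step of a typical FL round can be sketched as follows (a minimal PyTorch sketch under the assumption that every state_dict entry is a floating-point tensor; the function name is illustrative, not a specific library API):

```python
import copy
import torch

def fedavg_round(global_model, client_models):
    # One synchronization round: elementwise average of client weights.
    # This is only well-defined when all state_dicts share identical keys
    # and tensor shapes, i.e. every client runs the same architecture.
    avg_state = copy.deepcopy(global_model.state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [c.state_dict()[key].float() for c in client_models]).mean(0)
    global_model.load_state_dict(avg_state)
    for c in client_models:                 # synchronize every client model
        c.load_state_dict(avg_state)
```

The elementwise average is undefined as soon as clients differ in parameter shapes, which is precisely the limitation discussed next.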
For most of the existing distributed learning paradigms that only support collaborations among homogeneous client models, this is a fatal disadvantage in data-driven learning that heavily hinders flexibility and generalizability. This has been the key driver of enabling heterogeneous collaboration in distributed learning [11; 17; 18], where the idea is to replace model parameter sharing with knowledge co-distillation between client models [19] via a publicly shared reference dataset. Specifically, each client's local model produces its own prediction on the reference dataset, commonly represented as logits in the prediction layer. As such, the implicit knowledge carried by the logits [20] can be used to train a performant global model in FL [21], or to improve every local model by contrasting the logits between local and neighbor client models in FDL [11].

Meanwhile, as personalized algorithms are highly reliant on sensitive user data, a higher privacy standard is raised to protect user rights beyond decentralized learning paradigms. In recent information security legislation like the General Data Protection Regulation (GDPR) [22] and the California Consumer Privacy Act (CCPA) [23], a user's unconditional rights to be forgotten have been highlighted. Specifically, when a user quits any services, service providers should be able to not only delete the data collected from the user, but also fully remove her contribution to the learned machine learning models upon request [24]. Also, the rise of poisoning attacks on distributed learning frameworks [25] further amplifies the need for unlearning clients with malicious or low-quality data to ensure robustness. Unfortunately, most existing distributed learning paradigms only allow users to deposit contributions to the global model without an option to withdraw. In this case, service providers will have to re-train a model from the ground up with the target user's data deleted, which incurs prohibitive time and resource consumption.

In this regard, some fast distributed unlearning techniques have been proposed to avoid a complete retraining cycle. SISA [26] proposes to divide the data into multiple shards, and deploy a client model on each shard. Then on each shard, the model is trained in an incremental way (Fig 1.b) with dynamically collected data samples, and a copy is saved at every checkpoint. The server hosts all learned client models and ensembles them in parallel. It has to be mentioned that SISA is a **sample-wise** unlearning framework, i.e., each unlearning request applies to only one data sample on the client. When the system receives an unlearning request, only the client hosting that data sample will restore its model to the checkpoint before this data sample was collected (Fig 1.e), and then retrain the model with the updated data. Though SISA is an exact unlearning method that guarantees the elimination of all information about a data sample, it lacks practicality for large-scale applications because of the need to keep a copy of the model parameters at every checkpoint, which is inefficient in storage and infeasible in high-throughput data streams. In contrast, [27] designs a federated unlearning (FedUnl) framework that stores the contributions of each client to the central model (Fig 1.c).
FedUnl is a **client-wise** unlearning approach, which can subtract an arbitrary client's contribution from the global model (Fig 1.f), and remedy its performance by distilling knowledge from the full global model with a global reference dataset (Fig 1.g). However, as the knowledge distillation inevitably introduces the target user's information back to the unlearned global model, it is categorized as an approximate unlearning method that provides a suboptimal privacy guarantee. Additionally, unlike SISA which takes a parallel ensemble of different models, FedUnl has to use a homogeneous model architecture across all clients, which contradicts the necessity of allowing heterogeneous model structures in distributed learning. It may seem that batch sample unlearning could serve as a potential solution to the problem of client-wise unlearning. However, in practice, sample-wise exact unlearning techniques necessitate a rollback of the model to a state where the target sample had not yet appeared. In the context of decentralized learning, this would mean rewinding the model to a state before the client joined the framework. As such, for any client that has joined the framework for a considerable amount of time, the framework needs to be retrained from a very early stage, which is impractical for real applications. In light of this, we propose a **H**eterogeneous **D**ecentralized **U**nlearning framework with **S**eed model distillation (**HDUS**), which is designed for fast, memory-efficient, and exact unlearning that supports heterogeneous collaborative distributed learning. HDUS is an instance of the FDL paradigm, and an overview is presented in Fig.1.a. In HDUS, each client owns: (1) its unique local dataset; (2) a reference dataset shared across all clients; (3) a main model trained with local data; and (4) a group of lightweight seed models shared by neighboring clients. Notably, each seed model in a client is trained on its corresponding neighbor, facilitated by distilling knowledge from the neighbor's main model with the shared reference dataset. The reference dataset only needs to follow the format of local client data and contains no client-specific data points (e.g., constructed with simulation/public data), thus ensuring that the distilled seed model does not reflect any personal and sensitive information. The main model and seed models in the client constitute an ensemble model, so as to provide stronger performance and generalizability. Utilizing seed models, HDUS presents several advantages over existing distributed counterparts, as follows: * HDUS enables direct information sharing between clients using seed models as secure intermediaries, significantly reducing dependency on central servers and achieving full decentralization. Rather than exchanging information from main models as in current heterogeneous distributed learning paradigms[11; 21], knowledge transfer is facilitated by sharing distilled seed models across clients, which remain independent of their main models. * By circumventing traditional parameter sharing across client models, HDUS's design is compatible with heterogeneous model architectures, thereby maximizing usability within heterogeneous device networks. * The dissociative seed models serve as an add-on module comprising a neighbors' ensemble, allowing for straightforward removal of each neighbor's contribution for unlearning purposes, without necessitating retraining. 
Seed models are deliberately lightweight to ensure efficiency in memory usage and computation for clients. To the best of our knowledge, this represents the first study aimed at devising an unlearning solution for decentralized collaborative learning with heterogeneous client models. As discussed in subsequent sections, our extensive experiments conducted on three real-world datasets demonstrate that HDUS is a highly versatile and effective framework in comparison with cutting-edge baselines.

## II Related Work

In this section, we analyze and summarize research backgrounds that are relevant to our work.

### Exact and Approximate Unlearning

Machine learning is a field known for the development of algorithms capable of learning from data to make predictions or decisions [28]. In contrast, unlearning is a more recent concept that has emerged as a response to the growing concerns about privacy and data security in machine learning applications [22; 23]. Several approaches have been proposed to tackle the unlearning problem. One approach involves the use of deletion tokens, where each data point is associated with a unique token, allowing for efficient removal of the corresponding data [29]. Another approach is the use of selective amnesia, which is a technique to forget specific subsets of data without affecting the overall model [30]. An alternative approach to unlearning is to leverage the inherent properties of differentially private machine learning techniques, which provide a formal privacy guarantee by adding noise to the model during training [31; 32]. Exact unlearning methods aim to completely remove a user's contribution from a learned model, providing a strong privacy guarantee. However, such methods often require the model to be retrained from scratch, which can be computationally expensive and time-consuming.

Figure 1: Comparison of different unlearning frameworks. Note that four clients are used for demonstration purposes. **(a)** Our fully decentralized HDUS framework. Each local main model \(M^{i}\) updates its parameters based on its local data and then supervises its seed model \(s^{i}\) via a shared reference dataset. The received seed models collaboratively participate in the local final decision (\(E^{i}(\mathbf{x})\)) without introducing external information into the local main model (\(M^{i}\)). **(b)** Federated SISA framework. Clients train local models (with batch size = 1) based on the local data and then send each updated model to the server. In this way, each sample \(x_{t}\) corresponds to one unique model \(M_{t}\). The up-to-date models from all clients form an ensemble on the server for the final decision. **(c)** Federated FedUnl framework. Clients receive a central model from the server and train it based on local data. The server collects and stores all the updates to generate a new central model. **(d)** When client 4 quits, HDUS removes its seed model from all recipients. **(e)** SISA does not support client-wise unlearning. Assuming the second sample from client 4 is requested to be unlearned, the SISA server will replace client 4's latest model \(M_{t}^{4}\) with the model saved before the second sample appeared (\(M_{1}^{4}\)). **(f)** When client 4 quits, FedUnl subtracts all its contributions from the central model. **(g)** After that, the unlearned central model distills knowledge from the previous one (before subtraction) to regain performance.
One example of an exact unlearning approach is the leave-one-out retraining method, where the model is retrained using the entire dataset except for the user's data to be unlearned [33]. Another approach involves the use of selective influence estimators [34], which can identify the influence of individual data points on the model parameters. Although effective in terms of privacy preservation, the high computational cost of these methods limits their practicality in real-world applications [35]. In contrast, approximate unlearning methods attempt to remove a user's contribution from a learned model without requiring complete retraining. This reduces the time and computational resources needed for unlearning, but at the cost of potentially weaker privacy guarantees. One such approach is the gradient surgery method proposed by Cao and Yang [36], which involves updating the model parameters using the negative gradient of the target user's data. Another example is [37], which employs a probabilistic model to approximate the unlearning process. The use of elastic weight consolidation [38] and model compression techniques [39] has also been explored to facilitate approximate unlearning. Although approximate unlearning is generally more efficient than exact unlearning, its compliance with strict privacy regulations remains an open question [40].

### Fully Decentralized Learning

Fully decentralized learning (FDL) is a subfield of distributed learning that allows multiple clients to collaboratively learn models without relying on a central server. Consensus optimization has been widely used in FDL to facilitate coordination among clients; it aims to minimize the global objective function by reaching consensus on the model parameters among all clients [41]. A straightforward algorithm in this context is the decentralized gradient descent (DGD) method [42], where local gradients are employed to update model parameters in isolation. Another important aspect of FDL is its applicability to various machine learning tasks, such as classification, regression, and reinforcement learning. For example, decentralized consensus ADMM (Alternating Direction Method of Multipliers) has been applied to solve decentralized support vector machine (SVM) problems [43]. In reinforcement learning, distributed Q-learning and actor-critic algorithms have also been proposed for FDL settings [44]. To ensure privacy in FDL, various techniques have been proposed, such as secure multi-party computation (SMPC) [45], which allows clients to perform computations on encrypted data without revealing the original data. Homomorphic encryption is another approach that enables clients to perform computations on encrypted data without the need for decryption [46]. However, those methods come at the cost of excessive computational overheads, which makes them less favored in large-scale applications.

### Ensemble Learning

Ensemble learning is a technique that combines multiple learning models to improve the overall performance and generalization of the final model. This approach has been widely used in various machine learning applications, including classification, regression, and reinforcement learning [47]. Some popular ensemble learning methods include bagging [48], boosting [49], and stacking [50]. One of the key advantages of ensemble learning is its ability to reduce overfitting and improve the robustness of the final model [51].
This is achieved by aggregating the predictions of multiple base models, which are typically trained on different subsets of the data or using different algorithms[52]. Research has shown that ensembles of diverse models can often achieve better performance than any individual model, as they can effectively capture the strengths of each base model while mitigating their weaknesses[53]. In the context of the proposed HDUS framework, ensemble learning is employed to enable clients to effectively leverage the knowledge distilled from neighboring clients' seed models. This not only enhances the performance and generalizability of the ensemble model, but also allows for easy removal of a neighbor's contribution for unlearning purposes without the need for retraining. The incorporation of ensemble learning in decentralized and distributed settings has been explored in previous work, such as federated ensemble learning[54] and decentralized ensemble learning[55], which further facilitates ensemble learning in the HDUS framework.

## III Heterogeneous Decentralized Machine Unlearning

We unfold the design of the HDUS framework in this section.

### Preliminaries

In a decentralized collaborative system with \(N\) clients/users \(\mathcal{A}=\{a_{1},a_{2},...,a_{N}\}\), client \(a_{i}\) possesses \(M_{i}\) local training samples with \(F\)-dimensional features \(X_{i}\in\mathbb{R}^{M_{i}\times F}\) and their one-hot labels \(Y_{i}\in\mathbb{R}^{M_{i}\times C}\) over \(C\) classes. We name \(\mathcal{D}_{i}^{loc}=\{X_{i},Y_{i}\}\) the local dataset of client \(a_{i}\). Each \(a_{i}\in\mathcal{A}\) can train a local deep neural network (DNN) \(f(\theta_{i},\cdot)\) with parameterization \(\theta_{i}\) by minimizing the loss \(\ell(f(\theta_{i},X_{i}),Y_{i})\). Assuming \(a_{i}\) has identified \(K\) neighboring clients \(\mathcal{B}_{i}=\{b_{1}^{i},b_{2}^{i},...,b_{K}^{i}\}\) (\(\mathcal{B}_{i}\subset\mathcal{A}\), \(a_{i}\notin\mathcal{B}_{i}\)) and \(\mathcal{S}_{i}=\{s_{1}^{i},s_{2}^{i},...,s_{K}^{i}\}\) represents the knowledge/information (e.g., model parameters in FL) from those neighbors, then the collaboratively learned model for \(a_{i}\) is denoted by \(f(\theta_{i}^{\prime},\cdot)=F(f(\theta_{i},\cdot),\mathcal{S}_{i})\), where \(\theta_{i}^{\prime}\) is the updated model parameterization that integrates knowledge from \(\mathcal{S}_{i}\). Intuitively, \(f(\theta_{i}^{\prime},\cdot)\) performs better than \(f(\theta_{i},\cdot)\). When a client, say \(b_{j}^{i}\in\mathcal{B}_{i}\), quits the system, in addition to deleting its own model \(f(\theta_{j},\cdot)\) and data \(X_{j}\), its information also needs to be unlearned by \(f(\theta_{i}^{\prime},\cdot)\) for all \(a_{i}\) having \(b_{j}^{i}\in\mathcal{B}_{i}\). Specifically, client \(a_{i}\) needs to adjust its model via \(R(f(\theta_{i}^{\prime},\cdot),s_{j}^{i})=F(f(\theta_{i},\cdot),\mathcal{S}_{i}-s_{j}^{i})\), removing \(b_{j}^{i}\)'s contribution as if \(b_{j}^{i}\) had never participated in the learning process. For this purpose, we aim to design an exact unlearning framework that can fully eliminate the influence of \(b_{j}^{i}\) while keeping the services in all other clients uninterrupted.

### HDUS framework

Fig.2 provides a complementary graphical view of HDUS from a single client's perspective. In the training phase (blue arrow in Fig.2), the local main model \(f(\theta_{i},\cdot)\) updates its parameters based on the local training data \(X_{i}\).
To avoid privacy breaches, user data \(X_{i}\) and the local model parameterization \(\theta_{i}\) cannot be shared during collaborative learning. Therefore, we build a new communication pathway by introducing a shared reference dataset \(X_{i}^{ref}\) in every client \(a_{i}\) to train a seed model \(\theta_{i}^{seed}\). Instead of designing supervised tasks to learn \(\theta_{i}^{seed}\), the seed model \(f(\theta_{i}^{seed},\cdot)\) distills knowledge from the main model \(f(\theta_{i},\cdot)\) by minimizing:

\[\mathcal{L}^{*}=\min_{\theta_{i}^{seed}}\mathcal{L}\bigg{(}f(\theta_{i}^{seed},X_{i}^{ref}),\sigma_{T}(f(\theta_{i},X_{i}^{ref}))\bigg{)}, \tag{1}\]

where \(\mathcal{L}\) is a loss function such as the Kullback-Leibler (KL) divergence. We term this process the incubating phase (purple arrow in Fig.2). \(\sigma_{T}(\cdot)\) is a modified softmax function with a temperature parameter \(T\) to control the strength of distillation [56]. Specifically, for a model output vector \(y=\{d_{1},d_{2},\cdots,d_{C}\}\), we define \(\sigma_{T}(y)=\{s(d_{1},T),s(d_{2},T),\cdots,s(d_{C},T)\}\), where \(s(d_{i},T)=\frac{\exp(d_{i}/T)}{\sum_{j=1}^{C}\exp(d_{j}/T)}\). Notably, each client aims to align its seed model as closely as possible with its main model, and therefore \(X_{i}^{ref}\) is neither required to be labelled nor identical across all clients. This makes the construction of reference datasets more convenient and feasible for various decentralized applications, e.g., each client can use a simulated dataset in healthcare. Furthermore, since \(f(\theta_{i}^{seed},\cdot)\) is not associated with \(\mathcal{D}_{i}^{loc}\) during training, a user's personal data is highly unlikely to be restored from \(\theta_{i}^{seed}\). The seed model is smaller than the main model while being able to mimic its decision-making behaviors, which provides a lightweight yet more secure communication protocol.

During collaborative learning, client \(a_{i}\) will send its seed model parameters \(\theta_{i}^{seed}\) to all neighbors \(\mathcal{B}_{i}\) and receive \(K\) seed models \(\mathcal{S}_{i}\), where \(s_{j}^{i}=\theta_{j}^{seed}\) for \(b_{j}^{i}\in\mathcal{B}_{i}\). These model parameters are stored in a dedicated repository of \(a_{i}\). Then, in the test/inference phase (black arrow), each client generates an ensemble model output for an unseen data point \(x\) via:

\[F(f(\theta_{i},x),\mathcal{S}_{i})=(1-\lambda)f(\theta_{i},x)+\frac{\lambda}{K}\sum_{k=1}^{K}f(s_{k}^{i},x), \tag{2}\]

where \(\lambda\) is a trade-off hyperparameter adjusting the contribution from all \(K\) neighbors' seed models. Note that the local seed model \(\theta^{seed}_{i}\) is not involved in Eq.(2), as the main model is essentially its stronger variant.

Figure 2: The detailed framework of HDUS. For a user client \(a_{i}\) in the network, it trains its main model (orange) with its local training dataset (blue) and a seed model (yellow) that distills knowledge from the main model. An unlabeled reference dataset (purple) is introduced to support the knowledge distillation. The collaboration between \(a_{i}\) and its neighbors \(b_{1}^{i}\) to \(b_{K}^{i}\) is actualized by exchanging seed model parameters. The received parameters are stored in the seed model repository (grey). In the testing stage (black arrow), the seed models in the repository will form a submodel ensemble (red) to generate \(K\) soft labels. The final decision is composed of the output from the main model and these soft labels. Note that the seed model uses a model structure that is different from the main model and it is isolated from the sensitive local training data. Meanwhile, the main model structure is heterogeneous across clients.
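To make the above mechanics concrete, the following minimal NumPy sketch illustrates the tempered softmax \(\sigma_{T}\), one possible instantiation of the distillation loss in Eq.(1) (a KL divergence with tempered teacher targets), the ensemble output of Eq.(2), and the seed-model removal that realizes the unlearning step of Eq.(3) in the next subsection. The function names, toy linear models, and the choice of KL direction are our illustrative assumptions, not the reference implementation.

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Tempered softmax sigma_T: s(d_i, T) = exp(d_i/T) / sum_j exp(d_j/T)."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(seed_logits, main_logits, T=2.0, eps=1e-12):
    """One instantiation of Eq. (1): KL(sigma_T(main) || softmax(seed))."""
    p = softmax_T(main_logits, T)   # soft targets from the (frozen) main model
    q = softmax_T(seed_logits)      # seed model predictions
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def ensemble_predict(main_model, seed_models, x, lam=0.5):
    """Eq. (2): (1 - lam) * f(theta_i, x) + (lam / K) * sum_k f(s_k^i, x)."""
    out = (1.0 - lam) * main_model(x)
    if seed_models:  # K > 0
        out = out + (lam / len(seed_models)) * sum(m(x) for m in seed_models.values())
    return out

def unlearn(seed_models, neighbor_id):
    """Exact unlearning: drop neighbor j's seed model from the repository."""
    seed_models.pop(neighbor_id, None)
    return seed_models

# Toy usage with linear "models" over C = 3 classes and F = 4 features.
rng = np.random.default_rng(0)
W_main, W_b1, W_b2 = (rng.normal(size=(4, 3)) for _ in range(3))
main = lambda x: x @ W_main
seeds = {"b1": lambda x: x @ W_b1, "b2": lambda x: x @ W_b2}
x = rng.normal(size=(5, 4))
y_before = ensemble_predict(main, seeds, x, lam=0.4)
y_after = ensemble_predict(main, unlearn(seeds, "b1"), x, lam=0.4)  # b1 removed
```

In this sketch, removing a neighbor only shrinks the dictionary of seed models; neither the main model nor the remaining seed models are touched, which mirrors the retraining-free unlearning property discussed next.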
### Handling Unlearning Requests

In conventional weight/gradient-based decentralized learning frameworks, the knowledge from neighbors influences the main model \(f(\theta^{\prime}_{i},\cdot)\) throughout the whole training process. Therefore, when \(b^{i}_{j}\) quits \(\mathcal{B}_{i}\) (and \(\mathcal{A}\)) and requests its information \(s^{i}_{j}\) (i.e., model parameters in conventional frameworks) to be unlearned by \(f(\theta^{\prime}_{i},\cdot)\), client \(a_{i}\) needs to restore its model to the initial state before \(s^{i}_{j}\) was involved. Moreover, client \(a_{i}\) will spread its model parameters (e.g., \(\theta^{\prime}_{i}\) and \(\Delta\theta^{\prime}_{i}\)) to other clients. In the worst case, all clients in the network have to restore their main models, which introduces a prohibitive cost in model retraining. In contrast, in our proposed framework HDUS, the knowledge from neighbors is never fused with \(a_{i}\)'s main model. Specifically, each received seed model is treated as a sub-model in \(a_{i}\)'s ensemble, which allows for convenient machine unlearning while giving \(a_{i}\)'s main model a performance boost. In this way, upon receiving an unlearning request from neighbor \(b^{i}_{j}\) with seed model \(s^{i}_{j}\), the updated model ensemble for \(a_{i}\) is:

\[F(f(\theta_{i},x),\mathcal{S}_{i}-s^{i}_{j})=(1-\lambda)f(\theta_{i},x)+\frac{\lambda}{K-1}\sum_{k=1,k\neq j}^{K}f(s^{i}_{k},x), \tag{3}\]

which essentially reflects the process of removing \(b^{i}_{j}\) from \(\mathcal{B}_{i}\) and thus removing \(s^{i}_{j}\) from \(\mathcal{S}_{i}\). Since all the knowledge from neighbor \(b^{i}_{j}\) is contained in, and only in, \(s^{i}_{j}\), Eq.(3) can be regarded as an exact unlearning process. In other words, the remaining elements in the ensemble can be put into use without any modification right after the unlearning operation.

## IV Experiments

This section presents our findings about the proposed HDUS framework via comparative analyses and experiments.

### Comparative Analysis on Functionality

We discuss and compare the functionality of our approach with four representative baselines that support both learning and unlearning in a distributed environment:

* **ISGD**[57]: A naive framework based on isolated stochastic gradient descent (ISGD), where each client trains its own model independently. This puts all local models at risk of overfitting and underperforming when local training data is insufficient. Unlearning a client's information simply corresponds to deleting this client's model and data.
* **SISA**[26]: A sample-wise unlearning approach designed for FL. Each client holds a unique part of the full dataset and trains a local model in an incremental, instance-by-instance way. The central server then collects all client models and aggregates their outputs during inference. For each local model, SISA stores \(T\) historical states (checkpoints) for \(T\) data slices. In sample-wise unlearning, suppose the target data sample is in the \(t\)-th (\(t\leq T\)) slice; the client then only needs to retrain the local model from checkpoint \(t-1\) rather than from the initial state.
* **FedUnl**[27]: A variant of FedAvg [58] that enables client-wise unlearning.
In FedUnl, each client receives a copy of the global model from the central server and updates the model based on its local dataset. The central server then collects all the updated model weights from the clients, calculates an average model, and distributes the new global model back to all clients. For unlearning an entire client's information, FedUnl further requires the central server to store historical updates from all clients. Whenever one client quits, the central server will subtract all its updates from the global model. As a remedy for the potentially biased global model after unlearning, additional knowledge distillation from the original model is introduced, which consequently brings back implicit knowledge about the quitting client and makes FedUnl an inexact/approximate unlearning method.
* **DSGD**[59]: A homogeneous FDL framework based on decentralized SGD, where each client has a unique dataset and a local model with the same architecture. Without any central servers, the clients directly communicate with their neighbors via model weight sharing. When a client quits, all remaining clients need to retrain their models from the initial state.

Table I gives a functional comparison between our method and the four baselines. While HDUS ticks all the boxes, we briefly analyze the functional differences of all baselines. ISGD is the simplest framework, where all clients are trained in silos without any connection. Since there is no information dissemination, other clients do not need to respond to any unlearning request. SISA and FedUnl are two federated unlearning methods. Both of them need to store the models' historical updates for unlearning, which brings an immense storage demand for large-scale applications. Besides, the sample-wise unlearning design in SISA makes it struggle to scale to client-wise unlearning. FedUnl only supports inexact client-wise unlearning, which offers a suboptimal privacy guarantee and fails to satisfy some of the latest legislation. DSGD is an FDL framework that unlearns a client by retraining its primordial model. It is worth noting that, among all baseline frameworks, only ISGD supports learning heterogeneous models across clients, and only SISA can keep its service uninterrupted during the unlearning process (i.e., no model retraining is needed for any involved clients), which are two critical factors associated with the real-life applicability of distributed learning systems.

### Learning Effectiveness

Next, we compare the performance of HDUS and the four baselines. The experiments are conducted on three benchmark datasets commonly used in decentralized classification tasks, i.e., MNIST[60] (a greyscale image dataset for handwritten digits from 0 to 9), FMNIST[61] (a greyscale image dataset for 10 types of fashion items), and Cifar10[62] (a 10-class color image dataset). Detailed statistics of these three datasets are listed in Table 2. The numbers of clients on the three datasets are 6, 6, and 5, respectively. All datasets have 10 distinct class labels. To assign clients with local data, we randomly draw an equal amount of non-overlapping instances from the training set, while each client only samples from a unique set of 9 classes to further simulate the non-I.I.D. nature of distributed datasets. For the determination of the optimal value of \(|D^{ref}|\), a grid search was conducted, ranging from \(4,000\) to \(10,000\).
The investigation revealed that as \(|D^{ref}|\) increased, each seed model aligned progressively better with its main model, thereby enhancing the overall framework performance. Consequently, for each data source, we reserve \(10,000\) samples to use as a reference dataset. Note that these sample labels were provided exclusively to FedUnl upon request. To make SISA compatible with client-wise unlearning, we modified SISA by treating each non-overlapping data shard as a user's personal dataset, and the ensemble of all local models as a central server model. We mark this amended version of SISA as SISA-A. For the FDL frameworks (DSGD and HDUS), we let every client communicate with all other clients during learning/unlearning, so as to maintain a fair comparison with the FL frameworks (SISA-A and FedUnl) that coordinate all clients via the global model. The homogeneous and heterogeneous scenarios are simulated by deploying different ResNet[63] and MobileNetV2[64] versions on clients, as shown in Table 3. In heterogeneous scenarios, MobileNet-L is the default model size, while MobileNet-S and MobileNet-M are manually pruned versions that approximate the sizes of ResNet8 and ResNet18, respectively. The detailed model sizes are listed in Table 4. Note that for baselines that do not support heterogeneous model communication (i.e., SISA-A, FedUnl, and DSGD), all clients are assigned the smallest model in heterogeneous scenarios to accommodate the lowest budget.

We validate the classification effectiveness of HDUS and the four baselines, where the average test accuracy and standard deviations of five independent runs are reported in Table 5. In the homogeneous setting, HDUS's performance is on par with the other distributed learning frameworks. We also observe that HDUS is superior to SISA-A on FMNIST and Cifar10 under the homogeneous setting, which suggests that sharing a distilled small model is more cost-effective than sharing the entire local main model. When transferred to the heterogeneous setting, all methods are subject to a noticeable performance drop. Moreover, baselines with inter-client communication (i.e., SISA-A, FedUnl, and DSGD) perform even worse than naive ISGD on Cifar10. This is because the constraint on the total model parameters outweighs the improvement from communication, especially in complex tasks. On the contrary, HDUS merely loses 1.25% accuracy on average in heterogeneous scenarios, outperforming all baselines in the heterogeneous setting. The results also showcase the advantage of the collaborative learning protocol for heterogeneous models proposed in HDUS.

| Dataset | Input Size | #Training | #Test | #Reference | #Class |
| --- | --- | --- | --- | --- | --- |
| MNIST | \(28\times 28\) | 48,000 | 12,000 | 10,000 | 10 |
| FMNIST | \(28\times 28\) | 48,000 | 12,000 | 10,000 | 10 |
| Cifar10 | \(30\times 30\times 3\) | 40,000 | 10,000 | 10,000 | 10 |

Table 2: Characteristics of datasets.

| Type | Model | Size (MB) | Model | Size (MB) |
| --- | --- | --- | --- | --- |
| Small | ResNet8 | 0.015 | MobileNet-S | 0.022 |
| Medium | ResNet18 | 1.044 | MobileNet-M | 1.362 |
| Large | ResNet50 | 2.907 | MobileNet-L | 8.665 |

Table 4: Sizes of different models in both settings.
Table 1: Functional comparison of all frameworks, covering exact unlearning, performance recovery acceleration, client-wise unlearning, operation without storing historical model updates, operation without model retraining, heterogeneous model support, and operation without a central server; HDUS is the only framework supporting all of these properties.

Table 3: Client model allocations in both settings.

While the HDUS framework is primarily evaluated on classification tasks in this work, the ensemble of knowledge from decentralized models can provide generalizability across a broad spectrum of tasks, such as ranking and regression, where non-I.I.D. data is also present.

### Unlearning Effectiveness

We simulate the scenario in which, after all models are well trained, one client in the framework sends an unlearning request to its neighbors (or the server) at time \(t=0\). This means all remaining clients need to erase the quitting client's historical contributions from their local models while maintaining performance. The unlearning procedure is relatively straightforward for SISA-A and HDUS: since the knowledge from the quitting client is not fused into its neighbors' local models, they can simply adjust the ensemble (i.e., the global ensemble model in SISA-A and the seed model ensemble in HDUS) by removing the quitting client to achieve exact unlearning. However, for FedUnl and DSGD, it takes much more effort to remove the footprint of the quitting client. As the knowledge of the quitting client has blended into all client models during the learning process, these frameworks have to execute a complicated inexact unlearning procedure (FedUnl) or retrain all models from the ground up (DSGD). The test accuracy curves of all methods in the heterogeneous setting are presented in Fig.3.
| Base Model | Framework | MNIST (Homo.) | FMNIST (Homo.) | Cifar10 (Homo.) | MNIST (Hetero.) | FMNIST (Hetero.) | Cifar10 (Hetero.) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet | ISGD | 0.9872 ± 0.0018 | 0.9034 ± 0.0076 | 0.7026 ± 0.0092 | 0.9828 ± 0.0022 | 0.8796 ± 0.0054 | 0.6563 ± 0.0083 |
| ResNet | SISA-A | 0.9913 ± 0.0022 | 0.9112 ± 0.0054 | 0.7408 ± 0.0167 | 0.9822 ± 0.0031 | 0.8660 ± 0.0068 | 0.5190 ± 0.0351 |
| ResNet | FedUnl | 0.9929 ± 0.0011 | **0.9254 ± 0.0091** | **0.7834 ± 0.0075** | 0.9825 ± 0.0026 | 0.8871 ± 0.0091 | 0.6204 ± 0.0180 |
| ResNet | DSGD | **0.9932 ± 0.0017** | 0.9185 ± 0.0050 | 0.7236 ± 0.0083 | 0.9816 ± 0.0012 | 0.8637 ± 0.0069 | 0.5039 ± 0.0229 |
| ResNet | HDUS | 0.9910 ± 0.0006 | 0.9118 ± 0.0032 | 0.7482 ± 0.0094 | **0.9856 ± 0.0026** | **0.8884 ± 0.0050** | **0.6808 ± 0.0097** |
| MobileNet | ISGD | 0.9851 ± 0.0016 | 0.9012 ± 0.0027 | 0.7027 ± 0.0083 | 0.9829 ± 0.0042 | 0.8825 ± 0.0021 | 0.6637 ± 0.0101 |
| MobileNet | SISA-A | 0.9914 ± 0.0021 | 0.9140 ± 0.0039 | 0.7492 ± 0.0081 | 0.9813 ± 0.0021 | 0.8673 ± 0.0055 | 0.5217 ± 0.0092 |
| MobileNet | FedUnl | **0.9945 ± 0.0012** | **0.9268 ± 0.0021** | **0.7915 ± 0.0032** | 0.9825 ± 0.0017 | 0.8878 ± 0.0057 | 0.6283 ± 0.0129 |
| MobileNet | DSGD | 0.9940 ± 0.0015 | 0.9162 ± 0.0027 | 0.7330 ± 0.0085 | 0.9817 ± 0.0036 | 0.8890 ± 0.0078 | 0.5136 ± 0.0178 |
| MobileNet | HDUS | 0.9900 ± 0.0014 | 0.9100 ± 0.0029 | 0.7528 ± 0.0060 | **0.9875 ± 0.0044** | **0.9023 ± 0.0049** | **0.7152 ± 0.0095** |

Table 5: Classification accuracy of all frameworks in both homogeneous and heterogeneous settings.

Figure 3: The unlearning process of different frameworks in the heterogeneous setting. All frameworks unlearn one client from \(t=0\). The time consumption of deleting the corresponding (seed) model in HDUS, SISA-A, and ISGD is lower than 1 minute. Runtime is measured on a single RTX 3090 Ti GPU.

All unlearning processes are performed on the same GeForce RTX 3090 Ti GPU to allow for a fair runtime measurement. When \(t<0\), the test accuracy is evaluated on all clients, and when \(t\geq 0\), it is evaluated on all but the quitting client. As the results demonstrate, HDUS outperforms all baselines consistently across datasets and model selections. Notably, the ensemble frameworks (HDUS and SISA-A) can function seamlessly during unlearning, showcasing their potential in real-life scenarios with highly frequent unlearning requests. On the contrary, FedUnl and DSGD need much more time to return to their original performance level. The retraining/distillation process can be shortened if stronger computing power is available, but a service interruption is still unavoidable. If the unlearning requests come one after another (and even collide with an unfinished unlearning process), the usability of FedUnl and DSGD degrades further. Besides, we notice that DSGD recovers faster than FedUnl, especially on the most complicated Cifar10 dataset.
The reason is that models in FedUnl have two optimization objectives: (1) the classification error on each local dataset and (2) the discrepancy from the global model before the unlearning operation. Consequently, FedUnl's convergence rate is slower than DSGD's. However, given enough time, FedUnl can theoretically surpass DSGD in performance. Another observation is that both DSGD and FedUnl recover more slowly when deploying MobileNet, which is due mainly to the substantially larger model size of MobileNet-L.

### Hyperparameter Sensitivity

There are two main hyperparameters, namely \(T\) and \(\lambda\), that influence the performance of HDUS. In this section, we study the effect of different hyperparameter values on our proposed framework in the heterogeneous setting. \(T\) denotes the temperature in knowledge distillation, which softens the logits produced by the softmax function in the local main models (teacher models); \(T=1\) means the normal softmax function is applied. \(\lambda\in[0,1)\) is a trade-off coefficient that balances the contributions of the local model and the seed models in each client. A higher \(\lambda\) means the ensemble relies more on the knowledge from neighbors. In general, the results in Fig.4 illustrate that the effect of \(\lambda\) is more pronounced, and that MobileNet generally performs better than ResNet on FMNIST and Cifar10 under a wide range of parameter settings.

## V Conclusion

One main obstacle in designing a decentralized unlearning framework is that unlearning requests are sent after the knowledge has been shared. In this case, some clients may fail to erase their information from all involved clients, since the intermediate clients who built the connections may have already left the network and hence stall the unlearning requests. On the other hand, clients in conventional decentralized frameworks blend peer knowledge into their models via training. This means that restoring a model to the state right before collaboration is not significantly faster than retraining the model from the initial state.

Figure 4: The impact of different hyperparameter values in the heterogeneous setting, where the results are recorded by fixing all other hyperparameters and varying the value of \(T\) or \(\lambda\).
2306.02810
Novel approach to investigate $\eta$ decays via $\eta'\rightarrow\pi\pi\eta$
To avoid the impact from the background events directly from $e^+e^-$ annihilations or $J/\psi$ decays, we propose a novel approach to investigate $\eta$ decays, in particular its rare or forbidden decays, by using $\eta^\prime\rightarrow\pi\pi\eta$ produced in $J/\psi$ decays at the $\tau$-charm factories. Based on the MC studies of a few typical decays, $\eta\rightarrow \pi\pi$, $\gamma l^+l^- (l= e, \mu)$, $l^+l^-$, as well as $l^+l^-\pi^0$, the sensitivities could be markedly improved by taking advantage of the extra constraint of the $\eta^\prime$. Using one trillion $J/\psi$ events accumulated at the Super $\tau$-Charm facility, the precision of the investigation of $\eta$ decays could be improved significantly, and the observation of the rare decay $\eta\rightarrow e^+e^-$ even becomes accessible.
Xiaolin Kang, Yuyao Ji, Xiaoqing Yuan, Benhou Xiang, Xiaorong Zhou, Haiping Peng, Xingtao Huang, Shuangshi Fang
2023-06-05T12:04:28Z
http://arxiv.org/abs/2306.02810v1
# Novel approach to investigate \(\eta\) decays via \(\eta^{\prime}\to\pi\pi\eta\)

###### Abstract

To avoid the impact from the background events directly from \(e^{+}e^{-}\) annihilations or \(J/\psi\) decays, we propose a novel approach to investigate \(\eta\) decays, in particular its rare or forbidden decays, by using \(\eta^{\prime}\to\pi\pi\eta\) produced in \(J/\psi\) decays at the \(\tau\)-charm factories. Based on the MC studies of a few typical decays, \(\eta\to\pi\pi\), \(\gamma l^{+}l^{-}(l=e,\mu)\), \(l^{+}l^{-}\), as well as \(l^{+}l^{-}\pi^{0}\), the sensitivities could be markedly improved by taking advantage of the extra constraint of the \(\eta^{\prime}\). Using one trillion \(J/\psi\) events accumulated at the Super \(\tau\)-Charm facility, the precision of the investigation of \(\eta\) decays could be improved significantly, and the observation of the rare decay \(\eta\to e^{+}e^{-}\) even becomes accessible.

## I Introduction

Since its strong, electromagnetic, and weak decays are forbidden at first order, the \(\eta\) meson plays an important role as a test of low-energy Quantum Chromodynamics (QCD) calculations in the framework of chiral perturbation theory (ChPT). In addition, \(\eta\) is an eigenstate of the charge conjugation (\(C\)) and parity (\(P\)) operators, and thus it provides an important experimental tool for investigating the degree of conservation of these symmetries in strong and electromagnetic interactions. Besides the promising numbers of \(\eta\) mesons directly produced in hadron-production or photo-production processes, huge samples of \(\eta\) can be collected in the radiative decays of vector mesons from \(e^{+}e^{-}\) annihilations (\(\phi\to\gamma\eta\) at KLOE-2 [1] and \(J/\psi\to\gamma\eta\) at BESIII [2]). In recent years, with the world's largest \(J/\psi\) samples collected with the BESIII detector, a series of interesting results on \(\eta\) decays was achieved with the decay \(J/\psi\to\gamma\eta\) (see the reviews [3; 4; 5; 6] for details). However, it was found that the large background contributions from \(J/\psi\) decays make it hard to improve the sensitivity for the investigation of rare or forbidden \(\eta\) decays. Taking \(\eta\to\pi^{0}\pi^{0}\) as an example, the dominant background events come from \(J/\psi\to\gamma\pi^{0}\pi^{0}\) due to direct pion production. In particular, the production of the intermediate state \(f_{0}(600)\) makes the background events irreducible [7].

To avoid the background impacts directly from \(J/\psi\) decays, we introduce a novel approach to investigate \(\eta\) decays via the \(\eta^{\prime}\to\pi\pi\eta\) process. According to the Particle Data Group (PDG) [8], the product branching fraction of \(J/\psi\to\gamma\eta^{\prime}\), \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) is \((2.23\pm 0.04)\times 10^{-3}\), which is about two times larger than that of \(J/\psi\to\gamma\eta\). Even after taking into account the tracking efficiency for the two charged pions, the \(\eta\) sample selected with this approach is larger than, or at least comparable with, the sample obtained directly from \(J/\psi\to\gamma\eta\). On the other hand, since the \(\eta^{\prime}\) is quite narrow, the additional constraint on the \(\eta^{\prime}\) peak makes it easier to suppress the background events directly from \(J/\psi\) decays.
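The relative yields behind this argument can be checked with simple arithmetic. The sketch below uses the PDG product branching fraction quoted above, the PDG value \(\mathcal{B}(J/\psi\to\gamma\eta)\approx 1.1\times 10^{-3}\), and an assumed per-track efficiency of 0.90 for the two charged pions (an illustrative placeholder, not a detector specification):

```python
# Back-of-the-envelope yield comparison for one trillion J/psi events.
N_jpsi = 1.0e12
BF_product = 2.23e-3        # B(J/psi -> gamma eta') x B(eta' -> pi+ pi- eta), PDG
BF_gamma_eta = 1.1e-3       # B(J/psi -> gamma eta), PDG
BF_etap_pipieta = 0.425     # B(eta' -> pi+ pi- eta), PDG

n_gamma_etap = N_jpsi * BF_product / BF_etap_pipieta  # J/psi -> gamma eta' decays
n_eta_tagged = N_jpsi * BF_product * 0.90 ** 2        # eta tagged, both pions tracked
n_eta_direct = N_jpsi * BF_gamma_eta                  # eta from J/psi -> gamma eta

print(f"J/psi -> gamma eta' decays : {n_gamma_etap:.2e}")  # ~5.2e9
print(f"tagged eta sample          : {n_eta_tagged:.2e}")  # ~1.8e9
print(f"direct eta sample          : {n_eta_direct:.2e}")  # ~1.1e9
```

Even with this crude efficiency assumption, the tagged sample remains larger than the direct one, consistent with the statement above.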
Most recently, a project of the Super \(\tau\)-Charm facility (STCF) [9] was proposed for exploring \(\tau\)-charm physics and searching for physics beyond the Standard Model (SM). The STCF is an electron-positron collider, operating at energies from 2 to 7 GeV, together with a state-of-the-art particle detector. The designed luminosity, \(0.5\times 10^{35}\) cm\({}^{-2}\)s\({}^{-1}\) or higher, is about 100 times larger than that of BEPCII [10], which enables the collection of unprecedented high-statistics data samples in one year. As demonstrated by BEPCII/BESIII, not only will this facility play a leading role in the investigation of \(\tau\)-charm physics, but it will also offer an unprecedented opportunity to explore light meson decays, benefiting from the high production rates of light mesons in charmonium decays. According to the latest conceptual design report [9], 3.4 trillion \(J/\psi\) events can be produced in one year. To obtain a conservative estimate for the investigation of \(\eta\) decays, the sensitivities are estimated based on 1 trillion \(J/\psi\) events, which corresponds to 5.2 billion \(J/\psi\to\gamma\eta^{\prime}\) decays. Therefore, a sample of 5.2 billion \(J/\psi\to\gamma\eta^{\prime}\) events with inclusive \(\eta^{\prime}\) decays is generated with the basic STCF fast simulation package [11]. All the branching fractions of \(\eta^{\prime}\) decays are taken from the PDG [8]. This sample will be denoted as Pseudo-data throughout the text and is used to estimate the potential background contributions. Exclusive MC studies of a few typical decays of the \(\eta\) meson are then performed in this article to elucidate the feasibility of investigating \(\eta\) decays with \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\). It is worth mentioning that the detector geometry, performance, and reconstruction software are still under further optimization, including the spatial resolution for tracks and clusters, the energy resolution for clusters, and the efficiency for tracking and particle identification.

## II \(\eta\to\pi\pi\)

The \(P\)- and \(CP\)-violating decays \(\eta\to\pi\pi\) are usually regarded as golden channels to search for unconventional sources of \(CP\) violation [12]. The SM and its extensions predict the branching fraction of \(\eta\to\pi\pi\) at the level of \(\sim 10^{-15}\)[13]. The experimental upper limits, however, are severely limited by the irreducible background production in both hadronic collisions and \(e^{+}e^{-}\) annihilations. That is why a possible new test with the decay into four pions has been performed by many experiments, even though the detection efficiency is lower than that for \(\eta\to\pi^{0}\pi^{0}\). The present upper limit on the branching fraction of \(\eta\to\pi^{0}\pi^{0}\), \(3.5\times 10^{-4}\)[14], is two orders of magnitude larger than that of \(\eta\to 4\pi^{0}\). The upper limit on the branching fraction of \(\eta\to\pi^{+}\pi^{-}\) is \(4.4\times 10^{-6}\)[15], from the KLOE-2 experiment.

With a sample of \(2.2\times 10^{8}\)\(J/\psi\) events, BESIII performed the search for \(\eta\to\pi\pi\) via the \(J/\psi\to\gamma\eta\to\gamma\pi\pi\) process [7]. The dominant background contributions are from \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\), \(e^{+}e^{-}\) and \(\mu^{+}\mu^{-}\) for \(\eta\to\pi^{+}\pi^{-}\), and from \(J/\psi\to\gamma\pi^{0}\pi^{0}\) with direct pion production for \(\eta\to\pi^{0}\pi^{0}\).
In particular, the production of the intermediate state \(f_{0}(600)\) makes the background events irreducible, as illustrated in Fig. 1. The high background level makes the sensitivity of searching for this rare decay via \(J/\psi\to\gamma\eta\) quite low, which sets the upper limits at \(3.9\times 10^{-4}\) and \(6.9\times 10^{-4}\) for \(\eta\to\pi^{+}\pi^{-}\) and \(\eta\to\pi^{0}\pi^{0}\), respectively.

To check the sensitivity of searching for the rare decay \(\eta\to\pi\pi\) via \(J/\psi\to\gamma\eta^{\prime}(\pi^{+}\pi^{-}\eta)\), MC studies are performed on the Pseudo-data produced at STCF. The main background events are found to be \(\eta^{\prime}\to\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) for the charged channel and \(\eta^{\prime}\to\pi^{+}\pi^{-}\pi^{0}\pi^{0}\) for the neutral channel, respectively, which can be well described by the combination of the ChPT and Vector Meson Dominance (VMD) models. In addition, there are small background contributions from \(\eta\to\gamma\pi^{+}\pi^{-}\) with the \(\eta\) from \(\eta^{\prime}\to\pi\pi\eta\), which contribute as peaks in the mass spectra of \(\pi^{+}\pi^{-}\pi^{+(0)}\pi^{-(0)}\) and also of \(\pi^{+}\pi^{-}\) for the charged channel, but both are below the \(\eta^{\prime}\) and \(\eta\) signal regions. To eliminate the \(\eta\to\gamma\pi^{+}\pi^{-}\) backgrounds and other continuum background contributions under the \(\eta^{\prime}\) peak, the same approach as in Ref. [16] is adopted. The \(M(\pi^{+}\pi^{-})\) or \(M(\pi^{0}\pi^{0})\) distribution is divided into a number of bins around the \(\eta\) signal region, and a fit to \(M(\pi^{+}\pi^{-}\pi^{+}\pi^{-})\) or \(M(\pi^{+}\pi^{-}\pi^{0}\pi^{0})\) for each bin is performed to extract the strength of \(\eta^{\prime}\to 4\pi\) and other background contributions. The background-subtracted \(\pi^{+}\pi^{-}\) and \(\pi^{0}\pi^{0}\) mass spectra are then obtained and shown in Fig. 2, together with a possible \(\eta\to\pi\pi\) signal with an arbitrary scale. Note that one \(\eta^{\prime}\to\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) event contributes more than one entry to \(M(\pi^{+}\pi^{-})\).

We then test the sensitivity by determining the upper limit on the production of \(\eta\to\pi\pi\) using the Bayesian approach. A series of unbinned extended maximum likelihood fits is performed to the mass spectrum of \(\pi\pi\) with an expected signal. In the fit, the line shape of the \(\eta\) signal is determined by MC simulation, and the background is represented with a second-order Chebychev polynomial. The likelihood distribution of the fit is taken directly as the probability density function. The upper limit on the number of signal events at the 90% confidence level (C.L.) corresponds to the number of events at 90% of the integral of the probability density function. Considering the estimated detection efficiency, the upper limits on the branching fractions of \(\eta\to\pi^{+}\pi^{-}\) and \(\eta\to\pi^{0}\pi^{0}\) are determined to be \(7.5\times 10^{-8}\) and \(6.9\times 10^{-7}\), respectively, which would be the best experimental upper limits; the one for \(\eta\to\pi^{0}\pi^{0}\) is three orders of magnitude better than the present upper limit [8].
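The limit-setting recipe described above can be sketched compactly. The toy below is our simplified illustration: it uses a binned Poisson likelihood on a background-only pseudo-experiment with a fixed background yield, whereas the analysis performs unbinned extended fits with a floated second-order Chebychev background:

```python
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(0.45, 0.65, 41)                     # toy M(pipi) window in GeV/c^2
centers = 0.5 * (bins[:-1] + bins[1:])
bkg_shape = np.full(centers.size, 1.0 / centers.size)  # flat background template
sig_shape = np.exp(-0.5 * ((centers - 0.548) / 0.004) ** 2)
sig_shape /= sig_shape.sum()                           # narrow eta peak at 0.548 GeV/c^2
data = rng.poisson(1000.0 * bkg_shape)                 # background-only pseudo-experiment

def log_like(n_sig, n_bkg=1000.0):
    """Binned Poisson log-likelihood for an expected signal of n_sig events."""
    mu = n_sig * sig_shape + n_bkg * bkg_shape
    return np.sum(data * np.log(mu) - mu)

n_grid = np.linspace(0.0, 200.0, 2001)
like = np.exp(np.array([log_like(n) for n in n_grid]) - log_like(0.0))
cdf = np.cumsum(like) / like.sum()          # normalized likelihood treated as the PDF
n_up = n_grid[np.searchsorted(cdf, 0.90)]   # yield at 90% of the integral
print(f"90% C.L. upper limit on the signal yield: {n_up:.1f} events")
# The branching-fraction limit then follows from
# B < n_up / (N(J/psi) * B(J/psi -> gamma eta') * B(eta' -> pi+ pi- eta) * efficiency).
```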
A full evaluation of the systematic uncertainties requires both experimental data and full MC simulation; therefore, we only give a qualitative discussion below. The possible sources of systematic uncertainty for the upper limits include the number of \(J/\psi\) events, the intermediate branching fractions, and the event selection. The number of \(J/\psi\) events can be determined precisely with its hadronic decays, as described in Ref. [17]. The uncertainties associated with the intermediate processes will be taken from the PDG. The uncertainties associated with the event selection mainly arise from the differences between MC simulation and experimental data in tracking, particle identification, and photon reconstruction, which can be studied with clean, high-statistics control samples and are still under optimization. The total systematic uncertainty at STCF is expected to be at the level of a few percent or even less, which has only a minor impact on the sensitivities to rare \(\eta\) decays.

## III \(\eta\to\gamma e^{+}e^{-}\) and \(\eta\to\gamma\mu^{+}\mu^{-}\)

The \(\eta\to\gamma l^{+}l^{-}\) (\(l=e,\ \mu\)) decays are the simplest radiative dilepton decays, also known as Dalitz decays, where the lepton pair is formed by internal conversion of an intermediate virtual photon. The deviation of the spectrum, \(M(l^{+}l^{-})\), from the Quantum Electrodynamics (QED) prediction allows one to investigate the electromagnetic structure of the \(\eta\) in terms of a timelike transition form factor, which plays an important role in the evaluation of the hadronic light-by-light contribution to the muon anomalous magnetic moment. The latest measurements of the form factor slope for the \(\eta\) meson are \(\Lambda^{-2}=1.97\pm 0.11\) (GeV/\(c^{2}\))\({}^{-2}\) from the A2 collaboration using \(\eta\to\gamma e^{+}e^{-}\)[18] and \(\Lambda^{-2}=1.934\pm 0.067\pm 0.050\) (GeV/\(c^{2}\))\({}^{-2}\) from the NA60 collaboration using \(\eta\to\gamma\mu^{+}\mu^{-}\)[19], while the corresponding branching fractions have not been updated for more than a decade.

In the study of the \(\eta\to\gamma l^{+}l^{-}\) decays with \(J/\psi\to\gamma\eta\) by the BESIII experiment, it was found that these decays suffer from background events directly from \(e^{+}e^{-}\) annihilations and from \(J/\psi\) decays that have charged pions in the final states. In particular, for the \(\eta\to\gamma\mu^{+}\mu^{-}\) decay, the impact of the backgrounds is large because of its low branching fraction and the misidentification of muons and pions. However, the MC study indicates that both of these two decay modes can be easily distinguished from events obtained through the \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) decay. Using the Pseudo-data at STCF, we selected \(1747071\pm 1321\) \(\eta\to\gamma e^{+}e^{-}\) events and \(200193\pm 447\) \(\eta\to\gamma\mu^{+}\mu^{-}\) events, respectively. The background contribution is found to be at the level of \(10^{-3}\), which indicates that the sample of \(\eta\) Dalitz decays selected from \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) provides a clean laboratory to measure the transition form factor. After normalization to the QED contribution, the transition form factors, defined as \(F(M_{l^{+}l^{-}}^{2};0)\), are displayed as a function of \(M(l^{+}l^{-})\) in Fig. 3. With the single pole model, \(F(M_{l^{+}l^{-}}^{2};0)\equiv(1-M_{l^{+}l^{-}}^{2}/\Lambda^{2})^{-1}\), the slopes of the transition form factor, defined as \(dF(M_{l^{+}l^{-}}^{2};0)/dM^{2}(l^{+}l^{-})=\Lambda^{-2}\), are measured to be \(1.653\pm 0.038\)\((\mathrm{GeV}/c^{2})^{-2}\) for \(\eta\to\gamma\mu^{+}\mu^{-}\) and \(1.644\pm 0.012\)\((\mathrm{GeV}/c^{2})^{-2}\) for \(\eta\to\gamma e^{+}e^{-}\), where the errors are statistical only.
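The single-pole extraction itself amounts to a one-parameter fit. The following sketch fits mock form-factor points (our own toy data with an assumed 2% point-to-point uncertainty, not the STCF Pseudo-data):

```python
import numpy as np
from scipy.optimize import curve_fit

def f2_single_pole(m2, inv_lambda2):
    """|F(M^2; 0)|^2 for the single pole model, with inv_lambda2 = Lambda^{-2}."""
    return 1.0 / (1.0 - m2 * inv_lambda2) ** 2

rng = np.random.default_rng(2)
m = np.linspace(0.05, 0.45, 15)   # mock M(l+l-) points in GeV/c^2
y_err = 0.02                      # assumed 2% relative uncertainty per point
y = f2_single_pole(m**2, 1.65) * rng.normal(1.0, y_err, m.size)
popt, pcov = curve_fit(f2_single_pole, m**2, y, p0=[1.9], sigma=y_err * y)
print(f"Lambda^-2 = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f} (GeV/c^2)^-2")
```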
From the above study, it is clear that the precision of the branching fractions and of the transition form factor measurement will be improved significantly. In addition, the clean sample of \(\eta\to\gamma\mu^{+}\mu^{-}\) allows one to search for the electromagnetic bound state of a \(\mu^{+}\mu^{-}\) pair, known as true muonium [20; 21], which has never been observed experimentally due to its low production rate. The observation of this state would be essential for understanding the various potential anomalies involving muons [22] and the possible contributions from physics beyond the SM [23].

Figure 1: Adapted from Ref. [7], which is from \(J/\psi\to\gamma\pi\pi\) based on a sample of \(2.2\times 10^{8}\)\(J/\psi\) events at BESIII. The \(\pi^{+}\pi^{-}\) (a) and \(\pi^{0}\pi^{0}\) (b) invariant mass distributions of the final candidate events in the \(\eta\) signal region. The dots with error bars are data, the solid lines are the fit results, and the dashed histograms are the sum of all the simulated normalized backgrounds. The arrows show mass regions which contain around 95% of the signal according to MC simulations.

## IV \(\eta\to e^{+}e^{-}\) and \(\eta\to\mu^{+}\mu^{-}\)

\(\eta\to l^{+}l^{-}\) is a fourth-order electromagnetic transition, and its branching fraction is expected to be tiny, in particular for \(\eta\to e^{+}e^{-}\), which is further suppressed relative to \(\eta\to\mu^{+}\mu^{-}\) by the helicity factor of the electrons. The unitarity limit gives the branching fraction at a level of \(10^{-9}\)[24], which makes \(\eta\to e^{+}e^{-}\) an attractive prospect for a leptoquark search. New theories [25; 26] beyond the SM, such as composite, grand unified and technicolor models, require the existence of new particles. An especially popular type is the leptoquark (LQ), which couples directly to quarks and leptons. In addition, the interest in these decays was revived by the observed excess rate of the \(\pi^{0}\to e^{+}e^{-}\) decay [27] with respect to the SM prediction [28]. This triggered theoretical speculations that the excess might be caused by a neutral vector boson responsible for the annihilation of a neutral scalar dark matter particle [29]. The consequence could be a large (even an order of magnitude) enhancement of the \(\eta\to e^{+}e^{-}\) decay rate. Therefore, a telling clue to the existence of these new effects would be an enhancement of \(\mathcal{B}(\eta\to e^{+}e^{-})\) much above the unitarity limit, which implies that the rare decay \(\eta\to e^{+}e^{-}\) can be an important probe for new physics beyond the SM.

Owing to the high production cross section of \(e^{+}e^{-}\to l^{+}l^{-}\) and the large branching fraction of \(J/\psi\to l^{+}l^{-}\), it is hard to investigate the \(\eta\to l^{+}l^{-}\) processes using the radiative decay \(J/\psi\to\gamma\eta\). However, the \(\eta^{\prime}\to\pi^{+}\pi^{-}l^{+}l^{-}\) decay theoretically proceeds via a virtual photon intermediate state, \(\eta^{\prime}\to\pi^{+}\pi^{-}\gamma^{*}\to\pi^{+}\pi^{-}l^{+}l^{-}\). A peak with a long tail just above \(2m_{l}\) is expected in \(M(l^{+}l^{-})\), together with a dominant \(\rho\) contribution in \(M(\pi^{+}\pi^{-})\). Thanks to these two prominent features, these decays can be well separated from \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) with \(\eta\to l^{+}l^{-}\), as illustrated in Fig. 4.
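Schematically, the tagging reduces to a pair of mass-window requirements; the toy selection below illustrates this, with placeholder window half-widths rather than resolution-driven values from the analysis:

```python
M_ETA, M_ETAP = 0.548, 0.958  # eta and eta' masses in GeV/c^2

def is_eta_ll_candidate(m_pipill, m_ll, w_etap=0.02, w_eta=0.01):
    """Select eta' -> pi+ pi- eta, eta -> l+ l- candidates: the four-body mass
    must sit on the narrow eta' peak and the dilepton mass on the eta peak.
    In contrast, eta' -> pi+ pi- gamma* -> pi+ pi- l+ l- events populate a
    smooth continuum in m_ll (with a rho-dominated pi+ pi- spectrum)."""
    return abs(m_pipill - M_ETAP) < w_etap and abs(m_ll - M_ETA) < w_eta
```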
Based on 1.3 billion \(J/\psi\) events, BESIII first observed the \(\eta^{\prime}\to\pi^{+}\pi^{-}\mu^{+}\mu^{-}\) signal and found a few dozen events peaking around the \(\eta\) meson mass in the dimuon mass spectrum [30]. These events come from \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) followed by the rare decay \(\eta\to\mu^{+}\mu^{-}\), and they yield a branching fraction compatible with the present world average value \(\mathcal{B}(\eta\to\mu^{+}\mu^{-})=(5.8\pm 0.8)\times 10^{-6}\)[8]. With the currently available 10 billion \(J/\psi\) events at the BESIII experiment, about eight times more than used in Ref. [30], the branching fraction of \(\eta\to\mu^{+}\mu^{-}\) can be extracted with a relative uncertainty of the order of 10%. To estimate the background contribution, we performed an MC study by generating \(J/\psi\to\gamma\eta^{\prime},\eta^{\prime}\to\pi^{+}\pi^{-}\mu^{+}\mu^{-}\) and \(J/\psi\to\gamma\pi^{+}\pi^{-}\pi^{+}\pi^{-}\) samples based on the STCF fast simulation package, which are also shown in Fig. 4(b). Based on the Pseudo-data at STCF, the signal yield of \(\eta\to\mu^{+}\mu^{-}\) is estimated to be \(3847\pm 62\), and the corresponding branching fraction is calculated to be \((5.88\pm 0.09)\times 10^{-6}\); the precision is improved by one order of magnitude.

With the same Pseudo-data sample, the possible \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) with \(\eta\to e^{+}e^{-}\) candidates are also selected. The obtained \(e^{+}e^{-}\) mass spectrum is shown as the black dots in Fig. 4(a). An unbinned maximum likelihood fit is then performed to the \(M(e^{+}e^{-})\) distribution, where the signal is described by the MC-simulated shape, and the background contribution is described by a first-order Chebychev polynomial function. The branching fraction sensitivity is expected to reach the level of \(10^{-9}\) with one trillion \(J/\psi\) events at STCF, which is close to the theoretical calculation. Therefore, an observation of the \(\eta\to e^{+}e^{-}\) decay with a branching fraction exceeding the theoretical prediction might be a signature of physics beyond the SM.

## V \(\eta\to\pi^{0}e^{+}e^{-}\) and \(\eta\to\pi^{0}\mu^{+}\mu^{-}\)

The investigation of charge conjugation invariance in electromagnetic interactions can be carried out by studying the \(\eta\to\pi^{0}l^{+}l^{-}\) decay. In the framework of the SM and QED, the matrix element for this process should involve two-virtual-photon exchange [31], as presented in Fig. 5, with the transition proceeding via \(\eta\to\pi^{0}\gamma^{*}\gamma^{*}\to\pi^{0}l^{+}l^{-}\). The decay rate of this \(C\)-conserving process is predicted theoretically to range from \(10^{-11}\) to \(10^{-8}\)[32; 33; 34], depending on the assumptions made. Since first-order electromagnetic \(\eta\) decays are forbidden and \(\eta\to\pi^{0}\gamma\) also violates the conservation of angular momentum, the decay \(\eta\to\pi^{0}l^{+}l^{-}\) proceeding via a single virtual photon is in principle forbidden.

Figure 3: The distribution of \(F^{2}(M_{l^{+}l^{-}}^{2};0)\) over \(M(\mu^{+}\mu^{-})\) (a) and \(M(e^{+}e^{-})\) (b). The dots with error bars are the ratio of the background-subtracted Pseudo-data at STCF to the signal MC, which is simulated using \(F^{2}(M_{l^{+}l^{-}}^{2};0)\equiv 1\). The solid lines are the normalized fit results.
At present, the experimental upper limit on the branching fraction \(\mathcal{B}(\eta\to\pi^{0}e^{+}e^{-})\) is \(8\times 10^{-6}\)[8], which is still at least three orders of magnitude above the prediction based on the SM, leaving a wide range to be experimentally investigated. The experimental upper limit for \(\eta\to\pi^{0}\mu^{+}\mu^{-}\), \(5\times 10^{-6}\)[8], has not been updated for more than 40 years. The observation of a branching fraction higher than the one calculated in the framework of the SM would provide evidence that the decay \(\eta\to\pi^{0}l^{+}l^{-}\) does not conserve \(C\) invariance.

To test the feasibility of searching for \(\eta\to\pi^{0}l^{+}l^{-}\) via \(J/\psi\to\gamma\eta^{\prime},\eta^{\prime}\to\pi^{+}\pi^{-}\eta\), studies are performed with the Pseudo-data sample and dedicated signal MC samples. The main backgrounds for the decay \(\eta\to\pi^{0}e^{+}e^{-}\) are from \(\eta\to\gamma e^{+}e^{-}\), which appears as a sharp peak in the mass spectrum of \(\pi^{0}e^{+}e^{-}\) in the \(\eta\) signal region, but is continuous in the mass spectrum of \(\gamma\gamma\). Therefore, we can easily extract the possible \(\eta\to\pi^{0}e^{+}e^{-}\) signal by fitting the mass spectrum of \(\gamma\gamma\) with the requirement that \(M(e^{+}e^{-}\gamma\gamma)\) lie in the \(\eta\) signal region. Fig. 6(a) shows the \(\gamma\gamma\) mass spectrum obtained from the Pseudo-data sample and a possible \(\eta\to\pi^{0}e^{+}e^{-}\) signal with an arbitrary scale. With one trillion \(J/\psi\) events at STCF, the upper limit is expected to be around \(2\times 10^{-7}\), an improvement of one order of magnitude over the PDG value [8]. For the \(\eta\to\pi^{0}\mu^{+}\mu^{-}\) channel, the main backgrounds are from \(\eta\to\pi^{+}\pi^{-}\pi^{0}\), which is flat in the mass spectrum of \(\mu^{+}\mu^{-}\pi^{0}\) around the \(\eta\) signal region. Fig. 6(b) shows the background contributions estimated from the Pseudo-data sample and a possible \(\eta\to\pi^{0}\mu^{+}\mu^{-}\) signal with an arbitrary scale. By fitting \(M(\mu^{+}\mu^{-}\pi^{0})\), we can estimate the possible \(\eta\to\pi^{0}\mu^{+}\mu^{-}\) signal yield. Together with the estimated efficiency, the upper limit on the branching fraction of \(\eta\to\pi^{0}\mu^{+}\mu^{-}\) is expected to reach \(8.5\times 10^{-8}\) with one trillion \(J/\psi\) events at STCF, an improvement of two orders of magnitude over the PDG value [8] and quite close to the theoretical prediction.

## VI Summary

Despite the impressive progress on the investigation of the \(\eta\) meson achieved in recent years, the data on its decay modes are still scarcer and much less accurate than those for the pions and kaons. The reason is that \(\eta\) mesons have been produced with low intensity, which has inspired proposals for new facilities dedicated to exploring \(\eta/\eta^{\prime}\) decays [35; 36]. Moreover, the STCF is unique since charmonium (\(J/\psi\)) decays provide very clean light-meson samples, as demonstrated by the BESIII experiment [9]. For the investigation of \(\eta\) decays, since the production rate of \(J/\psi\to\gamma\eta\) is five times smaller than that of \(\eta^{\prime}\) in \(J/\psi\) radiative decays, and because of the irreducible background contributions directly from both \(J/\psi\) decays and \(e^{+}e^{-}\) annihilations, it is hard to improve the sensitivity for exploring rare \(\eta\) decays.
However, \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) is one of the dominant \(\eta^{\prime}\) decays, with a branching fraction of \((42.5\pm 0.5)\%\)[8], and the \(\eta\) mesons can be well tagged. These features make the decay \(\eta^{\prime}\to\pi\pi\eta\) particularly attractive for the study of \(\eta\) decays, which inspired us to present a proposal for exploring \(\eta\) decays by tagging the \(\eta\) with \(\eta^{\prime}\to\pi^{+}\pi^{-}\eta\) at the STCF [9]. STCF was proposed to perform an extensive study of \(\tau\)-charm physics [9], and its designed luminosity is about 100 times larger than that of BEPCII. Therefore, unprecedented samples of charmonium decays, e.g., \(J/\psi\) and \(\psi(2S)\), are expected to be accumulated in one year. We then presented several examples of physics feasibility studies performed with the fast simulation package developed for STCF. The examples are not intended to be exhaustive; instead, they are provided to illustrate the STCF capabilities to fulfill this physics program. The MC studies indicate that STCF opens the possibility to investigate \(\eta\) decays with excellent sensitivity and may make the observation of rare \(\eta\) decays feasible. Actually, the above studies also suggest that the available 10 billion \(J/\psi\) events [37] can already yield a series of measurements, such as \(\eta\to 2\pi\) and \(\eta\to l^{+}l^{-}\pi^{0}\), with accuracy competitive with the current world averages.

**ACKNOWLEDGMENTS**

We thank the Hefei Comprehensive National Science Center for their strong support of the STCF key technology research project. This work is supported by the National Natural Science Foundation of China (NSFC) under Contracts No. 12005195 and No. 12225509, the National Key R&D Program of China under Contract No. 2022YFA1602200, the International Partnership Program of the Chinese Academy of Sciences under Grant No. 211134KYSB20200057, and the Wuhan Scientific Research Project under Contract No. 20231250048.
2303.16063
Fractal geometry of the PAM in 2D and 3D with white noise potential
We study the parabolic Anderson model (PAM) \begin{equation} {\partial \over \partial t}u(t,x) =\frac{1}{2}\Delta u(t,x) + u(t,x)\xi(x), \quad t>0, x\in \mathbb{R}^d, \quad \text{and} \quad u(0,x) \equiv 1, \quad \forall x\in \mathbb{R}^d, \end{equation} where $\xi$ is spatial white noise on $\mathbb{R}^d$ with $d \in\{2,3\}$. We show that the peaks of the PAM are macroscopically multifractal. More precisely, we prove that the spatial peaks of the PAM have infinitely many distinct values and we compute the macroscopic Hausdorff dimension (introduced by Barlow and Taylor) of those peaks. As a byproduct, we obtain the exact spatial asymptotics of the solution of the PAM. We also study the spatio-temporal peaks of the PAM and show their macroscopic multifractality. Some of the major tools used in our proof techniques include paracontrolled calculus and tail probabilities of the largest point in the spectrum of the Anderson Hamiltonian.
Promit Ghosal, Jaeyun Yi
2023-03-28T15:43:10Z
http://arxiv.org/abs/2303.16063v1
# Fractal geometry of the parabolic Anderson model in 2D and 3D with white noise potential

###### Abstract.

We study the parabolic Anderson model (PAM) \[\begin{cases}\frac{\partial}{\partial t}u(t,x)=\frac{1}{2}\Delta u(t,x)+u(t,x)\xi(x),\quad t>0,x\in\mathds{R}^{d},\\ u(0,x)\equiv 1,\quad x\in\mathds{R}^{d},\end{cases}\] where \(\xi\) is spatial white noise on \(\mathds{R}^{d}\) with \(d\in\{2,3\}\). We show that the peaks of the PAM are macroscopically multifractal. More precisely, we prove that the spatial peaks of the PAM have infinitely many distinct values and we compute the macroscopic Hausdorff dimension (introduced by Barlow and Taylor [10, 11]) of those peaks. As a byproduct, we obtain the exact spatial asymptotics of the solution of the PAM. We also study the spatio-temporal peaks of the PAM and show their macroscopic multifractality. Some of the major tools used in our proof techniques include paracontrolled calculus and tail probabilities of the largest point in the spectrum of the _Anderson Hamiltonian_.

_Keywords:_ Parabolic Anderson model, Anderson Hamiltonian, macroscopic Hausdorff dimension, paracontrolled Calculus. _AMS 2020 subject classification:_ Primary. 60H15; Secondary. 35R60, 60K37.

###### Contents

* 1 Introduction
* 1.1 Proof Ideas
* 1.2.1 Acknowledgement
* 2 Spectrum of the Anderson Hamiltonian
* 3 Feynman-Kac Representation
* 4 Existence of transition kernel & its estimate
* 4.1 Existence of the transition kernel
* 4.2 Transition kernel estimate
* 5 Asymptotic bounds for the PAM
* 5.1 Bound on the enhanced noise
* 5.2 Asymptotics of PAM started from constant initial data
* 6 Spatial Multifractality and Asymptotics of the PAM: Proof of Theorem 1.1
* 6.1 Proof of the lower bound in Theorem 1.1
* 6.2 Proof of the upper bound in Theorem 1.1
* 6.3 Spatial asymptotics of the PAM
* 7 Spatio-temporal Multifractality: Proof of Theorem 1.2
* 7.1 Proof of the lower bound in Theorem 1.2
* 7.2 Proof of the upper bound in Theorem 1.2
* A Besov space and paracontrolled generator
* A.1 Some properties of the Besov-Holder continuous distributions

## 1. Introduction

We consider the parabolic Anderson model on \(\mathds{R}^{d}\) with \(d\in\{2,3\}\)

\[\begin{cases}\frac{\partial}{\partial t}u(t,x)=\frac{1}{2}\Delta u(t,x)+u(t,x)\xi(x),\quad t>0,x\in\mathds{R}^{d},\\ u(0,x)\equiv 1,\quad x\in\mathds{R}^{d},\end{cases} \tag{1.1}\]

where the random potential \(\xi\) is spatial white noise on \(\mathds{R}^{d}\), a mean-zero Gaussian field with delta correlations between any two spatial points. The PAM is one of the prototypical frameworks for modelling the conduction of electrons in crystals filled with defects. There is a competition between the two terms appearing in the operator: while the Laplacian has eigenfunctions that are spread out over the whole space, depicting delocalized electron waves, the multiplication-by-\(\xi\) operator, which models the random defects, tends to concentrate the mass of the eigenfunctions in very small regions. A discrete version of the above Hamiltonian was introduced in a seminal paper of Anderson [1], where he showed that the bottom part of the spectrum consists of localized eigenfunctions. This phenomenon is often termed _Anderson localization_, and it has triggered an enormous amount of research activity over the last several decades (see [1] for detailed references).
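Before turning to the solution theory, let us recall the standard scaling heuristic behind the restriction on the dimension (a well-known folklore computation, not a result of the present paper). If \(u\) solves (1.1) and \(v(t,x):=u(\lambda^{2}t,\lambda x)\), then

\[\partial_{t}v=\tfrac{1}{2}\Delta v+\lambda^{2}v(t,x)\,\xi(\lambda x)\overset{d}{=}\tfrac{1}{2}\Delta v+\lambda^{2-\frac{d}{2}}\,v\,\tilde{\xi},\]

where \(\tilde{\xi}\) is again a spatial white noise, since white noise satisfies \(\xi(\lambda\,\cdot)\overset{d}{=}\lambda^{-d/2}\xi(\cdot)\). The effective noise strength \(\lambda^{2-d/2}\) vanishes as \(\lambda\to 0\) precisely when \(d<4\); the equation is thus subcritical for \(d\leq 3\), critical at \(d=4\), and supercritical for \(d>4\).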
The solution theory of (1.1) is obtained by using a mollified version of the noise \(\xi_{\epsilon}\) minus a correction \(c_{\epsilon}=\frac{1}{2\pi}\log\frac{1}{\epsilon}\), and it is proved that the solution \(u_{\epsilon}\) of the PAM with potential \(\xi_{\epsilon}-c_{\epsilon}\) has a limit as \(\epsilon\to 0\). It was first constructed on the torus \(\mathds{T}^{2}\) by Hairer [14] using regularity structures and by Gubinelli, Imkeller and Perkowski [15] using the framework of paracontrolled calculus. Later Hairer and Labbé extended the solution theory to the whole of \(\mathds{R}^{2}\) in [14] and furthermore to the whole of \(\mathds{R}^{3}\) in [14]. With some particular choices of random potential, the PAM admits an intriguing concentration property for its tall peaks on large space-time scales, which is often referred to as _intermittency_. A vast body of previous work on the PAM on \(\mathds{Z}^{d}\) with i.i.d. potential, and on \(\mathds{R}^{d}\) with regular potential, has revealed that the solution of the PAM is highly concentrated on a few small islands that are far from each other and carry most of the total mass of the solution. This phenomenon can be attributed to the following spectral representation in terms of the eigenvalues \(\lambda_{1}\geq\lambda_{2}\geq\cdots\) and a corresponding \(L_{2}\)-orthonormal basis of eigenfunctions \(e_{1},e_{2},e_{3},\ldots\) of \(\frac{1}{2}\Delta+\xi\), \[u(t,x)=\sum_{n}e^{t\lambda_{n}}e_{n}(x)e_{n}(0). \tag{1.2}\] From this representation, the intermittency of the system comes as a consequence of Anderson localization, which forces the leading eigenfunctions \(e_{1},e_{2},\ldots\) to be concentrated in small islands. This phenomenon has been proved inside large centered boxes in a few instances, including the case where \(\xi\) is an i.i.d. potential on \(\mathds{Z}^{d}\) with double exponential tails [1]. See [1] for the past developments on the PAM. In the case of white noise potential, the phenomenon of intermittency or Anderson localization makes sense for dimensions \(d=1,2,3\). The one dimensional case is well understood due to three beautiful works by Laure Dumaz and Cyril Labbé [1, 1, 2]. There is no known solution theory for \(d\geq 4\) since the PAM with white noise potential is scaling-critical/supercritical in those cases. As we have mentioned earlier, the cases \(d=2,3\) are dealt with via regularity structures, as developed by Hairer, or via the paracontrolled calculus of Gubinelli and Perkowski. Intermittency is intimately tied with macroscopic fractality, which was studied in [13, 14] for a large collection of parabolic stochastic PDEs including the \((1+1)\)-d stochastic heat equation with multiplicative space-time white noise. They showed that when intermittency holds, the peaks of those stochastic PDEs form complex macroscopic multifractal structures. More precisely, their results show that the macroscopic Hausdorff dimension (introduced by Barlow and Taylor [1, 2], see Definition B.1) of the tall peaks takes distinct and nontrivial values as the level of the peaks varies, a property which symbolizes multifractality. The same phenomenon does not hold in the case of Brownian motion, where the tall peaks demonstrate a constant Hausdorff dimension (see [14, Theorem 1.4]) along a different length scale. In a recent work, [15] showed that the sizes of the tall peaks in boxes of width \(t^{\alpha}\) for \(\alpha\in(0,1)\) and of the deep valleys of the parabolic Anderson model in \(2\) dimensions are asymptotically the same.
They have also commented that a similar result is expected for the PAM in \(3\) dimensions. This property is in apparent contradiction with the intermittency property of the PAM. In this paper, we seek to study the fractality of the PAM. Our main theorems, stated below, show that the spatial peaks (Theorem 1.1) and the spatio-temporal peaks (Theorem 1.2) of the PAM are macroscopically multifractal (see Section B for the definition) for \(d=2,3\). Before proceeding to the main statement of those results, we introduce a few notations. For \(\alpha,\beta,v,t>0\), define the set of peaks \[\mathcal{P}^{d}_{t}(\alpha):=\left\{x\in\mathds{R}^{d}\,:\,u(t,x)\geq e^{\alpha t(\log|x|)^{\frac{2}{4-d}}}\right\},\] and \[\mathcal{P}^{d}(\beta,v):=\left\{(e^{t/v},x)\in(e,\infty)\times\mathds{R}^{d}\,:\,u(t,x)\geq e^{\beta t^{\frac{6-d}{4-d}}}\right\}.\] We also introduce \[\mathfrak{c}_{d}:=\frac{8}{d^{\frac{d}{2}}(4-d)^{2-\frac{d}{2}}\kappa_{d}^{4}}, \tag{1.3}\] where \[\kappa_{d}:=\sup_{f\in H^{1}(\mathds{R}^{d})}\frac{\|f\|_{L^{4}(\mathds{R}^{d})}}{\|\nabla f\|_{L^{2}(\mathds{R}^{d})}^{d/4}\|f\|_{L^{2}(\mathds{R}^{d})}^{1-d/4}}.\] Our first result, stated below, finds the macroscopic Hausdorff dimension (denoted \(\mathrm{Dim}_{\mathrm{H}}[\cdot]\)) of the peaks of the PAM for \(d=2,3\) in the spatial direction for all large times \(t\). Furthermore, it also finds the asymptotic shape of the peaks in the spatial direction for any fixed large \(t\). **Theorem 1.1** (**Spatial Multifractality and Asymptotics of the PAM)**.: _For \(\alpha>0\), there exists \(t_{0}=t_{0}(\alpha,d)>0\) such that for all \(t\geq t_{0}\), we have_ \[\mathrm{Dim}_{\mathds{H}}[\mathcal{P}^{d}_{t}(\alpha)]=(d-\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d})\lor 0,\quad\text{a.s.} \tag{1.4}\] _In addition, there exists \(t_{1}=t_{1}(d)>0\) such that for all \(t>t_{1}\),_ \[\limsup_{|x|\to\infty}\frac{\log_{+}u(t,x)}{(\log|x|)^{\frac{2}{4-d}}}\overset{a.s.}{=}\left(\frac{d}{\mathfrak{c}_{d}}\right)^{\frac{1}{2-d/2}}t. \tag{1.5}\] For \(d=2\), the long time asymptotics of the solution of the PAM had been found in [15]. They have shown that \(\sup_{x}\log u(t,x)\) is approximately equal to \(\frac{2t}{\mathfrak{c}_{2}}\) as \(t\) gets larger when the initial data is Dirac delta. Our result shows that the tall peaks of \(\log u(t,x)\) in the spatial direction take the shape of \(\frac{2t}{\mathfrak{c}_{2}}\log|x|\) even for finite values of \(t\). For the one dimensional PAM, similar results were proven by Xia Chen [11] using the moment asymptotics. However, the proof techniques for \(d=1\) break down in the case of \(d=2,3\) since the moments of the PAM in dimension larger than \(1\) blow up in finite time. This poses a serious technical difficulty which we are able to circumvent in this paper by introducing new techniques. Our next result shows the macroscopic Hausdorff dimension of the spatio-temporal peaks of the PAM for \(d=2,3\). **Theorem 1.2** (**Spatio-Temporal Multifractality of the PAM**).: _For every \(\beta>0\) and \(v>0\), we have_ \[\mathrm{Dim}_{\mathrm{H}}[\mathcal{P}^{d}(\beta,v)]=(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d})\lor d,\quad\text{a.s.} \tag{1.6}\] Macroscopic fractal dimensions of the spatio-temporal peaks of parabolic stochastic PDEs with multiplicative white noise had been investigated by Khoshnevisan, Kim and Xiao [14]. This class of stochastic PDEs contains the \((1+1)\)-dimensional stochastic heat equation with multiplicative spatio-temporal white noise.
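For concreteness, substituting \(d=2\) and \(d=3\) into the exponents \(\frac{2}{4-d}\), \(\frac{6-d}{4-d}\) and into (1.4), (1.6) yields the following special cases (pure arithmetic, recorded here for the reader's convenience):
\[d=2:\quad\mathcal{P}^{2}_{t}(\alpha)=\big\{x:u(t,x)\geq|x|^{\alpha t}\big\},\qquad\mathrm{Dim}_{\mathrm{H}}[\mathcal{P}^{2}_{t}(\alpha)]=(2-\alpha\,\mathfrak{c}_{2})\lor 0,\]
\[d=3:\quad\mathcal{P}^{3}_{t}(\alpha)=\big\{x:u(t,x)\geq e^{\alpha t(\log|x|)^{2}}\big\},\qquad\mathrm{Dim}_{\mathrm{H}}[\mathcal{P}^{3}_{t}(\alpha)]=(3-\alpha^{1/2}\mathfrak{c}_{3})\lor 0,\]
while the spatio-temporal peaks of Theorem 1.2 have height \(e^{\beta t^{2}}\) for \(d=2\), with dimension \((3-\beta v\,\mathfrak{c}_{2})\lor 2\), and height \(e^{\beta t^{3}}\) for \(d=3\), with dimension \((4-\beta^{1/2}v\,\mathfrak{c}_{3})\lor 3\).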
Recently the macroscopic fractal dimension of the peaks and valleys of the \((1+1)\)-d Kardar-Parisi-Zhang (KPZ) equation has been found in [13, 1]. The case of the \((2+1)\)-dimensional stochastic heat equation with spatio-temporal white noise remained completely unclear since, so far, a solution theory was only known in the sub-critical regime [1, 2]. Although the solution theory of the parabolic Anderson model in \((2+1)\)-d and \((3+1)\)-d is well studied by now, the depiction of the macroscopic fractal structures in those cases was missing. Theorem 1.2 fills this gap by showing the multifractality of spatio-temporal peaks for the higher dimensional PAM. Multifractality of the peaks of intermittent systems has been discussed on many occasions in the previous literature, including [13] in the context of turbulence and [15] for the stochastic Allen-Cahn equation with multiplicative forcing. [14, Theorem 1.1] showed that the spatio-temporal peaks of the \((1+1)\)-dimensional stochastic heat equation (SHE) with space-time white noise form multifractals with peaks of height \(e^{\beta t}\) for every \(\beta>0\). This result leverages the (moment) intermittency of the \((1+1)\)-d SHE, which means that the moments \(\mathrm{E}[u(t,x)^{p}]\) of the solution behave as \(\exp(\gamma(p)t)\), where \(p\mapsto\gamma(p)\) is a strictly convex function (see [1]). Indeed, the proof of [14, Theorem 1.1] utilized this exponential moment growth to obtain the tail estimates of the solution. However, the moments of the parabolic Anderson model blow up in the \((2+1)\)-d and \((3+1)\)-d cases. As a result, the previous approach based on moment intermittency breaks down in those two cases. We instead use the asymptotics of the spectrum of the _Anderson Hamiltonian_ and the Feynman-Kac representation of the PAM built using the theory of para-controlled distributions [13]. Theorem 1.2 shows that even though there is no moment intermittency, the spatio-temporal tall peaks of the solution to the \((2+1)\)-d (resp. \((3+1)\)-d) PAM exhibit multifractality of order \(e^{\beta t^{2}}\) (resp. \(e^{\beta t^{3}}\)), which displays the chaotic nature of the high dimensional multiplicative noise. In a recent work [13], the first author of this paper and his collaborators have introduced the idea of _finite time intermittency_ for the PAM in higher dimension with asymptotically singular noise. We believe that many of our proof techniques can be extended to study the macroscopic fractality of the peaks in those settings. ### Proof Ideas In this section, we discuss the proof ideas behind Theorem 1.1 and Theorem 1.2. Computing the fractal dimension of a given set is done in two steps: first showing a lower bound on the fractal dimension and then showing the matching upper bound. While showing an appropriate upper bound can pose serious challenges, proving a lower bound on the fractal dimension very often requires more precise insights into the geometry of the associated model. The case of the PAM in higher dimension is no exception to this folklore. One of the major challenges in showing both upper and lower bounds on the fractal dimension is to control the tail probabilities of the maximum of the PAM solution over compact sets. Since the moments of the \((d+1)\)-dimensional PAM blow up in finite time when \(d=2,3\), previous approaches based on moment asymptotics [14, 13] fail to work in this situation.
To get around this difficulty, we seek to use the connection between the solution of the PAM and the spectrum of the _Anderson Hamiltonian_ as formally stated in (1.2). Showing a lower bound on the fractal dimension of a set \(E\subset\mathds{R}^{d}\) requires showing that the associated set is '_sufficiently thick_'. See the illustration in Figure 1 for the definition of thickness of a set \(E\).

Figure 1. A set \(E\) is called \(\theta\)-thick for some \(\theta\in(0,1)\) if \(E\) contains points in each cell of side-length \(e^{n\theta}\) in the outer shell of \([-e^{n},e^{n}]^{d}\) for all large \(n\).

In order to show enough thickness of \(E\), we first embed \(E\) into a large \(d\)-dimensional box and then divide the large box into smaller boxes. It is then sufficient to show that there are enough such small boxes which carry points of \(E\), or rather that the probability of the points of \(E\) escaping most of those small boxes is close to \(0\). Controlling this probability will require two important ingredients which were lacking before: \((a)\) near independence between the solutions of the PAM restricted to any two such smaller boxes, and \((b)\) an upper bound on the probability that the PAM is bounded above by a large value. On the other hand, following the definition of macroscopic Hausdorff dimension from Definition B.1, \(\mathrm{Dim}_{\mathrm{H}}(E)\) is upper bounded by \(\rho\) if the \(\rho\)-dimensional Hausdorff content of \(E\), controlled here through \(\mathds{E}[\sum_{n=1}^{\infty}\nu_{\rho}^{n}(E)]\), is finite. See the paragraph before Definition B.1 for \(\nu_{\rho}^{n}(E)\). Bounding the above expected value requires one more important ingredient: \((c)\) a bound on the tail probability of the supremum of the solution of the PAM in a small ball. We obtain those ingredients via combinations of different tools that we develop throughout the paper. Developing these tools and carrying out the rest of the proof of our main results can be broadly divided into three steps: the first step is to show appropriate bounds on the solution of the PAM in terms of the spectrum of the Anderson Hamiltonian, the second step is to derive tail probabilities of the solution, and the third is to integrate the first two steps with a series expansion of the solution (coming from the Feynman-Kac representation of the PAM in higher dimension) to complete the proof. Below we discuss each step in more detail. Figure 2 gives a schematic representation of where the different tools are introduced and how they are combined to prove Theorems 1.1 and 1.2. **Step 1.** We derive appropriate bounds on the solution of the PAM in Section 5.1. We mainly use three tools to show such bounds on the solution. These three tools are, respectively, the Feynman-Kac representation of the solution of the PAM, transition kernel estimates, and appropriate bounds on the noise. The Feynman-Kac representation of the solution is derived in Theorem 3.2, which shows that the solution \(u_{L,y}^{\phi}\) of (1.1) started from the initial data \(\phi\) and restricted to \(y+[-\frac{L}{2},\frac{L}{2}]^{d}\) with Dirichlet boundary condition can be written as \[u_{L,y}^{\phi}(t,x)=\mathds{E}\Big[\exp\left(\int_{0}^{t}(Z_{L}^{y}+\eta Y_{L}^{y})(X_{s})ds+(Z_{L}^{y}+Y_{L}^{y})(X_{0})-(Z_{L}^{y}+Y_{L}^{y})(X_{t})\right)\phi(X_{t})\mathds{1}^{X}\Big] \tag{1.7}\]
where \(\eta>0\) is a small number, \(\mathds{1}^{X}:=\mathds{1}_{X_{[0,t]}\subset y+[-\frac{L}{2},\frac{L}{2}]^{d}}\) and \(X_{t}\) is a diffusion defined by \[X_{t}=x+\int_{0}^{t}\nabla(Z_{L}^{y}+Y_{L}^{y})(X_{s})ds+B_{t}\] such that \(B_{t}\) is a Brownian motion independent of \(Z_{L}^{y}\) and \(Y_{L}^{y}\), where \(Z_{L}^{y}:=(1-\frac{1}{2}\Delta)^{-1}\xi_{L}^{y}\in\mathscr{C}^{\frac{1}{2}-}\) and \(Y_{L}^{y}\) solves \[(\eta-\frac{1}{2}\Delta)Y_{L}^{y}=\frac{1}{2}|\nabla Z_{L}^{y}|^{2}+\nabla Y_{L}^{y}\cdot\nabla Z_{L}^{y}+\frac{1}{2}|\nabla Y_{L}^{y}|^{2}.\] These two random processes are introduced in Proposition 3.1 of Section 3. The reason that the expression (1.7) differs from the classical Feynman-Kac representation is the roughness of the noise \(\xi\) (see (3.6) and the following discussion). We show the equivalence between the classical form of the Feynman-Kac representation and the modified form by using Girsanov's theorem along similar lines as in [10]. Similar results have been shown for \(d=2\) by [11]. However, the \(d=3\) case requires handling significant technical difficulties, which are overcome in the present paper using similar tools as in [12] based on para-controlled distributions. See Remark 3.3 for more details. The next main tool is the bound on the transition density of the diffusion \(X_{t}\). To this end, the transition density of \(X_{t}\) is a solution of a Cauchy problem, as shown in (4.2) (see Theorem 4.5). Since \(\nabla(Z_{L}^{y}+Y_{L}^{y})\) is distribution valued, a non-trivial fixed point argument is required to show that the transition density kernel exists. We first lift \(\nabla(Z_{L}^{y}+Y_{L}^{y})\) into the space of rough distributions and then employ tools from para-controlled calculus to achieve this in Section 3. A similar problem had been considered before in [12]. Along the way of proving our result, we have extended their result (especially [12, Theorem 3.10]) to cover the case of singular initial data. Once the existence is shown, the upper and lower bounds on the transition kernel are derived using the ideas of [11]. The next main tool is a bound on the mollified noise \(\xi_{\epsilon}\), or more precisely on \((1-\frac{1}{2}\Delta)^{-1}\xi_{\epsilon}\), uniformly in \(\epsilon\), using hyper-contractivity of Gaussian noise (see Propositions 5.2 and 5.3). **Step 2.** The second step is to derive the tail probabilities of the solution of the PAM in the \((d+1)\)-dimensional case where \(d=2,3\). This is shown in Propositions 6.1 and 6.3. More precisely, we find the tail probability of the supremum value of the PAM where the supremum is taken over a finite set of points. The main idea behind its proof lies in using local 'representations' of the solution of the PAM. Construction of such proxies is done via the Feynman-Kac representation obtained in Theorem 3.2. In more concrete terms, the diffusion \(X_{t}\) in the Feynman-Kac formula can be restricted to a set of disjoint but space-filling boxes to write \(u(t,x)\) as a sum of local representations like \(u_{L,y}^{\phi}\) of (1.7). This is termed the series expansion of the solution of the PAM on \(\mathds{R}_{\geq 0}\times\mathds{R}^{d}\). See Lemma 5.6 for more details. By construction, those local representations of \(u(t,x)\) are independent when they are taken from two separate far-away boxes. Furthermore, they can be bounded from above and below by a functional of the largest point in the spectrum of the Anderson Hamiltonian, as shown in Proposition 5.5.
Finally, the tail probabilities of the solution of the PAM restricted to finite boxes are found in terms of the tail probabilities of the largest point in the spectrum of the Anderson Hamiltonian and the tail probabilities of \((1-\frac{1}{2}\Delta)^{-1}\xi_{\epsilon}\) obtained from Lemma 5.2. Tail probabilities of the largest point in the spectrum have been investigated on many occasions in the past (see Proposition 2.2). **Step 3.** The third step is to complete the proof of Theorems 1.1 and 1.2 using the tail probabilities and the series expansion of \(u(t,x)\) of Lemma 5.6. This is mainly done in Sections 6 and 7. As we have indicated earlier, the proof of the lower bound on the fractal dimension goes by showing that the probability of the maximum of \(u(t,x)\) over a finite set of points (satisfying the conditions in Proposition 6.1 or 7.1) being less than a certain value decays fast to \(0\). Since the _local representations_ of \(u(t,x)\) around those points (as described in **Step 2**) can be made independent and the tail probabilities of the local representations are determined through Propositions 5.5 and 2.2, these two tools are combined to bound the lower tail probability of the maximum of \(u(t,x)\), which finally leads to the lower bounds on the Hausdorff dimensions in Theorems 1.1 and 1.2. The upper bound parts of Theorems 1.1 and 1.2 are proved using the tools from **Step 1** and **Step 2**. These tools provide the upper tail probability of the maximum of \(u(t,x)\) over compact sets, which is used to control the expected value of the Hausdorff contents of the given level sets. **Acknowledgement**.: PG was supported by NSF grant DMS-2153661 and JY was supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1401-51. Part of the research of this paper was done when JY was visiting the Department of Mathematics of MIT during the summer of 2022. ## 2. Spectrum of the Anderson Hamiltonian In this section, we discuss some preliminary facts about the Anderson Hamiltonian and the parabolic Anderson model which will be used throughout the rest of the paper. Along the way, we introduce notations, provide the context of their use in later sections, and explain their roles in proving the main results of this paper. We define \(Q_{L}(d):=[-\frac{L}{2},\frac{L}{2}]^{d}\subset\mathds{R}^{d}\). We often use the symbol \(Q_{L}\) in place of \(Q_{L}(d)\) when the value of \(d\) is clear from the context. For any \(y\in\mathds{R}^{d}\), we set \(Q_{L}^{y}(d):=y+Q_{L}(d)\). We consider the PAM with the Dirichlet boundary condition on \(Q_{L}^{y}(d)\) started from initial data \(\phi\) and with _enhanced noise_ \(\xi_{L}\) as constructed in [21, Section 6]: \[\begin{cases}\frac{\partial}{\partial t}u_{L,y}^{\phi}(t,x)=\frac{1}{2}\Delta u_{L,y}^{\phi}(t,x)+u_{L,y}^{\phi}(t,x)\xi_{L,y}(x),\quad t>0,x\in Q_{L}^{y},\\ u(0,x)=\phi,\quad x\in Q_{L}^{y},\quad\text{and}\quad u_{L,y}^{\phi}\mid_{\partial Q_{L}^{y}}=0.\end{cases} \tag{2.1}\] Construction of \(\xi_{L}\) goes by first projecting the white noise on the Neumann space of the box and then taking the regularisation corresponding to a Fourier multiplier.

Figure 2. Flowchart of the proof of Theorems 1.1 and 1.2.
Fix any even function \(\tau\in C^{\infty}_{c}(\mathds{R}^{d},[0,1])\) and define \[\xi^{y}_{L,\varepsilon}=\sum_{k\in\mathds{N}^{d}}\tau\big(\frac{\varepsilon}{L}k\big)\langle\xi,\mathfrak{n}_{k,L}\rangle\mathfrak{n}_{k,L},\quad\mathfrak{n}_{k,L}(x):=2^{-\frac{1}{2}\#\{i\,:\,k_{i}=0\}}\mathds{1}(x\in Q^{y}_{L}(d))\big(\frac{2}{L}\big)^{d/2}\prod_{i=1}^{d}\cos\big(\frac{\pi}{L}k_{i}x_{i}\big).\] Theorem 6.7 of [20] shows that \(\xi^{y}_{L,\varepsilon}\) converges almost surely to the white noise \(\xi^{y}_{L}\in\mathscr{C}^{\alpha}\) as \(\varepsilon\) goes to \(0\) for \(\alpha<-\frac{d}{2}\), and the limit does not depend on the choice of \(\tau\). The solution of (2.1) can be realized as the weak limit of the following system where we replace \(\xi^{y}_{L}\) by \(\xi^{y}_{L,\varepsilon}\), i.e., \[\begin{cases}\frac{\partial}{\partial t}u^{\phi,y}_{L,\varepsilon}(t,x)=\frac{1}{2}\Delta u^{\phi,y}_{L,\varepsilon}(t,x)+u^{\phi,y}_{L,\varepsilon}(t,x)(\xi^{y}_{L,\varepsilon}(x)-c_{\varepsilon}),\quad t>0,x\in Q^{y}_{L},\\ u(0,x)=\phi,\quad x\in Q^{y}_{L}(d),\quad\text{and}\quad u^{\phi,y}_{L,\varepsilon}\mid_{\partial Q^{y}_{L}(d)}=0,\end{cases} \tag{2.2}\] where \(c_{\varepsilon}\) denotes the renormalization constant which we set as \(c_{\varepsilon}=\frac{1}{2\pi}\log\frac{1}{\varepsilon}\). It has been shown in Section 2 of [20] that for all \(T>0\), \(u^{\phi,y}_{L,\varepsilon}(t,x)\) converges to \(u^{\phi}_{L,y}\) in \(C([0,T],B^{\varrho,\beta}_{\infty,\infty}(Q^{y}_{L}(d)))\), uniformly on \([0,T]\times Q^{y}_{L}(d)\) in probability. Here \(B^{\varrho,\beta}_{\infty,\infty}(Q^{y}_{L}(d))\) is the Dirichlet Besov space defined in [20, Section 4]. Our next goal is to introduce the spectral representation of \(u^{\phi}_{L,y}\) in \(B^{\varrho,\beta}_{\infty,\infty}(Q^{y}_{L}(d))\) in terms of the spectrum of the Anderson Hamiltonian. To this end, we recall the definition of the Anderson Hamiltonian operator from [20] and [19]. Theorem 5.4 of [20] characterized the spectrum of the Anderson Hamiltonian in the \(d=2\) case, whereas [19, Theorem 1] did the same for \(d=3\). We summarize their results below. Denote the enhancement of \(B^{\varrho,\beta}_{\alpha,\infty}(Q^{y}_{L}(d))\) by \(\mathfrak{X}^{\alpha}(Q^{y}_{L}(d))\) and their respective Neumann extensions by \(B^{\varrho,\beta}_{\alpha,\infty,\mathfrak{n}}(Q^{y}_{L}(d))\) and \(\mathfrak{X}^{\alpha}_{\mathfrak{n}}(Q^{y}_{L}(d))\). Let \(L>0\), \(y\in\mathds{R}^{d}\) and \(\xi\) be a \(d\)-dimensional spatial white noise. In dimension \(d\in\{2,3\}\), there exists \(\mathscr{H}_{\xi}\), densely defined on \(L^{2}(Q^{y}_{L}(d))\), a closed and self-adjoint operator given by \[\mathscr{H}_{\xi}u=\frac{1}{2}\Delta u+\xi u,\] with values in \(L^{2}(Q^{y}_{L}(d))\). \(\mathscr{H}_{\xi}\) has a pure point spectrum consisting of eigenvalues \(\boldsymbol{\lambda}_{1}(Q^{y}_{L}(d))>\boldsymbol{\lambda}_{2}(Q^{y}_{L}(d))\geq\boldsymbol{\lambda}_{3}(Q^{y}_{L}(d))\geq\cdots\). We let \(v^{y}_{n,L}\) be an eigenvector with eigenvalue \(\boldsymbol{\lambda}_{n}(Q^{y}_{L}(d))\) such that \(\{v^{y}_{n,L}\}_{n\in\mathds{N}}\) is an orthonormal basis of \(L^{2}(Q^{y}_{L}(d))\). Due to the lack of regularity of \(\xi\), the product \(\xi u\) is not well-defined in a classical sense. For the rigorous definition of the product, we refer the readers to [20, Theorem 5.4] and [19, Theorem 1].
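Before stating the spectral representation, it is worth recording the formal mechanism behind it (a one-line check, suppressing the sub- and superscripts on \(v^{y}_{n,L}\) and \(\boldsymbol{\lambda}_{n}(Q^{y}_{L}(d))\), and assuming term-by-term differentiation, which is justified in the references cited in the proof below): since \(\mathscr{H}_{\xi}v_{n}=\boldsymbol{\lambda}_{n}v_{n}\),
\[\frac{\partial}{\partial t}\sum_{n\in\mathds{N}}e^{t\boldsymbol{\lambda}_{n}}\langle v_{n},\phi\rangle v_{n}=\sum_{n\in\mathds{N}}\boldsymbol{\lambda}_{n}e^{t\boldsymbol{\lambda}_{n}}\langle v_{n},\phi\rangle v_{n}=\mathscr{H}_{\xi}\Big(\sum_{n\in\mathds{N}}e^{t\boldsymbol{\lambda}_{n}}\langle v_{n},\phi\rangle v_{n}\Big),\]
and at \(t=0\) the series reduces to \(\sum_{n}\langle v_{n},\phi\rangle v_{n}=\phi\), so the series formally solves (2.1).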
**Lemma 2.1** (**Spectral representation**, Lemma 2.11 and Theorem 2.12 of [20]).: _For \(L,t>0\), \(y\in\mathds{R}^{d}\) and \(\phi\in L^{2}(Q^{y}_{L}(d))\), we have_ \[u^{\phi}_{L,y}(t,\cdot)=\sum_{n\in\mathds{N}}e^{t\boldsymbol{\lambda}_{n}(Q^{y}_{L}(d))}\langle v^{y}_{n,L},\phi\rangle v^{y}_{n,L}. \tag{2.3}\] _Moreover, this representation holds for \(\phi=\delta_{z}\). In other words,_ \[u^{\delta_{z}}_{L,y}(t,x)=\sum_{n\in\mathds{N}}e^{t\boldsymbol{\lambda}_{n}(Q^{y}_{L}(d))}v^{y}_{n,L}(z)v^{y}_{n,L}(x),\quad\text{for }x,z\in Q^{y}_{L}(d). \tag{2.4}\] Proof.: We refer to [20, Lemma 2.11, Theorem 2.12] for the proof in the case of \(d=2\). For \(d=3\), the proof follows from a similar argument as when \(d=2\), combining [19, Theorem 1.1] and Theorem 4.11. Eigenvalues of the Anderson Hamiltonian play very important roles in proving our main results. In Section 5, we discuss how the supremum of the PAM restricted to a growing rectangle can be described in terms of the largest eigenvalue of the Anderson Hamiltonian. In the next three results, we record some useful properties of the eigenvalues of \(\mathscr{H}_{\xi}\), namely tail probabilities, monotonicity and independence. These results will be instrumental in obtaining asymptotics of the solution of the PAM in Section 5. The first result is about the tail probabilities of the eigenvalues of the Anderson Hamiltonian. We refer to [23, Theorem 2.17] for the \(d=2\) case and [14, Theorem 2] for the \(d=3\) case. **Proposition 2.2**.: _Fix \(\epsilon\in(0,1)\). There exist \(c_{2}>c_{1}>0\) and \(s_{0}\) such that for all \(L\geq 1\) and \(s\geq s_{0}\)_ \[\mathds{P}\left(\boldsymbol{\lambda}_{1}(Q_{L}^{y}(d))\leq s\right)\leq\exp\left(-c_{2}s^{d/2}e^{d\log L-(1+\epsilon)\mathfrak{c}_{d}s^{2-d/2}}\right), \tag{2.5}\] \[\mathds{P}\left(\boldsymbol{\lambda}_{1}(Q_{L}^{y}(d))\geq s\right)\leq c_{1}s^{\frac{d}{2}}e^{d\log L-(1-\epsilon)\mathfrak{c}_{d}s^{2-d/2}}, \tag{2.6}\] _where \(\mathfrak{c}_{d}\) is defined in (1.3)._ The second result says that the eigenvalues of the Anderson Hamiltonian grow monotonically as the size of the box grows. For the proof of this result, we refer to [23, Theorem 8.6] for the \(d=2\) case and [14, Proposition 2.1] for the \(d=3\) case. **Proposition 2.3** (**Monotonicity of eigenvalues**).: _Let \(L\geq r\geq 1\). For all \(x,y\in\mathds{R}^{d}\) such that \(Q_{r}^{y}(d)\subseteq Q_{L}^{x}(d)\),_ \[\boldsymbol{\lambda}_{1}(Q_{r}^{y}(d))\leq\boldsymbol{\lambda}_{1}(Q_{L}^{x}(d)). \tag{2.7}\] The final result of this section is about the domain Markov property of the spectrum of the Anderson Hamiltonian with white-noise potential, i.e., the spectra of \(\mathscr{H}_{\xi}\) restricted to two disjoint regions are independent of each other. We refer to Lemma 7.4 of [23] for the proof. **Proposition 2.4** (**Independence of eigenvalues**).: _Suppose that \(y_{1},...,y_{m}\in\mathds{R}^{d}\) satisfy \(\min_{1\leq i\neq j\leq m}|y_{i}-y_{j}|\geq 3L\) and \(x_{i}\in Q_{L}^{y_{i}}\) for each \(i\). For \(1\leq i\neq j\leq m\), \(\boldsymbol{\lambda}_{n}(Q_{L}^{y_{i}}(d))\) and \(\boldsymbol{\lambda}_{n}(Q_{L}^{y_{j}}(d))\) are independent._ ## 3. Feynman-Kac Representation This section is devoted to proving the Feynman-Kac representation of the Anderson Hamiltonian in \(3\) dimensions. The main purpose of deriving such a Feynman-Kac representation is to obtain useful upper bounds on the tail probabilities of the eigenvalues of the Anderson Hamiltonian.
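As a heuristic aside that may help orient the reader, the two bounds of Proposition 2.2 already pin down the typical size of the top eigenvalue: both exponents change sign when \(\mathfrak{c}_{d}s^{2-d/2}\approx d\log L\), i.e. at
\[\boldsymbol{\lambda}_{1}(Q^{y}_{L}(d))\approx\Big(\frac{d\log L}{\mathfrak{c}_{d}}\Big)^{\frac{2}{4-d}}\quad\text{for large }L.\]
Combined with the (formal) relation \(\log u(t,x)\approx t\,\boldsymbol{\lambda}_{1}\) suggested by Lemma 2.1, taking boxes at distance \(|x|\) from the origin (so that \(\log L\sim\log|x|\)) recovers exactly the growth exponent appearing in (1.5). This back-of-the-envelope computation is made rigorous in Sections 5 and 6.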
In the 2d case, [23, Theorem 2.17] provides a modified version of the Feynman-Kac representation using Girsanov's transformation, which has been used to derive tail probabilities. However, this approach works only when \(d=2\). We slightly refine the representation to cover the 3-dimensional case. Before we present the representation, we introduce an equation related to the noise. For the rest of this section, we only consider the case \(d=3\). **Proposition 3.1** (**Resolvent equation)**.: _Let \(L>0\), \(y\in\mathds{R}^{d}\), \(\alpha\in(\frac{2}{5},\frac{1}{2})\) and \(\xi_{L}^{y}\) be the spatial white noise on \(Q_{L}^{y}\). Set \(Z_{L}^{y}:=(1-\frac{1}{2}\Delta)^{-1}\xi_{L}^{y}\). Then there exists \(\eta_{L}>0\) such that for all \(\eta\geq\eta_{L}\) there exists a unique solution \(Y_{L}^{y}\in\mathscr{C}^{2\alpha}\) to the following resolvent equation_ \[(\eta-\frac{1}{2}\Delta)Y_{L}^{y}=\frac{1}{2}|\nabla Z_{L}^{y}|^{2}+\nabla Y_{L}^{y}\cdot\nabla Z_{L}^{y}+\frac{1}{2}|\nabla Y_{L}^{y}|^{2}. \tag{3.1}\] _Furthermore, if \(\{\xi_{L,\epsilon}^{y}\}_{\epsilon}\) is a mollification of \(\xi_{L}^{y}\) such that \(\xi_{L,\epsilon}^{y}\to\xi_{L}^{y}\) in \(\mathscr{C}^{\alpha-2}\), then \(Y_{L,\epsilon}^{y}\to Y_{L}^{y}\) in \(\mathscr{C}^{2\alpha}\)._ Before proceeding to the proof of the above proposition, we state below the main result of this section, which shows a Feynman-Kac representation of \(u_{L,\epsilon}^{\phi,y}\). **Theorem 3.2** (**Modified Feynman-Kac representation)**.: _For \(L,t>0\), \(y\in\mathds{R}^{d}\), \(\epsilon\in(0,1]\) and \(\phi\in C_{b}(Q_{L}^{y})\), we have_ \[u_{L,\epsilon}^{\phi,y}(t,x)=\mathds{E}_{\mathds{Q}_{L,\epsilon}^{x,y}}\left[\mathscr{D}_{L,\epsilon}^{y}(0,t)\phi(X_{t})\mathds{1}_{X_{[0,t]}\subset Q_{L}^{y}}\right],\quad\text{for }x\in Q_{L}^{y}, \tag{3.2}\] _where \(\mathscr{D}_{L,\epsilon}^{y}(r,t)\) for \(r,t\in\mathds{R}_{+}\) is defined by_ \[\mathscr{D}_{L,\epsilon}^{y}(r,t):=\exp\left(\int_{r}^{t}(Z_{L,\epsilon}^{y}+\eta Y_{L,\epsilon}^{y})(X_{s})ds+(Z_{L,\epsilon}^{y}+Y_{L,\epsilon}^{y})(X_{r})-(Z_{L,\epsilon}^{y}+Y_{L,\epsilon}^{y})(X_{t})\right) \tag{3.3}\] _with \(\eta>0\), and \(\mathds{Q}^{x,y}_{L,\epsilon}\) is the probability measure on \(C([0,\infty),\mathds{R}^{d})\) such that the coordinate process \(X_{t}\) satisfies \(\mathds{Q}^{x,y}_{L,\epsilon}\)-almost surely_ \[X_{t}=x+\int_{0}^{t}\nabla(Z^{y}_{L,\epsilon}+Y^{y}_{L,\epsilon})(X_{s})ds+B^{\prime}_{t},\quad t\geq 0, \tag{3.4}\] _for a Brownian motion \(B^{\prime}\). Moreover, if \(\eta=\eta_{L}\) as in Proposition 3.1, we have_ \[\lim_{\epsilon\to 0}\mathds{E}_{\mathds{Q}^{x,y}_{L,\epsilon}}\left[\mathscr{D}^{y}_{L,\epsilon}(0,t)\phi(X_{t})\mathds{1}_{X_{[0,t]}\subset Q^{y}_{L}}\right]=\mathds{E}_{\mathds{Q}^{x,y}_{L}}\left[\mathscr{D}^{y}_{L}(0,t)\phi(X_{t})\mathds{1}_{X_{[0,t]}\subset Q^{y}_{L}}\right]=u^{\phi,y}_{L}(t,x). \tag{3.5}\] Notice that the classical Feynman-Kac representation for the smooth mollification \(\xi^{y}_{L,\epsilon}\) of \(\xi^{y}_{L}\) is different from (3.2). Let \(L\in(0,\infty)\) and \(y\in\mathds{R}^{d}\). For \(\phi\in C_{b}(Q^{y}_{L})\), \(\epsilon>0\), and \((t,x)\in\mathds{R}_{+}\times Q^{y}_{L}\), the classical Feynman-Kac formula takes the form \[u^{\phi,y}_{L,\epsilon}(t,x)=\mathds{E}_{x}\left[\exp\left(\int_{0}^{t}(\xi^{y}_{L,\epsilon}-c_{\epsilon})(B_{s})ds\right)\phi(B_{t})\mathds{1}_{B[0,t]\subset Q^{y}_{L}}\right] \tag{3.6}\] where \(B[0,t]:=\{B(s):s\in[0,t]\}\).
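To see what (3.6) computes in the classical regime, here is a minimal Monte Carlo sketch in which a hypothetical smooth bounded potential \(V\) plays the role of \(\xi^{y}_{L,\epsilon}-c_{\epsilon}\) (with \(y=0\), \(\phi\equiv 1\) and \(d=2\); all numerical parameters are illustrative assumptions, and none of the renormalization issues discussed next arise for such a \(V\)):

```python
import numpy as np

rng = np.random.default_rng(1)
L, t, dt, n_paths = 4.0, 0.5, 1e-3, 20_000   # illustrative parameters
V = lambda x: np.exp(-np.sum(x**2, axis=-1)) # hypothetical smooth bounded potential

def u_fk(x0):
    """Monte Carlo average of exp(int_0^t V(B_s) ds) over Brownian paths from x0,
    killed on exiting the box Q_L, with phi identically 1 (cf. (3.6))."""
    x = np.tile(x0, (n_paths, 1)).astype(float)
    integral = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)     # indicator that the path stayed inside Q_L
    for _ in range(int(t / dt)):
        integral += V(x) * dt                # left-point Riemann sum of int V(B_s) ds
        x += np.sqrt(dt) * rng.standard_normal(x.shape)
        alive &= np.all(np.abs(x) <= L / 2, axis=-1)
    return np.mean(np.exp(integral) * alive)

print(u_fk(np.zeros(2)))
```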
Due to the low regularity of the spatial white noise \(\xi^{y}_{L}\), the \(L_{\infty}\)-norm of \(\xi^{y}_{L,\epsilon}-c_{\epsilon}\) blows up as \(\epsilon\to 0\). We deal with this difficulty by adopting the ideas from the proof of [13, Lemma 2.16, Theorem 2.17]. In a similar way to [13], we use Girsanov's transform and Proposition 3.1 to obtain a modified version of the Feynman-Kac representation. **Remark 3.3**.: Theorem 3.2 is a variant of [13, Theorem 2.17] in dimension \(3\). The difference between the Feynman-Kac representation of [13, Theorem 2.17] and the one in (3.2) is the absence of the term \(\frac{1}{2}|\nabla Y|^{2}\) (see (24) of [13]). Note that since the expected regularity of \(Y\) is \(1^{-}\), \(\frac{1}{2}|\nabla Y|^{2}\) is not controlled in the \(L_{\infty}\)-norm. Thus, we use a partial Girsanov transformation via solving the resolvent equation (3.1), which is different from the expression in (23) of [13]. Without loss of generality, we drop the superscript \(y\) and let \(y=0\) for convenience. All the results of this section can easily be extended to any \(y\in\mathds{R}^{d}\). We prove Theorem 3.2 after showing Proposition 3.1. To this end, note that the products \(|\nabla Z_{L}|^{2}\) and \(\nabla Y_{L}\cdot\nabla Z_{L}\) in Proposition 3.1 are ill-defined due to the lack of regularity. Indeed, for instance in \(d=3\), \(Z_{L}\in\mathscr{C}^{\frac{1}{2}^{-}}\) and the expected regularity of \(Y_{L}\) is \(1^{-}\). Thus the sum of the regularities of \(\nabla Z_{L}\) and \(\nabla Y_{L}\) is negative, which makes (3.1) ill-posed. In order to overcome this difficulty, we use the idea of _enhancing the noise_ following similar arguments as in [14, Section 6]. **Definition 3.4** (**Enhanced noise)**.: Let \(L>0\) and \(\varrho<1/2\). For \((a,b,\theta)\in\mathds{R}\times\mathds{R}\times\mathscr{C}^{2}\), define \(\mathfrak{Z}_{L}\) as \[\mathfrak{Z}_{L}:=\mathfrak{Z}_{L}(a,b,\theta):=(Z_{L},Z^{\boldsymbol{\gamma}}_{L}-a,Z^{\boldsymbol{\gamma}}_{L},Z^{\boldsymbol{\gamma}}_{L},Z^{\boldsymbol{\gamma}}_{L}-b,\nabla\mathcal{Q}_{L}\circ\nabla Z_{L}), \tag{3.7}\] where \(\mathcal{I}_{\eta}:=(\eta-\frac{1}{2}\Delta)^{-1}\), \[\begin{split} Z_{L}&:=\mathcal{I}_{\eta}(\theta),\quad Z^{\boldsymbol{\gamma}}_{L}:=\mathcal{I}_{\eta}(|\nabla Z_{L}|^{2}),\\ Z^{\boldsymbol{\gamma}}_{L}&:=\mathcal{I}_{\eta}(\nabla Z^{\boldsymbol{\gamma}}_{L}\cdot\nabla Z_{L}),\quad Z^{\boldsymbol{\gamma}}_{L}:=\mathcal{I}_{\eta}(\nabla Z^{\boldsymbol{\gamma}}_{L}\cdot\nabla Z_{L}),\\ Z^{\boldsymbol{\gamma}}_{L}&:=\mathcal{I}_{\eta}(|\nabla Z^{\boldsymbol{\gamma}}_{L}|^{2}),\end{split} \tag{3.8}\] and \[\mathcal{Q}_{L}:=\mathcal{I}(\nabla Z_{L}),\qquad\nabla\mathcal{Q}_{L}\circ\nabla Z_{L}:=(\partial_{i}(\mathcal{Q}_{L})^{j}\circ\partial_{i}Z_{L})_{i,j=1,2,3}.\] We define the space \(\mathcal{Z}^{\varrho}_{L}\) of enhanced noise as \[\mathcal{Z}^{\varrho}_{L}:=\mathrm{cl}_{\mathscr{H}^{\varrho}}\{\mathfrak{Z}_{L}(a,b,\theta):(a,b,\theta)\in\mathds{R}\times\mathds{R}\times\mathscr{C}^{2}\},\] where \(\mathrm{cl}_{\mathscr{H}^{\varrho}}\) denotes the closure with respect to the topology of \(\mathscr{H}^{\varrho}:=\mathscr{C}^{\varrho}_{\mathfrak{n}}\times\mathscr{C}^{2\varrho}_{\mathfrak{n}}\times\mathscr{C}^{3\varrho}_{\mathfrak{n}}\times\mathscr{C}^{\varrho+1}_{\mathfrak{n}}\times\mathscr{C}^{4\varrho}_{\mathfrak{n}}\times\mathscr{C}^{2\varrho-1}_{\mathfrak{n}}\) equipped with the usual norm. We call \(\mathfrak{Z}\) an enhancement of \(\theta\).
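For orientation, a heuristic regularity count in \(d=3\) explains both the renormalization constants \(a,b\) and the grading of \(\mathscr{H}^{\varrho}\) (we take for granted the Schauder gain of two derivatives for \(\mathcal{I}_{\eta}\) and the rule of thumb that a product of distributions is classically meaningful only when the sum of their regularities is positive):
\[\xi\in\mathscr{C}^{-\frac{3}{2}-}\;\Longrightarrow\;Z_{L}\in\mathscr{C}^{\frac{1}{2}-},\qquad\nabla Z_{L}\in\mathscr{C}^{-\frac{1}{2}-},\qquad|\nabla Z_{L}|^{2}:\;(-\tfrac{1}{2}-)+(-\tfrac{1}{2}-)<0,\]
so the second component of (3.7) must be renormalized by \(a\) and, after applying \(\mathcal{I}_{\eta}\), lands in \(\mathscr{C}^{2\varrho}\). Iterating this bookkeeping over the remaining products in (3.8), together with \(\mathcal{Q}_{L}=\mathcal{I}(\nabla Z_{L})\) (which gains two derivatives from \(\mathscr{C}^{\varrho-1}\) to \(\mathscr{C}^{\varrho+1}\)), produces the exponents \(\varrho,2\varrho,3\varrho,\varrho+1,4\varrho,2\varrho-1\) appearing in \(\mathscr{H}^{\varrho}\) with \(\varrho<\frac{1}{2}\).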
In the sequel, we fix \(\theta=\theta_{L,\epsilon}=\xi_{L,\epsilon}\) so that \(Z_{L,\epsilon}:=\mathcal{I}_{\eta}(\xi_{L,\epsilon})\), where \(\{\xi_{L,\epsilon}\}_{\epsilon\in(0,1]}\) is a mollification of the spatial white noise \(\xi_{L}\) restricted to \(Q_{L}\). The following theorem ensures that \(Z_{L}=(1-\frac{1}{2}\Delta)^{-1}\xi_{L}\) can be enhanced. **Theorem 3.5** (Theorem 6.12 of [15a]).: _Let \(\varrho<\frac{1}{2}\) and let \(\xi_{L}\) be the spatial white noise on \(Q_{L}\) on some probability space \((\Omega,\mathcal{F},\mathds{P}_{\xi})\). Then there exists a mollification \(\{\xi_{L,\epsilon}\}_{\epsilon\in(0,1]}\) such that there exist renormalizing constants \(c_{\epsilon}^{\boldsymbol{\gamma}},c_{\epsilon}^{\boldsymbol{\gamma}}\in\mathds{R}\) (not depending on \(L\)) and the sequence_ \[\mathfrak{Z}_{\epsilon}:=(Z_{L,\epsilon},Z_{L,\epsilon}^{\boldsymbol{\gamma}}-c_{\epsilon}^{\boldsymbol{\gamma}},Z_{L,\epsilon}^{\boldsymbol{\gamma}},Z_{L,\epsilon}^{\boldsymbol{\gamma}},Z_{L,\epsilon}^{\boldsymbol{\gamma}}-c_{\epsilon}^{\boldsymbol{\gamma}},\nabla\mathcal{Q}_{L,\epsilon}\circ\nabla Z_{L,\epsilon}) \tag{3.9}\] _converges to a limit \(\mathfrak{Z}_{L}:=(Z_{L},Z_{L}^{\boldsymbol{\gamma}},Z_{L}^{\boldsymbol{\gamma}},Z_{L}^{\boldsymbol{\gamma}},Z_{L}^{\boldsymbol{\gamma}},\nabla\mathcal{Q}_{L}\circ\nabla Z_{L})\in\mathscr{H}^{\varrho}\) in \(L^{p}(\Omega,\mathscr{H}^{\varrho})\) for every \(p>1\)._ In order to prove Proposition 3.1, we will solve a fixed point problem by using the theory of paracontrolled distributions. This is analogous to [15a, Proposition 6.8]. From now on, we omit the subscripts for simplicity, writing e.g. \(Z=Z_{L,\epsilon}\). We rewrite (3.1) as \[Y=\mathcal{I}_{\eta}\Big(\frac{1}{2}|\nabla Z|^{2}-c+\nabla Y\cdot\nabla Z+\frac{1}{2}|\nabla Y|^{2}\Big). \tag{3.10}\] Now set \[v=Y-\frac{1}{2}Z^{\boldsymbol{\gamma}}-\frac{1}{2}Z^{\boldsymbol{\gamma}}. \tag{3.11}\] Substituting (3.11) into (3.10), we observe that (3.10) is equivalent to \[v=\frac{1}{2}Z^{\boldsymbol{\gamma}}+\mathcal{I}_{\eta}(\nabla v\cdot\nabla Z)+R^{v}, \tag{3.12}\] where \(R^{v}\) denotes \[R^{v}=\frac{1}{8}Z^{\boldsymbol{\gamma}}+\frac{1}{2}\mathcal{I}_{\eta}(\nabla(v+\frac{1}{2}Z^{\boldsymbol{\gamma}})\cdot\nabla Z^{\boldsymbol{\gamma}})+\frac{1}{2}\mathcal{I}_{\eta}(|\nabla(v+\frac{1}{2}Z^{\boldsymbol{\gamma}})|^{2}). \tag{3.13}\] Note that \(v\) has higher regularity (namely \(\alpha+1\)) than \(Y\), by (3.12) and the definition of \(\mathfrak{Z}_{L}\) in Theorem 3.5. However, this is not sufficient for the well-definedness of \(\nabla v\cdot\nabla Z\) in (3.12), since the sum \(\alpha+(\alpha-1)=2\alpha-1\) of the regularities of \(\nabla v\) and \(\nabla Z\) is negative. The key idea is to introduce a new object \(v^{\#}\) with a paracontrolled term \(v^{\prime}\prec\mathcal{Q}\), where \(v^{\prime}\) denotes the pseudo derivative of \(v\). **Definition 3.6** (**Paracontrolled distributions)**.: Let \(\alpha\in(\frac{2}{5},\frac{1}{2})\). Recall the notation of paraproducts from Section A. For \(\mathcal{Q}\in\mathscr{C}^{\alpha+1}\), we define the space of paracontrolled distributions \(\mathcal{D}_{\mathcal{Q}}^{\alpha}\) as the set of \((v,v^{\prime})\in\mathscr{C}^{\alpha+1}\times\mathscr{C}^{\alpha}_{\mathds{R}^{3}}\) such that \[v^{\#}:=v-(v^{\prime}\prec\mathcal{Q})\in\mathscr{C}^{4\alpha}. \tag{3.14}\]
We equip \(\mathcal{D}_{\mathcal{Q}}^{\alpha}\) with the norm \[\|(v,v^{\prime})\|_{\mathcal{D}_{\mathcal{Q}}^{\alpha}}:=\|v\|_{3\alpha}+\|v^{\prime}\|_{3\alpha-1}+\|v^{\#}\|_{\alpha+\beta+1}, \tag{3.15}\] where \(\beta\in(0,3\alpha-1)\). By Propositions 6.6 and 6.7 of [15a], for \(\mathfrak{Z}\in\mathcal{Z}^{\varrho}\) and \((v,v^{\prime})\in\mathcal{D}_{\mathcal{Q}}^{\alpha}\), we have \(\nabla\mathcal{Q}\circ\nabla Z\in\mathscr{C}^{2\alpha-1}\), and hence \(\nabla v\circ\nabla Z\) is well-defined. Note that \(v\) solves (3.12) if and only if \(v^{\#}\) solves \[v^{\#}=\mathcal{I}_{\eta}\Big(\nabla\big(\frac{1}{2}Z^{\boldsymbol{\gamma}}+v\big)\prec\nabla Z\Big)-v^{\prime}\prec\mathcal{Q}+\mathcal{I}_{\eta}(\nabla v\circ\nabla Z)+R^{v}. \tag{3.16}\] **Proposition 3.7** (Proposition 6.7 of [15a]).: _Suppose that_ \[v^{\prime}:=\nabla v+\frac{1}{2}\nabla Z^{\boldsymbol{\gamma}}.\] _Then for \(\beta\in(0,3\alpha-1)\) and \(\epsilon>0\), we have_ \[\|\mathcal{I}_{\eta}(v^{\prime}\prec\nabla Z)-v^{\prime}\prec\mathcal{Q}\|_{\alpha+\beta+1}\lesssim\eta^{-\epsilon}\|v^{\prime}\|_{3\alpha-1}\|Z\|_{\varrho} \tag{3.17}\] _for the first coordinate \(Z\) of \(\mathfrak{Z}\in\mathcal{Z}^{\varrho}\) with \(2/5<\alpha<\varrho<1/2\)._ Now we present the key proposition used to obtain the contractivity of the solution map of (3.16), which is a variation of [13, Proposition 6.8]. **Proposition 3.8**.: _Let \(2/5<\alpha<\varrho<1/2\) and \(\mathfrak{Z}\in\mathcal{Z}^{\varrho}\). For \((v,v^{\prime})\in\mathcal{D}^{\alpha}_{\mathcal{Q}}\), let \(\mathcal{G}:\mathcal{D}^{\alpha}_{\mathcal{Q}}\to\mathscr{C}^{\alpha+1}\times\mathscr{C}^{\alpha}\) be the map defined by \(\mathcal{G}(v,v^{\prime})=(\tilde{v},\tilde{v}^{\prime})\) where_ \[\tilde{v}:=\frac{1}{2}Z^{\boldsymbol{\mathscr{Y}}}+\mathcal{I}_{\eta}(\nabla v\cdot\nabla Z)+R^{v},\qquad\tilde{v}^{\prime}:=\nabla v+\frac{1}{2}\nabla Z^{\boldsymbol{\mathscr{Y}}}. \tag{3.18}\] _Then \(\mathcal{G}(v,v^{\prime})\in\mathcal{D}^{\alpha}_{\mathcal{Q}}\) and there exists \(\vartheta>0\) such that_ \[\|\mathcal{G}(v,v^{\prime})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}\lesssim(1+\|\mathfrak{Z}\|_{\mathcal{Z}^{\varrho}})^{2}(1+\eta^{-\vartheta}\|(v,v^{\prime})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}})^{2}, \tag{3.19}\] _and for \((v_{1},v^{\prime}_{1}),(v_{2},v^{\prime}_{2})\in\mathcal{D}^{\alpha}_{\mathcal{Q}}\),_ \[\begin{split} d_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}(&\mathcal{G}(v_{1},v^{\prime}_{1}),\mathcal{G}(v_{2},v^{\prime}_{2}))\\ &\lesssim\eta^{-\vartheta}d_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}((v_{1},v^{\prime}_{1}),(v_{2},v^{\prime}_{2}))(1+\|(v_{1},v^{\prime}_{1})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}+\|(v_{2},v^{\prime}_{2})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}})(1+\|\mathfrak{Z}\|_{\mathcal{Z}^{\varrho}})^{2},\end{split} \tag{3.20}\] _where \(d_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}((v_{1},v^{\prime}_{1}),(v_{2},v^{\prime}_{2})):=\|(v_{1}-v_{2},v^{\prime}_{1}-v^{\prime}_{2})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}\)._ Proof.: The proof is very similar to that of [13, Proposition 6.8], where the operator \(\mathcal{I}\) is used instead of \(\mathcal{I}_{\eta}\). Since \((\eta-\frac{1}{2}\Delta)^{-1}=\int_{0}^{\infty}e^{-\eta t}P_{t}dt\), the estimates in [13, Proposition 6.8] can be applied to our setting, replacing \(T^{\vartheta}\) in [13, Proposition 6.8] by \(\eta^{-\vartheta}\). We omit the details. We are now in a position to prove Proposition 3.1.
Proof of Proposition 3.1.: Define \(M_{L}:=\|\mathfrak{Z}_{L}\|_{\mathcal{Z}^{\varrho}}\) and take \[\eta_{L}:=A(1+M_{L})^{\aleph}, \tag{3.21}\] where \(\aleph:=\frac{3}{2\vartheta}\) and \(A>0\). Here \(\vartheta\) is the same constant which appears in Proposition 3.8. By taking \(A\) sufficiently large, for \((v,v^{\prime})\in\mathcal{D}^{\alpha}_{\mathcal{Q}}\) with \(\|(v,v^{\prime})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}\leq M_{L}\), we have \[\|\mathcal{G}(v,v^{\prime})\|_{\mathcal{D}^{\alpha}_{\mathcal{Q}}}\leq\text{const.}\cdot A^{-2\vartheta}M_{L}\leq M_{L}\] using Proposition 3.8. Using a similar argument together with (3.20), we can conclude that for all \(\eta\geq\eta_{L}\), \(\mathcal{G}\) is a contraction mapping in \(\mathcal{D}^{\alpha}_{\mathcal{Q}}\); hence there exists a unique solution \(v\in\mathscr{C}^{\alpha+1}\) (with a proper \(v^{\prime}\in\mathscr{C}^{\alpha}\)) to (3.12). Since we set \(Y=v+\frac{1}{2}(Z^{\boldsymbol{\mathscr{V}}}+Z^{\boldsymbol{\mathscr{Y}}})\) in (3.11), we get the unique solution \(Y\in\mathscr{C}^{2\alpha}\) to (3.10). Proof of Theorem 3.2.: By Proposition 3.1, we know that for \(\epsilon\in[0,1]\), \(L>0\), there exists a unique solution \(Y_{L,\epsilon}\) to \[(\eta_{L}-\frac{1}{2}\Delta)Y_{L,\epsilon}=\frac{1}{2}|\nabla Z_{L,\epsilon}|^{2}-c_{\epsilon}+\nabla Y_{L,\epsilon}\cdot\nabla Z_{L,\epsilon}+\frac{1}{2}|\nabla Y_{L,\epsilon}|^{2}, \tag{3.22}\] where we define \(Z_{L,0}:=Z_{L}=\lim_{\epsilon\to 0}Z_{L,\epsilon}\) and \(Y_{L,0}:=Y_{L}=\lim_{\epsilon\to 0}Y_{L,\epsilon}\). For simplicity, we write \(Z\) for \(Z_{L,\epsilon}\) and \(Y,\xi,\eta,c\) similarly. Since \(Z-\frac{1}{2}\Delta Z=\xi\), we have \[\xi-c=Z+\eta Y-\frac{1}{2}\Delta(Z+Y)-\frac{1}{2}|\nabla(Z+Y)|^{2},\] or equivalently, \[\frac{1}{2}\Delta(Z+Y)=-(\xi-c)+Z+\eta Y-\frac{1}{2}|\nabla(Z+Y)|^{2}. \tag{3.23}\] On the other hand, by Ito's formula, we obtain \[(Z+Y)(B_{t})=(Z+Y)(B_{0})+\frac{1}{2}\int_{0}^{t}\Delta(Z+Y)(B_{s})ds+\int_{0}^{t}\nabla(Z+Y)(B_{s})\cdot dB_{s}, \tag{3.24}\] where \(B\) is a Brownian motion. By substituting (3.23) into (3.24), we have \[\exp\left(\int_{0}^{t}[\xi(B_{s})-c]ds\right)=N_{t}\cdot\exp\left(\int_{0}^{t}(Z+\eta Y)(B_{s})ds+(Z+Y)(B_{0})-(Z+Y)(B_{t})\right), \tag{3.25}\] where \[N_{t}:=\exp\left(\int_{0}^{t}\nabla(Z+Y)(B_{s})\cdot dB_{s}-\frac{1}{2}\int_{0}^{t}|\nabla(Z+Y)(B_{s})|^{2}ds\right). \tag{3.26}\] Notice that the right hand side of (3.25) is equal to \(N_{t}\cdot\mathscr{D}_{L,\epsilon}(0,t)\) evaluated along \(B\). Multiplying by \(\phi(B_{t})\mathds{1}_{B[0,t]\subset Q_{L}}\), taking expectations and applying Girsanov's theorem shows that the right hand side of (3.6) is equal to \(\mathds{E}_{\mathds{Q}^{x}_{L,\epsilon}}\left[\mathscr{D}_{L,\epsilon}(0,t)\phi(X_{t})\mathds{1}_{X_{[0,t]}\subset Q_{L}}\right]\). Therefore, (3.2) holds when \(\epsilon\in(0,1]\). In order to prove (3.5), we first show that \(\mathds{Q}^{x}_{L,\epsilon}\) converges weakly to \(\mathds{Q}^{x}_{L}\). To this end, note that by Lemma 4.12, we can apply Theorem 4.5 to the martingale problem associated with (3.4). By Theorem 4.5 and Theorem 3.5, we obtain the weak convergence of \(\mathds{Q}^{x}_{L,\epsilon}\) from the convergence of \(\nabla(Z_{L,\epsilon}+Y_{L,\epsilon})\) to \(\nabla(Z_{L}+Y_{L})\) as \(\epsilon\to 0\). Now the proof of (3.5) follows from the fact that the set \(\{X[0,t]\subset Q_{L}\}\) is a \(\mathds{Q}^{x}_{L}\)-continuity set, which was proven in the proof of [13, Theorem 2.17]. This completes the proof. ## 4. Existence of transition kernel & its estimate The goal of this section is two-fold.
We first show the existence of the transition kernel for the diffusion \(X_{t}\) of (3.4) in Theorem 3.2. Secondly, we derive bounds on the transition kernel. These bounds will later be used to derive suitable bounds on escape probabilities such as the probability of the event \(\{X_{[0,t]}\not\subset Q_{L}\}\). ### Existence of the transition kernel We fix \(T>0\). Recall that \(X\) is the solution to the SDE with a distributional drift: for any \(x\in\mathds{R}^{d}\) \[X_{t}=x+\int_{0}^{t}\mu(X_{s})ds+B_{t},\quad t\in[0,T] \tag{4.1}\] where \(\mu:=\nabla(Z+Y)\) and \(Z,Y\) are defined in Section 3 (see Proposition 3.1). Throughout this section, we assume that \(Z\in\mathscr{C}^{\alpha},Y\in\mathscr{C}^{2\alpha}\) where \(\alpha<1/2\), as in the case \(d=3\). Our goal is to show that \(X\) has a transition density \(\Gamma_{t}:\mathds{R}^{d}\times\mathds{R}^{d}\to\mathds{R}\) for all \(t>0\). The following is the main result of this section. **Theorem 4.1**.: _Suppose \(\mu=\nabla(Z+Y)\) in (4.1). Then the solution \(X\) of the SDE (4.1) admits the transition density \(\Gamma_{t}(x,y)=w^{\delta_{y},\mu}(t,x)\) for \((t,x,y)\in(0,T)\times\mathds{R}^{2d}\) where \(w^{\delta_{y},\mu}\) is the solution to the Cauchy problem (4.2) associated with \(\mu\) and the terminal condition \(\delta_{y}\)._ **Remark 4.2**.: It is worthwhile to mention that \(Y\) defined via (3.1) in the \(d=2\) case belongs to \(\mathscr{C}^{\frac{3}{2}-}\). In that case, [13, Proposition 2.9] has proven the existence of the heat kernel. However, the results in [13] can only be applied when \(Y\in\mathscr{C}^{2\alpha}\) for some \(\alpha>1/2\), which is not true for \(d=3\). Therefore, Theorem 4.1 is essential for \(d=3\). In fact, we prove a more general result than Theorem 4.1. We show that any solution of (4.1) with a suitable \(\mu\in\mathscr{C}^{\beta}\) with \(\beta\in(-\frac{2}{3},-\frac{1}{2}]\) admits a transition density. Later we will show that the suitable class for \(\mu\) is the space of _rough distributions_ (see Definition 4.4) and that \(\mu=\nabla(Z+Y)\) can be lifted to a rough distribution \(\boldsymbol{\mu}\). In order to study the SDE (4.1), we consider the martingale problem associated with the following Cauchy problem on \([0,T]\times\mathds{R}^{d}\): Define the differential operator \[\mathcal{L}w:=\frac{1}{2}\Delta w+\mu\cdot\nabla w\] and the differential equation \[\begin{cases}\partial_{t}w(t,x)+\mathcal{L}w(t,x)=0,\quad(t,x)\in[0,T)\times\mathds{R}^{d},\\ w(T,x)=\phi,\quad x\in\mathds{R}^{d}\end{cases} \tag{4.2}\] where \(\phi:\mathds{R}^{d}\to\mathds{R}\) is the terminal condition. We denote the solution to (4.2) with the terminal function \(\phi\) by \(w^{\phi}\). We recall the definition of the martingale problem associated to (4.2). **Definition 4.3**.: Let us denote \(\Omega=C([0,T],\mathds{R}^{d})\). A stochastic process \(\mathbf{X}=\{X_{t}\}_{t\in[0,T]}\) on the probability space \((\Omega,\mathcal{B}(\Omega),\mathds{P})\) endowed with the canonical filtration \(\{\mathfrak{F}_{t}\}_{t\geq 0}\) is said to be a solution to the SDE (4.1) if \(X_{0}=x\) and it satisfies the martingale problem for \((\mathcal{L},x)\): for all \(f\in C([0,T],L^{\infty}(\mathds{R}^{d}))\) and all \(\phi\in C^{\infty}_{c}(\mathds{R}^{d})\), with \(w=w^{\phi}\) the solution to the Cauchy problem (4.2) with right-hand side \(f\) in place of \(0\), the process \[\left\{w(t,X_{t})-\int_{0}^{t}f(s,X_{s})ds\right\}_{t\in[0,T]}\] is a martingale.
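Definition 4.3 is the natural generalization of the classical picture: for a smooth drift \(\mu\), Itô's formula applied to a solution \(w\) of \(\partial_{t}w+\mathcal{L}w=f\) gives (the computation is only formal for our distributional \(\mu\), which is precisely why the martingale problem serves as the definition)
\[dw(t,X_{t})=\big(\partial_{t}w+\tfrac{1}{2}\Delta w+\mu\cdot\nabla w\big)(t,X_{t})\,dt+\nabla w(t,X_{t})\cdot dB_{t}=f(t,X_{t})\,dt+\nabla w(t,X_{t})\cdot dB_{t},\]
so that \(w(t,X_{t})-\int_{0}^{t}f(s,X_{s})\,ds\) is a (local) martingale, matching the requirement above.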
Before we state the well-posedness of the martingale problem, we introduce the definition of a _rough distribution_ ([10, Definition 3.6]), which describes a suitable class of drifts \(\mu\) for well-posedness. **Definition 4.4** (**Rough distribution)**.: Let \(\beta\in(-\frac{2}{3},-\frac{1}{2})\), \(\gamma<\beta+2\). Set \(\mathcal{H}^{\gamma}:=\mathscr{C}^{\gamma-2}_{\mathds{R}^{d}}\times\mathscr{C}^{2\gamma-3}_{\mathds{R}^{d}}\). We define the space of _rough distributions_ as \[\mathscr{X}^{\gamma}=\mathrm{cl}_{\mathcal{H}^{\gamma}}\left\{\mathcal{K}(\theta):=(\theta,\mathcal{I}(\partial_{j}\theta^{i})\circ\theta^{j})_{i,j=1,\ldots,d},\theta\in\mathscr{C}^{\infty}_{\mathds{R}^{d}}\right\}.\] We denote by \(\boldsymbol{\mu}=(\mu,\mu^{\prime})\) a generic element of \(\mathscr{X}^{\gamma}\) and we say that \(\boldsymbol{\mu}\) is a lift (or enhancement) of \(\mu\). We now present the well-posedness of the martingale problem associated to (4.2), proven in [10, Theorem 1.2]. **Theorem 4.5** (Theorem 1.2 of [10]).: _Let \(\beta\in(-\frac{2}{3},-\frac{1}{2}]\) and \(\mu\in\mathscr{C}^{\beta}_{\mathds{R}^{d}}\). We assume that \(\mu\) can be enhanced to a rough distribution \(\boldsymbol{\mu}\in\mathscr{X}^{\gamma}\) for some \(\gamma<\beta+2\). Then there exists a (stochastically) unique solution to the martingale problem for \((\mathcal{L},x)\) in the sense that there is a unique probability measure \(\mathds{P}_{x}\) on \(\Omega=C([0,T],\mathds{R}^{d})\) such that the coordinate process \(X_{t}(\omega)=\omega(t)\) satisfies the martingale problem for \((\mathcal{L},x)\). Moreover, \(X\) is a strong Markov process under \(\mathds{P}_{x}\) and \(\mathds{P}_{x}\) depends (weakly) continuously on the drift \(\mu\)._ **Remark 4.6**.: In [10, Theorem 1.2], the continuity of \(\mathds{P}_{x}\) in the drift \(\mu\) is not given. However, the proof of [10, Theorem 1.2] directly implies this fact. See the proof of [10, Theorem 4.3]. As in [11, Section 2], we will show that there exists a solution \(w^{\delta_{y}}\) to the Cauchy problem (4.2) with the terminal condition \(\delta_{y}\). We will then show that the transition density \(\Gamma\) of \(X\) is given by \(\Gamma_{t}(x,y)=w^{\delta_{y}}(t,x)\). In order to deal with (4.2), we employ paracontrolled distributions and extend [10, Theorem 3.10] to delta terminal data. Recall that for any \((t,x)\in[0,T]\times\mathds{R}^{d}\) and function \(\psi\), \(\mathcal{J}(\psi)(t)\) is defined as \(\int_{t}^{T}P_{r-t}\psi(r)dr\) where \(P_{t}\) is the usual heat flow, i.e., \(P_{t}=e^{\frac{1}{2}t\Delta}\). **Definition 4.7**.: Let \(T>0\), \(\frac{4}{3}<\alpha<\theta<\beta+2\) and \(\rho>\frac{\theta-1}{2}\). For \(\bar{T}\in(0,T)\), \(p\in[1,\infty]\) and \(\delta>0\), we define the space of _paracontrolled distributions_ \(\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) as the set of pairs of distributions \((w,w^{\prime})\in C_{\bar{T},T}\mathscr{C}^{\theta}_{p}\times C_{\bar{T},T}\mathscr{C}^{\alpha-1}_{p}\) (see (A.6)) such that \[w^{\#}(t):=w(t)-w^{\prime}(t)\prec\mathcal{J}(\mu)(t)\in\mathscr{C}^{2\alpha-1}_{p} \tag{4.3}\] for all \(t\in(T-\bar{T},T]\). Here '\(\prec\)' denotes the paraproduct which is defined in Section A.
We equip \(\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) with the norm \[\|(w,w^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}:=\|w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\theta}_{p}}+\|\nabla w\|_{C^{\delta}_{\rho,\bar{T},T}L^{\infty}}+\|w^{\prime}\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\alpha-1}_{p}}+\|w^{\#}\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{2\alpha-1}_{p}} \tag{4.4}\] and the metric \(d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\) defined as \[d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}((w,w^{\prime}),(\tilde{w},\tilde{w}^{\prime})):=\|(w-\tilde{w},w^{\prime}-\tilde{w}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}. \tag{4.5}\] Equipped with this metric, the space \((\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T},d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}})\) is a complete metric space, and thus closed. We now introduce a solution map for the construction of a fixed point problem for (4.2). **Definition 4.8**.: Let \(\frac{4}{3}<\alpha<\theta<\gamma<\beta+2\), \(p\in[1,\infty]\), \(\rho\in(\frac{\theta-1}{2},\frac{\gamma-1}{2})\), \(T>0\) and \(\boldsymbol{\mu}\in\mathscr{X}^{\theta}\) be an enhancement of \(\mu\). For \(\bar{T}\in(0,T)\), define the map \(M_{\bar{T}}:\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\to C_{\bar{T},T}\mathscr{C}^{\alpha}_{p}\) by \[M_{\bar{T}}(w,w^{\prime})(t):=P_{T-t}\phi+\mathcal{J}(\nabla w\cdot\mu)(t), \tag{4.6}\] for \((w,w^{\prime})\in\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) and \(\phi\in\mathscr{C}^{\gamma}_{p}\). We also define the map \[\mathcal{M}_{\bar{T}}:\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\to C_{\bar{T},T}\mathscr{C}^{\alpha}_{p}\times C_{\bar{T},T}\mathscr{C}^{\alpha-1}_{p} \tag{4.7}\] \[(w,w^{\prime})\mapsto(M_{\bar{T}}(w,w^{\prime}),\nabla w).\] The following proposition is a generalization of [10, Proposition 3.9] in the sense that we have used different norms for \((w,w^{\prime})\) in (4.4) to allow a blowup at time \(T\), which depends on \(\delta>0\) (see (A.6)). We will then present a key estimate of the solution map for rough terminal data which lies in \(\mathscr{C}^{-\epsilon}_{p}\) for some \(\epsilon>0\). **Proposition 4.9**.: _Let \(0<T<1\), \(\frac{4}{3}<\alpha<\theta<\gamma<\beta+2\), \(\rho\in(\frac{\theta-1}{2},\frac{\gamma-1}{2})\) and \(\delta>2\alpha-1\). Let the terminal data \(\phi\in\mathscr{C}^{\gamma}_{p}\), \(\mu\in\mathscr{C}^{\beta}\) and \(\boldsymbol{\mu}\in\mathscr{X}^{\gamma}\) be an enhancement of \(\mu\). Then, there exists \(\kappa>0\) which depends only on \(\alpha,\theta,\rho,\gamma,p\) such that for any \((w,w^{\prime}),(\tilde{w},\tilde{w}^{\prime})\in\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) and \(\bar{T}\in(0,T)\),_ \[\|\mathcal{M}_{\bar{T}}(w,w^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim(1+\|\boldsymbol{\mu}\|_{\mathscr{X}^{\gamma}})^{2}(1+\bar{T}^{\kappa}\|(w,w^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}})+\|\phi\|_{\mathscr{C}^{\gamma}_{p}}, \tag{4.8}\] _and_ \[d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}(\mathcal{M}_{\bar{T}}(w,w^{\prime}),\mathcal{M}_{\bar{T}}(\tilde{w},\tilde{w}^{\prime}))\lesssim\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}\|_{\mathscr{X}^{\gamma}})^{2}d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}((w,w^{\prime}),(\tilde{w},\tilde{w}^{\prime})). \tag{4.9}\] Proof.: The proof follows a similar line of ideas as in [10, Proposition 3.9].
The difference is that we use two more parameters, \(p\in[1,\infty]\) and \(\delta>2\alpha-1\), than in [10, Proposition 3.9]. However, with minor modifications we can get (4.8) and (4.9). Indeed, since Bony's estimate (Proposition A.2) holds for all \(p\in[1,\infty]\), we can bound the \(\|\cdot\|_{\mathscr{C}^{a}_{p}}\)-norm in the same way as the \(\|\cdot\|_{\mathscr{C}^{a}}\)-norm for any \(a\in\mathds{R}\). Moreover, by multiplying by \((T-t)^{\delta}\) with \(\delta>2\alpha-1\) in the \(\|\cdot\|_{C_{\bar{T},T}\mathscr{C}^{\alpha}_{p}}\)-norm, we can carry out every estimate similarly to the proof of [10, Proposition 3.9]. The next lemma bounds the distance between two fixed points of the map \(\mathcal{M}_{\bar{T}}\) starting from two distinct terminal data. **Lemma 4.10**.: _Let \(0<T<1\), \(\frac{4}{3}<\alpha<\theta<\gamma<\beta+2\), \(\rho\in(\frac{\theta-1}{2},\frac{\gamma-1}{2})\), \(\epsilon>0\) and \(\delta>\frac{2\alpha-1+\epsilon}{2}\). Let \(\mu\in\mathscr{C}^{\beta}\) and \(\boldsymbol{\mu}\in\mathscr{X}^{\gamma}\) be an enhancement of \(\mu\). Then, there exists \(\kappa>0\) which depends only on \(\alpha,\theta,\rho,\gamma,p\) such that for any \(\bar{T}\in(0,T)\) and \((w,w^{\prime}),(\tilde{w},\tilde{w}^{\prime})\in\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) being fixed points of the map \(\mathcal{M}_{\bar{T}}\) with the terminal data \(\phi,\tilde{\phi}\in\mathscr{C}^{\gamma}_{p}\) respectively, we have_ \[d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}((w,w^{\prime}),(\tilde{w},\tilde{w}^{\prime}))\lesssim\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}\|_{\mathscr{X}^{\gamma}})^{2}d_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}((w,w^{\prime}),(\tilde{w},\tilde{w}^{\prime}))+\|\phi-\tilde{\phi}\|_{\mathscr{C}^{-\epsilon}_{p}}. \tag{4.10}\] Proof.: Note that since \((w,w^{\prime})\) is a fixed point of \(\mathcal{M}_{\bar{T}}\) associated with a terminal function \(\phi\), we know that \(M_{\bar{T}}(w,w^{\prime})=w\) and \(w^{\prime}=\nabla w\). Moreover, \[w(t)=P_{T-t}\phi+\mathcal{J}(\nabla w\cdot\mu)(t) \tag{4.11}\] holds for all \(t\in(T-\bar{T},T]\), and the same facts hold for \(\tilde{w}\). Since (4.11) is linear, we prove the lemma for \(w\) for simplicity. By Proposition 4.9, we obtain the first term on the r.h.s. in (4.10). For the second term, it suffices to obtain the upper bound (4.8) for \(w\) with \(\|\phi\|_{\mathscr{C}^{-\epsilon}_{p}}\) instead of \(\|\phi\|_{\mathscr{C}^{\gamma}_{p}}\). Using (4.11), we need to bound \[\|(w,w^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}=\|w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\theta}_{p}}+\|\nabla w\|_{C^{\delta}_{\rho,\bar{T},T}L^{\infty}}+\|\nabla w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\alpha-1}_{p}}+\|w^{\#}\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{2\alpha-1}_{p}} \tag{4.12}\] by \(\|\phi\|_{\mathscr{C}^{-\epsilon}_{p}}\). The first term on the r.h.s. of the above display, i.e., \(\|w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\theta}_{p}}\), is upper bounded by \(\|\phi\|_{\mathscr{C}^{-\epsilon}_{p}}\) since \(\delta>\frac{2\alpha-1+\epsilon}{2}>\frac{\theta+\epsilon}{2}\) and \[\|P_{T-\cdot}\phi\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\theta}_{p}}=\sup_{t\in(T-\bar{T},T]}(T-t)^{\delta}\|P_{T-t}\phi\|_{\mathscr{C}^{\theta}_{p}}\lesssim\|\phi\|_{\mathscr{C}^{\theta-2\delta}_{p}}\leq\|\phi\|_{\mathscr{C}^{-\epsilon}_{p}}, \tag{4.13}\] where we used Lemma A.3 in the first inequality.
For \(\|\nabla w\|_{C^{\delta}_{\rho,\bar{T},T}L^{\infty}}\), we can see that for \(t<s\in(T-\bar{T},T]\), \[\begin{split}\|\nabla P_{T-t}\phi-\nabla P_{T-s}\phi\|_{L^{\infty}}&=\|(P_{s-t}-\mathrm{Id})P_{T-s}\nabla\phi\|_{L^{\infty}}\\&\lesssim|t-s|^{\rho}\|P_{T-s}\nabla\phi\|_{\mathscr{C}^{2\rho}_{p}}\\&\lesssim|t-s|^{\rho}(T-s)^{-\delta}\|\phi\|_{\mathscr{C}^{2\rho+1-2\delta}_{p}}.\end{split}\] This implies that \[\|\nabla P_{T-\cdot}\phi\|_{C^{\delta}_{\rho,\bar{T},T}L^{\infty}}\lesssim\|\phi\|_{\mathscr{C}^{2\rho+1-2\delta}_{p}}\leq\|\phi\|_{\mathscr{C}^{-\epsilon}_{p}}, \tag{4.14}\] since \(\delta>\frac{2\alpha-1+\epsilon}{2}\) and \(2\alpha-1>2\rho+1\) by the conditions on the parameters, so that \(2\rho+1-2\delta\leq-\epsilon\). For \(\|\nabla w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\alpha-1}_{p}}\), it is enough to see that \(\|\nabla w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\alpha-1}_{p}}\lesssim\|w\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{\alpha}_{p}}\). To bound \(\|w^{\#}\|_{C^{\delta}_{\bar{T},T}\mathscr{C}^{2\alpha-1}_{p}}\), note that \[w^{\#}(t)=w(t)-(\nabla w\prec\mathcal{J}(\mu))(t) \tag{4.15}\] by (4.3). The only contribution of \(\phi\) appears in the first term on the r.h.s. We use Lemma A.3 to get \[(T-t)^{\delta}\|P_{T-t}\phi\|_{\mathscr{C}^{2\alpha-1}_{p}}\lesssim\|\phi\|_{\mathscr{C}^{2\alpha-1-2\delta}_{p}}\lesssim\|\phi\|_{\mathscr{C}^{-\epsilon}_{p}}. \tag{4.16}\] Putting everything together and replacing \(w\) by \(w-\tilde{w}\), we have \[\|(w-\tilde{w},w^{\prime}-\tilde{w}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}\|_{\mathscr{X}^{\gamma}})^{2}\|(w-\tilde{w},w^{\prime}-\tilde{w}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}+\|\phi-\tilde{\phi}\|_{\mathscr{C}^{-\epsilon}_{p}},\] which completes the proof. Now we are ready to present one of the main results of this section, which extends [14, Theorem 3.10] to a larger class of terminal data, including Dirac delta terminal data. We write \(w^{\phi}_{\mu}\) for the solution \(w\) of (4.2) with terminal function \(\phi\) and drift \(\mu\). **Theorem 4.11**.: _Let \(p\in[1,\infty]\), \(\beta\in(-\frac{2}{3},-\frac{1}{2}]\), \(\frac{4}{3}<\theta<\gamma<\beta+2\) and \(\epsilon,T>0\). For \(\phi\in\mathscr{C}_{p}^{-\epsilon}\) and \(\mu\in\mathscr{C}^{\beta}\) that can be enhanced to a lift \(\boldsymbol{\mu}\in\mathscr{X}^{\gamma}\), the Cauchy problem (4.2) has a unique mild solution \(w_{\mu}^{\phi}\) in \(C([0,T],\mathscr{C}^{-2\epsilon\wedge\beta})\) such that \(w_{\mu}^{\phi}(t)\in\mathscr{C}^{\alpha}\) for all \(t\in[0,T)\). Moreover, for all \(t>0\) the map \(\mathscr{C}_{p}^{-\epsilon}\times\mathscr{C}^{\beta}\to\mathscr{C}^{\alpha}\) given by \((\phi,\mu)\mapsto w_{\mu}^{\phi}(t,\cdot)\) is locally Lipschitz._ Proof.: Fix \(\phi\in\mathscr{C}_{p}^{-\epsilon}\). Let \(\{\phi_{n}\}_{n\in\mathds{N}}\) be a sequence of functions in \(\mathscr{C}_{p}^{\gamma}\) such that \[\|\phi_{n}-\phi\|_{\mathscr{C}_{p}^{-\epsilon}}\to 0 \tag{4.17}\] as \(n\to\infty\). Then, by Proposition 4.9 and the Banach fixed point theorem, for each \(n\) and \(\bar{T}\in(0,T)\) small enough, we have a unique solution \((w_{n},w_{n}^{\prime})\in\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) of the fixed point problem for (4.7).
Since each \((w_{n},w_{n}^{\prime})\) is a fixed point of the map \(\mathcal{M}_{\bar{T}}\), we have \(\mathcal{M}_{\bar{T}}(w_{n},w_{n}^{\prime})=(w_{n},w_{n}^{\prime})\) for all \(n\), and Lemma 4.10 yields \[\|(w_{n},w_{n}^{\prime})-(w_{m},w_{m}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}\|_{\mathscr{X}^{\gamma}})^{2}\|(w_{n},w_{n}^{\prime})-(w_{m},w_{m}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}+\|\phi_{n}-\phi_{m}\|_{\mathscr{C}_{p}^{-\epsilon}}. \tag{4.18}\] We choose \(\bar{T}>0\) (not depending on \(n\)) so small that the first term on the right hand side can be absorbed into the left, which allows us to rewrite the above inequality as \[\|(w_{n},w_{n}^{\prime})-(w_{m},w_{m}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim\|\phi_{n}-\phi_{m}\|_{\mathscr{C}_{p}^{-\epsilon}}.\] This implies that \(\{(w_{n},w_{n}^{\prime})\}_{n\in\mathds{N}}\) is a Cauchy sequence in \(\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) due to (4.17). Since \(\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) is complete, there is a unique limit point \((w,w^{\prime})\in\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) of \(\{(w_{n},w_{n}^{\prime})\}_{n\in\mathds{N}}\). Since \((w,w^{\prime})\) is a limit of fixed points of the continuous map \(\mathcal{M}_{\bar{T}}\), \((w,w^{\prime})\) is itself a fixed point of the same map. As a result, we get \[w=M_{\bar{T}}(w,w^{\prime})=\mathcal{J}(\nabla w\cdot\mu)+P_{T-\cdot}\phi \tag{4.19}\] in \(C^{\delta}_{\bar{T},T}\mathscr{C}_{p}^{\theta}\). We now verify that \(w\) attains the terminal data \(\phi\in\mathscr{C}_{p}^{-\epsilon}\). This follows from the following estimates: \[\|P_{T-t}\phi-\phi\|_{\mathscr{C}_{p}^{-2\epsilon}}\lesssim(T-t)^{\epsilon}\|\phi\|_{\mathscr{C}_{p}^{-\epsilon}}, \tag{4.20}\] and \[\|\mathcal{J}(\nabla w\cdot\mu)(t)\|_{\mathscr{C}_{p}^{-2\epsilon}}\lesssim(T-t)^{\vartheta}\sup_{s\in(T-\bar{T},T]}(T-s)^{\varsigma}\|(\nabla w\cdot\mu)(s)\|_{\mathscr{C}^{\alpha+\beta-1}},\quad\vartheta:=\frac{\alpha+\beta-1}{2}+\epsilon-\varsigma+1, \tag{4.21}\] for some \(\varsigma\in(0,1)\). We obtain the first inequality by applying the second Schauder estimate from Lemma A.3, and the second by applying part (1) of Corollary 2.5 from [14]. Recall that \(\alpha>\frac{4}{3}\) and \(\beta\in(-\frac{2}{3},-\frac{1}{2}]\). We choose \(\varsigma\) small enough that \(\vartheta>0\). Therefore the right hand sides of the above inequalities tend to \(0\) as \(t\to T\). This shows that \(w\in C([T-\bar{T},T],\mathscr{C}_{p}^{-2\epsilon})\cap C^{\delta}_{\bar{T},T}\mathscr{C}_{p}^{\theta}\) is the solution to the Cauchy problem \[w(t)=\mathcal{J}(\nabla w\cdot\mu)(t)+P_{T-t}\phi,\quad t\in[0,T), \tag{4.22}\] with \(w(T,\cdot)=\phi\), or equivalently to (4.2). Now we claim that for \(t>0\) the map \(\mathscr{C}_{p}^{-\epsilon}\times\mathscr{X}^{\gamma}\to\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}\) given by \((\phi,\boldsymbol{\mu})\mapsto(w_{\boldsymbol{\mu}}^{\phi},\nabla w_{\boldsymbol{\mu}}^{\phi})\) is locally Lipschitz.
If we let \(\phi_{1},\phi_{2}\in\mathscr{C}_{p}^{-\epsilon}\), \(\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}\in\mathscr{X}^{\gamma}\), and \(w_{i}=w_{\boldsymbol{\mu}_{i}}^{\phi_{i}}\), \(w_{i}^{\prime}=\nabla w_{\boldsymbol{\mu}_{i}}^{\phi_{i}}\) denote the corresponding solutions for \(i=1,2\), then we have \[\|(w_{1},w_{1}^{\prime})-(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\leq\|\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{1},w_{1}^{\prime})-\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}+\|\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{2},w_{2}^{\prime})-\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{2}}(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}. \tag{4.23}\] Due to Lemma 4.10, we have \[\|\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{1},w_{1}^{\prime})-\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim\|\phi_{1}-\phi_{2}\|_{\mathscr{C}_{p}^{-\epsilon}}+\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}_{1}\|_{\mathscr{X}^{\gamma}})^{2}\|(w_{1},w_{1}^{\prime})-(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}. \tag{4.24}\] Furthermore, notice that \(\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{2},w_{2}^{\prime})=(P_{T-\cdot}\phi_{2}+\mathcal{J}(\nabla w_{2}\cdot\mu_{1}),\nabla w_{2})\) and \(\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{2}}(w_{2},w_{2}^{\prime})=(P_{T-\cdot}\phi_{2}+\mathcal{J}(\nabla w_{2}\cdot\mu_{2}),\nabla w_{2})\). As a result, we get \[\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{1}}(w_{2},w_{2}^{\prime})-\mathcal{M}_{\bar{T}}^{\boldsymbol{\mu}_{2}}(w_{2},w_{2}^{\prime})=(\mathcal{J}(\nabla w_{2}\cdot(\mu_{1}-\mu_{2})),0).\] Using Corollary 2.5 of [18], we have \[\|(\mathcal{J}(\nabla w_{2}\cdot(\mu_{1}-\mu_{2})),0)\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|_{\mathscr{X}^{\gamma}})\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|_{\mathscr{X}^{\gamma}}(1+\|(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}). \tag{4.25}\] Applying the bounds (4.24) and (4.25) to the right hand side of (4.23) yields \[\text{r.h.s. of }(4.23)\lesssim\|\phi_{1}-\phi_{2}\|_{\mathscr{C}_{p}^{-\epsilon}}+\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}_{1}\|_{\mathscr{X}^{\gamma}})^{2}\|(w_{1},w_{1}^{\prime})-(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}+\bar{T}^{\kappa}(1+\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|_{\mathscr{X}^{\gamma}})\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|_{\mathscr{X}^{\gamma}}(1+\|(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}).\] Choosing \(\bar{T}\) small enough that the middle term can be absorbed into the left hand side of (4.23), we obtain \[\|(w_{1},w_{1}^{\prime})-(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}\lesssim\|\phi_{1}-\phi_{2}\|_{\mathscr{C}_{p}^{-\epsilon}}+(1+\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|_{\mathscr{X}^{\gamma}})\|\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}\|_{\mathscr{X}^{\gamma}}(1+\|(w_{2},w_{2}^{\prime})\|_{\mathscr{D}^{\alpha,\theta,\delta,p}_{\rho,\bar{T},T}}).\] This proves the claimed local Lipschitz continuity on the interval \((T-\bar{T},T]\); iterating the argument over finitely many intervals of length \(\bar{T}\) extends it to \([0,T]\) and completes the proof.

In particular, Theorem 4.11 applies to Dirac terminal data, which we now use to identify the transition kernel of the diffusion (4.1).

**Theorem 4.12**.: _Under the assumptions of Theorem 4.11, let \(X\) be the solution to the SDE (4.1) with drift \(\mu\). Then the transition kernel \(\Gamma_{t}(x,y)\) of \(X\) is given by \(\Gamma_{t}(x,y)=w^{\delta_{y},\mu}(t,x)\)._

Proof.: Let \(\{\mu_{n}\}_{n\in\mathds{N}}\) be smooth approximations of \(\mu\) whose lifts converge to \(\boldsymbol{\mu}\) in \(\mathscr{X}^{\gamma}\), and for each \(n\) let \(\mathds{P}_{x}^{(n)}\) be the unique probability measure on \(C([t,T],\mathds{R}^{d})\) such that the coordinate process \(X^{(n)}\) is the solution to the martingale problem for \((\mathcal{L}^{(n)},x)\), where \(\mathcal{L}^{(n)}:=\frac{1}{2}\Delta+\mu_{n}\cdot\nabla\). By Theorem 4.5, we know that \(\mathds{P}_{x}^{(n)}\) converges weakly to the measure \(\mathds{P}_{x}\) appearing in Theorem 4.5. Let \(\Gamma_{t}(\cdot,\cdot)\) denote the transition kernel of \(\mathds{P}_{x}\) and \(\Gamma^{(n)}_{t}(\cdot,\cdot)\) that of \(X^{(n)}\). Therefore, with Proposition 4.13, we can see that for all \(\psi\in C_{c}(\mathds{R}^{d})\), \[\mathds{E}_{x}[\psi(X_{T})]=\lim_{n\to\infty}\mathds{E}_{x}[\psi(X_{T}^{(n)})]=\lim_{n\to\infty}\int_{\mathds{R}^{d}}\psi(y)\Gamma_{t}^{(n)}(x,y)dy=\int_{\mathds{R}^{d}}\psi(y)\Gamma_{t}(x,y)dy.\] This shows \(\Gamma_{t}(x,y)=w^{\delta_{y},\mu}(t,x)\) and hence completes the proof.

### Transition kernel estimate

We now provide upper and lower bounds for \(\Gamma_{t}(x,y):=w^{\delta_{y},\mu}(t,x)\). We first consider the case where \(\mu\) is of the form \(\nabla U\) for a smooth and bounded function \(U\). Note that Proposition 4.13 allows us to extend the estimates to the case \(\mu=\nabla(Z+Y)\) as in Section 4.1.

**Theorem 4.14**.: _Suppose that \(\mu\) is of the form \(\nabla U\), where \(U:\mathds{R}^{d}\to\mathds{R}\) is a smooth and bounded function. Then, for the transition density \(\Gamma_{t}(x,y)\) of the solution \(X\) to the SDE (4.1), there exist constants \(C_{1},C_{2},C_{3}>0\) such that for all \(t>0\) and \(x,y\in\mathds{R}^{d}\), we have_ \[\Gamma_{t}(x,y)\leq\frac{1}{t^{d/2}}\exp\left(C_{1}\|U\|_{\infty}-\frac{|y-x|^{2}}{4t}\right), \tag{4.27}\] \[\Gamma_{t}(x,y)\geq\frac{1}{t^{d/2}}\exp\left(-C_{2}\|U\|_{\infty}-\frac{C_{3}\|U\|_{\infty}|y-x|^{2}}{t}\right). \tag{4.28}\] _In addition, the same estimates hold when \(U=Z_{L}^{y}+Y_{L}^{y}\), where \(Z_{L}^{y}\) and \(Y_{L}^{y}\) are defined in Section 3._

Proof.: The proof is based on ideas from [11, Section 4.3]. It was shown in (4.3.6) of [11] that if \(\mu\) is of the form \(\nabla U\) for some \(U\in C^{\infty}(\mathds{R}^{d})\), then \[\Gamma_{t}(x,y)\leq\frac{K_{d}(U)}{t^{d/2}}\exp\Big(-\frac{|x-y|^{2}}{4t}\Big),\] where \(K_{d}(U)\) is a constant bounded above by \(\kappa_{d}\exp((d+4)\delta(U)/4)\) and \(\delta(U):=\max_{x\in\mathds{R}^{d}}U(x)-\min_{x\in\mathds{R}^{d}}U(x)\). Note that \(\delta(U)\leq 2\|U\|_{\infty}\). Thus the upper bound (4.27) follows. We use [11, Lemma 4.3.8] to show the lower bound. It is shown in the paragraph before Lemma 4.3.8 of [11] that there exists \(C_{2}\) such that \[\Gamma_{t}(x,y)\geq\frac{2^{d/2}}{t^{d/2}}\exp(-C_{2}\|U\|_{\infty}),\quad\text{whenever }|x-y|\leq\sqrt{t}.\] This bound is further used in Lemma 4.3.8 of [11] to show that \[\Gamma_{t}(x,y)\geq\frac{2^{d/2}}{t^{d/2}}\exp\Big(-C_{2}\|U\|_{\infty}-\frac{C_{3}\|U\|_{\infty}|y-x|^{2}}{t}\Big)\] for some constant \(C_{3}>0\). From this bound, (4.28) follows.
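As a quick sanity check of the Gaussian-type bound (4.27), the following Monte Carlo sketch simulates (4.1) by the Euler-Maruyama scheme for a smooth bounded potential and verifies that the empirical transition density stays below the corresponding envelope. The potential \(U(x)=\cos x\) in \(d=1\), the time horizon, and the constant \(C_{1}=2\) are illustrative assumptions rather than the constants produced by the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

U_sup = 1.0                                   # ||U||_inf for U(x) = cos(x)
grad_U = lambda x: -np.sin(x)                 # drift mu = grad U

t, n_steps, n_paths, x0 = 1.0, 400, 200_000, 0.0
dt = t / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):                      # dX = grad U(X) dt + dB
    X += grad_U(X) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

# Empirical transition density vs. the envelope t^{-1/2} exp(C1||U|| - |y-x|^2/(4t)).
dens, edges = np.histogram(X, bins=80, range=(-5.0, 5.0), density=True)
y = 0.5 * (edges[1:] + edges[:-1])
envelope = t**-0.5 * np.exp(2.0 * U_sup - (y - x0) ** 2 / (4.0 * t))

print("max ratio density/envelope:", (dens / envelope).max())  # expected < 1
```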
We also obtain an estimate for the escape probability of \(X\). **Corollary 4.15**.: _Suppose that \(\mu\) is of the form \(\nabla U\), where \(U:\mathds{R}^{d}\to\mathds{R}\) is a smooth and bounded function. Then, for the solution \(X\) to the SDE (4.1), \(x\in\mathds{R}^{d}\), \(K>0\), and \(T\geq 1\),_ \[\mathds{P}\Big(\sup_{t\in[0,T]}|X_{t}-x|\geq K\Big)\leq C\exp\Big(CT\|U\|_{\infty}-\frac{K^{2}}{CT}\Big). \tag{4.29}\] _Furthermore, the same upper bound holds for \(U=Z_{L}^{y}+Y_{L}^{y}\), where \(Z_{L}^{y}\) and \(Y_{L}^{y}\) are defined in Section 3._ Proof.: Note from (4.27) that \(\Gamma_{t}(x,y)\lesssim p_{t}(x,y)e^{C_{1}\|U\|_{\infty}}\), where \(p_{t}\) denotes the usual heat kernel density in dimension \(d\). The proof then follows along the same lines as that of [13, Corollary 5.2]. ## 5. Asymptotic bounds for the PAM This section is devoted to estimating asymptotic bounds for the solution \(u\) to the PAM (1.1). We accomplish two main things in this section: we first derive bounds on the moments of the _enhanced noise_ (see Definition 3.8), and we then use these estimates to bound the Feynman-Kac formula found in Section 3. We first recall a few facts about white noise. **Definition 5.1**.: On a probability space \((\Omega,\mathds{P})\), we define a (spatial) white noise on \(\mathds{R}^{d}\) to be a random variable \(\xi:\Omega\to\mathscr{S}^{\prime}(\mathds{R}^{d})\) such that for all \(f\in\mathscr{S}(\mathds{R}^{d})\), \(\langle\xi,f\rangle\) is a centered Gaussian random variable with covariance \(\mathds{E}[\langle\xi,f\rangle\langle\xi,g\rangle]=\langle f,g\rangle\) for all \(f,g\in\mathscr{S}(\mathds{R}^{d})\). Note that since \(f\mapsto\langle\xi,f\rangle\) is linear and \(\|\langle\xi,f\rangle\|_{L^{2}(\Omega)}=\|f\|_{L^{2}(\mathds{R}^{d})}\), we can extend \(\xi\) to a bounded linear operator \(W:L^{2}(\mathds{R}^{d})\to L^{2}(\Omega,\mathds{P})\) such that \(W(f)\) is also a centered Gaussian random variable with \(\mathds{E}[W(f)W(g)]=\langle f,g\rangle\) for all \(f,g\in L^{2}(\mathds{R}^{d})\). We abuse notation and write \(\langle\xi,f\rangle\) for \(W(f)\) when \(f\in L^{2}(\mathds{R}^{d})\). Now we fix a radially symmetric, nonnegative function \(\psi\in C_{c}^{\infty}(\mathds{R}^{d})\) such that \(\int_{\mathds{R}^{d}}\psi(x)dx=1\), and define \(\psi_{\epsilon}(x)=\frac{1}{\epsilon^{d}}\psi(\frac{x}{\epsilon})\). Let \(\xi_{\epsilon}=\psi_{\epsilon}*\xi\) be the mollification of \(\xi\). Recall that the mollification of the noise on the box \([-\frac{L}{2},\frac{L}{2}]^{d}\) is denoted by \(\xi_{L,\epsilon}\) and given by \[\xi_{L,\epsilon}=\sum_{k\in\mathds{N}^{d}}\tau(\tfrac{\epsilon}{L}k)\langle\xi,\mathfrak{n}_{k,L}\rangle\mathfrak{n}_{k,L}, \tag{5.1}\] where \(\tau\in C_{c}^{\infty}(\mathds{R}^{d},[0,1])\) is an even function such that \(\tau(x)=1\) for \(|x|\leq\frac{1}{2}\) and \(\{\mathfrak{n}_{k,L}\}_{k\in\mathds{N}^{d}}\) is the Neumann basis for \(Q_{L}=[-\frac{L}{2},\frac{L}{2}]^{d}\), as introduced in Section 2. We also define \(\xi_{L,\epsilon}^{y}\) on \(Q_{L}^{y}:=y+Q_{L}\) for \(y\in\mathds{R}^{d}\) as the shifted white noise \[\xi_{L,\epsilon}^{y}:=\mathcal{T}_{y}\left(\sum_{k\in\mathds{N}^{d}}\tau(\tfrac{\epsilon}{L}k)\langle\xi,\mathcal{T}_{y}\mathfrak{n}_{k,L}\rangle\mathfrak{n}_{k,L}\right), \tag{5.2}\] where \(\mathcal{T}_{y}f(\cdot)=f(\cdot+y)\) for all \(f\in C(\mathds{R}^{d})\).
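The construction (5.1) is straightforward to sample. The following \(d=1\) sketch draws the coefficients \(\langle\xi,\mathfrak{n}_{k,L}\rangle\) as i.i.d. standard Gaussians (as Definition 5.1 dictates for an orthonormal family) and assembles \(\xi_{L,\epsilon}\) with a Fourier cutoff. The box size, mode count, and the particular cutoff \(\tau\) (which below is merely continuous rather than \(C_{c}^{\infty}\)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

L, n_modes, eps, n_grid = 8.0, 512, 0.05, 1024
x = np.linspace(-L / 2, L / 2, n_grid)

def neumann_basis(k):
    """L^2-normalized Neumann eigenfunctions on Q_L = [-L/2, L/2] (d = 1)."""
    if k == 0:
        return np.full_like(x, 1.0 / np.sqrt(L))
    return np.sqrt(2.0 / L) * np.cos(k * np.pi * (x + L / 2) / L)

def tau(r):
    """Even cutoff equal to 1 on [0, 1/2] and 0 beyond 1 (illustrative choice)."""
    return np.clip(2.0 - 2.0 * abs(r), 0.0, 1.0) ** 2

# xi_{L,eps} = sum_k tau(eps*k/L) <xi, n_k> n_k with i.i.d. N(0,1) coefficients
coeff = rng.standard_normal(n_modes)
xi_eps = sum(tau(eps * k / L) * coeff[k] * neumann_basis(k) for k in range(n_modes))
print("sup-norm of the sampled mollified noise:", np.abs(xi_eps).max())
```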
Define \(\sigma_{\eta}(x)=\frac{1}{\eta+\pi^{2}|x|^{2}}\) for all \(x\in\mathds{R}^{d}\). Then, for \(u\in\mathscr{S}_{\mathfrak{n}}^{\prime}(Q_{L})\), \[(\eta-\Delta)^{-1}u=:\sigma_{\eta}(D)u:=\sum_{k\in\mathds{N}^{d}}\sigma_{\eta}(\tfrac{k}{L})\langle u,\mathfrak{n}_{k,L}\rangle\mathfrak{n}_{k,L}. \tag{5.3}\] We write \(\sigma(D):=\sigma_{1}(D)\). For any \(\delta>0\), it is known from [14, Theorem 6.7] that almost surely \(\xi_{L,\epsilon}^{y}\to\xi_{L}^{y}\) in \(\mathscr{C}^{-d/2-\delta}\) for all \(y\in\mathds{R}^{d}\). ### Bound on the enhanced noise From the Feynman-Kac representation (see (3.2) and (3.3) in Theorem 3.2), we need to bound the terms \(Z\), \(Y\) and \(\eta\) appearing there in order to estimate the solution. In view of Proposition 3.1 and Definition 3.4, these terms are controlled by the enhanced white noise. Recall the following notation from the previous sections: for \(L\in(1,\infty]\) and \(\epsilon\in[0,1]\), we write \(\xi_{L,\epsilon}\) for the mollification of the spatial white noise in dimension \(d\) restricted to \(Q_{L}\), with \(Q_{\infty}:=\mathds{R}^{d}\). Set \(Z_{L,\epsilon}=(1-\frac{1}{2}\Delta)^{-1}\xi_{L,\epsilon}\). Let \(\eta_{L,\epsilon}>0\) be the constant \(\eta_{0}\) from Proposition 3.1, to be specified later, and let \(Y_{L,\epsilon}\) be the solution to (3.1) associated with \(Z_{L,\epsilon}\) and \(\eta=\eta_{L,\epsilon}\). The enhancement \(\mathfrak{Z}_{L,\epsilon}\) is defined by Theorem 3.5 with \(\xi_{L,\epsilon}\) in place of \(\xi_{\epsilon}\). The following proposition says that \(\mathfrak{Z}_{L,\epsilon}\) is bounded by the logarithm of the side length \(L\), which enables us to bound \(Z_{L,\epsilon}\), \(Y_{L,\epsilon}\), and \(\eta_{L,\epsilon}\). The following result is for \(d=3\); we refer to [14, Lemma 6.15] for the \(d=2\) case. **Proposition 5.2**.: _Let \(\frac{2}{5}<\varrho<\frac{1}{2}\). Let \(\epsilon\in[0,1]\) and define_ \[\mathfrak{a}_{\epsilon}:=\max\left\{1,\sup_{L>e,L\in\mathds{N}}\frac{\|\mathfrak{Z}_{L,\epsilon}\|_{\mathscr{Z}^{\varrho}}}{(\log L)^{2}}\right\}. \tag{5.4}\] _Then \(\mathfrak{a}_{\epsilon}\) is almost surely finite. Moreover, there exists \(h_{0}>0\) such that for all \(h\in[0,h_{0}]\) we have \(\sup_{\epsilon\in[0,1]}\mathds{E}[e^{h\sqrt{\mathfrak{a}_{\epsilon}}}]<\infty\)._ Proof.: Before proceeding to the main body of the proof, let us briefly explain the main idea. Below, we first express \(\mathfrak{a}_{\epsilon}\) with the help of inhomogeneous terms (of bounded order) from the Wiener chaos expansion of the white noise. Due to the hypercontractivity of the Wiener chaos (see [14]), for every random variable \(X\) in the \(k\)-th inhomogeneous Wiener chaos generated by the white noise and any \(p>2\), we have \[\mathds{E}[|X|^{p}]\leq C_{k,p}\mathds{E}[|X|^{2}]^{p/2}, \tag{5.5}\] where \(C_{k,p}=(p-1)^{pk/2}\). This allows us to bound the higher moments \(\mathds{E}[(\sqrt{\mathfrak{a}_{\epsilon}})^{p}]\) by a constant multiple of \(\mathds{E}[\mathfrak{a}_{\epsilon}]^{p/2}\), which finally leads to the desired result.
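Before carrying out the details, here is a small numerical check of the hypercontractivity bound (5.5): \(X=H_{k}(g)\), with \(g\sim N(0,1)\) and \(H_{k}\) the \(k\)-th probabilists' Hermite polynomial, lies in the \(k\)-th (homogeneous) Wiener chaos, to which the same bound applies. The chaos order \(k=3\), the exponent \(p=6\), and the sample size are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)

k, p, n = 3, 6.0, 2_000_000
g = rng.standard_normal(n)
coeffs = np.zeros(k + 1)
coeffs[k] = 1.0
X = hermeval(g, coeffs)          # H_3(g) = g^3 - 3g, an element of the 3rd chaos

lhs = np.mean(np.abs(X) ** p)                                  # E|X|^p
rhs = (p - 1) ** (p * k / 2) * np.mean(X ** 2) ** (p / 2)      # C_{k,p} (E X^2)^{p/2}
print(f"E|X|^p = {lhs:.3e}  <=  C_kp (E X^2)^(p/2) = {rhs:.3e} : {lhs <= rhs}")
```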
The details of this argument are given as follows. Recall that \(\Xi_{L,\epsilon}:=\{Z_{L,\epsilon},Z^{\boldsymbol{\gamma}}_{L,\epsilon}-c^{\boldsymbol{\gamma}}_{\epsilon},Z^{\boldsymbol{\gamma}}_{L,\epsilon},Z^{\boldsymbol{\gamma}}_{L,\epsilon},Z^{\boldsymbol{\gamma}}_{L,\epsilon}-c^{\boldsymbol{\gamma}}_{\epsilon},\nabla Q_{L,\epsilon}\circ\nabla Z_{L,\epsilon}\}\). We know that \[\|\mathfrak{Z}_{L,\epsilon}\|_{\mathscr{Z}^{\varrho}}=\sum_{\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}}\|\zeta_{L,\epsilon}\|_{\mathscr{C}^{-\alpha}}, \tag{5.6}\] where \(\alpha>0\) depends on each \(\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}\). We bound each term on the right hand side of this identity to derive the desired bound. Let \(\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}\). Observe that \(\zeta_{L,\epsilon}\) lies in the \(k\)-th Wiener chaos, where \(k=k(\zeta)\) denotes the number of occurrences of the noise \(\xi_{L,\epsilon}\) in \(\zeta_{L,\epsilon}\). For instance, \(k(Z)=1\), \(k(Z^{\boldsymbol{\gamma}})=3\), and \(k(Z^{\boldsymbol{\gamma}})=4\). Note also that \(\max_{\zeta}k(\zeta)=4\). For the moment, we assume that for every \(\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}\) there exist constants \(a\in\mathds{R}\) and \(C_{0}>0\) (not depending on \(L,\epsilon\)) such that for all \(i\in\mathds{N}_{-1}\) and \(x\in[-L/2,L/2]^{d}\), \[\mathds{E}[|\Delta_{i}\zeta_{L,\epsilon}(x)|^{2}]\leq C_{0}2^{ai}. \tag{5.7}\] We prove these bounds toward the end of the proof of this proposition. Set \(C_{\kappa}=\sum_{i=-1}^{\infty}2^{-\kappa i}\). Observe that we can use the definition of the Besov space (see Section A) and (5.5) to obtain \[\mathds{E}\Big[\|\zeta_{L,\epsilon}\|^{p}_{B^{\mathfrak{n},-\frac{a}{2}-\kappa}_{p,p}}\Big]=\sum_{i=-1}^{\infty}2^{(-\frac{a}{2}-\kappa)pi}\mathds{E}\big[\|\Delta_{i}\zeta_{L,\epsilon}\|^{p}_{L^{p}}\big]\leq C^{\frac{p}{2}}_{0}p^{\frac{pk}{2}}L^{d}\Big(\sum_{i=-1}^{\infty}2^{-p\kappa i}\Big)\leq C_{\kappa}(C_{0}p^{k})^{\frac{p}{2}}L^{d}.\] By the embedding property of the Besov spaces, there exists \(C>0\) such that \(\|\cdot\|_{\mathscr{C}^{-\frac{a}{2}-\kappa-\frac{2}{p}}}\leq C\|\cdot\|_{B^{\mathfrak{n},-\frac{a}{2}-\kappa}_{p,p}}\). Combining this with the observation in the above display implies that for all \(\kappa>0\) there exists \(C>0\), independent of \(\zeta\), such that for all \(p\geq 1\), \[\mathds{E}[\|\zeta_{L,\epsilon}\|^{p}_{\mathscr{C}^{-\frac{a}{2}-\kappa-\frac{2}{p}}}]\leq C_{\kappa}C^{p}L^{d}p^{\frac{pk}{2}}. \tag{5.8}\] Choose \(p_{0}\in\mathds{N}\) large so that \(\frac{2}{p_{0}}<\kappa\) and \(\frac{2p_{0}}{k}>2\). Then there exists \(C_{k}>0\) such that for \(p\geq p_{0}\), \[\mathds{E}[\|\zeta_{L,\epsilon}\|^{\frac{2p}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}]\leq C_{\kappa}C^{p}_{k}L^{d}p^{p}. \tag{5.9}\] For \(h\geq 0\), by the above upper bounds, we have \[\begin{split}\mathds{E}[\exp(h\|\zeta_{L,\epsilon}\|^{\frac{2}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}})]&=\sum_{n=0}^{\infty}\frac{h^{n}}{n!}\mathds{E}[\|\zeta_{L,\epsilon}\|^{\frac{2n}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}]\leq\sum_{n=0}^{p_{0}-1}\frac{h^{n}}{n!}\mathds{E}[\|\zeta_{L,\epsilon}\|^{2p_{0}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}]^{\frac{n}{kp_{0}}}+\sum_{n=p_{0}}^{\infty}\frac{h^{n}}{n!}\mathds{E}[\|\zeta_{L,\epsilon}\|^{\frac{2n}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}]\\&\leq\sum_{n=0}^{p_{0}-1}\frac{h^{n}}{n!}(C_{\kappa}C^{2p_{0}}L^{d}(2p_{0})^{p_{0}k})^{\frac{n}{kp_{0}}}+\sum_{n=p_{0}}^{\infty}\frac{h^{n}}{n!}C_{\kappa}C^{n}_{k}L^{d}n^{n}\\&\leq C^{k}_{\kappa}L^{\frac{d}{k}}\exp(2hC^{\frac{2}{k}}p_{0})+C_{\kappa}L^{d}\sum_{n=p_{0}}^{\infty}(heC_{k})^{n},\end{split}\] where we have used Jensen's inequality and \(\frac{1}{n!}\leq(\frac{e}{n})^{n}\).
Then, there exists \(h_{0}\) such that for all \(h\in[0,h_{0}]\) we have \[\mathds{E}[\exp(h\|\zeta_{L,\epsilon}\|^{\frac{2}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}})]\leq AL^{\frac{d}{k}\lor d}\] for some \(A>0\) which is independent of \(k\). We can choose \(b>\frac{d}{k}\lor d+1\) such that \[\sum_{L\in\mathds{N}}L^{-b}\mathds{E}[\exp(h_{0}\|\zeta_{L,\epsilon}\|^{\frac{2}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}})]<\infty.\] This implies that for \(\tilde{\zeta}_{\epsilon}:=\sum_{L\in\mathds{N}}L^{-b}\exp(h_{0}\|\zeta_{L,\epsilon}\|^{\frac{2}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}})\), we have \[\frac{\|\zeta_{L,\epsilon}\|^{\frac{2}{k}}_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}}{\log L}\leq\frac{1}{h_{0}}(b+\log\tilde{\zeta}_{\epsilon})\] for all \(L\in\mathds{N}\) with \(L>e\). We can rewrite this as \[\|\zeta_{L,\epsilon}\|_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}\leq\mathfrak{a}_{\zeta,\epsilon}(\log L)^{\frac{k}{2}}, \tag{5.10}\] where \(\mathfrak{a}_{\zeta,\epsilon}:=(\frac{b+\log\tilde{\zeta}_{\epsilon}}{h_{0}})^{2}\), since \(k(\zeta)\leq 4\). Moreover, Jensen's inequality implies that \[\sup_{\epsilon\in[0,1]}\mathds{E}[e^{h\sqrt{\mathfrak{a}_{\zeta,\epsilon}}}]\leq e^{\frac{hb}{h_{0}}}\sup_{\epsilon\in[0,1]}\mathds{E}[\tilde{\zeta}_{\epsilon}]^{\frac{h}{h_{0}}}<\infty \tag{5.11}\] for all \(h\in[0,h_{0}]\). From (5.10), we have \[\|\mathfrak{Z}_{L,\epsilon}\|_{\mathscr{Z}^{\varrho}}=\sum_{\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}}\|\zeta_{L,\epsilon}\|_{\mathscr{C}^{-\frac{a}{2}-2\kappa}}\leq(\log L)^{2}\sum_{\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}}\mathfrak{a}_{\zeta,\epsilon}=:(\log L)^{2}\tilde{\mathfrak{a}}_{\epsilon}, \tag{5.12}\] where \(\tilde{\mathfrak{a}}_{\epsilon}:=\sum_{\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}}\mathfrak{a}_{\zeta,\epsilon}\). Since \(\sqrt{x+y}\leq\sqrt{x}+\sqrt{y}\) for \(x,y\geq 0\), we can take \(\tilde{h}_{0}>0\) such that for all \(\tilde{h}\in[0,\tilde{h}_{0}]\), \[\sup_{\epsilon\in[0,1]}\mathds{E}[e^{\tilde{h}\sqrt{\tilde{\mathfrak{a}}_{\epsilon}}}]\leq\sup_{\epsilon\in[0,1]}\mathds{E}\Big[\exp\Big(\tilde{h}\sum_{\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}}\sqrt{\mathfrak{a}_{\zeta,\epsilon}}\Big)\Big]\leq\sup_{\epsilon\in[0,1]}\prod_{\zeta_{L,\epsilon}\in\Xi_{L,\epsilon}}(\mathds{E}[e^{5\tilde{h}_{0}\sqrt{\mathfrak{a}_{\zeta,\epsilon}}}])^{\frac{1}{5}}<\infty, \tag{5.13}\] where we used Hölder's inequality and (5.11). Since \(\mathfrak{a}_{\epsilon}\) in (5.4) satisfies \(\mathfrak{a}_{\epsilon}\leq\tilde{\mathfrak{a}}_{\epsilon}\), we get the desired result. Now it remains to show (5.7). In particular, we can show that for any fixed \(\delta\in(0,1)\), \[\mathds{E}[|\Delta_{i}Z_{L,\epsilon}(x)|^{2}]\lesssim 2^{(-1+\delta)i},\quad\mathds{E}[|\Delta_{i}(Z^{\boldsymbol{\gamma}}_{L,\epsilon}(x)-c^{\boldsymbol{\gamma}}_{\epsilon})|^{2}]\lesssim 2^{(-2+\delta)i},\quad\mathds{E}[|\Delta_{i}Z^{\boldsymbol{\gamma}}_{L,\epsilon}(x)|^{2}]\lesssim 2^{(-3+\delta)i},\] \[\mathds{E}[|\Delta_{i}Z^{\boldsymbol{\gamma}}_{L,\epsilon}(x)|^{2}]\lesssim 2^{(-3+\delta)i},\quad\mathds{E}[|\Delta_{i}(Z^{\boldsymbol{\gamma}}_{L,\epsilon}(x)-c^{\boldsymbol{\gamma}}_{\epsilon})|^{2}]\lesssim 2^{(-4+\delta)i},\quad\mathds{E}[|\Delta_{i}(\nabla Q_{L,\epsilon}\circ\nabla Z_{L,\epsilon})(x)|^{2}]\lesssim 2^{\delta i}.\] Indeed, these estimates were shown for the parabolic case on the torus in the proof of [11, Theorem 6.12].
With a similar argument as in the proof of Proposition 3.8, we can obtain the same estimates for the \(L^{2}\) bounds of the Wiener chaos components of the elements of \(\Xi_{L,\epsilon}\), with minor modifications. We demonstrate below the bound on \(\mathds{E}[|\Delta_{i}Z_{L,\epsilon}(x)|^{2}]\); the other cases follow by similar arguments. Recall from Appendix A that for \(f\in\mathscr{S}^{\prime}\) and any \(\delta>0\), \[\Delta_{i}f=\sum_{k\in\mathds{N}_{0}^{d}}\langle f,\mathfrak{n}_{k,L}\rangle\varrho_{i}\Big(\frac{k}{L}\Big)\mathfrak{n}_{k,L},\quad\text{where}\quad\varrho_{i}(x)\lesssim\left(\frac{2^{i}}{1+|x|}\right)^{\frac{3+3\delta}{2}}.\] Since \(\varrho_{i}(x)=0\) if \(|x|\leq 2^{i}\), we have \[\sigma\Big(\frac{k}{L}\Big)\varrho_{i}\Big(\frac{k}{L}\Big)\lesssim 2^{-2i}\varrho_{i}\Big(\frac{k}{L}\Big)\lesssim 2^{\frac{-1+3\delta}{2}i}\left(\frac{1}{1+\left|\frac{k}{L}\right|}\right)^{\frac{3+3\delta}{2}},\] where \(\sigma(x)=(1+\pi|x|^{2})^{-1}\). Using a computation similar to [13, Lemma 6.11], one gets \(\mathds{E}[\langle\xi_{L,\epsilon},\mathfrak{n}_{k,L}\rangle\langle\xi_{L,\epsilon},\mathfrak{n}_{l,L}\rangle]\leq\prod_{i=1}^{3}(\epsilon\wedge((k_{i}+\ell_{i})\lor 1)^{-1})\). Since \(\|\mathfrak{n}_{k,L}\|_{L^{\infty}}\lesssim L^{-3/2}\), and noting that \(Z_{L,\epsilon}(x)=\mathcal{I}(\xi_{L,\epsilon})(x)=(1-\frac{1}{2}\Delta)^{-1}\xi_{L,\epsilon}(x)\), we have \[\begin{split}2^{-(-1+3\delta)i}\mathds{E}[|\Delta_{i}Z_{L,\epsilon}(x)|^{2}]&\lesssim L^{-3}\sum_{k,l\in\mathds{N}_{0}^{3}}\frac{1}{(1+|\frac{k}{L}|)^{\frac{3+3\delta}{2}}}\frac{1}{(1+|\frac{l}{L}|)^{\frac{3+3\delta}{2}}}\big|\mathds{E}[\langle\xi_{L,\epsilon},\mathfrak{n}_{k,L}\rangle\langle\xi_{L,\epsilon},\mathfrak{n}_{l,L}\rangle]\big|\\&\lesssim\left(\sum_{k,l\in\frac{1}{L}\mathds{N}_{0}}L^{-3}\frac{1}{(1+k)^{\frac{1+\delta}{2}}}\frac{1}{(1+l)^{\frac{1+\delta}{2}}}\frac{1}{(k+l)\lor 1}\right)^{3}\\&\lesssim\left(\sum_{k\in\frac{1}{L}\mathds{N}_{0}}L^{-1}\frac{1}{(1+k)^{1+\frac{\delta}{2}}}\right)^{9}\leq C,\end{split}\] where \(C\) is a universal constant. The second inequality is obtained by symmetrizing the sum. The above display shows the bound on \(\mathds{E}[|\Delta_{i}Z_{L,\epsilon}(x)|^{2}]\). This completes the proof. Using Proposition 5.2, we can obtain the following bounds on \(Z_{L,\epsilon}\), \(Y_{L,\epsilon}\), and \(\eta_{L,\epsilon}\). **Proposition 5.3**.: _Let \(\frac{2}{5}<\alpha<\frac{1}{2}\) and let \(\aleph>0\) be the constant in (3.21). Then for any \(\epsilon\in[0,1]\), we have_ \[\|Z_{L,\epsilon}\|_{\mathscr{C}^{\alpha}}\leq\mathfrak{a}_{\epsilon}(\log L)^{2},\quad\|Y_{L,\epsilon}\|_{\mathscr{C}^{2\alpha}}\leq 2\mathfrak{a}_{\epsilon}(\log L)^{2},\quad\eta_{L,\epsilon}\leq C\mathfrak{a}_{\epsilon}^{\aleph}(\log L)^{2\aleph}, \tag{5.14}\] _where \(\mathfrak{a}_{\epsilon}\) is defined in Proposition 5.2._ Proof.: From the definition of \(\mathfrak{a}_{\epsilon}\), we know \(\|Z_{L,\epsilon}\|_{\mathscr{C}^{\alpha}}\leq\mathfrak{a}_{\epsilon}(\log L)^{2}\). Furthermore, it is also clear that if we take \(\eta_{L,\epsilon}:=C\|\mathfrak{Z}_{L,\epsilon}\|^{\aleph}_{\mathscr{Z}^{\varrho}}\), then \(\eta_{L,\epsilon}\leq C^{\aleph}\mathfrak{a}_{\epsilon}^{\aleph}(\log L)^{2\aleph}\). We are left to show the bound on \(\|Y_{L,\epsilon}\|_{\mathscr{C}^{2\alpha}}\). Recall from Section 3 that \(Y_{L,\epsilon}=v+\frac{1}{2}(Z_{L,\epsilon}^{\mathcal{Y}}+Z_{L,\epsilon}^{\mathcal{Y}})\), where \(v\) is the fixed point solution of the equation (3.12).
Recall the map \(\mathcal{G}:\mathcal{D}_{\mathcal{Q}}^{\alpha}\to\mathscr{C}^{\alpha+1}\times\mathscr{C}^{\alpha}\) from Proposition 3.8, and note that \((v,v^{\prime})\) is the fixed point of the map \(\mathcal{G}\). By choosing \(\eta\) large in (3.19) of Proposition 3.8, it follows that \(\|v\|_{\mathscr{C}^{\alpha+1}}\leq C\|\mathfrak{Z}\|_{\mathscr{Z}^{\varrho}}\) for some \(C>0\). Since \(Y_{L,\epsilon}=v+\frac{1}{2}(Z_{L,\epsilon}^{\mathcal{Y}}+Z_{L,\epsilon}^{\mathcal{Y}})\), we have for \(\alpha<\varrho<\frac{1}{2}\) \[\|Y_{L,\epsilon}\|_{\mathscr{C}^{2\alpha}}\leq(C+1)\|\mathfrak{Z}\|_{\mathscr{Z}^{\varrho}}\leq(C+1)\mathfrak{a}_{\epsilon}(\log L)^{2}.\] This completes the proof. ### Asymptotics of PAM started from constant initial data In this section, we show how the solution of the parabolic Anderson model started from constant initial data is related to the largest point of the spectrum of the Anderson Hamiltonian. We achieve this goal in Lemmas 5.7 and 5.8. It is worth noting that similar claims have been proved for the \(d=2\) case in [13]; proving these results for \(d=3\) requires new estimates, which are provided in Lemma 5.4 and Proposition 5.5. **Lemma 5.4**.: _Recall \(\mathds{Q}^{x,y}_{L,\epsilon}\) and the diffusion \(X\) from Section 2. Furthermore, recall \(\mathscr{D}^{y}_{L,\epsilon}\), defined in terms of \(Z^{y}_{L,\epsilon}\), \(Y^{y}_{L,\epsilon}\) and \(\eta\), from Section 4. Then, we have_ \[\mathds{Q}^{x,y}_{L,\epsilon}(X[0,t]\not\subset Q^{y}_{L})\leq C\exp\left(Ct\mathfrak{a}_{\epsilon}(\log L)^{2}-\frac{L^{2}}{Ct}\right), \tag{5.15}\] _and_ \[\mathds{E}_{\mathds{Q}^{x,y}_{L,\epsilon}}\left[\mathds{1}_{X[0,t]\not\subset Q^{y}_{r},X[0,t]\subset Q^{y}_{L}}\cdot\mathscr{D}^{y}_{L,\epsilon}(0,t)\right]\leq C\exp\left(C\mathfrak{a}^{\aleph+1}_{\epsilon}(\log L)^{2\aleph+2}-\frac{r^{2}}{Ct}\right), \tag{5.16}\] _where \(\mathfrak{a}_{\epsilon}\) is the same as the one defined in (5.4)._ Proof.: In what follows, we use the upper bounds on \(\|Z^{y}_{L,\epsilon}\|_{\mathscr{C}^{\alpha}}\), \(\|Y^{y}_{L,\epsilon}\|_{\mathscr{C}^{2\alpha}}\) and \(\eta^{y}_{L,\epsilon}\) from Proposition 5.3 to derive upper and lower bounds on \(\mathscr{D}^{y}_{L,\epsilon}\). Combining (5.14) with the definition (3.3) of \(\mathscr{D}^{y}_{L,\epsilon}\), for \(L>1\) and \(0\leq s<t\), we have \[e^{-C\mathfrak{a}^{\aleph+1}_{\epsilon}(t-s)(\log L)^{2\aleph+2}}\leq\mathds{1}_{X[s,t]\subset Q^{y}_{L}}\cdot\mathscr{D}^{y}_{L,\epsilon}(s,t)\leq e^{C\mathfrak{a}^{\aleph+1}_{\epsilon}(t-s)(\log L)^{2\aleph+2}}, \tag{5.17}\] where \(C>0\) is an absolute constant and \(\mathfrak{a}_{\epsilon}\) is as in (5.4). To complete the proof of the inequalities (5.15) and (5.16), we further need a bound on the transition probability of the diffusion \(X\) defined in (3.4). We derive it below using the estimates of Theorem 4.14. Recall the upper and lower bounds on the transition kernel \(\Gamma^{L}_{t}(x,y)\) from (4.27) and (4.28), respectively. We may bound \(\|U_{L,\epsilon}\|_{\infty}=\|Z_{L,\epsilon}+Y_{L,\epsilon}\|_{\infty}\) by \(\|Z_{L,\epsilon}\|_{\mathscr{C}^{\alpha}}+\|Y_{L,\epsilon}\|_{\mathscr{C}^{2\alpha}}\), which is in turn bounded above by \(C\mathfrak{a}_{\epsilon}(\log L)^{2}\) due to (5.14).
As a result, we get for \(L>r>e\) with \(L,r\in\mathds{N}\), \[\begin{split}\Gamma^{L}_{t}(x,y)&\geq\frac{1}{t^{d/2}}\exp\left(-C\mathfrak{a}_{\epsilon}(\log L)^{2}\Big(1+\frac{r^{2}}{t}\Big)\right),\\ \Gamma^{L}_{t}(x,y)&\leq p_{t}(x,y)e^{\mathfrak{a}_{\epsilon}(\log L)^{2}},\end{split} \tag{5.18}\] where \(p_{t}\) is the transition density of \(d\)-dimensional Brownian motion. Note that \(\mathds{Q}^{x,y}_{L,\epsilon}(X[0,t]\not\subset Q^{y}_{L})\) is bounded above by \(\mathds{P}\big(\sup_{s\in[0,t]}|X_{s}-x|\geq L/2\big)\). Corollary 4.15 bounds the latter probability by \(C\exp(Ct\|Z_{L,\epsilon}+Y_{L,\epsilon}\|_{\infty}-\frac{L^{2}}{Ct})\). Due to Proposition 5.3, we may bound \(\|Z_{L,\epsilon}+Y_{L,\epsilon}\|_{\infty}\) by \(C\mathfrak{a}_{\epsilon}(\log L)^{2}\). Plugging this into the bound \(C\exp(Ct\|Z_{L,\epsilon}+Y_{L,\epsilon}\|_{\infty}-\frac{L^{2}}{Ct})\) yields the inequality (5.15). To prove (5.16), we first apply (5.17) to obtain \[\mathds{E}_{\mathds{Q}^{x,y}_{L,\epsilon}}\left[\mathds{1}_{X[0,t]\not\subset Q^{y}_{r},X[0,t]\subset Q^{y}_{L}}\cdot\mathscr{D}^{y}_{L,\epsilon}(0,t)\right]\leq\mathds{Q}^{x,y}_{L,\epsilon}(X[0,t]\not\subset Q^{y}_{r})e^{C\mathfrak{a}^{\aleph+1}_{\epsilon}t(\log L)^{2\aleph+2}}.\] Substituting the analogue of (5.15), with \(r\) in place of \(L\) inside the Gaussian factor, into the right hand side of the above display yields (5.16). Now we use the bounds of Lemma 5.4 to obtain upper and lower bounds on the solution of the PAM (2.1) started from constant initial data. Our proof ideas are similar in spirit to [13, Lemma 3.2]. **Proposition 5.5**.: _Let \(L>1\). Recall that \(u^{\mathds{1},y}_{L}\) is the solution of the PAM restricted to the box \(Q^{y}_{L}\) under Dirichlet boundary conditions, started from the constant initial data \(\mathds{1}\). Then we have_ \[u^{\mathds{1},y}_{L}(t,x)\leq C\exp\Big(t\boldsymbol{\lambda}_{1}(Q^{y}_{L})+C\mathfrak{a}^{\aleph+1}_{0}(\log L)^{2\aleph+2}\Big), \tag{5.19}\] _where \(\mathfrak{a}_{0}:=\lim_{\epsilon\to 0}\mathfrak{a}_{\epsilon}\). Moreover, for \(t>\delta>1\),_ \[u^{\mathds{1},y}_{L}(t,x)\geq\exp\left(-C\mathfrak{a}^{\aleph+1}_{0}\delta(\log L)^{2\aleph+2}-\frac{C\mathfrak{a}_{0}(\log L)^{2}r^{2}}{\delta}+(t-\delta)\boldsymbol{\lambda}_{1}(Q^{y}_{r})\right) \tag{5.20}\] \[-\exp\left(C\mathfrak{a}^{2\aleph+1}_{0}t(\log L)^{2\aleph+2}-\frac{L^{2}}{C\delta}\right), \tag{5.21}\] _where \(\aleph\) is the same constant as in Proposition 5.3._ Proof.: We start by writing \[u_{L}^{\mathds{1},y}(t,x)=\mathds{E}_{\mathds{Q}^{x,y}_{L}}\left[\mathscr{D}^{y}_{L}(0,t)\mathds{1}_{X[0,t]\subset Q^{y}_{L}}\right]=\int_{Q^{y}_{L}}u_{L}^{x,y}(t,z)dz, \tag{5.22}\] where \(u_{L}^{x,y}(t,z):=u_{L}^{\delta_{x},y}(t,z)\). Note that \(u_{L}^{x,y}(t,z)=\lim_{\epsilon\to 0}u_{L}^{\psi_{\epsilon}^{x},y}(t,z)\), where \(\psi_{\epsilon}^{x}(z):=\psi_{\epsilon}(z-x)\in C^{\infty}(Q_{L})\) (see [19]).
For \(\delta\in(1,t)\), \(\epsilon\in[0,1]\) and \(0<r<L\), we have \[\begin{split}u_{r}^{x,y}(t,z)&=\lim_{\epsilon\to 0}\mathds{E}_{\mathds{Q}^{x,y}_{r,\epsilon}}\left[\mathscr{D}^{y}_{r,\epsilon}(0,t)\psi_{\epsilon}^{x}(X_{t})\mathds{1}_{X[0,t]\subset Q^{y}_{r}}\right]\\&\leq\lim_{\epsilon\to 0}e^{C\mathfrak{a}_{\epsilon}^{\aleph+1}\delta(\log r)^{2\aleph+2}}\mathds{E}_{\mathds{Q}^{x,y}_{r,\epsilon}}\left[\mathscr{D}^{y}_{r,\epsilon}(0,t-\delta)\mathds{1}_{X[0,t-\delta]\subset Q^{y}_{r}}\mathds{E}_{\mathds{Q}^{x,y}_{r,\epsilon}}[\psi_{\epsilon}^{x}(X_{t})|X_{t-\delta}]\right]\\&=\lim_{\epsilon\to 0}e^{C\mathfrak{a}_{\epsilon}^{\aleph+1}\delta(\log r)^{2\aleph+2}}\mathds{E}_{\mathds{Q}^{x,y}_{r,\epsilon}}\left[\mathscr{D}^{y}_{r,\epsilon}(0,t-\delta)\mathds{1}_{X[0,t-\delta]\subset Q^{y}_{r}}\int_{\mathds{R}^{3}}\Gamma_{\delta}^{r}(X_{t-\delta},z^{\prime})\psi_{\epsilon}^{x}(z^{\prime})dz^{\prime}\right]\\&\leq Ce^{C\mathfrak{a}_{0}^{\aleph+1}\delta(\log r)^{2\aleph+2}+\mathfrak{a}_{0}(\log r)^{2}}\mathds{E}_{\mathds{Q}^{x,y}_{r}}\left[\mathscr{D}^{y}_{r}(0,t-\delta)\mathds{1}_{X[0,t-\delta]\subset Q^{y}_{r}}\right]\\&\leq Ce^{C\mathfrak{a}_{0}^{\aleph+1}\delta(\log r)^{2\aleph+2}}u_{r}^{\mathds{1},y}(t-\delta,z).\end{split} \tag{5.23}\] The first inequality in the above display is obtained by splitting \(\mathscr{D}^{y}_{r,\epsilon}(0,t)\mathds{1}_{X[0,t]\subset Q^{y}_{r}}\) into the product of \(\mathscr{D}^{y}_{r,\epsilon}(0,t-\delta)\mathds{1}_{X[0,t-\delta]\subset Q^{y}_{r}}\) and \(\mathscr{D}^{y}_{r,\epsilon}(t-\delta,t)\mathds{1}_{X[t-\delta,t]\subset Q^{y}_{r}}\), and using (5.17) to bound the latter factor. The second-to-last inequality is obtained by applying the bound (5.18) on the transition kernel \(\Gamma_{\delta}^{r}(X_{t-\delta},z^{\prime})\). From (5.22) and (5.17) again, we have for \(q\in[1,\infty)\), \[\left(\int_{Q^{y}_{r}}|u_{r}^{\mathds{1},y}(t-\delta,z)|^{q}dz\right)^{\frac{1}{q}}\lesssim e^{C\mathfrak{a}_{0}^{\aleph+1}(t-\delta)(\log r)^{2\aleph+2}}.\] This implies that for \(q\in[1,\infty]\), \[\|u_{r}^{x,y}(t,\cdot)\|_{L^{q}}\leq Ce^{C\mathfrak{a}_{0}^{\aleph+1}t(\log r)^{2\aleph+2}}. \tag{5.24}\] Now observe that for \(\phi\in C(Q_{L})\), using Hölder's inequality, \[\int_{Q^{y}_{L}}u_{L}^{\phi,y}(t,x)dx=\sum_{n\in\mathds{N}}e^{t\boldsymbol{\lambda}_{n}(Q^{y}_{L})}\langle v_{n,L}^{y},\phi\rangle\langle v_{n,L}^{y},\mathds{1}_{Q^{y}_{L}}\rangle\leq e^{t\boldsymbol{\lambda}_{1}(Q^{y}_{L})}\|\phi\|_{L^{2}}\|\mathds{1}_{Q^{y}_{L}}\|_{L^{2}}. \tag{5.25}\] Set \(\phi=u_{L}^{x,y}(1,\cdot)\). Then, by the Chapman-Kolmogorov equation, we know \(u_{L}^{\mathds{1},y}(t,x)=\int_{Q^{y}_{L}}u_{L}^{\phi,y}(t-1,z)dz\). Applying (5.25) to the right hand side of the latter equation, in conjunction with (5.23) and (5.24), yields \[u_{L}^{\mathds{1},y}(t,x)\lesssim\exp(t\boldsymbol{\lambda}_{1}(Q^{y}_{L})+C\mathfrak{a}_{0}^{\aleph+1}(\log L)^{2\aleph+2}). \tag{5.26}\] Now we proceed to prove the lower bound. Using the Markov property at time \(\delta\in(1,t)\) and the lower bound on \(\mathds{1}_{X[0,\delta]\subset Q^{y}_{L}}\cdot\mathscr{D}^{y}_{L,\epsilon}(0,\delta)\) from (5.17), we have \[u_{L}^{\mathds{1},y}(t,x)\geq e^{-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log L)^{2\aleph+2}}\mathds{E}_{\mathds{Q}^{x,y}_{L}}\Big[\mathds{1}_{X[0,\delta]\subset Q^{y}_{L}}u_{L}^{\mathds{1},y}(t-\delta,X_{\delta})\Big]. \tag{5.27}\]
Then for \(r\in(0,L)\), we have \[\begin{split}\mathds{E}_{\mathds{Q}^{x,y}_{L}}&\Big[\mathds{1}_{X[0,\delta]\subset Q^{y}_{L}}u_{L}^{\mathds{1},y}(t-\delta,X_{\delta})\Big]\\&\geq\mathds{E}_{\mathds{Q}^{x,y}_{L}}\Big[\mathds{1}_{X_{\delta}\in Q^{y}_{r}}u_{r}^{\mathds{1},y}(t-\delta,X_{\delta})\Big]-\mathds{E}_{\mathds{Q}^{x,y}_{L}}\Big[\mathds{1}_{X[0,\delta]\not\subset Q^{y}_{L}}\mathds{1}_{X_{\delta}\in Q^{y}_{r}}u_{r}^{\mathds{1},y}(t-\delta,X_{\delta})\Big].\end{split} \tag{5.28}\] The second term on the r.h.s. can be bounded as \[\begin{split}\mathds{E}_{\mathds{Q}_{L}^{x,y}}\left[\mathds{1}_{X[0,\delta]\not\subset Q^{y}_{L}}\mathds{1}_{X_{\delta}\in Q_{r}^{y}}u_{r}^{\mathds{1},y}(t-\delta,X_{\delta})\right]&\leq\mathds{P}(X[0,\delta]\not\subset Q^{y}_{L})\sup_{z\in Q_{r}^{y}}u_{r}^{\mathds{1},y}(t-\delta,z)\\&\leq C\exp\Big(C\mathfrak{a}_{0}\delta(\log L)^{2}-\frac{L^{2}}{C\delta}+C\mathfrak{a}_{0}^{\aleph+1}(t-\delta)(\log r)^{2\aleph+2}\Big)\\&\leq C\exp\Big(C\mathfrak{a}_{0}^{\aleph+1}t(\log L)^{2\aleph+2}-\frac{L^{2}}{C\delta}\Big),\end{split} \tag{5.29}\] where we used (5.15) and (5.17) in the second inequality. Now we bound the first term on the r.h.s. of (5.28). Using (5.18), we have \[\begin{split}\mathds{E}_{\mathds{Q}_{L}^{x,y}}\left[\mathds{1}_{X_{\delta}\in Q^{y}_{r}}u_{r}^{\mathds{1},y}(t-\delta,X_{\delta})\right]&=\int_{Q_{r}^{y}}\Gamma_{\delta}^{L}(x,z)u_{r}^{\mathds{1},y}(t-\delta,z)dz\\&\geq\frac{C}{t^{d/2}}e^{-C\mathfrak{a}_{0}(\log L)^{2}-C\frac{\mathfrak{a}_{0}(\log L)^{2}r^{2}}{\delta}}\int_{Q_{r}^{y}}u_{r}^{\mathds{1},y}(t-\delta,z)dz.\end{split} \tag{5.30}\] Note that \[u_{r}^{\mathds{1},y}(t-\delta,z)\geq C^{-1}e^{-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log r)^{2\aleph+2}}u_{r}^{\delta_{x},y}(t-\delta,z) \tag{5.31}\] from (5.23). This gives \[\mathds{E}_{\mathds{Q}_{L}^{x,y}}\left[\mathds{1}_{X_{\delta}\in Q_{r}^{y}}u_{r}^{\mathds{1},y}(t-\delta,X_{\delta})\right]\geq\frac{C}{t^{d/2}}e^{-C\mathfrak{a}_{0}(\log L)^{2}-C\frac{\mathfrak{a}_{0}(\log L)^{2}r^{2}}{\delta}-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log r)^{2\aleph+2}}\int_{Q_{r}^{y}}u_{r}^{\delta_{x},y}(t-\delta,z)dz.\] We pause to observe that, by the spectral decomposition of the solution (Lemma 2.1), \[u_{L}^{\delta_{x},y}(t,z)=\sum_{n\in\mathds{N}}e^{t\boldsymbol{\lambda}_{n}(Q_{L}^{y})}v_{n,L}^{y}(z)v_{n,L}^{y}(x),\quad\text{for }x,z\in Q_{L}^{y}.\] Using this, we can derive \(\int_{Q_{L}^{y}}u_{L}^{\delta_{x},y}(t,z)dz\geq e^{t\boldsymbol{\lambda}_{1}(Q_{L}^{y})}\). Combining these with the above lower bound, we obtain \[\mathds{E}_{\mathds{Q}_{L}^{x,y}}\left[\mathds{1}_{X_{\delta}\in Q_{r}^{y}}u_{r}^{\mathds{1},y}(t-\delta,X_{\delta})\right]\geq\frac{C}{t^{d/2}}\exp\Big(-C\mathfrak{a}_{0}^{\aleph+1}(\log L)^{2\aleph+2}-\frac{C\mathfrak{a}_{0}(\log L)^{2}r^{2}}{\delta}+(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r}^{y})\Big). \tag{5.32}\] Putting (5.29) and (5.32) together, we have (5.20). This completes the proof. The following lemma says that the solution \(u\) can be represented as the sum of localized solutions over the events \(A_{k}^{y}\), where \(A_{k}^{y}:=\big\{X[0,t]\subset Q^{y}_{L_{t}^{k+1}},\,X[0,t]\not\subset Q^{y}_{L_{t}^{k}}\big\}\). This allows us to employ Proposition 5.5 for the solution \(u\) on \(\mathds{R}^{d}\). **Lemma 5.6**.: _Let \(L_{t}:=\lfloor t^{b}\rfloor\) for \(b>1\)._
_With probability one, for all \(x\in Q_{L_{t}}^{y}\) and \(y\in\mathds{R}^{d}\), we have for \(\epsilon\in[0,1]\),_ \[u_{\epsilon}^{\mathds{1},y}(t,x)=\sum_{k\in\mathds{N}_{0}}\mathcal{U}_{k,\epsilon}^{y}(t,x),\] _where \(\mathcal{U}_{k,\epsilon}^{y}(t,x):=\mathds{E}_{\mathds{Q}_{L_{t}^{k+1},\epsilon}^{x,y}}\left[\mathscr{D}_{L_{t}^{k+1},\epsilon}^{y}(0,t)\mathds{1}_{A_{k}^{y}}\right]\)._ Proof.: For simplicity, we prove this result for \(x=y=0\); the general case follows easily from the stationarity of the solution \(u\). We write \(u_{t}^{\epsilon}:=u_{\epsilon}^{\mathds{1},0}(t,0)\) and \(\mathcal{U}_{k,\epsilon}(t):=\mathcal{U}_{k,\epsilon}^{0}(t,0)\) for \(\epsilon\in[0,1]\). When \(\epsilon\in(0,1]\), the lemma follows from the classical Feynman-Kac representation used in the proof of Theorem 3.2. To deal with the case \(\epsilon=0\), we first estimate \(\mathcal{U}_{k,\epsilon}(t)\). Recall that \(\mathcal{U}_{k,\epsilon}(t)\) is equal to \[\mathds{E}_{\mathds{Q}_{L_{t}^{k+1},\epsilon}^{0,0}}\left[\mathds{1}_{X[0,t]\not\subset Q_{L_{t}^{k}}^{0},X[0,t]\subset Q_{L_{t}^{k+1}}^{0}}\cdot\mathscr{D}_{L_{t}^{k+1},\epsilon}^{0}(0,t)\right].\] Using (5.16) to bound the above display yields \[\begin{split}\mathcal{U}_{k,\epsilon}(t)&\leq C\exp\Big(Ct\mathfrak{a}_{\epsilon}^{\aleph+1}((k+1)\log L_{t})^{2\aleph+2}-\frac{L_{t}^{2k}}{Ct}\Big)\\&\leq C\exp\Big(Ct\mathfrak{a}_{\epsilon}^{\aleph+1}(b(k+1)\log t)^{2\aleph+2}-\frac{t^{2bk}}{Ct}\Big),\end{split} \tag{5.33}\] where the last inequality is obtained by substituting \(L_{t}=\lfloor t^{b}\rfloor\). For small \(\delta,\delta_{0}>0\) and \(K\in\mathds{N}\), define the event \[\Upsilon_{\epsilon}:=\Big\{\mathfrak{a}_{\epsilon}\leq C_{b}t^{\frac{2bK-2-\delta_{0}}{\aleph+1}}\Big\}.\] There exists \(t_{0}>0\) such that, on the event \(\Upsilon_{\epsilon}\), for all \(t\geq t_{0}\) and all \(k\geq K+1\), \[\mathcal{U}_{k,\epsilon}(t)\leq C\exp\Big(-\frac{t^{2bk-1}}{2C}\Big). \tag{5.34}\] By the union bound, we obtain that for all \(K\in\mathds{N}\) and all \(\epsilon\in(0,1]\), \[\begin{split}\mathds{P}\Big(u_{t}^{0}-\sum_{k=0}^{K}\mathcal{U}_{k}(t)>\delta\Big)\leq&\,\mathds{P}\Big(|u_{t}^{0}-u_{t}^{\epsilon}|>\frac{\delta}{3}\Big)+\mathds{P}\Big(\Big|u_{t}^{\epsilon}-\sum_{k=0}^{K}\mathcal{U}_{k,\epsilon}(t)\Big|>\frac{\delta}{3}\Big)\\&+\sum_{k=0}^{K}\mathds{P}\Big(|\mathcal{U}_{k,\epsilon}(t)-\mathcal{U}_{k}(t)|>\frac{\delta}{3K}\Big).\end{split} \tag{5.35}\] By [11, Theorem 1.1], we know that \(u_{t}^{\epsilon}\) converges in probability to \(u_{t}^{0}\) as \(\epsilon\to 0\), so the first term on the right hand side of (5.35) converges to \(0\) as \(\epsilon\to 0\). Theorem 3.2 gives the convergence \(\mathcal{U}_{k,\epsilon}(t)\to\mathcal{U}_{k}(t)\) on every box \(Q_{L_{t}^{k+1}}\); as a result, the third term on the right hand side also converges to \(0\). Therefore, we need to show that the second term goes to zero upon taking \(K\) large. Note that \(u_{t}^{\epsilon}=\sum_{k=0}^{\infty}\mathcal{U}_{k,\epsilon}(t)\) by the classical Feynman-Kac representation. Then, we have \[\mathds{P}\Big(\Big|u_{t}^{\epsilon}-\sum_{k=0}^{K}\mathcal{U}_{k,\epsilon}(t)\Big|>\frac{\delta}{3}\Big)\leq\mathds{P}\Big(\Big\{\Big|\sum_{k=K+1}^{\infty}\mathcal{U}_{k,\epsilon}(t)\Big|>\frac{\delta}{3}\Big\}\cap\Upsilon_{\epsilon}\Big)+\mathds{P}(\neg\Upsilon_{\epsilon}).\] Due to (5.34), \(\sum_{k=K+1}^{\infty}\mathcal{U}_{k,\epsilon}(t)\) is bounded above by \(\exp(-t^{2b(K-1)}/C)\) for all \(K>1\) on the event \(\Upsilon_{\epsilon}\).
This shows that the first term on the right hand side of the above display converges to \(0\) as \(K\) approaches \(\infty\). We now seek to bound \(\mathds{P}(\neg\Upsilon_{\epsilon})\). By Markov's inequality, we have \[\mathds{P}(\neg\Upsilon_{\epsilon})=\mathds{P}\big(\mathfrak{a}_{\epsilon}>C_{b}t^{\frac{2bK-2-\delta_{0}}{\aleph+1}}\big)\leq\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{\epsilon}}}]\cdot\exp\Big(-h_{0}\sqrt{C_{b}}\,t^{\frac{2bK-2-\delta_{0}}{2(\aleph+1)}}\Big).\] Recall that \(\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{\epsilon}}}]\) is uniformly bounded, as shown in Proposition 5.2. Letting \(K\to\infty\) sends the right hand side of the above display to \(0\). This shows that the middle term on the right hand side of (5.35) also converges to \(0\) as \(\epsilon\to 0\) and \(K\to\infty\). As a result, we get \(u(t,0)=\sum_{k=0}^{\infty}\mathcal{U}_{k}(t)\) in probability. Since each term of the series is non-negative, the identity also holds in the almost sure sense. This completes the proof. **Lemma 5.7**.: _Let \(L_{t}:=t^{b}\) for some \(b\in(\frac{1}{2},1]\). With probability one, for all \(y\in\mathds{R}^{d}\) with \(d=2,3\),_ \[\lim_{t\to\infty}\sup_{x\in B(y,1)}\frac{\log u_{L_{t}}^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}=1. \tag{5.36}\] Proof.: We only prove the lemma when \(d=3\); the \(d=2\) case follows from [17, Lemma 3.6]. By (5.19), we have \[\sup_{x\in B(y,1)}\frac{\log u_{L_{t}}^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\leq 1+\frac{C\mathfrak{a}_{0}^{\aleph+1}(\log L_{t})^{2\aleph+2}}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}.\] By Section 2, we know that for large enough \(t>0\), \(\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})>0\) almost surely. Since \(\mathfrak{a}_{0}\) is almost surely finite, we have the upper bound \[\limsup_{t\to\infty}\sup_{x\in B(y,1)}\frac{\log u_{L_{t}}^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\leq 1. \tag{5.37}\] Let \(b\in(\frac{1}{2},1]\), \(b_{1}\in(0,b)\), and \(b_{2}\in(2b_{1}-1,2b-1)\). For the lower bound, by (5.20) with \(L_{t}:=t^{b}\), \(r_{t}:=t^{b_{1}}\), and \(\delta_{t}:=t^{b_{2}}\), we have \[\sup_{x\in B(y,1)}\log u_{L_{t}}^{\mathds{1},y}(t,x)\geq\log A_{t}+\log(1-B_{t}),\] where \[A_{t}:=\mathrm{const}\cdot\exp\Big((t-\delta_{t})\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{y})-\frac{\mathfrak{a}_{0}r_{t}^{2}(\log L_{t})^{2}}{C\delta_{t}}-C\mathfrak{a}_{0}^{\aleph+1}\delta_{t}(\log L_{t})^{2\aleph+2}\Big)\] and \[B_{t}:=\mathrm{const}\cdot\exp\Big(-(t-\delta_{t})\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{y})+\frac{\mathfrak{a}_{0}r_{t}^{2}(\log L_{t})^{2}}{C\delta_{t}}-\frac{L_{t}^{2}}{C\delta_{t}}-C\mathfrak{a}_{0}^{\aleph+1}\delta_{t}(\log L_{t})^{2\aleph+2}+C\mathfrak{a}_{0}^{\aleph+1}t(\log L_{t})^{2\aleph+2}\Big).\] Since \(2b-b_{2}>1\), \(b>b_{1}\) and \(\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{y})>0\) for all large \(t\), we have \(B_{t}\to 0\). Furthermore, since \(2b_{1}-b_{2}<1\) and \(b_{2}<2b-1\leq 1\), we have \[\liminf_{t\to\infty}\frac{\log A_{t}}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\geq\liminf_{t\to\infty}\frac{\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{y})}{\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}.\] Note that \(\lim_{L\to\infty}\frac{\boldsymbol{\lambda}_{1}(Q_{L}^{y})}{(\log L)^{2}}=\chi\) for some constant \(\chi>0\) (see [14, Theorem 1]). This shows that \[\liminf_{t\to\infty}\sup_{x\in B(y,1)}\frac{\log u_{L_{t}}^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\geq\liminf_{t\to\infty}\frac{\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{y})}{\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\geq\liminf_{t\to\infty}\frac{\chi(\log r_{t})^{2}}{\chi(\log L_{t})^{2}}\geq\left(\frac{b_{1}}{b}\right)^{2}.\] Letting \(b_{1}\uparrow b\), we obtain the lower bound. This completes the proof.
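The normalization \(\boldsymbol{\lambda}_{1}(Q_{L})\asymp(\log L)^{2}\) invoked above can be seen qualitatively in a crude lattice computation. The sketch below diagonalizes a discrete analogue of the Anderson Hamiltonian \(\frac{1}{2}\Delta+\xi\) on \(Q_{L}\) with Dirichlet boundary conditions in \(d=3\); the unit mesh, the lattice noise normalization, and the omission of renormalization are simplifying assumptions, so only the qualitative growth of \(\boldsymbol{\lambda}_{1}\) in \(L\) should be read off.

```python
import numpy as np

rng = np.random.default_rng(2)

def top_eigenvalue(n, h=1.0):
    """Largest eigenvalue of (1/2)*(discrete Laplacian) + lattice white noise."""
    N = n ** 3
    idx = np.arange(N).reshape(n, n, n)
    # diagonal: noise of variance h^{-d} (d = 3) plus the 6-neighbor Laplacian term
    H = np.diag(rng.standard_normal(N) / h ** 1.5 - 3.0 / h ** 2)
    for axis in range(3):
        a = np.moveaxis(idx, axis, 0)
        i, j = a[:-1].ravel(), a[1:].ravel()
        H[i, j] = H[j, i] = 0.5 / h ** 2   # coupling (1/2) * 1/h^2 per neighbor
    return np.linalg.eigvalsh(H)[-1]

for L in (4, 8, 12):
    lam = top_eigenvalue(L)
    print(f"L = {L:2d}: lambda_1 ~ {lam:6.2f}, lambda_1/(log L)^2 ~ {lam / np.log(L)**2:5.2f}")
```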
**Lemma 5.8**.: _Let \(L_{t}:=t(\log t)^{2\aleph+2}\). With probability one, for all \(y\in\mathds{R}^{d}\) with \(d=2,3\),_ \[\limsup_{t\to\infty}\left|\sup_{x\in B(y,1)}\frac{\log u^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}-\sup_{x\in B(y,1)}\frac{\log u_{L_{t}}^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\right|=0, \tag{5.38}\] _and_ \[\lim_{t\to\infty}\sup_{x\in B(y,1)}\frac{\log u^{\mathds{1},y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}=1. \tag{5.39}\] Proof.: As in the previous lemma, we only prove the case \(d=3\), since the case \(d=2\) follows from [11, Proposition 4.5]. Due to the fact that \(\lim_{L\to\infty}\frac{\boldsymbol{\lambda}_{1}(Q_{L}^{y})}{(\log L)^{2}}=\chi\), it suffices to show \[\limsup_{t\to\infty}\left|\sup_{x\in B(y,1)}\frac{\log u^{\mathds{1},y}(t,x)}{t(\log L_{t})^{2}}-\sup_{x\in B(y,1)}\frac{\log u_{L_{t}}^{\mathds{1},y}(t,x)}{t(\log L_{t})^{2}}\right|=0. \tag{5.40}\] By Lemma 5.6 (applied with \(\epsilon=0\)), we have \(u^{\mathds{1},y}=\sum_{k\in\mathds{N}_{0}}\mathcal{U}_{k}^{y}\). By a simple observation, it follows that \[\lim_{t\to\infty}\left|\sup_{x\in B(y,1)}\frac{\log\sum_{k\in\mathds{N}_{0}}\mathcal{U}_{k}^{y}(t,x)}{t(\log L_{t})^{2}}-\max\left\{\sup_{x\in B(y,1)}\frac{\log\mathcal{U}_{0}^{y}(t,x)}{t(\log L_{t})^{2}},\sup_{x\in B(y,1)}\frac{\log\sum_{k\geq 1}\mathcal{U}_{k}^{y}(t,x)}{t(\log L_{t})^{2}}\right\}\right|=0. \tag{5.41}\] Since \(\mathcal{U}_{0}^{y}=u_{L_{t}}^{\mathds{1},y}\), applying Lemma 5.7 yields \[\sup_{x\in B(y,1)}\frac{\log\mathcal{U}_{0}^{y}(t,x)}{t(\log L_{t})^{2}}=\sup_{x\in B(y,1)}\frac{\log\mathcal{U}_{0}^{y}(t,x)}{t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}\cdot\frac{\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})}{(\log L_{t})^{2}}\geq\frac{\chi}{2} \tag{5.42}\] for all large \(t\). Moreover, from (5.33), we have \[\sup_{x\in B(y,1)}\mathcal{U}_{k}^{y}(t,x)\leq C\exp\Big(Ct((k+1)\log L_{t})^{2\aleph+2}\Big[\mathfrak{a}_{0}^{\aleph+1}-\frac{L_{t}^{2k}}{C^{2}t^{2}((k+1)\log L_{t})^{2\aleph+2}}\Big]\Big).\] Since \(L_{t}=t(\log t)^{2\aleph+2}\), we have for all \(k\geq 1\), \[\sup_{x\in B(y,1)}\mathcal{U}_{k}^{y}(t,x)\leq e^{-Ct(\log t)^{2\aleph+2}} \tag{5.43}\] for all large \(t\). This implies that for all large \(t\), \[\sup_{x\in B(y,1)}\log\Big(\sum_{k\geq 1}\mathcal{U}_{k}^{y}(t,x)\Big)\leq 0. \tag{5.44}\] By combining (5.42) and (5.44) with (5.41), we can conclude (5.40), which is the first part of the lemma. The second part of the lemma follows immediately from the first part and Lemma 5.7. ## 6. Spatial Multifractality and Asymptotics of the PAM: Proof of Theorem 1.1 In the remaining sections, we will show the multifractality of the solution to (1.1). We only consider the solution with flat initial data and write \(u\) for \(u^{\mathds{1}}\). ### Proof of the lower bound in Theorem 1.1 In this section, we prove the lower bound on the dimension in Theorem 1.1. The following proposition is one of the key tools for proving this lower bound. **Proposition 6.1**.: _Let \(\epsilon>0\) and \(\theta>0\)._
_There exists \(n_{0}>0\) such that for all \(n\geq n_{0}\) and \(x_{1},\ldots,x_{m}\in(e^{n},e^{n+1}]^{d}\) satisfying \(\min_{i\neq j}\|x_{i}-x_{j}\|_{\infty}>e^{n\theta}\), we have_ \[\mathds{P}\left(\max_{1\leq j\leq m}\frac{\log u(t,x_{j})}{(\log\|x_{j}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\leq\exp\left(-cm(\alpha+\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{d\log r_{n}-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}n}\right)+e^{-cm\log n}, \tag{6.1}\] _where \(r_{n}:=t^{\frac{1}{2}\log n}\)._ Proof.: Since \(x_{1},\ldots,x_{m}\in(e^{n},e^{n+1}]^{d}\), we have \[\mathds{P}\Big(\max_{1\leq j\leq m}\frac{\log u(t,x_{j})}{(\log\|x_{j}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\Big)\leq\mathds{P}\Big(\max_{1\leq j\leq m}u(t,x_{j})\leq e^{\alpha t(n+1)^{\frac{2}{4-d}}}\Big). \tag{6.2}\] Let \(L:=L_{n}:=t^{\log n}\). By Lemma 5.6, there exists \(t_{0}>0\) such that for all \(t\geq t_{0}\) the identity \(\sum_{k\in\mathds{N}_{0}}\mathcal{U}_{k}^{y}=u\) holds for all \(y\in\mathds{R}^{d}\). Moreover, since \(0\leq\mathcal{U}_{0}^{y}\leq\sum_{k\in\mathds{N}_{0}}\mathcal{U}_{k}^{y}=u\), we have \[\begin{split}\mathds{P}\left(\max_{1\leq j\leq m}\frac{\log u(t,x_{j})}{(\log\|x_{j}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)&\leq\mathds{P}\Big(\max_{1\leq j\leq m}\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\alpha t(n+1)^{\frac{2}{4-d}}}\Big)\\&=\prod_{j=1}^{m}\mathds{P}\Big(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\alpha t(n+1)^{\frac{2}{4-d}}}\Big),\end{split} \tag{6.3}\] where the last equality follows from the independence of the \(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\) shown in Lemma 2.4, thanks to the fact that \(\min_{i\neq j}\|x_{i}-x_{j}\|_{\infty}>e^{n\theta}\gg 3L_{n}=3t^{\log n}\). We also set \(r:=r_{n}:=t^{\frac{1}{2}\log n}\). Let \(\epsilon>0\). Observe that for any fixed \(t\geq t_{0}\), there exists \(n_{0}>0\) such that for all \(n\geq n_{0}\), \[\epsilon tn\gg(\mathfrak{a}_{0}\log L_{n})^{2\aleph+2}+\frac{\mathfrak{a}_{0}(\log L_{n})^{2}r_{n}^{2}}{C\delta},\quad\frac{L_{n}^{2}}{C\delta}\gg t(\mathfrak{a}_{0}\log L_{n})^{2\aleph+2}+\frac{\mathfrak{a}_{0}(\log L_{n})^{2}r_{n}^{2}}{C\delta} \tag{6.4}\] on the event \(\Upsilon_{n}:=\{\mathfrak{a}_{0}\leq(\log n)^{2}\}\). Using this observation, when \(d=3\), there exist \(t_{0}>0\) and \(n_{0}^{\prime}>0\) such that for all \(n\geq n_{0}^{\prime}\) and all \(t\geq t_{0}\), \[\begin{split}&\mathds{P}\Big(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\alpha t(n+1)^{2}}\Big)\\&\leq\mathds{P}\Big(e^{-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log L_{n})^{2\aleph+2}-\frac{C\mathfrak{a}_{0}(\log L_{n})^{2}r_{n}^{2}}{\delta}+(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{n}}^{x_{j}})}-e^{C\mathfrak{a}_{0}^{\aleph+1}t(\log L_{n})^{2\aleph+2}-\frac{L_{n}^{2}}{C\delta}}\leq e^{\alpha t(n+1)^{2}}\Big)\\&\leq\mathds{P}\Big(\Big\{\frac{1}{2}\exp\Big((t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{n}}^{x_{j}})-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log L_{n})^{2\aleph+2}-\frac{C\mathfrak{a}_{0}(\log L_{n})^{2}r^{2}}{\delta}\Big)\leq e^{\alpha t(n+1)^{2}}\Big\}\cap\Upsilon_{n}\Big)+\mathds{P}(\neg\Upsilon_{n})\\&\leq\mathds{P}\left(\boldsymbol{\lambda}_{1}(Q_{r_{n}}^{x_{j}})\leq(\alpha+\epsilon)n^{2}\right)+\mathds{P}(\neg\Upsilon_{n}).\end{split}\] The first inequality in the above display is obtained by using the lower bound on \(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\) from Proposition 5.5. While the second inequality is just a consequence of the union bound, the last inequality utilizes (6.4).
When \(d=2\), we use a similar argument, except that the lower bound on \(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\) is now provided by Lemma 5.2 of [10]. We obtain similarly \[\mathbb{P}(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\alpha t(n+1)})\] \[\leq\mathbb{P}\Big{(}\Big{\{}\frac{1}{2}\exp\Big{(}(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{n}}^{x_{j}})-\frac{r_{n}^{2}}{C\delta}-C\delta(\mathfrak{a}_{0}\log L_{n})^{5}\Big{)}\leq e^{\alpha t(n+1)}\Big{\}}\cap\Upsilon_{n}\Big{)}+\mathbb{P}(\neg\Upsilon_{n})\] \[\leq\mathbb{P}(\boldsymbol{\lambda}_{1}(Q_{r_{n}}^{x_{j}})\leq(\alpha+\epsilon)n)+\mathbb{P}(\neg\Upsilon_{n}).\] Here we note that \(t_{0}\) can be chosen independently of \(\alpha\). Now we use Lemma 2.2 to obtain \[\max_{1\leq j\leq m}\mathds{P}\left(\boldsymbol{\lambda}_{1}(Q_{r_{n}}^{x_{j}})\leq(\alpha+\epsilon)n^{\frac{2}{4-d}}\right)\leq\exp\left(-c_{2}(\alpha+\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{d\log r_{n}-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}n}\right).\] Substituting this into (6.3) and (6.2), we have \[\mathbb{P}\left(\max_{1\leq j\leq m}\frac{\log u(t,x_{j})}{(\log\|x_{j}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\] \[\leq 2^{m-1}\exp\left(-c_{2}m(\alpha+\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{d\log r_{n}-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}n}\right)+2^{m-1}\mathbb{P}(\neg\Upsilon_{n})^{m}.\] On the other hand, since \(\mathrm{E}[e^{h_{0}\sqrt{\mathfrak{a}_{0}}}]<\infty\) (see (5.4)), we use Markov's inequality to get \(\mathbb{P}(\neg\Upsilon_{n})\lesssim e^{-h_{0}\log n}\). This shows that \[\mathbb{P}\left(\max_{1\leq j\leq m}\frac{\log u(t,x_{j})}{(\log\|x_{j}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\leq\exp\left(-cm(\alpha+\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{d\log r_{n}-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}n}\right)+e^{-cm\log n}\] for some constant \(c>0\), which completes the proof. Now we proceed to prove Theorem 1.1. Recall that \[\mathcal{P}_{t}^{d}(\alpha)=\left\{x\in\mathds{R}^{d}\,:\,u(t,x)\geq e^{\alpha t(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\right\}.\] The following result proves a lower bound to the macroscopic Hausdorff dimension of the set \(\mathcal{P}_{t}^{d}(\alpha)\). The proof of the upper bound is deferred to the subsection after the following result. **Theorem 6.2**.: _There exists a non-random finite constant \(t_{0}>0\) such that for all \(t\geq t_{0}\),_ \[\operatorname{Dim_{H}}[\mathcal{P}_{t}^{d}(\alpha)]\geq d-\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d},\quad\text{a.s.}\] Proof.: Let us choose \(\alpha>0\) satisfying \(\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d}<d\). Fix \(\epsilon>0\) such that \(\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}<d\). Define \[\widetilde{\mathcal{P}}_{t}^{d}(\alpha):=\mathcal{P}_{t}^{d}(\alpha)\cap\bigcup_{n=0}^{\infty}\left(e^{n},e^{n+1}\right]^{d}.\] Then it suffices to show that \[\operatorname{Dim_{H}}[\widetilde{\mathcal{P}}_{t}^{d}(\alpha)]\geq d-\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d},\] with probability one. We now choose \(\gamma\in(\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}/d,1)\) and define, for all integers \(n\geq 0\), \[a_{j,n}(\gamma):=e^{n}+je^{n\gamma},\quad j\in[0,e^{n(1-\gamma)})\cap\mathds{Z},\] and \[I_{n}(\gamma):=\bigcup_{j\in[0,e^{n(1-\gamma)})\cap\mathds{Z}}\{a_{j,n}(\gamma)\},\qquad\mathcal{I}_{n}(\gamma):=\prod_{k=1}^{d}I_{n}^{k}(\gamma),\] where \(I_{n}^{k}(\gamma)\) is a copy of \(I_{n}(\gamma)\) for all \(1\leq k\leq d\).
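For later reference, we spell out the cardinality count implicit in this construction (an added step, not in the original text): since \(I_{n}(\gamma)\) consists of the points \(e^{n}+je^{n\gamma}\) with \(j\in[0,e^{n(1-\gamma)})\cap\mathds{Z}\),
\[\#I_{n}(\gamma)\asymp e^{n(1-\gamma)}\qquad\text{and}\qquad\#\mathcal{I}_{n}(\gamma)=\big(\#I_{n}(\gamma)\big)^{d}\asymp e^{dn(1-\gamma)},\]
which is the source of the factor \(e^{dn(1-\gamma)}\) appearing in the union bound over \(x\in\mathcal{I}_{n}(\gamma)\) below.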
We choose \(x\in\mathcal{I}_{n}(\gamma)\) and \(\theta\in(0,\gamma-\frac{\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}}{d})\). By the construction of the set \(\mathcal{I}_{n}(\gamma)\), we first find points \(\{x_{i}\}_{i=1}^{m(n)}\) satisfying the following: (a.1) \(x_{i}\in\mathcal{I}_{n}(\gamma)\cap B(x,e^{n\gamma})\) for all \(i=1,...,m(n)\); (a.2) \(\|x_{i}-x_{j}\|_{\infty}\geq e^{n\theta}\) whenever \(1\leq i<j\leq m(n)\); (a.3) \(d^{-1}e^{dn(\gamma-\theta)}\leq m(n)\leq de^{dn(\gamma-\theta)}\). Then by Proposition 6.1, we have \[\mathds{P}\left(\max_{1\leq j\leq m(n)}\frac{\log u(t,x_{j})}{(\log\|x_{j}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\] \[\leq\exp\left(-cm(n)(\alpha+\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{d\log r_{n}-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}n}\right)+e^{-cm(n)\log n}\] \[\leq\exp\left(-\frac{c}{2}(\alpha+\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{c\log n+[d(\gamma-\theta)-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}]n}\right)+e^{-\frac{c}{2}e^{dn(\gamma-\theta)}\log n}.\] By our choice of \(\gamma\) and \(\theta\), \(\kappa:=d(\gamma-\theta)-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}>0\). Therefore, \[\mathds{P}\left(\max_{1\leq i\leq m(n)}\frac{\log u(t,x_{i})}{(\log\|x_{i}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\leq\exp\left(-C_{1}e^{\kappa n-C_{2}\log n}\right)+e^{-\frac{c}{2}e^{dn(\gamma-\theta)}\log n},\] for some constants \(C_{1},C_{2}>0\). Then we have \[\sum_{n=0}^{\infty}\mathds{P}\left(\min_{x\in\mathcal{I}_{n}(\gamma)}\max_{1\leq i\leq m(n)}\frac{\log u(t,x_{i})}{(\log\|x_{i}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\] \[\leq\sum_{n=0}^{\infty}\sum_{x\in\mathcal{I}_{n}(\gamma)}\mathds{P}\left(\max_{1\leq i\leq m(n)}\frac{\log u(t,x_{i})}{(\log\|x_{i}\|_{\infty})^{\frac{2}{4-d}}}\leq\alpha t\right)\] \[\leq\sum_{n=0}^{\infty}Ce^{dn(1-\gamma)}\left(\exp\left(-C_{1}e^{\kappa n-C_{2}\log n}\right)+e^{-\frac{c}{2}e^{dn(\gamma-\theta)}\log n}\right)<\infty,\] for some constant \(C>0\). Hence, the Borel-Cantelli lemma implies that \(\widetilde{\mathcal{P}}_{t}^{d}(\alpha)\) is \(\gamma\)-thick (see Definition B.2) almost surely. By Proposition B.3, we get \(\operatorname{Dim_{H}}(\widetilde{\mathcal{P}}_{t}^{d}(\alpha))\geq d(1-\gamma)\) with probability one. Letting \(\gamma\downarrow\frac{\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}}{d}\) and \(\theta\downarrow 0\) with \(0<\theta<\gamma-\frac{\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}}{d}\), we get \(\operatorname{Dim}_{\text{H}}(\widetilde{\mathcal{P}}_{t}^{d}(\alpha))\geq d-\mathfrak{c}_{d}(1+\epsilon)(\alpha+\epsilon)^{\frac{4-d}{2}}.\) Since \(\epsilon>0\) is arbitrary, by the monotonicity of the macroscopic Hausdorff dimension, we conclude that \[\operatorname{Dim}_{\text{H}}[\mathcal{P}_{t}^{d}(\alpha)]\geq\operatorname{Dim}_{\text{H}}[\widetilde{\mathcal{P}}_{t}^{d}(\alpha)]\geq d-\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d},\] almost surely. ### Proof of the upper bound in Theorem 1.1 In this section, we prove the upper bound in Theorem 1.1. The following proposition will be used to complete the proof. **Proposition 6.3**.: _Let \(\epsilon>0\) and \(M>0\)._
There exist \(b:=b(M)>1\) and \(n_{0}:=n_{0}(M,b,\epsilon)>0\) such that for all \(n\geq n_{0}\) and \(t\geq 1\)_ \[\mathds{P}\Big{(}\sup_{x\in B(y,1)}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\geq\alpha t,\mathfrak{a}_{0}\leq M\Big{)}\leq c_{1}(\alpha-\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{d\log L_{t}-(1-\epsilon)\mathfrak{c}_{d}(\alpha-\epsilon)^{\frac{4-d}{2}}n}, \tag{6.5}\] _where \(L_{t}:=t^{b}\) and \(y\in\mathds{S}_{n}\) for \(n\in\mathds{N}\)._ Proof.: Since we can write \(u(t,x)=\sum_{k=0}^{\infty}\mathcal{U}_{k}^{y}(t,x)\) due to Lemma 5.6, we have \[\mathds{P}\Big{(}\sup_{x\in B(y,1)}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\geq\alpha t,\mathfrak{a}_{0}\leq M\Big{)}\leq\mathds{P}\Big{(}\sup_{x\in B(y,1)}u(t,x)\geq\frac{1}{2}e^{\alpha tn^{\frac{2}{4-d}}},\mathfrak{a}_{0}\leq M\Big{)}\leq(\mathbf{C_{1}})+(\mathbf{C_{2}}),\] where \[(\mathbf{C_{1}}):=\mathds{P}\Big{(}\sup_{x\in B(y,1)}\mathcal{U}_{0}^{y}(t,x)\geq\frac{1}{2}e^{\alpha tn^{\frac{2}{4-d}}},\mathfrak{a}_{0}\leq M\Big{)},\] \[(\mathbf{C_{2}}):=\mathds{P}\Big{(}\sup_{x\in B(y,1)}\sum_{k=1}^{\infty}\mathcal{U}_{k}^{y}(t,x)\geq\frac{1}{2}e^{\alpha tn^{\frac{2}{4-d}}},\mathfrak{a}_{0}\leq M\Big{)}.\] We first bound \((\mathbf{C_{2}})\). When \(d=3\), using (5.33), on the event \(\{\mathfrak{a}_{0}\leq M\}\) we have \[\mathcal{U}_{k}^{y}(t,x)\leq C\exp\left(CtM^{\aleph+1}(b(k+1)\log t)^{2\aleph+2}-\frac{t^{2bk}}{Ct}\right).\] Therefore, we can choose a large \(b:=b(M)>0\) such that for all \(t\geq e\) and \(n\geq 1\), \[\sum_{k=1}^{\infty}\mathcal{U}_{k}^{y}(t,x)\leq\sum_{k=1}^{\infty}C\exp\left(-C_{1}t^{2bk-1}\right)\leq\frac{1}{2}e^{\alpha tn^{\frac{2}{4-d}}}\] for some constant \(C_{1}>0\). This implies that \((\mathbf{C_{2}})=0\) for all \(t\geq e\). For the \(d=2\) case, one can use Lemma 5.2 of [10] to get the same result. Now we proceed to bound \((\mathbf{C_{1}})\). Applying Proposition 5.5 for \(d=3\), there exists \(n_{0}=n_{0}(M,b,\epsilon)>0\) such that for all \(n\geq n_{0}\) \[(\mathbf{C_{1}})\leq\mathds{P}\Big{(}C\exp(t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})+CM^{\aleph+1}(\log L_{t})^{2\aleph+2})\geq\frac{1}{2}e^{\alpha tn^{\frac{2}{4-d}}}\Big{)}\] \[\leq\mathds{P}\Big{(}\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})\geq\alpha n^{\frac{2}{4-d}}-\frac{\log(2C)+CM^{\aleph+1}(b\log t)^{2\aleph+2}}{t}\Big{)}\] \[\leq\mathds{P}(\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})\geq(\alpha-\epsilon)n^{\frac{2}{4-d}}).\] The \(d=2\) case follows similarly using Lemma 5.2 of [10]. By Lemma 2.2, we further have \[(\mathbf{C_{1}})\leq\mathds{P}(\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})\geq(\alpha-\epsilon)n^{\frac{2}{4-d}})\leq c_{1}(\alpha-\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{db\log t-(1-\epsilon)\mathfrak{c}_{d}(\alpha-\epsilon)^{\frac{4-d}{2}}n}.\] Summing the bounds \((\mathbf{C_{1}})\) and \((\mathbf{C_{2}})\), we arrive at \[(\mathbf{C_{1}})+(\mathbf{C_{2}})\leq c_{1}(\alpha-\epsilon)^{\frac{d}{2}}n^{\frac{d}{4-d}}e^{db\log t-(1-\epsilon)\mathfrak{c}_{d}(\alpha-\epsilon)^{\frac{4-d}{2}}n},\] which completes the proof. **Theorem 6.4**.: _For all \(t\geq e\) and all \(\alpha\in(0,(d/\mathfrak{c}_{d})^{\frac{2}{4-d}})\),_ \[\operatorname{Dim}_{\operatorname{H}}[\mathcal{P}_{t}^{d}(\alpha)]=\operatorname{Dim}_{\operatorname{H}}\Bigl{[}\Bigl{\{}x\in\mathds{R}^{d}\,:\,u(t,x)\geq e^{\alpha t(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\Bigr{\}}\Bigr{]}\leq d-\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d}, \tag{6.6}\] _with probability one._ Proof.: Fix \(M>0\).
By Markov's inequality and Proposition 5.2, we have \[\mathds{P}(\mathfrak{a}_{0}\geq M)\leq\delta_{M}:=\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{0}}}]\cdot e^{-h_{0}\sqrt{M}}. \tag{6.7}\] Note that \(\delta_{M}\to 0\) as \(M\to\infty\). Choose \(\epsilon\in(0,\alpha\wedge 1)\) and \(\rho\in(d-(1-\epsilon)\mathfrak{c}_{d}(\alpha-\epsilon)^{\frac{4-d}{2}},d)\). Using Proposition 6.3, we have that for all \(t\geq 1\) \[\sum_{n=0}^{\infty}e^{-n\rho}\sum_{\begin{subarray}{c}y\in\mathds{Z}^{d}\\ B(y,1)\subset\mathds{S}_{n}\end{subarray}}\mathds{P}\Big{(}\sup_{x\in B(y,1)}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\geq\alpha t,\,\mathfrak{a}_{0}\leq M\Big{)}<\infty, \tag{6.8}\] since the number of unit boxes \(B(y,1)\subset\mathds{S}_{n}\) is of order \(e^{dn}\) and \(d-\rho-(1-\epsilon)\mathfrak{c}_{d}(\alpha-\epsilon)^{\frac{4-d}{2}}<0\) by the choice of \(\rho\). Covering \(\mathcal{P}_{t}^{d}(\alpha)\cap\mathds{S}_{n}\) by those unit boxes \(B(y,1)\) on which the supremum in (6.8) exceeds \(\alpha t\), we deduce that \(\mathds{E}\big[\sum_{n=0}^{\infty}\nu_{\rho}^{n}(\mathcal{P}_{t}^{d}(\alpha))\mathds{1}_{\{\mathfrak{a}_{0}\leq M\}}\big]<\infty\), and hence \(\sum_{n=0}^{\infty}\nu_{\rho}^{n}(\mathcal{P}_{t}^{d}(\alpha))<\infty\) almost surely on \(\{\mathfrak{a}_{0}\leq M\}\). By (6.7), this shows \(\operatorname{Dim_{H}}[\mathcal{P}_{t}^{d}(\alpha)]\leq\rho\) with probability at least \(1-\delta_{M}\). Letting \(\rho\downarrow d-(1-\epsilon)\mathfrak{c}_{d}(\alpha-\epsilon)^{\frac{4-d}{2}}\), then \(\epsilon\downarrow 0\) and \(M\to\infty\), we obtain (6.6). This completes the proof. As a consequence of Theorems 6.2 and 6.4, we obtain the following asymptotics for the spatial peaks: there exists \(t_{0}>0\) such that for all \(t\geq t_{0}\), with probability one, \[\limsup_{\|x\|_{\infty}\to\infty}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}=\left(\frac{d}{\mathfrak{c}_{d}}\right)^{\frac{2}{4-d}}t.\] We first prove the lower bound. Let \(\alpha\in(0,(d/\mathfrak{c}_{d})^{\frac{2}{4-d}})\). By Theorem 6.2, \(\operatorname{Dim_{H}}[\mathcal{P}_{t}^{d}(\alpha)]\geq d-\alpha^{\frac{4-d}{2}}\mathfrak{c}_{d}>0\) almost surely; in particular, the set \(\mathcal{P}_{t}^{d}(\alpha)\) is almost surely unbounded, so that \(\limsup_{\|x\|_{\infty}\to\infty}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\geq\alpha t\) almost surely. Letting \(\alpha\uparrow(d/\mathfrak{c}_{d})^{\frac{2}{4-d}}\) along a sequence,
this implies that \[\limsup_{\|x\|_{\infty}\to\infty}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\geq\left(\frac{d}{\mathfrak{c}_{d}}\right)^{\frac{2}{4-d}}t,\quad\text{a.s.} \tag{6.11}\] Now we prove the upper bound. Fix \(\epsilon\in(0,1)\). Let \(M>0\) and \(\delta_{M}\) be defined as in (6.7). Note that, in the same way as (6.8), Proposition 6.3 yields \[\sum_{n=0}^{\infty}\sum_{\begin{subarray}{c}y\in\mathds{Z}^{d}\\ B(y,1)\subset\mathds{S}_{n}\end{subarray}}\mathds{P}\Big{(}\sup_{x\in B(y,1)}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\geq g(d,\epsilon)t,\mathfrak{a}_{0}\leq M\Big{)}<\infty, \tag{6.12}\] where \(g(d,\epsilon):=\left(d/\mathfrak{c}_{d}(1-\epsilon)\right)^{\frac{2}{4-d}}+\epsilon.\) The Borel-Cantelli lemma yields that with probability greater than \(1-\delta_{M}\), \[\limsup_{\|x\|_{\infty}\to\infty}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\leq\left(\frac{d}{\mathfrak{c}_{d}(1-\epsilon)}+\epsilon\right)^{\frac{2}{4-d}}t,\] for all \(t\geq t_{0}\). Letting \(\epsilon\to 0\) and \(M\to\infty\), we can conclude that for all \(t\geq t_{0}\) \[\limsup_{\|x\|_{\infty}\to\infty}\frac{\log u(t,x)}{(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\leq\left(\frac{d}{\mathfrak{c}_{d}}\right)^{\frac{2}{4-d}}t\] with probability one. This completes the proof. ## 7. Spatio-temporal Multifractality: Proof of Theorem 1.2 ### Proof of the lower bound in Theorem 1.2 **Proposition 7.1**.: _Let \(\epsilon>0\) and \(\theta>0\). There exist \(c,t_{0}>0\) such that for all \(t\geq t_{0}\) and \(x_{1},...,x_{m}\in\mathds{R}^{d}\) satisfying \(\min_{i\neq j}\|x_{i}-x_{j}\|_{\infty}>3L_{t}\) where \(L_{t}:=t\), we have_ \[\mathds{P}\left(\max_{1\leq j\leq m}\log u(t,x_{j})\leq\beta t^{\frac{6-d}{4-d}}\right)\leq\exp\left(-cm(\beta+\epsilon)^{\frac{d}{2}}t^{\frac{d}{4-d}}e^{d\log r_{t}-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}t}\right)+e^{-cm\log t}, \tag{7.1}\] _where \(r_{t}:=t^{\frac{1}{2}}\)._ Proof.: The proof is similar to the proof of Proposition 6.1. By Lemma 5.6, we have \[\mathds{P}\left(\max_{1\leq j\leq m}\log u(t,x_{j})\leq\beta t^{\frac{6-d}{4-d}}\right)\leq\mathds{P}\Big{(}\max_{1\leq j\leq m}\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\beta t^{\frac{6-d}{4-d}}}\Big{)}=\prod_{j=1}^{m}\mathds{P}\Big{(}\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\beta t^{\frac{6-d}{4-d}}}\Big{)},\] whenever \(\min_{i\neq j}\|x_{i}-x_{j}\|_{\infty}>3L_{t}\) where \(L_{t}:=t\). Let \(r:=r_{t}:=t^{1/2}\). The last equality follows from the independence of \(\{\mathcal{U}_{0}^{x_{j}}(t,x_{j})\}_{1\leq j\leq m}\).
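To keep track of the exponents in what follows (an added bookkeeping remark), note that
\[\frac{6-d}{4-d}=1+\frac{2}{4-d},\qquad\text{so}\qquad\beta t^{\frac{6-d}{4-d}}=\big(\beta t^{\frac{2}{4-d}}\big)\,t,\]
which is why the event \(\{\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\beta t^{\frac{6-d}{4-d}}}\}\) is compared below with the eigenvalue event \(\{\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})\leq(\beta+\epsilon)t^{\frac{2}{4-d}}\}\).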
There exists \(t_{0}>0\) such that for all \(t\geq t_{0}\) \[\epsilon t^{2}\gg(\mathfrak{a}_{0}\log L_{t})^{2\aleph+2}+\frac{\mathfrak{a}_{0}(\log L_{t})^{2}r_{t}^{2}}{C\delta},\quad\frac{L_{t}^{2}}{C\delta}\gg t(\mathfrak{a}_{0}\log L_{t})^{2\aleph+2}+\frac{\mathfrak{a}_{0}(\log L_{t})^{2}r_{t}^{2}}{C\delta}\] on the event \(\Upsilon_{t}:=\{\mathfrak{a}_{0}\leq(\log t)^{2}\}\). For \(d=3\), by Proposition 5.5 there exists \(t_{0}>0\) such that for all \(t\geq t_{0}\), \[\mathds{P}\Big{(}\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\beta t^{3}}\Big{)}\] \[\leq\mathds{P}\Big{(}e^{-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log L_{t})^{2\aleph+2}-\frac{C\mathfrak{a}_{0}(\log L_{t})^{2}r_{t}^{2}}{\delta}+(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})}-e^{C\mathfrak{a}_{0}^{\aleph+1}t(\log L_{t})^{2\aleph+2}-\frac{L_{t}^{2}}{C\delta}}\leq e^{\beta t^{3}}\Big{)}\] \[\leq\mathds{P}\Big{(}\Big{\{}\frac{1}{2}\exp\Big{(}(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})-C\mathfrak{a}_{0}^{\aleph+1}\delta(\log L_{t})^{2\aleph+2}-\frac{C\mathfrak{a}_{0}(\log L_{t})^{2}r_{t}^{2}}{\delta}\Big{)}\leq e^{\beta t^{3}}\Big{\}}\cap\Upsilon_{t}\Big{)}+\mathds{P}(\neg\Upsilon_{t})\] \[\leq\mathds{P}\left(\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})\leq(\beta+\epsilon)t^{2}\right)+\mathds{P}(\neg\Upsilon_{t}).\] For \(d=2\), we can proceed similarly using Lemma 5.2 of [13] to obtain \[\mathds{P}(\mathcal{U}_{0}^{x_{j}}(t,x_{j})\leq e^{\beta t^{2}})\] \[\leq\mathds{P}\Big{(}\exp\Big{(}(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})-\frac{r_{t}^{2}}{C\delta}-C\delta(\mathfrak{a}_{0}\log L_{t})^{5}\Big{)}-\exp\Big{(}Ct(\mathfrak{a}_{0}\log L_{t})^{5}-\frac{L_{t}^{2}}{C\delta}\Big{)}\leq e^{\beta t^{2}}\Big{)}\] \[\leq\mathds{P}\Big{(}\Big{\{}\frac{1}{2}\exp\Big{(}(t-\delta)\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})-\frac{r_{t}^{2}}{C\delta}-C\delta(\mathfrak{a}_{0}\log L_{t})^{5}\Big{)}\leq e^{\beta t^{2}}\Big{\}}\cap\Upsilon_{t}\Big{)}+\mathds{P}(\neg\Upsilon_{t})\] \[\leq\mathds{P}(\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})\leq(\beta+\epsilon)t)+\mathds{P}(\neg\Upsilon_{t}).\] By Lemma 2.2, we have \[\max_{1\leq j\leq m}\mathds{P}\left(\boldsymbol{\lambda}_{1}(Q_{r_{t}}^{x_{j}})\leq(\beta+\epsilon)t^{\frac{2}{4-d}}\right)\leq\exp(-c_{2}(\beta+\epsilon)^{\frac{d}{2}}t^{\frac{d}{4-d}}e^{d\log r_{t}-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}t}).\] Moreover, we have \(\mathds{P}(\neg\Upsilon_{t})\lesssim e^{-h_{0}\log t}\) for some \(h_{0}>0\) by the fact that \(\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{0}}}]<\infty\). This yields that \[\mathds{P}\left(\max_{1\leq j\leq m}\log u(t,x_{j})\leq\beta t^{\frac{6-d}{4-d}}\right)\leq 2^{m-1}\exp(-cm(\beta+\epsilon)^{\frac{d}{2}}t^{\frac{d}{4-d}}e^{d\log r_{t}-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}t})+2^{m-1}e^{-h_{0}m\log t}.\] By choosing \(t_{0}\) large enough such that \(2^{m-1}e^{-h_{0}m\log t}<e^{-cm\log t}\) for some constant \(c\), we achieve the bound in (7.1). This completes the proof. Now we are ready to prove the lower bound in Theorem 1.2. Recall that \[\mathcal{P}^{d}(\beta,v)=\left\{(t,x)\in(1,\infty)\times\mathds{R}^{d}\,:\,u(v\log t,x)>e^{\beta(v\log t)^{\frac{6-d}{4-d}}}\right\}.\] **Theorem 7.2**.: _For every \(\beta,v>0\), with probability one,_ \[\mathrm{Dim}_{\mathrm{H}}[\mathcal{P}^{d}(\beta,v)]\geq(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d})\lor d. \tag{7.2}\] Proof.: Choose \(\beta,\epsilon,v>0\) such that \((\beta+\epsilon)^{\frac{4-d}{2}}(1+\epsilon)v\mathfrak{c}_{d}<d+1\).
Let us define \[\widetilde{\mathcal{P}}^{d}(\beta,v):=\mathcal{P}^{d}(\beta,v)\cap\bigcup_{n=0}^{\infty}\left(e^{n},e^{n+1}\right]^{d+1}.\] Since \(\widetilde{\mathcal{P}}^{d}(\beta,v)\subseteq\mathcal{P}^{d}(\beta,v)\), it is enough to show \(\mathrm{Dim}_{\mathrm{H}}[\widetilde{\mathcal{P}}^{d}(\beta,v)]\geq(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d})\lor d\) with probability one. We choose \(\gamma\in(\frac{(\beta+\epsilon)^{\frac{4-d}{2}}(1+\epsilon)v\mathfrak{c}_{d}}{d},1)\) and \(\theta\in(0,\gamma-\frac{(\beta+\epsilon)^{\frac{4-d}{2}}(1+\epsilon)v\mathfrak{c}_{d}}{d})\). We borrow the notations \(a_{j,n}\) and \(I_{n}(\gamma)\) from the proof of Theorem 6.2 and introduce \[\tilde{\mathcal{I}}_{n}(\gamma):=\prod_{k=1}^{d+1}I_{n}^{k}(\gamma),\] where \(I_{n}^{k}(\gamma)\) is a copy of \(I_{n}(\gamma)\) for each \(1\leq k\leq d+1\). We choose \(x\in\tilde{\mathcal{I}}_{n}(\gamma)\) and take the points \(\{x_{i}\}_{i=1}^{m(n)}\) such that they satisfy the following conditions: \((b.1)\) \(x_{i}\in\tilde{\mathcal{I}}_{n}(\gamma)\cap B(x,e^{n\gamma})\) for all \(i=1,...,m(n)\); \((b.2)\) \(\|x_{i}-x_{j}\|_{\infty}\geq e^{n\theta}\) whenever \(1\leq i<j\leq m(n)\); \((b.3)\) \(d^{-1}e^{dn(\gamma-\theta)}\leq m(n)\leq de^{dn(\gamma-\theta)}\). Observe that \(e^{n\theta}\gg 3L_{v\log t}=3(v\log t)\) for all \(t\in(e^{n},e^{n+1}]\). We now notice that there exists \(n_{0}>0\) such that for all \(n\geq n_{0}\), \[\mathds{P}\Big{(}\widetilde{\mathcal{P}}^{d}(\beta,v)\cap(\{t\}\times B(x,e^{n\gamma}))=\varnothing\text{ for some }t\in(e^{n},e^{n+1}]\text{ and }x\in\tilde{\mathcal{I}}_{n}(\gamma)\Big{)}\] \[\leq\sum_{\begin{subarray}{c}t\in\mathds{Z}\\ t\in(e^{n},e^{n+1}]\end{subarray}}\sum_{x\in\tilde{\mathcal{I}}_{n}(\gamma)}\mathds{P}\left(\max_{1\leq i\leq m(n)}u(v\log t,x_{i})\leq e^{\beta(v\log t)^{\frac{6-d}{4-d}}}\right)\] \[\leq Ce^{dn(1-\gamma)+n}\cdot\exp\left(-cm(n)(\beta+\epsilon)^{\frac{d}{2}}(v(n+1))^{\frac{d}{4-d}}e^{d\log r_{v(n+1)}-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}v(n+1)}\right)+e^{-cm(n)v(n+1)}\] \[\leq Ce^{dn(1-\gamma)+n}\cdot\Bigg{[}\exp\left(-c(\beta+\epsilon)^{\frac{d}{2}}v(n+1)^{\frac{d}{4-d}}e^{\log[v(n+1)]+\kappa n-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}v}\right)\] \[\quad+\exp\left(-\frac{c}{2}e^{dn(\gamma-\theta)}v(n+1)\right)\Bigg{]},\] where \(\kappa:=d(\gamma-\theta)-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}v>0\) by the choice of \(\gamma\) and \(\theta\). The first inequality is straightforward, the second follows by applying the union bound, and the third is obtained by applying Proposition 7.1. The right hand side of the above inequality is summable with respect to \(n\). Hence, by the Borel-Cantelli lemma, there exists \(n_{0}>0\) such that for all \(n\geq n_{0}\) \[\widetilde{\mathcal{P}}^{d}(\beta,v)\cap(\{t\}\times B(x,e^{n\gamma}))\neq\varnothing\text{ for all }t\in(e^{n},e^{n+1}]\text{ and all }x\in\tilde{\mathcal{I}}_{n}(\gamma). \tag{7.3}\] This implies that \(\mu_{n}(\widetilde{\mathcal{P}}^{d}(\beta,v))\geq Ce^{dn(1-\gamma)+n}\), where \(\mu_{n}\) is defined in Proposition B.4. Therefore, by Proposition B.4 we can deduce that \(\sum_{n}\nu_{n,d+1-d\gamma}(\widetilde{\mathcal{P}}^{d}(\beta,v))=\infty\) almost surely, which shows that \(\operatorname{Dim}_{\mathrm{H}}(\widetilde{\mathcal{P}}^{d}(\beta,v))\geq d+1-d\gamma\).
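Spelling out this last step (added for clarity): by Proposition B.4 there is a constant \(C^{\prime}>0\) with
\[\nu_{n,d+1-d\gamma}\big(\widetilde{\mathcal{P}}^{d}(\beta,v)\big)\geq C^{\prime}e^{-nd(1-\gamma)-n}\,\mu_{n}\big(\widetilde{\mathcal{P}}^{d}(\beta,v)\big)\geq C^{\prime}C>0\]
for all \(n\geq n_{0}\), so the series \(\sum_{n}\nu_{n,d+1-d\gamma}\) indeed diverges, and the lower bound \(\operatorname{Dim}_{\mathrm{H}}(\widetilde{\mathcal{P}}^{d}(\beta,v))\geq d+1-d\gamma\) follows from the definition of the macroscopic Hausdorff dimension.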
Letting \(\gamma\downarrow\frac{\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}v}{d}\) and \(\theta\downarrow 0\) without violating \(\theta\in(0,\gamma-\frac{\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}v}{d})\), we get \(\operatorname{Dim}_{\mathrm{H}}(\widetilde{\mathcal{P}}^{d}(\beta,v))\geq d+1-\mathfrak{c}_{d}(1+\epsilon)(\beta+\epsilon)^{\frac{4-d}{2}}v\). Since \(\epsilon>0\) is arbitrary, \(\operatorname{Dim}_{\mathrm{H}}(\widetilde{\mathcal{P}}^{d}(\beta,v))\geq d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d}\). Now it remains to show \(\operatorname{Dim}_{\mathrm{H}}(\widetilde{\mathcal{P}}^{d}(\beta,v))\geq d\), a.s. for any \(\beta,v>0\). First note that \[\left\{(s,x)\in(1,\infty)\times\mathds{R}^{d}\,:\,u(v\log s,x)>e^{\beta(v\log s)^{\frac{6-d}{4-d}}}\right\}\supseteq\{t\}\times\left\{x\in\mathds{R}^{d}\,:\,u(v\log t,x)>e^{\beta(v\log t)^{\frac{6-d}{4-d}}}\right\},\] for all \(t\geq 1\). Let us define \[\mathcal{P}^{(t)}:=\left\{x\in\mathds{R}^{d}\,:\,u(v\log t,x)>e^{\beta(v\log t)^{\frac{6-d}{4-d}}}\right\},\] for any \(t>1\). Then, it suffices to show \(\operatorname{Dim}_{\mathrm{H}}(\mathcal{P}^{(t)})\geq d\), a.s. for some \(t>1\). Indeed, \[\operatorname{Dim}_{\mathrm{H}}\left(\{t\}\times\left\{x\in\mathds{R}^{d}\,:\,u(v\log t,x)>e^{\beta(v\log t)^{\frac{6-d}{4-d}}}\right\}\right)=\operatorname{Dim}_{\mathrm{H}}(\mathcal{P}^{(t)}),\] for any fixed \(t\geq 1\) (see [1, Section 9]). Let \(t_{0}\) be the constant in Theorem 6.2. Let \(\beta,v>0\). Observe that for all \((M,\alpha)\in\mathds{R}^{2}\) such that \[M\geq e,\quad\alpha\geq\frac{\beta(v\log t_{0})^{\frac{6-d}{4-d}}}{t_{0}(\log M)^{\frac{2}{4-d}}},\] we have \[\mathcal{P}_{M}^{(t_{0})}:=\left\{\|x\|_{\infty}\geq M\,:\,u(v\log t_{0},x)>e^{\beta(v\log t_{0})^{\frac{6-d}{4-d}}}\right\}\supseteq\left\{\|x\|_{\infty}\geq M\,:\,u(v\log t_{0},x)>e^{\alpha t_{0}(\log\|x\|_{\infty})^{\frac{2}{4-d}}}\right\}.\] Note that \(\mathcal{P}_{M}^{(t_{0})}\) and \(\mathcal{P}^{(t_{0})}\) have the same macroscopic Hausdorff dimension since \(\mathrm{Dim}_{\mathrm{H}}[E]=0\) for every bounded set \(E\subset\mathds{R}^{d}\). Therefore, by Theorem 6.2, we have \[\mathrm{Dim}_{\mathrm{H}}(\mathcal{P}^{(t_{0})})=\mathrm{Dim}_{\mathrm{H}}(\mathcal{P}_{M}^{(t_{0})})\geq d-\mathfrak{c}_{d}\left(\frac{\beta(v\log t_{0})^{\frac{6-d}{4-d}}}{t_{0}(\log M)^{\frac{2}{4-d}}}\right)^{\frac{4-d}{2}}.\] By taking \(M\uparrow\infty\), we can conclude that \(\mathrm{Dim}_{\mathrm{H}}(\mathcal{P}^{(t_{0})})\geq d\), a.s. ### Proof of the upper bound in Theorem 1.2 **Proposition 7.3**.: _Let \(0<\epsilon<\beta\) and \(M>0\). There exist \(b:=b(M)>1\) and \(a_{0}:=a_{0}(M,b,\epsilon)>0\) such that for all \(a\geq a_{0}\) and \(l>0\)_ \[\mathds{P}\Big{(}\sup_{x\in B(y,1)}\log u(t,x)\geq\beta t^{\frac{6-d}{4-d}}\text{ for some }t\in(a,a+l],\ \mathfrak{a}_{0}\leq M\Big{)}\leq c_{1}(\beta-\epsilon)^{\frac{d}{2}}a^{\frac{d}{4-d}}e^{db\log(a+l)-(1-\epsilon)\mathfrak{c}_{d}(\beta-\epsilon)^{\frac{4-d}{2}}a}, \tag{7.4}\] _where \(L_{t}:=t^{b}\) and \(y\in\mathds{R}^{d}\)._ Proof.: We use a similar argument as in the proof of Proposition 6.3.
Applying Lemma 5.6 with \(L_{t}:=t^{b}\), we can write \(u(t,x)=\sum_{k=0}^{\infty}\mathcal{U}_{k}^{y}(t,x)\) for any \(y\in\mathds{R}^{d}\). Then we have \[\mathds{P}\Big{(}\text{For some }t\in(a,a+l]\sup_{x\in B(y,1)}\log u(t,x)\geq\beta t^{\frac{6-d}{4-d}},\mathfrak{a}_{0}\leq M\Big{)}\leq(\mathbf{D_{1}})+(\mathbf{D_{2}}),\] where \[(\mathbf{D_{1}}):=\mathds{P}\Big{(}\text{For some }t\in(a,a+l]\sup_{x\in B(y,1)}\mathcal{U}_{0}^{y}(t,x)\geq\frac{1}{2}e^{\beta t^{\frac{6-d}{4-d}}},\mathfrak{a}_{0}\leq M\Big{)},\] \[(\mathbf{D_{2}}):=\mathds{P}\Big{(}\text{For some }t\in(a,a+l]\sup_{x\in B(y,1)}\sum_{k=1}^{\infty}\mathcal{U}_{k}^{y}(t,x)\geq\frac{1}{2}e^{\beta t^{\frac{6-d}{4-d}}},\mathfrak{a}_{0}\leq M\Big{)}.\] For \(d=3\), on the event \(\{\mathfrak{a}_{0}\leq M\}\), we have by (5.33) \[\mathcal{U}_{k}^{y}(t,x)\leq C\exp\left(CtM^{\aleph+1}(b(k+1)\log t)^{2\aleph+2}-\frac{t^{2bk}}{Ct}\right).\] For \(d=2\), similar bounds follow again from Lemma 5.2 of [10]. Now we can choose a large \(b:=b(M)>0\) such that for all \(t\geq e\) and \(n\geq 1\) \[\sum_{k=1}^{\infty}\mathcal{U}_{k}^{y}(t,x)\leq\sum_{k=1}^{\infty}C\exp\left(-C_{1}t^{2bk-1}\right)\leq\frac{1}{2}e^{\beta t^{\frac{6-d}{4-d}}}, \tag{7.5}\] which shows that \((\mathbf{D_{2}})=0\) for all \(a\geq e\). We now bound \((\mathbf{D_{1}})\). Fix \(\epsilon\in(0,\beta\wedge 1)\) and use Proposition 5.5 to obtain that there exists \(a_{0}:=a_{0}(b,M,\epsilon)>0\) such that for all \(a\geq a_{0}\) \[\begin{split}(\mathbf{D_{1}})&\leq\mathds{P}\Big{(}\text{For some }t\in(a,a+l],\,C\exp(t\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})+CM^{\aleph+1}(\log L_{t})^{2\aleph+2})\geq\frac{1}{2}e^{\beta t^{\frac{6-d}{4-d}}}\Big{)}\\ &\leq\mathds{P}\Big{(}\text{For some }t\in(a,a+l],\,\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})\geq\beta t^{\frac{2}{4-d}}-\frac{\log(2C)+CM^{\aleph+1}(b\log t)^{2\aleph+2}}{t}\Big{)}\\ &\leq\mathds{P}(\text{For some }t\in(a,a+l],\,\boldsymbol{\lambda}_{1}(Q_{L_{t}}^{y})\geq(\beta-\epsilon)a^{\frac{2}{4-d}}).\end{split} \tag{7.6}\] By Lemma 2.2 and Lemma 2.3, we have \[(\mathbf{D_{1}})\leq\mathds{P}(\boldsymbol{\lambda}_{1}(Q_{L_{a+l}}^{y})\geq(\beta-\epsilon)a^{\frac{2}{4-d}})\leq c_{1}(\beta-\epsilon)^{\frac{d}{2}}a^{\frac{d}{4-d}}e^{db\log(a+l)-(1-\epsilon)\mathfrak{c}_{d}(\beta-\epsilon)^{\frac{4-d}{2}}a}. \tag{7.7}\] This completes the proof. Now we proceed to prove the upper bound on the macroscopic Hausdorff dimension of the set \(\mathcal{P}^{d}(\beta,v)\). **Theorem 7.4**.: _For every \(v>0\) and \(\beta\in(0,(d/(v\mathfrak{c}_{d}))^{\frac{2}{4-d}})\), with probability one,_ \[\mathrm{Dim}_{\mathds{H}}[\mathcal{P}^{d}(\beta,v)]\leq(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d})\lor d. \tag{7.8}\] Proof.: For \(\epsilon:=(\epsilon_{1},...,\epsilon_{d})\in\{-1,1\}^{d}\), define an (open) orthant as \[\mathcal{O}_{\epsilon}:=\left\{(t,x)=(t,x_{1},...,x_{d})\in(1,\infty)\times\mathds{R}^{d}\,:\,\epsilon_{1}x_{1}>0,\epsilon_{2}x_{2}>0,...,\epsilon_{d}x_{d}>0\right\}.\] We then define \[\mathcal{P}^{d}_{\epsilon}(\beta,v):=\mathcal{P}^{d}(\beta,v)\cap\mathcal{O}_{\epsilon}.\] In order to prove this theorem, it suffices to prove that \(\mathrm{Dim}_{\mathds{H}}[\mathcal{P}^{d}_{\epsilon}(\beta,v)]\leq(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d})\lor d\) for any \(\epsilon\in\{-1,1\}^{d}\).
Due to symmetry between different orthants, it further suffices to prove for \(\epsilon_{+}:=(1,...,1)\in\{-1,1\}^{d}\) that \[\mathrm{Dim}_{\mathds{H}}[\mathcal{P}^{d}_{\epsilon_{+}}(\beta,v)]\leq(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d})\lor d. \tag{7.9}\] For \(q>1\) and \(n\in\mathds{N}\), let us denote \(\mathcal{L}_{n}:=\mathcal{L}_{n}(q,\beta,v,d):=\mathcal{P}^{d}_{\epsilon_{+}}(\beta,v)\cap\mathcal{I}^{(q)}_{n}\) where \(\mathcal{I}^{(q)}_{n}:=(e^{n/q},e^{n+1}]^{d+1}\). By Lemma B.5, we have \[\mathrm{Dim}_{\mathds{H}}\left[\mathcal{P}^{d}_{\epsilon_{+}}(\beta,v)\setminus\bigcup_{n=0}^{\infty}\mathcal{L}_{n}\right]\leq d.\] Since \(\mathrm{Dim}_{\mathds{H}}(A\cup B)=\max\{\mathrm{Dim}_{\mathds{H}}(A),\mathrm{Dim}_{\mathds{H}}(B)\}\) for any two sets \(A,B\), we have \[\mathrm{Dim}_{\mathds{H}}\left[\mathcal{P}^{d}_{\epsilon_{+}}(\beta,v)\right]\leq\mathrm{Dim}_{\mathds{H}}\left[\mathcal{P}^{d}_{\epsilon_{+}}(\beta,v)\setminus\bigcup_{n=0}^{\infty}\mathcal{L}_{n}\right]\vee\mathrm{Dim}_{\mathds{H}}\left[\bigcup_{n=0}^{\infty}\mathcal{L}_{n}\right].\] Let \(\bar{\mathcal{L}}:=\bigcup_{n=0}^{\infty}\mathcal{L}_{n}\). The above inequality implies that to show (7.9), it is enough to prove \[\mathrm{Dim}_{\mathds{H}}\left(\bar{\mathcal{L}}\right)\leq(d+1-\beta^{\frac{4-d}{2}}v\mathfrak{c}_{d}).\] To this end, observe that Proposition 7.3 implies for all \(a\in(e^{n/q},e^{n+1}]\) \[\begin{split}&\mathds{P}\Big{(}\text{For some }t\in(a,a+1]\sup_{x\in B(y,1)}\log u(v\log t,x)\geq\beta(v\log t)^{\frac{6-d}{4-d}},\mathfrak{a}_{0}\leq M\Big{)}\\ &=\mathds{P}\Big{(}\text{For some }t\in(v\log a,v\log(a+1)]\sup_{x\in B(y,1)}\log u(t,x)\geq\beta t^{\frac{6-d}{4-d}},\mathfrak{a}_{0}\leq M\Big{)}\\ &\leq c_{1}(\beta-\epsilon)^{\frac{d}{2}}\left(\frac{vn}{q}\right)^{\frac{d}{4-d}}\exp\left(db\log\left(2vn\right)-\frac{(1-\epsilon)\mathfrak{c}_{d}(\beta-\epsilon)^{\frac{4-d}{2}}vn}{q}\right),\end{split} \tag{7.10}\] For all sufficiently large \(n\in\mathds{N}\), we cover \(\mathcal{L}_{n}\subseteq\mathcal{I}^{(q)}_{n}\) with \(O(e^{n(d+1)})\)-many boxes of the form \((a,a+1]\times B(y,1)\) satisfying that for some \(t\in(a,a+1]\) \[\sup_{x\in B(y,1)}\log u(v\log t,x)\geq\beta(v\log t)^{\frac{6-d}{4-d}} \tag{7.11}\] on the event \(\Upsilon_{M}:=\{\mathfrak{a}_{0}\leq M\}\). Choose \[\rho\in\Big{(}d+1-\frac{\mathfrak{c}_{d}\beta_{\epsilon}^{\frac{4-d}{2}}v}{q},d+1\Big{]}.\] By (7.10), we have that for all sufficiently large \(n\geq 1\) and for all such \(\rho\), \[\mathds{E}[\nu_{\rho}^{n}(\mathcal{L}_{n})\mathds{1}_{\Upsilon_{M}}]\leq\mathds{E}\Big{[}\sum_{\begin{subarray}{c}(a,a+1]\times B(y,1)\subseteq\mathcal{I}_{n}^{(q)}:\\ (7.11)\text{ holds}\end{subarray}}e^{-n\rho}\cdot\mathds{1}_{\Upsilon_{M}}\Big{]}\leq C\exp\Big{(}\Big{\{}d+1-\rho-\frac{\mathfrak{c}_{d}\beta_{\epsilon}^{\frac{4-d}{2}}v}{q}\Big{\}}n+Cb\log n\Big{)},\] where \(C>0\) is a constant which depends only on \((v,\beta)\) and \(\beta_{\epsilon}:=(1-\epsilon)^{\frac{2}{4-d}}(\beta-\epsilon)\).
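Note (an added verification) that the exponent in the last display is negative by our choice of \(\rho\):
\[d+1-\rho-\frac{\mathfrak{c}_{d}\beta_{\epsilon}^{\frac{4-d}{2}}v}{q}<0,\qquad\text{so}\qquad\sum_{n\geq 1}\exp\Big{(}\Big{\{}d+1-\rho-\frac{\mathfrak{c}_{d}\beta_{\epsilon}^{\frac{4-d}{2}}v}{q}\Big{\}}n+Cb\log n\Big{)}<\infty,\]
since the polynomial factor \(e^{Cb\log n}=n^{Cb}\) cannot compensate the exponential decay in \(n\).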
This implies that \[\mathds{P}\Big{(}\Upsilon_{M}\cap\Big{\{}\sum_{n=0}^{\infty}\nu_{\rho}^{n}(\bar{\mathcal{L}})<\infty\Big{\}}\Big{)}=\mathds{P}\big{(}\Upsilon_{M}\big{)}.\] Because \(\mathds{P}(\Upsilon_{M})\geq 1-e^{-h_{0}\sqrt{M}}\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{0}}}]\) (see (6.7)), we have \[\mathds{P}\Big{(}\sum_{n=0}^{\infty}\nu_{\rho}^{n}(\bar{\mathcal{L}})<\infty\Big{)}\geq 1-e^{-h_{0}\sqrt{M}}\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{0}}}],\] which in turn implies \(\mathds{P}(\mathrm{Dim}_{\mathrm{H}}(\bar{\mathcal{L}})\leq\rho)\geq 1-e^{-h_{0}\sqrt{M}}\mathds{E}[e^{h_{0}\sqrt{\mathfrak{a}_{0}}}].\) Since the definition of \(\bar{\mathcal{L}}\) does not depend on \(M\) and \(M>0\) can be arbitrarily large, we have \(\mathrm{Dim}_{\mathrm{H}}(\bar{\mathcal{L}})\leq\rho\) almost surely. Taking \(\rho\downarrow d+1-\frac{\mathfrak{c}_{d}\beta_{\epsilon}^{\frac{4-d}{2}}v}{q}\), we also have \(\mathrm{Dim}_{\mathrm{H}}(\bar{\mathcal{L}})\leq d+1-\frac{\mathfrak{c}_{d}\beta_{\epsilon}^{\frac{4-d}{2}}v}{q}\). Moreover, since \(q>1\) and \(\epsilon\in(0,\beta\wedge 1)\) are arbitrary, we can let \(q\to 1\) and \(\epsilon\to 0\) to conclude that \(\mathrm{Dim}_{\mathrm{H}}(\bar{\mathcal{L}})\leq d+1-\mathfrak{c}_{d}\beta^{\frac{4-d}{2}}v\). This completes the proof. ## Appendix A Besov space and paracontrolled generator We start by introducing a few notations about the function spaces and the paracontrolled calculus. Let \(\chi\) and \(\varrho\) be non-negative radial functions such that 1. the support of \(\chi\) is contained in a ball and the support of \(\varrho\) is contained in an annulus \(\{x\in\mathds{R}^{d}:1<|x|<2\}\). 2. \(\chi(\xi)+\sum_{j\geq 0}\varrho(2^{-j}\xi)=1\) for all \(\xi\in\mathds{R}^{d}\). 3. \(\mathrm{Supp}(\chi)\cap\mathrm{Supp}(\varrho(2^{-j}\cdot))=\emptyset\) for \(j\geq 1\) and \(\mathrm{Supp}(\varrho(2^{-i}\cdot))\cap\mathrm{Supp}(\varrho(2^{-j}\cdot))=\emptyset\) when \(|i-j|>1\). A pair \((\chi,\varrho)\) satisfying the above properties is said to form a dyadic partition of unity. For the existence, we refer to [1, Proposition 2.10]. For any Schwartz distribution \(f\), we define the Littlewood-Paley blocks by \[\Delta_{j}f=\sum_{k\in\mathds{N}_{0}^{d}}\varrho_{j}\Big{(}\frac{k}{L}\Big{)}\langle f,\mathfrak{n}_{k,L}\rangle\,\mathfrak{n}_{k,L} \tag{A.1}\] where \(\{\mathfrak{n}_{k,L}:k\in\mathds{N}_{0}^{d}\}\) forms an orthonormal basis of \(L^{2}([0,L]^{d})\) given in [13, Section 4] and \(\varrho_{j}(\cdot)=\varrho(2^{-j}\cdot)\). We also note that since \(\varrho_{j}\) is supported in a ball with radius \(2^{j}\) and \(\varrho_{j}\leq 1\), for all \(j\in\mathds{N}_{0}\), \(x\in\mathds{R}^{d}\) and \(\gamma>0\) \[\varrho_{j}(x)\lesssim\left(\frac{2^{j}}{1+|x|}\right)^{\gamma}. \tag{A.2}\] For \(u\in\mathscr{S}^{\prime}\), we define \((1-\frac{1}{2}\Delta)^{-1}u\) by \[(1-\frac{1}{2}\Delta)^{-1}u:=\sum_{k\in\mathds{N}_{0}^{d}}\sigma\Big{(}\frac{k}{L}\Big{)}\langle u,\mathfrak{n}_{k,L}\rangle\mathfrak{n}_{k,L}, \tag{A.3}\] where \(\sigma(x):=(1+\pi|x|^{2})^{-1}\). Denote the Fourier transform operator by \(\mathfrak{F}\) and let \((\chi,\varrho)\) be the dyadic partition of unity.
Then the Littlewood-Paley blocks are defined as \[\Delta_{-1}u=\mathfrak{F}^{-1}(\chi\mathfrak{F}(u)),\quad\Delta_{j}u=\mathfrak{F}^{-1}\big{(}\varrho_{j}(\cdot)\mathfrak{F}(u)\big{)},\quad j\geq 0 \tag{A.4}\] where \(\varrho_{j}(\cdot)=\varrho(2^{-j}\cdot)\) and, for \(\alpha\in\mathds{R}\), \(p,q\in[1,\infty]\), the Besov space \(B^{\alpha}_{p,q}(\mathds{R}^{d},\mathds{R}^{n})\) is \[B^{\alpha}_{p,q}(\mathds{R}^{d},\mathds{R}^{n}):=\{u\in\mathscr{S}^{\prime}(\mathds{R}^{d},\mathds{R}^{n});\quad\|u\|^{q}_{B^{\alpha}_{p,q}(\mathds{R}^{d},\mathds{R}^{n})}=\sum_{j\geq-1}2^{jq\alpha}\|\Delta_{j}u\|^{q}_{L^{p}(\mathds{R}^{d},\mathds{R}^{n})}<\infty\}. \tag{A.5}\] We often use the notation \(\mathscr{C}^{\alpha}_{p}(\mathds{R}^{d},\mathds{R}^{n})\) to denote \(B^{\alpha}_{p,p}(\mathds{R}^{d},\mathds{R}^{n})\) for \(p\in[1,\infty]\) and write \(\mathscr{C}^{\alpha}(\mathds{R}^{d},\mathds{R}^{n})\) for \(\mathscr{C}^{\alpha}_{\infty}(\mathds{R}^{d},\mathds{R}^{n})\). This notation is consistent with the fact that \(\mathscr{C}^{\alpha}(\mathds{R}^{d},\mathds{R}^{n})\) is indeed the space of all \(\alpha\)-Hölder continuous functions (for non-integer \(\alpha>0\)). For simplicity, we sometimes use the notation \(\mathscr{C}^{\alpha}_{p}\) for \(\mathscr{C}^{\alpha}_{p}(\mathds{R}^{d},\mathds{R})\). Let \(\delta,\rho>0\), \(T>0\), and \(\bar{T}\in[0,T)\). Let \((D,\|\cdot\|_{D})\) be a Banach space and \(u,v:[T-\bar{T},T]\to D\) be function (or distribution) valued processes. We say that \(u\in C^{\delta}_{\rho,\bar{T},T}D\) if \(\|u\|_{C^{\delta}_{\rho,\bar{T},T}D}<\infty\) and \(v\in C^{\delta}_{\bar{T},T}D\) if \(\|v\|_{C^{\delta}_{\bar{T},T}D}<\infty\) where \[\begin{split}\|u\|_{C^{\delta}_{\rho,\bar{T},T}D}&:=\sup_{s<t\in(T-\bar{T},T]}(T-t)^{\delta}\frac{\|u(t)-u(s)\|_{D}}{|t-s|^{\rho}},\\ \|v\|_{C^{\delta}_{\bar{T},T}D}&:=\sup_{t\in(T-\bar{T},T]}(T-t)^{\delta}\|v(t)\|_{D}.\end{split} \tag{A.6}\] If \(\delta=0\) or \(\bar{T}=T\), we drop the respective subscripts in the above definition. ### Some properties of the Besov-Hölder continuous distributions Let \(f\) and \(g\) be two distributions in \(\mathscr{S}^{\prime}(\mathds{R}^{d})\). Then the Littlewood-Paley decomposition of \(fg\) is written as \[fg=f\prec g+f\circ g+f\succ g\] where \(f\prec g\) and \(f\succ g\) are called _paraproducts_ and \(f\circ g\) is called the _resonant term_, and they are defined as \[f\prec g=g\succ f=\sum_{j\geq-1}\sum_{i<j-1}\Delta_{i}f\,\Delta_{j}g,\quad\text{and}\quad f\circ g=\sum_{j\geq-1}\sum_{|i-j|\leq 1}\Delta_{i}f\,\Delta_{j}g.\] In the following propositions, we note a few useful properties of the paraproduct. **Proposition A.1** (Bony's estimates (I), [1]).: _Let \(\alpha,\beta\in\mathds{R}\). Let \(f\in\mathscr{C}^{\alpha}\) and \(g\in\mathscr{C}^{\beta}\)._ 1. _If_ \(\alpha>0\)_, then_ \(f\prec g\in\mathscr{C}^{\beta}\) _and_ \(\|f\prec g\|_{\beta}\lesssim\|f\|_{L^{\infty}}\|g\|_{\beta}\)_._ 2. _If_ \(\alpha<0\)_, then_ \(f\prec g\in\mathscr{C}^{\alpha+\beta}\) _and_ \(\|f\prec g\|_{\alpha+\beta}\lesssim\|f\|_{\alpha}\|g\|_{\beta}\)_._ 3. _If_ \(\alpha+\beta>0\)_, then_ \(f\circ g\in\mathscr{C}^{\alpha+\beta}\) _and_ \(\|f\circ g\|_{\alpha+\beta}\lesssim\|f\|_{\alpha}\|g\|_{\beta}\)_._ **Proposition A.2** (Bony's estimates (II), [1]).: _Let \(\alpha<0,\beta>0\) and \(\alpha+\beta>0\). Let \(p,p_{1},p_{2},q_{1},q_{2}\in[1,\infty]\) satisfy \(\frac{1}{p}=\frac{1}{p_{1}}+\frac{1}{p_{2}}\). Let \(f\in B^{\alpha}_{p_{1},q_{1}}\) and \(g\in B^{\beta}_{p_{2},q_{2}}\). For \(q\geq q_{1}\)_ 1. \(\|f\prec g\|_{B^{\alpha+\beta}_{p,q}}\lesssim\|f\|_{B^{\alpha}_{p_{1},q_{1}}}\|g\|_{B^{\beta}_{p_{2},q_{2}}}\)_._ 2. \(\|f\succ g\|_{B^{\alpha+\beta}_{p,q}}\lesssim\|f\|_{B^{\alpha}_{p_{1},q_{1}}}\|g\|_{B^{\beta}_{p_{2},q_{2}}}\)_._ 3.
\(\|f\circ g\|_{B^{\alpha+\beta}_{p,q}}\lesssim\|f\|_{B^{\alpha}_{p_{1},q_{1}}}\|g\|_{B^{\beta}_{p_{2},q_{2}}}\)_._ **Proposition A.3** (Schauder's estimate, Lemma 2.5 of [15], Lemma A.8 of [14]).: _Let \(P_{t}\) be the heat semigroup for \(\frac{1}{2}\Delta\). Let \(\theta\geq 0,p\in[1,\infty]\) and \(\alpha\in\mathds{R}\). Then for \(\phi\in\mathscr{C}_{p}^{\alpha}\) and \(0\leq s\leq t\) we have_ \[\|P_{t}\phi\|_{\mathscr{C}_{p}^{\alpha+2\theta}}\lesssim t^{-\theta}\|\phi\|_{\mathscr{C}_{p}^{\alpha}},\qquad\|(P_{t-s}-\text{Id})\phi\|_{\mathscr{C}_{p}^{\alpha-2\theta}}\lesssim|t-s|^{\theta}\|\phi\|_{\mathscr{C}_{p}^{\alpha}}. \tag{A.7}\] ## Appendix B Macroscopic Hausdorff dimension In this section, we introduce the notion of the macroscopic Hausdorff dimension given by Barlow and Taylor [1, 2], and Khoshnevisan-Kim-Xiao [17]. We also present some useful propositions that help to provide lower and upper bounds for the macroscopic Hausdorff dimension of any given set. ### Definition For all integers \(n\geq 1\), we define the exponential cubes and shells as follows: \[\mathds{V}_{n}:=[-e^{n},e^{n})^{d},\quad\mathds{S}_{0}:=\mathds{V}_{0},\quad\text{and}\quad\mathds{S}_{n+1}:=\mathds{V}_{n+1}\setminus\mathds{V}_{n}. \tag{B.1}\] Let \(\mathcal{B}\) be the collection of all cubes of the form \[B(x,r):=\prod_{i=1}^{d}[x_{i},x_{i}+r), \tag{B.2}\] for \(x=(x_{1},...,x_{d})\in\mathds{R}^{d}\), and \(r\in[1,\infty)\). For any subset \(E\subset\mathds{R}^{d}\), \(\rho>0\), and all integers \(n\geq 1\), we define \[\nu_{\rho}^{n}(E):=\inf\left\{\sum_{i=1}^{m}\left(\frac{s(B_{i})}{e^{n}}\right)^{\rho}:B_{i}\in\mathcal{B},B_{i}\subset\mathds{S}_{n}\text{ and }E\cap\mathds{S}_{n}\subset\cup_{i=1}^{m}B_{i}\right\}, \tag{B.3}\] where \(s(B):=r\) denotes the side of \(B=B(x,r)\). We now introduce the definition of the macroscopic Hausdorff dimension. **Definition B.1**.: [1, 2] The _macroscopic Hausdorff dimension_ of \(E\subset\mathds{R}^{d}\) is defined as \[\operatorname{Dim}_{\mathrm{H}}(E):=\inf\left\{\rho>0:\sum_{n=1}^{\infty}\nu_{\rho}^{n}(E)<\infty\right\}. \tag{B.4}\] ### Useful bounds for macroscopic Hausdorff dimension Choose and fix any \(\theta\in(0,1)\). We define \[a_{j,n}(\theta):=e^{n}+je^{n\theta},\qquad 0\leq j<e^{n(1-\theta)},\] \[I_{n}(\theta):=\bigcup_{\begin{subarray}{c}0\leq j<e^{n(1-\theta)}:\\ j\in\mathds{Z}\end{subarray}}\{a_{j,n}(\theta)\},\] and \[\mathcal{I}_{n}(\theta):=\prod_{i=1}^{d}I_{n}^{i}(\theta),\] where \(I_{n}^{i}(\theta)\) is a copy of \(I_{n}(\theta)\) for each \(i\). We call \(\cup_{n=1}^{\infty}\mathcal{I}_{n}(\theta)\) a \(\theta\)-skeleton of \(\mathds{R}^{d}\) (see [17, Definition 4.2]). Note that \(\operatorname{Dim}_{\mathrm{H}}\left(\cup_{n=k}^{\infty}\mathcal{I}_{n}(\theta)\right)=d(1-\theta)\) for any integer \(k\geq 1\). **Definition B.2** (Definition 4.3 of [17]).: \(E\) is called \(\theta\)-thick if there exists a positive integer \(k=k(\theta)\) such that \[E\cap B(x,e^{n\theta})\neq\emptyset,\] for all \(x\in\mathcal{I}_{n}(\theta)\) and \(n\geq k\). By the monotonicity of the macroscopic Hausdorff dimension, we get the following lower bound. **Proposition B.3** (Proposition 4.4 of [14]).: _Let \(E\subset\mathds{R}^{d}\). If \(E\) contains a \(\theta\)-thick set for some \(\theta\in(0,1)\), then_ \[\operatorname{Dim_{H}}(E)\geq d(1-\theta).\] For the set of spatio-temporal peaks, we separately provide a proposition for the lower bound of the macroscopic Hausdorff dimension. **Proposition B.4** (Theorem 4.1 of [15]).: _Fix \(\gamma\in(0,d)\)._
For any set \(E\) and \(n\in\mathds{Z}\), let us define_ \[\mu_{n}(E)=\sum_{\begin{subarray}{c}s\in\mathds{Z}\\ e^{n}<s\leq e^{n+1}\end{subarray}}\sum_{\begin{subarray}{c}j\in\mathds{Z}^{d} \\ j\in[0,e^{n(1-\gamma)})\end{subarray}}\mathds{1}\{(s,j)\in E\}.\] (B.5) _Then, there exists a constant \(C>0\) such that \(\nu_{n,d+1-d\gamma}(E)\geq Ce^{-nd(1-\gamma)-n}\mu_{n}(E)\)._ Proof.: The proof follows from [15, Theorem 4.1]. For the condition of [15, Theorem 4.1], it suffices to verify that \(\mu_{n}((s,s+r]\times B(x,r))\lesssim r^{d+1}\) for all \(r\geq 1\), which is proven below (4.24) of [16]. The following lemma helps us to compute the upper bound of the macroscopic Hausdorff dimension. **Lemma B.5** (Lemma 4.2 of [16]).: _For any \(q>1\) and \(k\in\{1,...,d-1\}\), define a set \(E\subseteq\mathds{R}^{d}\) as_ \[E:=\bigcup_{n=0}^{\infty}E_{n},\] _where_ \[E_{n}:=(0,e^{n/q}]^{k}\times(e^{n/q},e^{n+1}]^{d-k}.\] _Then we have \(\operatorname{Dim_{H}}[E]\leq d-k\)._
2303.02016
Composite Classical and Quantum Channel Discrimination
We study the problem of binary composite channel discrimination in the asymmetric setting, where the hypotheses are given by fairly arbitrary sets of channels, and samples do not have to be identically distributed. In the case of quantum channels we prove: (i) a characterization of the Stein exponent for parallel channel discrimination strategies and (ii) an upper bound on the Stein exponent for adaptive channel discrimination strategies. We further show that already for classical channels this upper bound can sometimes be achieved and be strictly larger than what is possible with parallel strategies. Hence, there can be an advantage of adaptive channel discrimination strategies with composite hypotheses for classical channels, unlike in the case of simple hypotheses. Moreover, we show that classically this advantage can only exist if the sets of channels corresponding to the hypotheses are non-convex. As a consequence of our more general treatment, which is not limited to the composite i.i.d. setting, we also obtain a generalization of previous composite state discrimination results.
Bjarne Bergh, Nilanjana Datta, Robert Salzmann
2023-03-03T15:31:38Z
http://arxiv.org/abs/2303.02016v1
# Composite Classical and Quantum Channel Discrimination ###### Abstract We study the problem of binary composite channel discrimination in the asymmetric setting, where the hypotheses are given by fairly arbitrary sets of channels, and samples do not have to be identically distributed. In the case of quantum channels we prove: \((i)\) a characterization of the Stein exponent for parallel channel discrimination strategies and \((ii)\) an upper bound on the Stein exponent for adaptive channel discrimination strategies. We further show that already for classical channels this upper bound can sometimes be achieved and be strictly larger than what is possible with parallel strategies. Hence, there can be an advantage of adaptive channel discrimination strategies with composite hypotheses for classical channels, unlike in the case of simple hypotheses. Moreover, we show that classically this advantage can only exist if the sets of channels corresponding to the hypotheses are non-convex. As a consequence of our more general treatment, which is not limited to the composite i.i.d. setting, we also obtain a generalization of previous composite state discrimination results. ###### Contents * 1 Introduction and Outline * 2 Mathematical Preliminaries * 2.1 Measurements and POVMs * 2.2 Quantum Information Measures * 2.2.1 Channel Divergences * 3 Composite State Discrimination * 3.1 Classical Adversarial Hypothesis Testing * 4 Composite Channel Discrimination * 4.1 The Parallel Case * 4.2 The Adaptive Case * 4.2.1 An upper bound for adaptive strategies * 4.2.2 A classical example of an adaptive advantage * 4.2.3 Classical equality under convexity * 4.3 Classical parallel exponent for finite sets in the composite i.i.d. setting * 5 Open Problems * 6 References * A Technical Lemmas ## 1 Introduction and Outline Hypothesis testing, or finding optimal strategies and minimal errors for discrimination tasks is one of the oldest and most studied tasks in information theory. In the quantum setting there have been plenty of results regarding the optimal discrimination of states [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] and also quantum channels [1, 2, 13, 20, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. However, most of the quantum literature focuses on the case where the two hypotheses are simple, i.e. the hypotheses state that what we are given is _exactly_ one specific state (or channel). Arguably, much more practically relevant is the case where one allows for composite hypotheses, i.e. hypotheses stating that the given state (or channel) belongs to a certain set. This first of all includes the noisy regime, where we can assume that what we are given is approximately one of two possibilities, but also much more general settings, i.e. questions of discriminating big sets with a certain structure (for states this could for example be sets of separable [1] or coherent [1] states). Throughout this paper we will be looking at binary composite hypothesis testing in the asymmetric setting. We are given \(n\) instances of an unknown object, and have to make a decision between two hypotheses based on these \(n\) instances, and are ultimately interested in the asymptotic limit \(n\to\infty\). We will start with introducing the problem for discriminating two sets of states, and give an overview of previous results in the literature, before moving on to the problem of discriminating two sets of channels. To our knowledge, the task of composite binary quantum _channel_ discrimination has not been studied thus far. 
Throughout our analysis, we will not restrict ourselves to the composite i.i.d. setting, i.e. we will also allow the provided objects (states or channels) to vary within the sets corresponding to the hypotheses. For binary asymmetric composite channel discrimination we show in this fairly general setting: \((i)\) a characterization of the Stein exponent for parallel channel discrimination strategies (Theorem 9), and \((ii)\) an upper bound on the Stein exponent for adaptive channel discrimination strategies (Proposition 10). We further show that already classically this upper bound can sometimes be achieved and be strictly larger than what is possible with parallel strategies (Example 12), and hence there can be an advantage of adaptive channel discrimination strategies with composite hypotheses. We go on to show that classically this advantage can only exist if the sets of channels corresponding to the hypotheses are non-convex, and additionally assuming this convexity makes parallel strategies asymptotically optimal (Theorem 13). We leave the question open whether an adaptive advantage can exist in the quantum case when the sets of channels are convex. Table 1 gives an overview of what we are able to show regarding composite channel discrimination, and illustrates in which cases an adaptive advantage exists. As a consequence of our more general treatment which is not limited to the composite i.i.d. setting we also obtain a generalization of the composite state discrimination results of [1] (Theorem 5). Note, however, that while we do not require provided states or channels to be identical, we still require them to be independent. Hence, our theorems do not aid in determining whether a generalized Stein's lemma holds in cases where the alternative hypothesis is given by a set of non-independent states, as conjectured in [1, 1]. ## 2 Mathematical Preliminaries We write \(\mathcal{H}\) for a complex finite-dimensional Hilbert space, and \(\mathcal{B}\left(\mathcal{H}\right)\) for the set of linear operators acting on \(\mathcal{H}\). We write \(\mathscr{P}\left(\mathcal{H}\right)\) for the set of positive semi-definite operators acting on \(\mathcal{H}\). For \(A,B\in\mathscr{P}\left(\mathcal{H}\right)\), we further write \(A\ll B\) if \(\operatorname{supp}\left(A\right)\subseteq\operatorname{supp}\left(B\right)\) and \(A\not\ll B\) if \(\operatorname{supp}\left(A\right)\not\subseteq\operatorname{supp}\left(B\right)\). Let \(\mathcal{D}\left(\mathcal{H}\right)\) denote the set of density matrices, i.e., the set of positive semi-definite operators with trace one. A quantum channel (in this paper usually denoted as \(\mathcal{E}\) or \(\mathcal{F}\)) is a completely positive trace preserving map between density operators. We will label different quantum systems by capital Roman letters (\(A\), \(B\), \(C\), etc.) and often use these letters interchangeably with the corresponding Hilbert space or set of density matrices (i.e., we write \(\rho\in\mathcal{D}\left(A\right)\) instead of \(\rho\in\mathcal{D}\left(\mathcal{H}_{A}\right)\) and \(\mathcal{E}:A\to B\) instead of \(\mathcal{E}:\mathcal{D}\left(\mathcal{H}_{A}\right)\to\mathcal{D}\left( \mathcal{H}_{B}\right)\)). We will also concatenate these letters to mean tensor products of systems, i.e. we will write \(\rho\in\mathcal{D}\left(RA\right)\) for \(\rho\in\mathcal{D}\left(\mathcal{H}_{R}\otimes\mathcal{H}_{A}\right)\). 
We write \(\operatorname{CPTP}(A\to B)\) for the set of all completely positive trace preserving maps from \(\mathcal{D}\left(\mathcal{H}_{A}\right)\) to \(\mathcal{D}\left(\mathcal{H}_{B}\right)\). Throughout the paper we will write \(\mathcal{X}\) and \(\mathcal{Y}\) for classical systems. A classical state \(\rho\in\mathcal{D}\left(\mathcal{X}\right)\) is then diagonal in the computational basis, and we write \(\operatorname{CPTP}(\mathcal{X}\to\mathcal{Y})\) for the set of classical channels. For any subset \(A\) of a vector space, we will write \(\mathcal{C}(A)\) for the convex hull of \(A\). We write \(\log\) for the logarithm to the base two. \begin{table} \begin{tabular}{l|c|c|c|c|c|l} **Hypotheses** & **Asymptotic Parallel Exponent** & & **Adaptive Exponent** & & **Upper Bound** & **Shown in** \\ \hline Quantum Simple & \(D_{\operatorname{reg}}(\mathcal{E}\|\mathcal{F})\) & \(=\) & \(D_{A}(\mathcal{E}\|\mathcal{F})\) & \(=\) & \(D_{A}(\mathcal{E}\|\mathcal{F})\) & (prior work) \\ \hline Classical Composite, Convex Sets & \(\min\limits_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}\|\mathcal{F})\) & \(=\) & \(\min\limits_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}\|\mathcal{F})\) & \(=\) & \(\min\limits_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}\|\mathcal{F})\) & Thm. 13 \\ \hline Classical Composite, Finite Sets & \(\max\limits_{\nu}\min\limits_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))\) & \(<\) (i.g.) & ? & \(\leq\) & \(\min\limits_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}\|\mathcal{F})\) & Prop. 14, Exp. 12 \\ \hline Quantum Composite & \(\lim\limits_{n\to\infty}\frac{1}{n}\min\limits_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}D(\mathcal{E}_{n}\|\mathcal{F}_{n})\) & \(<\) (i.g.) & ? & \(\leq\) & \(\min\limits_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D_{A}(\mathcal{E}\|\mathcal{F})\) & Thm. 9, Prop. 10, Rem. 11, Exp. 12 \\ \end{tabular} \end{table} Table 1: Illustration of the relation between adaptive and parallel type II error exponents for various channel discrimination tasks. For the composite problems the task is to discriminate between two sets of channels \(\mathcal{S}\) and \(\mathcal{T}\) and the table also includes an upper bound based on the worst-case simple i.i.d. problem. “Quantum Simple” refers to the quantum channel discrimination problem with simple hypotheses. With “Classical” we mean that all channels are classical, and “Convex Sets” or “Finite Sets” refers to whether the sets of channels \(\mathcal{S}\) and \(\mathcal{T}\) are convex or finite. Please see the respective theorems for a general formulation of the results and a precise definition of the quantities involved; \(\mathcal{C}\) denotes the convex hull. We write i.g. to denote that these inequalities will be strict in general, although there exist specific examples where equality holds.
### Measurements and POVMs Throughout this paper we will treat measurements and POVMs as quantum-classical channels, i.e. we associate a POVM specified through the operators \(\{M_{i}\}_{i=1}^{n}\subset\mathcal{B}\left(\mathcal{H}\right)\) with the quantum-classical channel \[\mathcal{M}:\mathcal{D}\left(\mathcal{H}\right)\to\mathcal{D}\left(\mathbb{C}^{n}\right)\qquad\rho\mapsto\sum_{i=1}^{n}\operatorname{Tr}(\rho M_{i})\,|i\rangle\!\langle i|. \tag{1}\] ### Quantum Information Measures For \(\rho\in\mathcal{D}\left(\mathcal{H}\right)\) and \(\sigma\in\mathscr{P}\left(\mathcal{H}\right)\) the (Umegaki) quantum relative entropy is defined as [10] \[D(\rho\|\sigma)\coloneqq\operatorname{Tr}(\rho(\log\rho-\log\sigma)), \tag{2}\] if \(\rho\ll\sigma\) and \(D(\rho\|\sigma)\coloneqq\infty\) if \(\rho\not\ll\sigma\). One of its most important properties is the data-processing inequality [11], which states that for every quantum channel \(\mathcal{E}\): \[D(\rho\|\sigma)\geq D(\mathcal{E}(\rho)\|\mathcal{E}(\sigma))\,. \tag{3}\] A self-contained proof can be found e.g. in [13]. More generally, we call a function of \(\rho\) and \(\sigma\) a divergence if it satisfies the data-processing inequality. We can also define the measured relative entropy as the maximal classical relative entropy when measuring both states with some POVM. Specifically \[D_{M}(\rho\|\sigma)\coloneqq\sup_{\mathcal{M}\text{ POVM}}D(\mathcal{M}(\rho)\|\mathcal{M}(\sigma)) \tag{4}\] where \(\mathcal{M}\) is a POVM (with arbitrarily many outcomes) interpreted as a quantum-classical channel as outlined above. For \(\rho\in\mathcal{D}\left(\mathcal{H}\right)\) and \(\sigma\in\mathscr{P}\left(\mathcal{H}\right)\), define the quantum max-divergence (or the max-relative entropy) as [12] \[D_{\max}(\rho\|\sigma)\coloneqq\log\inf\left\{\,\lambda\in\mathbb{R}\mid\rho\leq\lambda\sigma\,\right\}\,. \tag{5}\] The quantum max-divergence also satisfies the data-processing inequality [12]. We will also frequently use the hypothesis testing relative entropy, which for \(\rho,\sigma\in\mathcal{D}\left(\mathcal{H}\right)\) is defined as follows [14] \[D_{H}^{\varepsilon}(\rho\|\sigma)\coloneqq-\log\min_{\begin{subarray}{c}0\leq M\leq\mathds{1}_{\mathcal{H}}\\ \operatorname{Tr}(M\rho)\geq 1-\varepsilon\end{subarray}}\operatorname{Tr}(\sigma M)\,. \tag{6}\]
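As a small worked illustration of these quantities (the specific states below are our own illustrative choice and do not appear in the original text), consider the commuting qubit states \(\rho=\frac{1}{2}|0\rangle\!\langle 0|+\frac{1}{2}|1\rangle\!\langle 1|\) and \(\sigma=\frac{1}{4}|0\rangle\!\langle 0|+\frac{3}{4}|1\rangle\!\langle 1|\). Then
\[D(\rho\|\sigma)=\tfrac{1}{2}\log\tfrac{1/2}{1/4}+\tfrac{1}{2}\log\tfrac{1/2}{3/4}=1-\tfrac{1}{2}\log 3\approx 0.21,\qquad D_{\max}(\rho\|\sigma)=\log\max\Big\{\tfrac{1/2}{1/4},\tfrac{1/2}{3/4}\Big\}=1,\]
in line with the general ordering \(D_{M}\leq D\leq D_{\max}\); since the states commute, measuring in the common eigenbasis is optimal and \(D_{M}(\rho\|\sigma)=D(\rho\|\sigma)\) here.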
with \(\mathcal{E},\mathcal{F}:A\to B\) being quantum channels \[\mathbf{D}(\mathcal{E}\|\mathcal{F})\coloneqq\sup_{\rho_{RA}\in\mathcal{D}(R\otimes A)}\mathbf{D}\big{(}(\mathrm{id}_{R}\otimes\mathcal{E})(\rho)\|(\mathrm{id}_{R}\otimes\mathcal{F})(\rho)\big{)}. \tag{7}\] Since \(\mathbf{D}\) satisfies the data-processing inequality by definition, the supremum can be restricted to pure states such that the reference system \(R\) is isomorphic to the channel input system \(A\) (this is shown below as Lemma 17). For the Umegaki relative entropy \(D\), we can also define the regularized and amortized [Wil\({}^{+}\)20] channel divergences as \[D_{\text{reg}}(\mathcal{E}\|\mathcal{F})\coloneqq\lim_{n\to\infty}\frac{1}{n}D(\mathcal{E}^{\otimes n}\|\mathcal{F}^{\otimes n}), \tag{8}\] \[D_{A}(\mathcal{E}\|\mathcal{F})\coloneqq\sup_{\begin{subarray}{c}\rho,\sigma\in\mathcal{D}(RA)\\ R\text{ arbitrary}\end{subarray}}\left[D(\mathcal{E}(\rho)\|\mathcal{F}(\sigma))-D(\rho\|\sigma)\right]. \tag{9}\] Note that for the amortized divergence, there is no known way in which the size of the reference system can be restricted.

## 3 Composite State Discrimination

In simple quantum state discrimination, given \(n\) identical copies of an unknown state which is promised to be either \(\rho\) or \(\sigma\), the task is to decide which of the two options it is. In composite quantum state discrimination, we are only promised that the states are all from one of two sets \(S\) or \(T\), and the task is to decide which set they come from (but not to further identify which state exactly was provided). Since there are now multiple states for each hypothesis, there are multiple possible scenarios for how the \(n\) input states one receives are related: We could still be given \(n\) identical copies of a state, or alternatively, we could be given \(n\) completely different states but all from the same set \(S\) or \(T\), or something in between, where the states are non-identical but still related. We would like to cover all these different scenarios in our analysis, and hence we will describe composite hypotheses as sequences of sets \(S_{n}\) which include all the possible combinations of \(n\) states we could get. We will make some small assumptions on these sets:

**Definition 1**.: _For the purpose of this work, a composite quantum state hypothesis (in the asymptotic setting) is a sequence of sets of states_ \[\mathcal{S}=(S_{n}\subset\mathcal{D}\left(\mathcal{H}^{\otimes n}\right))_{n}\] _such that_ 1. _Each set_ \(S_{n}\) _is topologically closed._ 2. _Each element_ \(\rho_{n}\in S_{n}\) _is a tensor product of states_ \(\rho_{n}=\rho^{(1)}\otimes\ldots\otimes\rho^{(n)}\)_, with each_ \(\rho^{(i)}\in\mathcal{D}\left(\mathcal{H}\right)\) _for_ \(i=1,...,n\)_._ 3. _The sets_ \(S_{n}\) _are closed under tracing out any subsystem, i.e. for any_ \(i=1,...,n\) _and_ \(\rho_{n}\in S_{n}\) _we have that_ \(\operatorname{Tr}_{i}(\rho_{n})\in S_{n-1}\)_, where_ \(\operatorname{Tr}_{i}\) _denotes the partial trace over the_ \(i^{th}\) _subsystem._ 4. _Each set_ \(S_{n}\) _is closed under permutation of the_ \(n\) _subsystems, i.e. for any permutation_ \(\pi\in S(n)\) _and associated canonical unitary representation_ \(\Pi\)_, we have for all_ \(\rho_{n}\in S_{n}\)_:_ \(\Pi\rho_{n}\Pi^{\dagger}\in S_{n}\)_._ Interesting examples of this include: 1. The composite i.i.d.
case: We have two sets \(S\), \(T\subset\mathcal{D}\left(\mathcal{H}\right)\), and are given \(n\) identical copies of an element from \(S\) if the null hypothesis is true, and \(n\) identical copies of an element from \(T\) if the alternate hypothesis is true. This corresponds to: \[S_{n}\coloneqq\{\,\rho^{\otimes n}\mid\rho\in S\,\},\] (10) \[T_{n}\coloneqq\{\,\sigma^{\otimes n}\mid\sigma\in T\,\}.\] (11) 2. The arbitrarily varying case: This is similar to the composite i.i.d. case, but we are not given \(n\) identical copies, but \(n\) (potentially different) elements from \(S\) or \(T\). This corresponds to: \[S_{n}\coloneqq\{\,\rho_{1}\otimes\ldots\otimes\rho_{n}\mid\rho_{1},\ldots\rho_{n}\in S\,\}\,,\] (12) \[T_{n}\coloneqq\{\,\sigma_{1}\otimes\ldots\otimes\sigma_{n}\mid\sigma_{1},\ldots\sigma_{n}\in T\,\}\,.\] (13) 3. The slightly-varying case: This is an example of a scenario that lies in between the arbitrarily varying case (where there is no correlation between the samples, except for them all being in the same set) and the composite i.i.d. case (where there is maximal correlation between the samples, as they are all identical). For any given \(\varepsilon\in[0,1]\) (which might depend on \(n\)) and any distance function \(d:\mathcal{D}\left(\mathcal{H}\right)\times\mathcal{D}\left(\mathcal{H}\right)\rightarrow[0,1]\) (e.g. trace distance or purified distance) set \[S_{n}\coloneqq\{\,\rho_{1}\otimes\ldots\otimes\rho_{n}\mid\rho_{1},\ldots\rho_{n}\in S,\quad d(\rho_{i},\rho_{j})\leq\varepsilon\;\forall i,j\,\}\,,\] (14) \[T_{n}\coloneqq\{\,\sigma_{1}\otimes\ldots\otimes\sigma_{n}\mid\sigma_{1},\ldots\sigma_{n}\in T,\quad d(\sigma_{i},\sigma_{j})\leq\varepsilon\;\forall i,j\,\}\,.\] (15) 4. The simple i.i.d. case: The simple i.i.d. case can be seen as a special case of the above where \(S\) and \(T\) each contain one element.

**Lemma 2**.: _If \(\mathcal{S}\) is a composite quantum state hypothesis, then performing a measurement on any (joint) \(k\) subsystems of a state \(\rho_{n}\in S_{n}\), and conditioning on the measurement result, yields a state \(\omega_{n-k}\) on the remaining subsystems that is, after normalization, an element of \(S_{n-k}\). Precisely, let \(k\in\{1,...,n\}\), \(\rho_{n}\in S_{n}\subset\mathcal{D}\left(\mathcal{H}^{\otimes n}\right)\) and \(M\in\mathcal{B}\left(\mathcal{H}^{\otimes k}\right)\) with \(0\leq M\leq\mathbbm{1}\). If_ \[\omega_{n-k}\coloneqq\operatorname{Tr}_{1,...,k}\left[(M\otimes\mathbbm{1}_{\mathcal{H}_{k+1}}\otimes...\otimes\mathbbm{1}_{\mathcal{H}_{n}})\rho_{n}\right] \tag{16}\] _then either \(\operatorname{Tr}(\omega_{n-k})=0\) or \(\omega_{n-k}/\operatorname{Tr}(\omega_{n-k})\in S_{n-k}\)._

Proof.: This follows immediately from the fact that each element \(\rho_{n}\in S_{n}\) is a tensor product, and that removing an element in the tensor product gives an element in \(S_{n-1}\).

**Remark 3**.: Note that while we state the results below with the assumptions of Definition 1, we could replace the required tensor product structure (point 2 in Definition 1) with the statement of Lemma 2. While this might be more general, we find the assumptions of Definition 1 to be more natural. Similarly, later on when talking about hypotheses in the context of channel discrimination, we could replace the tensor product structure for channels (point 2 in Definition 8) with the statement of Lemma 2 for any tensor product input state.
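Before moving on, it may help to see the first two scenarios side by side in code. The following is a minimal numerical sketch (our own Python/NumPy illustration; the base set \(S\) and all function names are toy choices, not objects from the paper) that builds the level-2 sets for the composite i.i.d. and arbitrarily varying cases and spot-checks the permutation and partial-trace closure conditions of Definition 1.

```python
import itertools
import numpy as np

# A toy finite base set S of qubit density matrices (our own choice).
rho_a = np.diag([1.0, 0.0])                      # |0><0|
rho_b = np.array([[0.5, 0.5], [0.5, 0.5]])       # |+><+|
S = [rho_a, rho_b]

def tensor(states):
    """n-fold tensor product of a list of density matrices."""
    out = states[0]
    for s in states[1:]:
        out = np.kron(out, s)
    return out

def iid_level(S, n):
    """Composite i.i.d. case: S_n = { rho^(tensor n) : rho in S }."""
    return [tensor([rho] * n) for rho in S]

def varying_level(S, n):
    """Arbitrarily varying case: all n-fold tensor products over S."""
    return [tensor(list(c)) for c in itertools.product(S, repeat=n)]

S2_iid = iid_level(S, 2)        # 2 states
S2_var = varying_level(S, 2)    # 4 states, contains S2_iid as a subset

# Condition 4 of Definition 1 (permutation closure) for n = 2: SWAP.
SWAP = np.eye(4)[[0, 2, 1, 3]]
for state in S2_var:
    assert any(np.allclose(SWAP @ state @ SWAP, s) for s in S2_var)

# Condition 3 (closure under partial trace): tracing out the first
# qubit of any element of S_2 gives back an element of S_1 = S.
for state in S2_var:
    reduced = np.einsum('ikil->kl', state.reshape(2, 2, 2, 2))
    assert any(np.allclose(reduced, s) for s in S)
```

Note that the i.i.d. sets in this sketch would fail the analogous checks only if the base set were not rich enough; both closure conditions hold here because every element is a tensor product over the fixed base set, exactly as Definition 1 demands.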
For our discrimination problem, given an \(n\) and an unknown state in \(\mathcal{D}\left(\mathcal{H}^{\otimes n}\right)\) we will want to perform a binary POVM (fully specified by one of its elements, which we write as \(M\)) to decide between the two hypotheses. In the end we want to avoid making an error, i.e. claiming that our state comes from \(S_{n}\) when it actually comes from \(T_{n}\) and vice-versa. These two errors are known as type I and type II errors. If we settle on a measurement \(M\), the probability of making an error might still depend on which particular state from either \(S_{n}\) or \(T_{n}\) we actually end up getting. Here we will be focussing on minimizing the worst case errors, i.e. we want to choose measurements which minimize the error uniformly over all states from \(S_{n}\) and \(T_{n}\). More formally, we define the type I and type II error probabilities (also called type I and type II errors) as: \[\alpha(M,S_{n}) =\sup_{\rho\in S_{n}}\operatorname{Tr}((\mathbbm{1}-M)\rho) \tag{17}\] \[\beta(M,T_{n}) =\sup_{\sigma\in T_{n}}\operatorname{Tr}(M\sigma)\,. \tag{18}\] We will be focussing on the asymmetric setting, where we want to minimize the type II error, \(\beta\), under the constraint that the type I error \(\alpha\) is below a certain threshold. The main quantity of interest is then the negative logarithm of this minimal type II error under the type I error constraint, which we also call the hypothesis testing relative entropy of the two sets \(S_{n}\) and \(T_{n}\): \[D_{H}^{\varepsilon}(S_{n}\|T_{n})\coloneqq-\inf_{\begin{subarray}{c}0\leq M \leq 1\\ \alpha(M,S_{n})\leq\varepsilon\end{subarray}}\log\beta(M,T_{n})\,. \tag{19}\] As the expression in (17) is linear in \(\rho\), it is easy to see that \[\alpha(M,\mathcal{C}(S_{n}))=\alpha(M,S_{n}) \tag{20}\] (remember that \(\mathcal{C}\) is the convex hull) as the supremum will be achieved at an extremal point, and the same holds also for \(\beta\). Hence, \[D_{H}^{\varepsilon}(S_{n}\|T_{n})=D_{H}^{\varepsilon}(S_{n}\|\mathcal{C}(T_{n }))=D_{H}^{\varepsilon}(\mathcal{C}(S_{n})\|T_{n})=D_{H}^{\varepsilon}( \mathcal{C}(S_{n})\|\mathcal{C}(T_{n})) \tag{21}\] and hence the discrimination task considered is equivalent to discriminating between these convex hulls of the sets \(S_{n}\) and \(T_{n}\). We are interested in the quantum Stein exponent for this discrimination task, i.e. the optimal exponential decay rate of the type II error in the limit of infinitely many copies of the state, given that the type I error also goes to zero asymptotically. This Stein exponent can be expressed as \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(S_{n}\|T_{n})\,. \tag{22}\] In [2], this problem was studied specifically in the composite i.i.d. case, i.e. with \(S_{n}=\{\,\rho^{\otimes n}\mid\rho\in S\,\}\), \(T_{n}=\{\,\sigma^{\otimes n}\mid\sigma\in T\,\}\), where \(S,T\subset\mathcal{D}\left(\mathcal{H}\right)\) were also assumed to be closed and convex, leading to: **Theorem 4** ([2]).: _Let \(S,T\) be closed and convex, and define for all \(n\): \(S_{n}\coloneqq\{\,\rho^{\otimes n}\mid\rho\in S\,\}\), \(T_{n}\coloneqq\{\,\sigma^{\otimes n}\mid\sigma\in T\,\}\). 
Then_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(S_{n}\|T_{n})=\lim_{n\to\infty}\frac{1}{n}\inf_{\begin{subarray}{c}\rho_{n}\in\mathcal{S}_{n}\\ \sigma_{n}\in\mathcal{C}(T_{n})\end{subarray}}D(\rho_{n}\|\sigma_{n})\,, \tag{23}\] _and one can find cases where this is strictly smaller than_ \[\inf_{\begin{subarray}{c}\rho\in S\\ \sigma\in T\end{subarray}}D(\rho\|\sigma)\,. \tag{24}\]

Remember that \(\mathcal{C}\) stands for the convex hull, and it is precisely this convex hull in the infimum on the right-hand side of (23) which prevents the regularization from collapsing, as the elements of \(S_{n}\) and \(T_{n}\) are tensor products, and the relative entropy is additive. Without this convex hull, the exponent (23) would be exactly equal to the single-letter expression (24), which we call the worst-case i.i.d. exponent, as it is equal to the exponent of the worst-case simple i.i.d. problem. Intuitively, one pays for the compositeness by having to include convex combinations in the Stein exponent, and this makes the discrimination problem strictly harder in some cases. As a consequence of our channel discrimination result further below (Theorem 9), we will arrive at the following generalization of Theorem 4:

**Theorem 5**.: _Let \(\boldsymbol{S}=(S_{n})_{n},\boldsymbol{T}=(T_{n})_{n}\) be two composite quantum state hypotheses. Then_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(S_{n}\|T_{n})=\lim_{n\to\infty}\frac{1}{n}\min_{\begin{subarray}{c}\rho_{n}\in\mathcal{C}(S_{n})\\ \sigma_{n}\in\mathcal{C}(T_{n})\end{subarray}}D(\rho_{n}\|\sigma_{n})\,. \tag{25}\] _Furthermore, if each \(S_{n}\) lies in a linear subspace of \(\mathcal{D}\left(\mathcal{H}^{\otimes n}\right)\) with dimension polynomial in \(n\) (this holds for example in the composite i.i.d. case, where each \(\rho_{n}\in S_{n}\) is permutation invariant), then we can remove the first convex hull to get_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(S_{n}\|T_{n})=\lim_{n\to\infty}\frac{1}{n}\min_{\begin{subarray}{c}\rho_{n}\in\mathcal{S}_{n}\\ \sigma_{n}\in\mathcal{C}(T_{n})\end{subarray}}D(\rho_{n}\|\sigma_{n})\,. \tag{26}\]

Proof.: This follows as a special case of Theorem 9 below.

**Remark 6**.: Our Theorem 5 generalizes the previous Theorem 4 in multiple ways: Already in the composite i.i.d. setting it no longer requires the sets \(\mathcal{S}\) and \(\mathcal{T}\) to be convex. Additionally our theorem also includes all the non-i.i.d. cases such as the arbitrarily (or slightly) varying cases defined above.

### Classical Adversarial Hypothesis Testing

Similar to [1, 1] our results are based on a reduction to a classical problem, the one of adversarial hypothesis testing. The following is a brief recapitulation of the treatment of adversarial hypothesis testing in [1]. Let \(P,Q\subset\mathbb{R}^{\Omega}\) (for a finite domain \(\Omega\)) be two sets of probability distributions. In the typical composite i.i.d. setting, we are presented with \(n\) samples from a distribution in \(P\) or \(Q\) and have to make a decision which set the distribution comes from. In the adversarial setting, the adversary is allowed to change the distribution within \(P\) or \(Q\) for each sample, and he can make this change based on the samples we observed previously.
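As a concrete illustration of this sampling process, the following is a minimal Python sketch (our own toy example; the distributions and the particular adversary are invented for illustration) that draws a sample string in which the distribution used for each sample is chosen by an adversary as a function of the previously observed samples, and computes the resulting probability of a given string.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy null hypothesis: two distributions over the binary alphabet.
P = [np.array([0.9, 0.1]), np.array([0.5, 0.5])]

def adversary(past):
    """Selects the next distribution in P from the samples seen so far.
    This toy adversary switches distribution whenever a 1 was observed."""
    return P[1] if past and past[-1] == 1 else P[0]

def sample_string(n):
    """Draws x_1, ..., x_n; the k-th sample is drawn from the
    distribution the adversary picked after seeing x_1 .. x_{k-1}."""
    past = []
    for _ in range(n):
        past.append(int(rng.choice(2, p=adversary(past))))
    return past

def string_prob(x):
    """Probability of the string x under this adversary: the product
    of the per-sample probabilities, each conditioned on the past."""
    prob, past = 1.0, []
    for xk in x:
        prob *= adversary(past)[xk]
        past.append(xk)
    return prob

x = sample_string(10)
print(x, string_prob(x))
```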
Note that while the adversary has access to the previous samples, he can only select a probability distribution \(p\in P\) or \(q\in Q\) (depending on which hypothesis is true) for the next sample, but he cannot select the sample outcome itself. The adversary is fully specified by two sets of functions \(\hat{p}_{k}:\Omega^{k-1}\to P\) and \(\hat{q}_{k}:\Omega^{k-1}\to Q\), which for each \(k\) specify how the adversary picks the next probability distribution based on the previous \(k-1\) sample outcomes (\(\hat{p}_{k}\) is used if the null hypothesis is true, and \(\hat{q}_{k}\) is used if the alternate hypothesis is true). The probability of a sample string \(\mathbf{x}\in\Omega^{n}\) is then given by \[\hat{p}(\mathbf{x})\coloneqq\prod_{k=1}^{n}\hat{p}_{k}(x_{1},\ldots,x_{k-1})(x_{k})\,. \tag{27}\] For any decision region \(A_{n}\subset\Omega^{n}\), the type I and type II errors are then going to be the worst-case errors over all adversarial strategies. We define the corresponding \(n\)-shot error exponent as \[D_{\mathrm{adv},n}^{\varepsilon}(P\|Q)=-\log\inf\left\{\,\sup_{\hat{q}}\hat{q}(A_{n})\,\,\Bigg{|}\,\,A_{n}\subset\Omega^{n},\,\,\sup_{\hat{p}}\hat{p}(A_{n}^{c})\leq\varepsilon\,\right\}\,. \tag{28}\] The key statement of [1] is that if the sets \(P\) and \(Q\) are closed and convex, adversarial hypothesis testing is asymptotically no harder than the worst-case i.i.d. setting, specifically:

**Theorem 7** ([1, Theorem 2]).: _Let \(\Omega\) be a finite domain and \(P,Q\subset\mathbb{R}^{\Omega}\) be two closed, convex sets of probability distributions. Then, for any \(\varepsilon\in(0,1)\):_ \[\lim_{n\to\infty}\frac{1}{n}D_{\mathrm{adv},n}^{\varepsilon}(P\|Q)=\min_{p\in P,q\in Q}D(p\|q)\,. \tag{29}\]

Note that since we are taking the supremum over all adversaries, by picking an adversary that deterministically picks distributions in a certain sequence, this result implies that also any composite problem is classically asymptotically equally as hard as the worst-case i.i.d. problem (it is also easy to see that the composite problem cannot be simpler than the worst-case i.i.d. problem).

## 4 Composite Channel Discrimination

The task of composite channel discrimination is very similar in nature to composite state discrimination, but also considerably harder to study: Given an unknown quantum channel as a black box and the side information that it comes from one of two sets of possible channels, the task is again to determine the set (but not necessarily the exact identity) of the channel. The additional complexity here comes from the fact that, on top of finding the best measurement to perform on the output of the channel, we also have to figure out which quantum states to send as inputs to the channel. If we are given access to the black box multiple times (or say we are given multiple black-boxes) the problem becomes considerably more interesting, as the channel inputs could be chosen based on previous channel outputs. Say we are given access to \(n\) black-boxes (we will allow for the case where not all black-boxes are identical and will specify further below what scenarios exactly we consider; intuitively though, the scenario is always that we want to distinguish \(n\) black-boxes from one set from \(n\) black-boxes from the other set). There are now different strategies (sometimes also called protocols) in which we could set up our decision experiment - the so-called _parallel_ and _adaptive_ strategies.
In a parallel strategy one prepares a joint state, usually entangled between the input systems of all the \(n\) channels and an additional reference (or memory) system. This state is then fed as input to all the \(n\) channels at once (with the state of the reference system being left undisturbed). Finally, a binary positive operator-valued measure (POVM) is performed on the joint state at the output of the channels and the reference system in order to arrive at a decision. In an adaptive strategy, on the other hand, one prepares a state of the input system of a single channel (again usually entangled with a reference system) which is fed into the first channel, with the state of the reference system being left undisturbed. The input to the next use of the channel is then chosen depending on the output of the first channel and the state of our reference system. This is done, most generally, by subjecting the latter to an arbitrary quantum operation (or channel), which we call a preparation operation. This step is repeated for each successive black-box channel until all the \(n\) black-boxes have been used. Then a binary POVM is performed on the joint state of the output of the last use of the channel and the reference system. See Figure 1 for a depiction of an adaptive strategy. Adaptive strategies are also sometimes called sequential, which is, however, not to be confused with the setting of sequential hypothesis testing [11, 12, 13], where samples (i.e. states or channels) can be requested one by one. One particularly interesting question is whether and to what degree adaptive strategies give an advantage over parallel ones. Note that any parallel strategy can be written as an adaptive strategy by taking all but one channel input as part of the reference system, and then choosing each preparation operation such that it extracts the next part of the joint input state for the next channel use and replaces it by the output of the previous channel use. However, the converse is not true, and so adaptive strategies are more general. Parallel strategies are conceptually a lot simpler than adaptive ones - aside from the measurement, everything is specified just by the joint input state - in contrast to adaptive strategies, in which after each channel use we can perform an arbitrary quantum operation to prepare the input to the next use of the channel. It is thus interesting to determine to what degree parallel strategies can still be optimal. This problem has been studied extensively for channel discrimination with simple hypotheses, where it is known that in certain cases adaptive strategies can give an advantage over parallel ones. In [11] the authors constructed an example in which an adaptive strategy with only two channel uses could be used to discriminate two channels with certainty, which is shown not to be possible with a parallel strategy, even if arbitrarily many channel uses are allowed. Asymptotically, however, it was shown that in the simple binary asymmetric case adaptive and parallel strategies are equivalent [10, 12, 13]. We will show below that this fails to stay the case with composite hypotheses, already classically. Specifically, in this section we will study the following: 1. We start with a treatment of parallel channel discrimination strategies, where we provide matching achievability and converse bounds for the Stein exponent in terms of a regularized expression (Theorem 9), in analogy to what has previously been shown [1] for state discrimination (i.e. Theorem 4). 2. 
We prove an upper bound on the Stein exponent for adaptive strategies (Proposition 10), where we show that this upper bound can sometimes but not always be achieved, and can also be larger than the parallel exponent (Example 12), hence demonstrating that adaptive strategies can sometimes be advantageous (we show this even classically). 3. We show that classically, under an additional convexity assumption which was not satisfied in the previous example, parallel and adaptive strategies are asymptotically equivalent in the asymmetric composite setting, and the Stein exponent is given by a single-letter entropic formula (Theorem 13). 4. We further show classically, and in some further restricted setting, that if we replace the convexity assumption with a finiteness assumption, we can still get a single-letter entropic expression for the Stein exponent for parallel strategies (Proposition 14). Following the above discussion for composite state discrimination, we want to apply a similar level of generality to discriminating channels, where we want to allow the \(n\) black-boxes to not be identical. Hence, in analogy with Definition 1 we will work with general hypotheses satisfying the following conditions:

**Definition 8**.: _For the purpose of this work, a composite quantum channel hypothesis (in the asymptotic setting) is a sequence of sets of channels_ \[\boldsymbol{\mathcal{S}}=(\mathcal{S}_{n}\subset\mathrm{CPTP}(A^{n}\to B^{n}))_{n}\] _such that_ 1. _Each set_ \(\mathcal{S}_{n}\) _is topologically closed._ 2. _Each element_ \(\mathcal{E}_{n}\in\mathcal{S}_{n}\) _is a tensor product of channels_ \(\mathcal{E}_{n}=\mathcal{E}^{(1)}\otimes\ldots\otimes\mathcal{E}^{(n)}\)_, with_ \(\mathcal{E}^{(i)}\in\mathrm{CPTP}(A\to B)\)_, for_ \(i=1,...,n\)_._ 3. _For every_ \(\mathcal{E}_{n}=\mathcal{E}^{(1)}\otimes\ldots\otimes\mathcal{E}^{(n)}\in\mathcal{S}_{n}\)_, removing any element in the tensor product yields an element in_ \(\mathcal{S}_{n-1}\)_._ 4. _Each set_ \(\mathcal{S}_{n}\) _is closed under permuting the_ \(n\) _subsystems of the input and output systems of a channel, i.e. for any permutation_ \(\pi\in S(n)\) _and associated canonical unitary representations_ \(\Pi_{A}\) _and_ \(\Pi_{B}\) _on_ \(A^{n}\) _and_ \(B^{n}\)_, we have for all_ \(\mathcal{E}_{n}\in\mathcal{S}_{n}\) _that also the permuted channel_ \(\rho\mapsto\Pi_{B}\,\mathcal{E}_{n}(\Pi_{A}^{\dagger}\rho\Pi_{A})\,\Pi_{B}^{\dagger}\) _is an element of_ \(\mathcal{S}_{n}\)_._ One can then define the same scenarios, such as the composite i.i.d. setting, the arbitrarily varying setting, and slightly varying settings, as we did for composite state discrimination in a completely analogous way for composite channel discrimination.

### The Parallel Case

Given a set of channels \(\mathcal{A}\) and an input state \(\nu\in\mathcal{D}\left(RA\right)\) (where \(R\) could be any system, possibly also just trivial), we define the set of all output states as \[\mathcal{A}[\nu]\coloneqq\{\,(\mathrm{id}_{R}\otimes\mathcal{E})(\nu)\mid\mathcal{E}\in\mathcal{A}\,\}. \tag{30}\] Similar to the state discrimination problem above, we will be looking for the best input state \(\nu_{n}\) and measurement \(M\) such that _for all_ \(\mathcal{E}_{n}\in\mathcal{S}_{n}\) the error of claiming it comes from \(\mathcal{T}_{n}\) (i.e. the type I error) stays below some threshold \(\varepsilon\) and we otherwise minimize the worst case type II error, i.e.
we want to make sure that the probability of claiming an element \(\mathcal{F}_{n}\in\mathcal{T}_{n}\) to be from \(\mathcal{S}_{n}\) is as low as possible uniformly over all \(\mathcal{F}_{n}\in\mathcal{T}_{n}\). Given an input state \(\nu\), the parallel channel discrimination problem turns into a state discrimination problem, and so we define the following type II error exponent for any \(\mathcal{S}_{n}\) and \(\mathcal{T}_{n}\) which satisfy the properties of Definition 8: \[D_{H}^{\varepsilon}(\mathcal{S}_{n}\|\mathcal{T}_{n})\coloneqq\sup_{\nu\in\mathcal{D}(RA^{n})}D_{H}^{\varepsilon}(\mathcal{S}_{n}[\nu]\|\mathcal{T}_{n}[\nu])=\sup_{\nu\in\mathcal{D}(RA^{n})}\sup_{\begin{subarray}{c}0\leq M\leq 1\\ \alpha(M,\mathcal{S}_{n}[\nu])\leq\varepsilon\end{subarray}}(-\log\beta(M,\mathcal{T}_{n}[\nu])) \tag{31}\] \[e_{P}(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})\coloneqq\frac{1}{n}D_{H}^{\varepsilon}(\mathcal{S}_{n}\|\mathcal{T}_{n})\,. \tag{32}\] It is easy to see that \(\mathcal{C}(\mathcal{A}[\nu])=\mathcal{C}(\mathcal{A})[\nu]\), and hence as above: \[D_{H}^{\varepsilon}(\mathcal{S}\|\mathcal{T})=D_{H}^{\varepsilon}(\mathcal{S}\|\mathcal{C}(\mathcal{T}))=D_{H}^{\varepsilon}(\mathcal{C}(\mathcal{S})\|\mathcal{T})=D_{H}^{\varepsilon}(\mathcal{C}(\mathcal{S})\|\mathcal{C}(\mathcal{T}))\,. \tag{33}\] Our main theorem of this section is the following:

**Theorem 9**.: _Let \(\boldsymbol{\mathcal{S}}=(\mathcal{S}_{n})_{n},\boldsymbol{\mathcal{T}}=(\mathcal{T}_{n})_{n}\) be two composite quantum channel hypotheses. Then, the quantum Stein exponent of discriminating these two hypotheses with a parallel strategy is given by:_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(\mathcal{S}_{n}\|\mathcal{T}_{n})=\lim_{n\to\infty}\frac{1}{n}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}\max_{\nu\in\mathcal{D}(R\otimes A^{\otimes n})}D(\mathcal{E}_{n}(\nu)\|\mathcal{F}_{n}(\nu))\,, \tag{34}\] _where the \(\min\) and \(\max\) can be exchanged, and one can choose the reference system \(R\) to be isomorphic to \(A^{\otimes n}\) for all \(n\)._

_Furthermore, if each \(\mathcal{S}_{n}\) lies in a linear subspace of \(\mathrm{CPTP}(A^{n}\to B^{n})\) with dimension polynomial in \(n\) (this is for example the case in the composite i.i.d. setting, where all the \(\mathcal{E}_{n}\in\mathcal{S}_{n}\) are permutation covariant), we can also remove one convex hull:_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(\mathcal{S}_{n}\|\mathcal{T}_{n})=\lim_{n\to\infty}\frac{1}{n}\max_{\nu\in\mathcal{D}(R\otimes A^{\otimes n})}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{S}_{n}\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}D(\mathcal{E}_{n}(\nu)\|\mathcal{F}_{n}(\nu))\,, \tag{35}\] _where we however cannot say whether \(\min\) and \(\max\) can be exchanged._

Proof.: This proof is very much inspired by the results for composite state discrimination from [1, Theorem 1.1] and [1, Theorem 16].

**Achievability.** For the achievability part, let \(\varepsilon\in(0,1)\), fix an integer \(k\), and let \(\nu_{k}\in\mathcal{D}\left(RA^{k}\right)\) be an input state, where \(R\) is isomorphic to \(A^{k}\).
Additionally, let \(\mathcal{M}_{k}\) be a POVM measurement on \(RB^{k}\) (where we interpret \(\mathcal{M}_{k}\) as a quantum-classical channel that maps to the probability distribution of measurement outcomes, as specified in subsection 2.1). Define the two sets of classical probability distributions \(P\coloneqq\{\,\mathcal{M}_{k}(\mathcal{E}_{k}(\nu_{k}))\mid\mathcal{E}_{k}\in\mathcal{S}_{k}\,\}\) and \(Q\coloneqq\{\,\mathcal{M}_{k}(\mathcal{F}_{k}(\nu_{k}))\mid\mathcal{F}_{k}\in\mathcal{T}_{k}\,\}\). The operational procedure is now to take an unknown channel from either \(\mathcal{S}_{nk}\) or \(\mathcal{T}_{nk}\), feed it with the input state \(\nu_{k}^{\otimes n}\) and apply the measurement \(\mathcal{M}_{k}^{\otimes n}\) to the outcome. Crucially, due to the assumed structure of the \((\mathcal{S}_{n})_{n}\) and \((\mathcal{T}_{n})_{n}\) (as specified in Definition 8, and written out more explicitly in Lemma 2), the measurement result of each of the individual \(n\) POVM measurements will be distributed according to a \(p\in P\) or \(q\in Q\), also when conditioned on the outcomes of all the other \(n-1\) POVM measurements. Hence, the overall structure of classical outcomes can be seen as an instance of adversarial hypothesis testing with a particular adversary1. For this classical problem, by Theorem 7, the exponent \[\inf_{p\in P,\,q\in Q}D(p\|q) \tag{36}\] is asymptotically achievable as \(n\to\infty\), which just means that \[\liminf_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(\mathcal{S}_{nk}\|\mathcal{T}_{nk})\geq\inf_{p\in P,\,q\in Q}D(p\|q)=\inf_{\begin{subarray}{c}\rho_{k}\in\mathcal{S}_{k}[\nu_{k}]\\ \sigma_{k}\in\mathcal{T}_{k}[\nu_{k}]\end{subarray}}D(\mathcal{M}_{k}(\rho_{k})\|\mathcal{M}_{k}(\sigma_{k}))\,, \tag{37}\] where dividing by \(k\) yields: \[\liminf_{n\to\infty}\frac{1}{nk}D_{H}^{\varepsilon}(\mathcal{S}_{nk}\|\mathcal{T}_{nk})\geq\frac{1}{k}\inf_{\begin{subarray}{c}\rho_{k}\in\mathcal{S}_{k}[\nu_{k}]\\ \sigma_{k}\in\mathcal{T}_{k}[\nu_{k}]\end{subarray}}D(\mathcal{M}_{k}(\rho_{k})\|\mathcal{M}_{k}(\sigma_{k}))\,. \tag{38}\] Now, to obtain a procedure for discriminating \(m\) channels where \(m\) is not a multiple of \(k\), we can just ignore at most \(k-1\) channels so that we are left with a multiple of \(k\) channels and then do the above. This yields a strategy to distinguish \(\mathcal{S}_{m}\) and \(\mathcal{T}_{m}\) for any \(m\) and asymptotically the \(k-1\) discarded channels do not matter, so we get: \[\liminf_{m\to\infty}\frac{1}{m}D_{H}^{\varepsilon}(\mathcal{S}_{m}\|\mathcal{T}_{m})\geq\frac{1}{k}\inf_{\begin{subarray}{c}\rho_{k}\in\mathcal{S}_{k}[\nu_{k}]\\ \sigma_{k}\in\mathcal{T}_{k}[\nu_{k}]\end{subarray}}D(\mathcal{M}_{k}(\rho_{k})\|\mathcal{M}_{k}(\sigma_{k}))\geq\inf_{\begin{subarray}{c}\rho_{k}\in\mathcal{C}(\mathcal{S}_{k}[\nu_{k}])\\ \sigma_{k}\in\mathcal{C}(\mathcal{T}_{k}[\nu_{k}])\end{subarray}}\frac{1}{k}D(\mathcal{M}_{k}(\rho_{k})\|\mathcal{M}_{k}(\sigma_{k}))\,, \tag{39}\] where we added convex hulls on the right-hand side (this just makes the infimum smaller).

Footnote 1: In fact, this problem can also be seen to be at most as hard as a composite hypothesis testing task in the arbitrarily varying case, and a similar statement as Theorem 7 for this composite arbitrarily varying task would be sufficient for our purposes.
We can now take the supremum over all measurements \(\mathcal{M}_{k}\) on the right-hand side, and by [1, Lemma 13], we can exchange this supremum with the already present infimum, to find \[\liminf_{m\to\infty}\frac{1}{m}D_{H}^{\varepsilon}(\mathcal{S}_{m}\|\mathcal{ T}_{m})\geq\inf_{\begin{subarray}{c}\rho_{k}\in\mathcal{C}(\mathcal{S}_{k}[\nu_{k}] )\\ \sigma_{k}\in\mathcal{C}(\mathcal{T}_{k}[\nu_{k}])\end{subarray}}\frac{1}{k}D_{ M}(\rho_{k}\|\sigma_{k})\,. \tag{40}\] Note that [1, Lemma 13] requires the infimum to be over a convex set, which is why we introduced convex hulls in the previous step. Additionally, we now take the supremum over \(\nu_{k}\) to find \[\liminf_{m\to\infty}\frac{1}{m}D_{H}^{\varepsilon}(\mathcal{S}_{m }\|\mathcal{T}_{m}) \geq\sup_{\nu_{k}\in\mathcal{D}\big{(}RA^{k}\big{)}}\inf_{ \begin{subarray}{c}\rho_{k}\in\mathcal{C}(\mathcal{S}_{k}[\nu_{k}])\\ \sigma_{k}\in\mathcal{C}(\mathcal{T}_{k}[\nu_{k}])\end{subarray}}\frac{1}{k}D_{ M}(\rho_{k}\|\sigma_{k}) \tag{41}\] \[=\sup_{\nu_{k}\in\mathcal{D}\big{(}RA^{k}\big{)}}\inf_{ \begin{subarray}{c}\mathcal{E}_{k}\in\mathcal{C}(\mathcal{S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\frac{1}{k}D_{M}( \mathcal{E}_{k}(\nu_{k})\|\mathcal{F}_{k}(\nu_{k}))\] (42) \[=\inf_{\begin{subarray}{c}\mathcal{E}_{k}\in\mathcal{C}(\mathcal{ S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\sup_{\nu_{k}\in \mathcal{D}\big{(}RA^{k}\big{)}}\frac{1}{k}D_{M}(\mathcal{E}_{k}(\nu_{k})\| \mathcal{F}_{k}(\nu_{k})) \tag{43}\] where the first equality is just a rewriting, and for the second equality we used that by Proposition 19 (since the infimum is over convex sets) we can exchange infimum and supremum. We take the \(\limsup\) over \(k\) to get \[\liminf_{m\to\infty}\frac{1}{m}D_{H}^{\varepsilon}(\mathcal{S}_{m}\|\mathcal{T} _{m})\geq\limsup_{k\to\infty}\inf_{\begin{subarray}{c}\mathcal{E}_{k}\in \mathcal{C}(\mathcal{S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\sup_{\nu_{k}\in \mathcal{D}\big{(}RA^{k}\big{)}}\frac{1}{k}D_{M}(\mathcal{E}_{k}(\nu_{k})\| \mathcal{F}_{k}(\nu_{k}))\,. \tag{44}\] Now, by Lemma 23 the infimum is achieved for permutation covariant channels \(\mathcal{E}_{k}\), \(\mathcal{F}_{k}\), and by Lemma 24 the supremum is achieved for a permutation invariant state (note that the channels \(\mathcal{E}_{k}\) and \(\mathcal{F}_{k}\) are of course also permutation covariant with regards to permutations within \(R\), as they act with the identity on the reference system). Hence the state \(\mathcal{F}_{k}(\nu_{k})\) is permutation invariant, and thus by Lemma 25 we get \[\liminf_{m\to\infty}\frac{1}{m}D_{H}^{\varepsilon}(\mathcal{S}_{m}\| \mathcal{T}_{m}) \geq\limsup_{k\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{k}\in \mathcal{C}(\mathcal{S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\max_{\nu_{k}\in \mathcal{D}\big{(}R\otimes A^{k}\big{)}}\frac{1}{k}D(\mathcal{E}_{k}(\nu_{k}) \|\mathcal{F}_{k}(\nu_{k})) \tag{45}\] \[=\limsup_{k\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{k}\in \mathcal{C}(\mathcal{S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\frac{1}{k}D( \mathcal{E}_{k}\|\mathcal{F}_{k})\,. 
\tag{46}\]

**Converse.** For the converse part, note that by Lemma 16: \[D^{\varepsilon}_{H}(\mathcal{S}_{n}\|\mathcal{T}_{n})=\sup_{\nu_{n}}D^{\varepsilon}_{H}(\mathcal{S}_{n}[\nu_{n}]\|\mathcal{T}_{n}[\nu_{n}])\leq\sup_{\nu_{n}}\inf_{\begin{subarray}{c}\rho_{n}\in\mathcal{S}_{n}[\nu_{n}]\\ \sigma_{n}\in\mathcal{T}_{n}[\nu_{n}]\end{subarray}}D^{\varepsilon}_{H}(\rho_{n}\|\sigma_{n})\,. \tag{47}\] By Lemma 15 we have that for any two states \(\rho,\sigma\): \[D^{\varepsilon}_{H}(\rho\|\sigma)\leq\frac{1}{1-\varepsilon}(D(\rho\|\sigma)+h(\varepsilon))\,. \tag{48}\] Thus, \[\lim_{\varepsilon\to 0}\liminf_{n\to\infty}\frac{1}{n}D^{\varepsilon}_{H}(\mathcal{S}_{n}\|\mathcal{T}_{n})=\lim_{\varepsilon\to 0}\liminf_{n\to\infty}\frac{1}{n}D^{\varepsilon}_{H}(\mathcal{C}(\mathcal{S}_{n})\|\mathcal{C}(\mathcal{T}_{n})) \tag{49}\] \[\leq\liminf_{n\to\infty}\sup_{\nu_{n}}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}\frac{1}{n}D(\mathcal{E}_{n}(\nu_{n})\|\mathcal{F}_{n}(\nu_{n})) \tag{50}\] \[\leq\liminf_{n\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}\frac{1}{n}D(\mathcal{E}_{n}\|\mathcal{F}_{n})\,, \tag{51}\] where we used (33), (47), Proposition 19 and the optimizations are achieved as above. Equivalently, one finds the same with \(\liminf\) replaced by \(\limsup\): \[\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}D^{\varepsilon}_{H}(\mathcal{S}_{n}\|\mathcal{T}_{n})\leq\limsup_{n\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}\frac{1}{n}D(\mathcal{E}_{n}\|\mathcal{F}_{n})\,. \tag{52}\] Combining (51) with the achievability result (46), we find \[\liminf_{n\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}\frac{1}{n}D(\mathcal{E}_{n}\|\mathcal{F}_{n})\geq\lim_{\varepsilon\to 0}\liminf_{n\to\infty}\frac{1}{n}D^{\varepsilon}_{H}(\mathcal{S}_{n}\|\mathcal{T}_{n})\geq\limsup_{k\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{k}\in\mathcal{C}(\mathcal{S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\frac{1}{k}D(\mathcal{E}_{k}\|\mathcal{F}_{k})\,, \tag{53}\] and hence both inequalities in this line are in fact equalities. Also, combining this again with (51) and (52) we find \[\lim_{k\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{k}\in\mathcal{C}(\mathcal{S}_{k})\\ \mathcal{F}_{k}\in\mathcal{C}(\mathcal{T}_{k})\end{subarray}}\frac{1}{k}D(\mathcal{E}_{k}\|\mathcal{F}_{k})\leq\lim_{\varepsilon\to 0}\liminf_{n\to\infty}\frac{1}{n}D^{\varepsilon}_{H}(\mathcal{S}_{n}\|\mathcal{T}_{n}) \tag{54}\] \[\leq\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}D^{\varepsilon}_{H}(\mathcal{S}_{n}\|\mathcal{T}_{n})\leq\lim_{n\to\infty}\min_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{C}(\mathcal{S}_{n})\\ \mathcal{F}_{n}\in\mathcal{C}(\mathcal{T}_{n})\end{subarray}}\frac{1}{n}D(\mathcal{E}_{n}\|\mathcal{F}_{n})\,, \tag{55}\] and hence all limits coincide and exist without requiring \(\liminf\) or \(\limsup\). Finally, the second part of the theorem, which applies if \(\mathcal{S}_{n}\) lies in a linear subspace of \(\mathrm{CPTP}(A^{n}\to B^{n})\) with dimension polynomial in \(n\), can be seen as an immediate consequence of the first part of the theorem after using Proposition 19 and Lemma 26.
Note that after the application of Lemma 26 we no longer satisfy the convexity assumption necessary for another application of Proposition 19, and hence we cannot conclude that the \(\min\) and \(\max\) can be exchanged again at this point.

### The Adaptive Case

As stated previously, the most general setup will allow for channel inputs to depend on previous channel outputs, which is called an adaptive protocol. Let \(n\) be fixed and let \(\Lambda_{n}=\Lambda^{(1)}\otimes\ldots\otimes\Lambda^{(n)}\) be \(n\) black-box channels given to us, where the task is to determine whether they come from \(\mathcal{S}_{n}\) or \(\mathcal{T}_{n}\), where \(\mathcal{S}_{n}\) and \(\mathcal{T}_{n}\) are part of quantum channel hypotheses as specified in Definition 8. We write \(\Lambda_{i}\coloneqq\Lambda^{(1)}\otimes\ldots\otimes\Lambda^{(i)}\) for \(i=1,...,n\). A general adaptive channel discrimination protocol for these \(\Lambda_{n}\) can now be fully specified by an initial state \(\omega_{0}\in\mathcal{D}\left(R\otimes A\right)\), a set of \(n-1\) CPTP maps \(\mathcal{N}_{i}:R\otimes B\to R\otimes A\) that transform the state before it is fed into the next black-box channel, and a final binary POVM \(\{M,\mathbbm{1}-M\}\) on \(R\otimes B\). We will assume the size of the reference system \(R\) to be fixed and identical throughout the protocol (this is without loss of generality). The protocol consists of alternating applications of a black-box channel and the preparation CPTP maps \(\mathcal{N}_{i}\) (see Figure 1). We define: \[\omega_{i}(\Lambda_{i})\coloneqq\Lambda^{(i)}(\mathcal{N}_{i}(\omega_{i-1}(\Lambda_{i-1}))),\qquad\text{for }i\in\{2,\ldots,n\}, \tag{56}\] where we do not make identities on reference systems explicit (as previously), and \(\omega_{1}(\Lambda_{1})\coloneqq\Lambda^{(1)}(\omega_{0})\). With our notation, the final state before the action of the POVM will be \(\omega_{n}(\Lambda_{n})\). Note that since the sets \(\mathcal{S}_{n}\) and \(\mathcal{T}_{n}\) were assumed to be permutation invariant, there is no advantage to be gained from reordering the black-box channels and so this is indeed the most general setup. For a set \(\mathcal{S}_{n}\) corresponding to a hypothesis, we write \(\omega_{n}(\mathcal{S}_{n})\coloneqq\{\,\omega_{n}(\mathcal{E}_{n})\mid\mathcal{E}_{n}\in\mathcal{S}_{n}\,\}\). Given an \(\omega_{n}\), the problem then reduces to the composite state-discrimination problem \(\omega_{n}(\mathcal{S}_{n})\) vs \(\omega_{n}(\mathcal{T}_{n})\). Note that \(\omega_{n}(\mathcal{S}_{n})\subset\mathcal{D}\left(R\otimes B\right)\), so this state discrimination problem will not be an instance of a many-copy discrimination problem as studied above; the \(n\) just indicates how many channel black-boxes were used in obtaining the states in the set. We can again define the corresponding worst-case type II error exponent as \[e_{A}(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})\coloneqq\frac{1}{n}\sup_{\omega_{n}}D^{\varepsilon}_{H}(\omega_{n}(\mathcal{S}_{n})\|\omega_{n}(\mathcal{T}_{n}))=\frac{1}{n}\sup_{\omega_{n}}\sup_{\begin{subarray}{c}0\leq M\leq 1\\ \alpha(M,\omega_{n}(\mathcal{S}_{n}))\leq\varepsilon\end{subarray}}[-\log\beta(M,\omega_{n}(\mathcal{T}_{n}))] \tag{57}\] where the supremum over \(\omega_{n}\) goes over all adaptive strategies, i.e. all initial states \(\omega_{0}\) and all preparation maps \(\mathcal{N}_{i}\), \(i=2,...,n\).
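To fix the data flow of the recursion (56), here is a minimal classical instance of an adaptive protocol (a sketch under our own toy assumptions: states are probability vectors, channels and preparation maps are column-stochastic matrices, and the reference system \(R\) and channel input \(A\) are each a single bit; none of these particular objects come from the paper).

```python
import numpy as np

# States on R (x) A are probability vectors of length |R| * |A| = 4;
# channels act on them as column-stochastic matrices.

I2 = np.eye(2)
LAMBDA = np.array([[0.9, 0.2],             # a toy black-box channel A -> B
                   [0.1, 0.8]])
N = np.eye(4)[[0, 2, 1, 3]]                # a toy preparation map R (x) B -> R (x) A
                                           # (here: a permutation, hence stochastic)

omega = np.kron([1.0, 0.0], [1.0, 0.0])    # initial state omega_0 on R (x) A

n = 3
for i in range(1, n + 1):
    omega = np.kron(I2, LAMBDA) @ omega    # black-box acts on the A part only
    if i < n:                              # prepare the input for the next use,
        omega = N @ omega                  # as in the recursion (56)

# Final decision: a binary test on R (x) B, here the indicator of r = 0.
M = np.array([1.0, 1.0, 0.0, 0.0])
print("acceptance probability:", float(M @ omega))
```

A parallel strategy would instead fix the joint input to all \(n\) channel uses in advance; here the preparation map may depend on the intermediate outputs, which is exactly the extra freedom adaptive protocols enjoy.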
#### 4.2.1 An upper bound for adaptive strategies

We can prove the following upper bound on the Stein exponent for discriminating two composite channel hypotheses with adaptive strategies. This essentially captures the intuition that if the sets \(\mathcal{S}_{n}\) and \(\mathcal{T}_{n}\) are such that they include the corresponding i.i.d. problems, then the error exponent has to be at most the worst-case i.i.d. error exponent.

Figure 1: Illustration of a general adaptive protocol with \(n\) black-box channels. The top row makes use of the given black-boxes while the bottom row depicts the memory system \(R\).

**Proposition 10**.: _Let \(\boldsymbol{\mathcal{S}}=(\mathcal{S}_{n})_{n},\boldsymbol{\mathcal{T}}=(\mathcal{T}_{n})_{n}\) be two quantum composite channel hypotheses. Let \(\mathcal{S}\coloneqq\mathcal{S}_{1}\) and \(\mathcal{T}\coloneqq\mathcal{T}_{1}\). If the hypotheses are such that for all \(n\)_ \[\mathcal{E}^{\otimes n}\in\mathcal{S}_{n}\quad\forall\,\mathcal{E}\in\mathcal{S} \tag{58}\] \[\mathcal{F}^{\otimes n}\in\mathcal{T}_{n}\quad\forall\,\mathcal{F}\in\mathcal{T} \tag{59}\] _then the Stein exponent for distinguishing these two composite hypotheses by an adaptive strategy is upper bounded by_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}e_{A}(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})\leq\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D_{A}(\mathcal{E}\|\mathcal{F})=\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D_{\mathrm{reg}}(\mathcal{E}\|\mathcal{F})\,. \tag{60}\]

Proof.: As mentioned above, let \(\omega_{n}(\Lambda_{n})\) be the state at the end of the adaptive strategy with \(n\) channel uses where the \(n\) black-box channels are given by \(\Lambda_{n}\). As specified above, given an \(\omega_{n}\), the channel discrimination problem then reduces to the state-discrimination problem \(\omega_{n}(\mathcal{S}_{n})\) vs. \(\omega_{n}(\mathcal{T}_{n})\) with associated error exponent \[\frac{1}{n}D_{H}^{\varepsilon}(\omega_{n}(\mathcal{S}_{n})\|\omega_{n}(\mathcal{T}_{n}))\,. \tag{61}\] By Lemma 16 this is upper bounded as follows: \[\frac{1}{n}D_{H}^{\varepsilon}(\omega_{n}(\mathcal{S}_{n})\|\omega_{n}(\mathcal{T}_{n}))\leq\frac{1}{n}\inf_{\begin{subarray}{c}\rho_{n}\in\omega_{n}(\mathcal{S}_{n})\\ \sigma_{n}\in\omega_{n}(\mathcal{T}_{n})\end{subarray}}D_{H}^{\varepsilon}(\rho_{n}\|\sigma_{n})\leq\frac{1}{n(1-\varepsilon)}\inf_{\begin{subarray}{c}\rho_{n}\in\omega_{n}(\mathcal{S}_{n})\\ \sigma_{n}\in\omega_{n}(\mathcal{T}_{n})\end{subarray}}D(\rho_{n}\|\sigma_{n})+o(1) \tag{62}\] \[=\frac{1}{n(1-\varepsilon)}\inf_{\begin{subarray}{c}\mathcal{E}_{n}\in\mathcal{S}_{n}\\ \mathcal{F}_{n}\in\mathcal{T}_{n}\end{subarray}}D(\omega_{n}(\mathcal{E}_{n})\|\omega_{n}(\mathcal{F}_{n}))+o(1), \tag{63}\] where the second inequality follows from Lemma 15. By using the sequential structure of the adaptive strategy, we find \[\frac{1}{n}D(\omega_{n}(\mathcal{E}_{n})\|\omega_{n}(\mathcal{F}_{n}))\leq\sup_{k}\sup_{\rho,\sigma}\left[D(\mathcal{E}_{n}^{(k)}(\rho)\|\mathcal{F}_{n}^{(k)}(\sigma))-D(\rho\|\sigma)\right] \tag{64}\] where \(\mathcal{E}_{n}^{(k)}\) denotes the \(k^{th}\) channel in the tensor product of channels in \(\mathcal{E}_{n}\).
Since tensor products of the form \(\mathcal{E}^{\otimes n}\) for \(\mathcal{E}\in\mathcal{S}\) are included in \(\mathcal{S}_{n}\) by assumption (and similarly for \(\mathcal{T}_{n}\)) this can be further upper bounded as (up to \(o(1)\)): \[\frac{1}{n}D_{H}^{\varepsilon}(\omega_{n}(\mathcal{S}_{n})\|\omega_{n}(\mathcal{T}_{n}))\leq\frac{1}{1-\varepsilon}\inf_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}\sup_{\rho,\sigma}\left[D(\mathcal{E}(\rho)\|\mathcal{F}(\sigma))-D(\rho\|\sigma)\right]=\frac{1}{1-\varepsilon}\inf_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D_{A}(\mathcal{E}\|\mathcal{F})\,. \tag{65}\] Finally, the \(o(1)\) term disappears in the limit \(n\to\infty\) and the factor \(\frac{1}{1-\varepsilon}\) goes to \(1\) in the limit \(\varepsilon\to 0\). Also, it is known [10] that \(D_{A}(\mathcal{E}\|\mathcal{F})=D_{\mathrm{reg}}(\mathcal{E}\|\mathcal{F})\). The expression \(\frac{1}{n}D(\mathcal{E}^{\otimes n}\|\mathcal{F}^{\otimes n})\) is monotonically increasing in \(n\), and hence we can replace the limit \(n\to\infty\) in the regularized divergence with a supremum over \(n\). Thus the regularized divergence is lower semi-continuous (as a supremum of lower semi-continuous functions is lower semi-continuous), and the infimum is achieved. We will give a classical example in the next section where this upper bound is achieved and is strictly larger than the achievable exponent of parallel strategies. Hence this demonstrates an advantage of adaptive strategies for composite channel discrimination even if everything is classical.

**Remark 11**.: While the upcoming example demonstrates that this upper bound can sometimes be achieved, it cannot always be achieved. Hence, it is not a candidate for the optimal asymptotic exponent of adaptive strategies. This can be seen by taking all channels to be replacer channels2. In this case the task of channel discrimination reduces to that of state discrimination, for which adaptive and parallel strategies are equivalent. In the composite i.i.d. setting (i.e. when \(S_{n}=\{\;\rho^{\otimes n}\;|\;\rho\in S\;\}\) and \(T_{n}=\{\;\sigma^{\otimes n}\;|\;\sigma\in T\;\}\)) it has been shown that there exist sets \(S\) and \(T\) such that \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}\frac{1}{n}D_{H}^{\varepsilon}(S_{n}\|T_{n})=\lim_{n\to\infty}\frac{1}{n}\inf_{\begin{subarray}{c}\rho_{n}\in\mathcal{C}(S_{n})\\ \sigma_{n}\in\mathcal{C}(T_{n})\end{subarray}}D(\rho_{n}\|\sigma_{n})<\inf_{\begin{subarray}{c}\rho\in S\\ \sigma\in T\end{subarray}}D(\rho\|\sigma), \tag{66}\] and different examples exist where \(S\) and \(T\) are either convex [1, Section 4.2] or discrete [2, Section IV.A].

Footnote 2: A replacer channel is a quantum channel which outputs a fixed quantum state regardless of the input.

#### 4.2.2 A classical example of an adaptive advantage

In the following we give a fully classical example that demonstrates how adaptive strategies can be (also asymptotically) beneficial with composite hypotheses in the composite i.i.d. setting.

**Example 12**.: _There exist classical composite channel hypotheses \(\mathcal{S}=\{\mathcal{E}_{1},\mathcal{E}_{2}\}\) and \(\mathcal{T}=\{\mathcal{F}_{1},\mathcal{F}_{2}\}\), such that the adaptive error exponent in the composite i.i.d. setting is strictly larger than the parallel one.
Specifically, we show that_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}e_{A}(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})=\min_{i,j\in\{1,2\}}D(\mathcal{E}_{i}\|\mathcal{F}_{j})=2\lim_{\varepsilon\to 0}\lim_{n\to\infty}e_{P}(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n}) \tag{67}\] _where \(\mathcal{S}_{n}=\Big{\{}\,\mathcal{E}_{i}^{\otimes n}\,\Big{|}\,\,i=1,2\,\Big{\}}\), \(\mathcal{T}_{n}=\Big{\{}\,\mathcal{F}_{i}^{\otimes n}\,\Big{|}\,\,i=1,2\,\Big{\}}\)._ When defining the channels, we will use quantum notation for convenience, but everything should be seen as classical, i.e. all states are diagonal in the computational basis. The channels used in our example are then: \[\mathcal{E}_{1}(\rho) =\tau\otimes|0\rangle\!\langle 0| \tag{68}\] \[\mathcal{E}_{2}(\rho) =\tau\otimes|1\rangle\!\langle 1|\] (69) \[\mathcal{F}_{1}(\rho) =\frac{1}{2}\left[\tau+\langle 0|\rho|0\rangle\,|0\rangle\!\langle 0|+\langle 1|\rho|1\rangle\,\tau\right]\otimes|0\rangle\!\langle 0|\] (70) \[\mathcal{F}_{2}(\rho) =\frac{1}{2}\left[\tau+\langle 0|\rho|0\rangle\,\tau+\langle 1|\rho|1\rangle\,|0\rangle\!\langle 0|\right]\otimes|1\rangle\!\langle 1| \tag{71}\] where \(\tau=\mathds{1}_{2}/2\) is the maximally mixed state. For notational simplicity, we denote \(\mathcal{E}(0)\coloneqq\mathcal{E}(|0\rangle\!\langle 0|)\), \(\mathcal{E}(1)\coloneqq\mathcal{E}(|1\rangle\!\langle 1|)\).

**The adaptive strategy.** The channels are constructed to allow for the following adaptive strategy: Given a black-box channel, we first use it with an arbitrary input state. Depending on the second output bit we will be able to determine with certainty the "index" of the channel, i.e. we will know that the channel is either \(\mathcal{E}_{1}\) or \(\mathcal{F}_{1}\) if the second bit is zero, or alternatively if the second bit is one we will know that the channel is either \(\mathcal{E}_{2}\) or \(\mathcal{F}_{2}\). It is easy to see that the optimal input state to discriminate \(\mathcal{E}_{1}\) from \(\mathcal{F}_{1}\) is \(|0\rangle\!\langle 0|\), whereas the optimal input state to discriminate \(\mathcal{E}_{2}\) from \(\mathcal{F}_{2}\) is \(|1\rangle\!\langle 1|\). Hence, in our adaptive strategy, for all subsequent channel uses, we input the value of the second bit we received out of the first channel use. This will lead to the following exponent: \[\min_{i\in\{1,2\}}\,\,\max_{\rho\in\mathcal{D}(\mathcal{X})}D(\mathcal{E}_{i}(\rho)\|\mathcal{F}_{i}(\rho))=D\big{(}\mathcal{E}_{1}(0)\|\mathcal{F}_{1}(0)\big{)}=D\big{(}\mathcal{E}_{2}(1)\|\mathcal{F}_{2}(1)\big{)}=\log_{2}(4/3)/2\,. \tag{72}\] It is easy to see that this is also equal to \[\min_{i,j\in\{1,2\}}\,\,\max_{\rho\in\mathcal{D}(\mathcal{X})}D(\mathcal{E}_{i}(\rho)\|\mathcal{F}_{j}(\rho)) \tag{73}\] since this minimum is always achieved for \(i=j\), as otherwise the second output bit allows for the two channels to be distinguished with certainty, which makes the relative entropy infinite. Since this is equal to the upper bound from Proposition 10 (for classical channels the regularized channel divergence collapses to the single-letter channel divergence), this is an asymptotically optimal adaptive strategy.

**The best parallel strategy.** By Proposition 14 below, the optimal parallel exponent is given by \[\max_{\nu\in\mathcal{D}(\mathcal{X}^{\prime}\mathcal{X})}\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))\,.
\tag{74}\] Similarly to the argument used in the proof of Proposition 14, by using the joint convexity of the relative entropy we find that for any state \(\nu\in\mathcal{D}\left(\mathcal{X}^{\prime}\mathcal{X}\right)\) there exists a \(p\in[0,1]\) such that for any \(i,j\): \[D(\mathcal{E}_{i}(\nu)\|\mathcal{F}_{j}(\nu))\leq pD(\mathcal{E}_{i}(0)\|\mathcal{F}_{j}(0))+(1-p)D(\mathcal{E}_{i}(1)\|\mathcal{F}_{j}(1))\,, \tag{75}\] and picking \(\nu=p\,|00\rangle\!\langle 00|_{\mathcal{X}^{\prime}\mathcal{X}}+(1-p)\,|11\rangle\!\langle 11|_{\mathcal{X}^{\prime}\mathcal{X}}\) achieves the right-hand side. Hence, we can write: \[\max_{\nu\in\mathcal{D}(\mathcal{X}^{\prime}\mathcal{X})}\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))=\max_{0\leq p\leq 1}\,\min_{i,j\in\{1,2\}}\left(pD(\mathcal{E}_{i}(0)\|\mathcal{F}_{j}(0))+(1-p)D(\mathcal{E}_{i}(1)\|\mathcal{F}_{j}(1))\right). \tag{76}\] Similarly to above, the minimum will be achieved at \(i=j\), and it is easy to see by explicit computation that the optimum value of \(p\) is \(1/2\). Since \(D(\mathcal{E}_{2}(0)\|\mathcal{F}_{2}(0))=D(\mathcal{E}_{1}(1)\|\mathcal{F}_{1}(1))=0\) the parallel exponent is thus \[\frac{1}{2}D\big{(}\mathcal{E}_{1}(0)\|\mathcal{F}_{1}(0)\big{)} \tag{77}\] which is half the exponent we were able to achieve with the adaptive strategy. It is also easy to see that a way to achieve this parallel exponent is just to alternate the two input states \(0\) and \(1\).

#### 4.2.3 Classical equality under convexity

Looking back at the previous example, one finds that the advantage of the adaptive strategy can be seen as coming from the fact that the order of the maximum over input states and minimum over channels (for example in (72)) matters: The parallel strategy has to find a good input state for all channels (this corresponds to taking the maximum over states outside), whereas the adaptive strategy can reduce the problem to a simple discrimination problem between just two channels and then tailor the input state to these two channels (this corresponds to taking the infimum over channels outside). Indeed, one also finds that an application of our exchange result Proposition 19 (or similar minimax theorems) is not permitted in this example, as the sets of channels \(\mathcal{S}\) and \(\mathcal{T}\) are not convex. We show subsequently that, in the classical case, convexity of these sets is indeed sufficient for there not to be an advantage of adaptive strategies.

**Theorem 13**.: _Let \(\boldsymbol{\mathcal{S}}=(\mathcal{S}_{n}\subset\mathrm{CPTP}(\mathcal{X}^{n}\rightarrow\mathcal{Y}^{n}))_{n},\boldsymbol{\mathcal{T}}=(\mathcal{T}_{n}\subset\mathrm{CPTP}(\mathcal{X}^{n}\rightarrow\mathcal{Y}^{n}))_{n}\) be two composite classical channel hypotheses (still satisfying the properties of Definition 8).
If \(\mathcal{S}\coloneqq\mathcal{S}_{1}\) and \(\mathcal{T}\coloneqq\mathcal{T}_{1}\) are convex, and additionally for all \(n\)_ \[\mathcal{E}^{\otimes n}\in\mathcal{S}_{n}\qquad\forall\mathcal{E}\in\mathcal{S} \tag{78}\] \[\mathcal{F}^{\otimes n}\in\mathcal{T}_{n}\qquad\forall\mathcal{F}\in\mathcal{T} \tag{79}\] _then the Stein exponent of distinguishing these two composite hypotheses is given by_ \[\lim_{\varepsilon\to 0}\lim_{n\rightarrow\infty}e(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})=\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}\max_{\nu\in\mathcal{D}(\mathcal{X})}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu)) \tag{80}\] _where this optimal exponent can be achieved with a parallel strategy._

Proof.

**Achievability.** Picking any classical input state \(\nu\in\mathcal{D}\left(\mathcal{X}\right)\) and feeding identical copies of it into the \(n\) classical channels turns this problem into the classical composite hypothesis testing problem which is at most as hard as distinguishing the sets \(P=\mathcal{S}[\nu]\), \(Q=\mathcal{T}[\nu]\) in an adversarial setting (this follows from the properties of a composite channel hypothesis as specified in Definition 8). Then, by Theorem 7, the exponent \[\min_{p\in P,q\in Q}D(p\|q) \tag{81}\] is achievable, and hence, by optimizing over \(\nu\), also the exponent \[\sup_{\nu\in\mathcal{D}(\mathcal{X})}\min_{\begin{subarray}{c}p\in\mathcal{S}[\nu]\\ q\in\mathcal{T}[\nu]\end{subarray}}D(p\|q)=\sup_{\nu\in\mathcal{D}(\mathcal{X})}\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu)) \tag{82}\] is achievable. Now, since \(\mathcal{S}\) and \(\mathcal{T}\) are convex, we can apply Proposition 19 and exchange the minimum and the supremum (where the supremum is also achieved, e.g. by Lemma 22).

**Converse.** From Proposition 10 we get: \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}e(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})\leq\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D_{\text{reg}}(\mathcal{E}\|\mathcal{F})\,. \tag{83}\] If all channels \(\mathcal{E}\) and \(\mathcal{F}\) are classical, the regularization is not necessary [10]. This can easily be seen as follows: Since the relative entropy is jointly convex, the optimization over the input state is achieved at an extreme point of the convex set of input states, and classically all extreme points are product distributions, which makes the regularization collapse and also eliminates the need for any reference system. Hence \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}e(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})\leq\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}\max_{\nu\in\mathcal{D}(\mathcal{X})}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu)) \tag{84}\] which is what we wanted to prove.

### Classical parallel exponent for finite sets in the composite i.i.d. setting

Theorem 13 leaves open the question of what we can say about adaptive and parallel exponents with classical channels if the sets \(\mathcal{S}\) and \(\mathcal{T}\) are non-convex. Example 12 shows that these two exponents have to be different in general; however, we would still like to find entropic expressions for them. The following proposition establishes a single-letter formula for the parallel exponent in the composite i.i.d. case when the two sets \(\mathcal{S}\) and \(\mathcal{T}\) are finite.
**Proposition 14**.: _Let \(\mathcal{S}\), \(\mathcal{T}\subset\mathrm{CPTP}(\mathcal{X}\to\mathcal{Y})\) be two finite sets of classical channels. In the composite i.i.d. setting, i.e. with_ \[\mathcal{S}_{n} \coloneqq\Set{\mathcal{E}^{\otimes n}}{\mathcal{E}\in\mathcal{S}} \tag{85}\] \[\mathcal{T}_{n} \coloneqq\Set{\mathcal{F}^{\otimes n}}{\mathcal{F}\in\mathcal{T}} \tag{86}\] _the Stein exponent of distinguishing these two hypotheses with a parallel strategy is given by_ \[\lim_{\varepsilon\to 0}\lim_{n\to\infty}e_{P}(n,\varepsilon,\mathcal{S}_{n},\mathcal{T}_{n})=\max_{\nu\in\mathcal{D}(\mathcal{X}^{\prime}\mathcal{X})}\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))\,, \tag{87}\] _where \(\mathcal{X}^{\prime}\) is another classical system of the same size as \(\mathcal{X}\)._ Proof.: _Achievability._ Again, picking any classical input state \(\nu\in\mathcal{D}\left(\mathcal{X}^{\prime}\mathcal{X}\right)\) and feeding identical copies of it into the \(n\) classical channels turns this problem into the classical composite i.i.d. hypothesis testing problem of distinguishing the sets \(P=\mathcal{S}[\nu]\), \(Q=\mathcal{T}[\nu]\). Since they are both finite, we can apply [20, Theorem III.2], which states that the optimal exponent of this composite state discrimination problem is given by \[\min_{p\in P,q\in Q}D(p\|q)=\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))\,. \tag{88}\] Taking the supremum over all input states \(\nu\) yields the desired achievability result, and the supremum is achieved by an argument similar to Lemma 22, as the minimum over a finite number of elements does not affect any of the required continuity properties. _Converse._ It follows immediately from the definition (31), Lemma 16 and Lemma 15 that \[\frac{1}{n}D_{H}^{\varepsilon}(\mathcal{S}_{n}\|\mathcal{T}_{n}) =\frac{1}{n}\sup_{\nu_{n}\in\mathcal{D}(R\mathcal{X}^{n})}D_{H}^{\varepsilon}(\mathcal{S}_{n}[\nu_{n}]\|\mathcal{T}_{n}[\nu_{n}]) \tag{89}\] \[\leq\frac{1}{n(1-\varepsilon)}\sup_{\nu_{n}\in\mathcal{D}(R\mathcal{X}^{n})}\inf_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}^{\otimes n}(\nu_{n})\|\mathcal{F}^{\otimes n}(\nu_{n}))+o(1)\,, \tag{90}\] where strictly speaking \(R\) could be a quantum system, and hence \(\nu_{n}\) a quantum-classical state of the form \[\nu_{n}=\sum_{i=1}^{d^{n}}p_{i}\rho_{R}^{(i)}\otimes|i\rangle\!\langle i|_{\mathcal{X}^{n}} \tag{91}\] where \(\{p_{i}\}_{i=1}^{d^{n}}\) is a probability distribution and the \(\rho_{R}^{(i)}\) are density matrices on \(R\). By using the joint convexity of relative entropy and additivity under tensor products, it is easy to see though that for any such state \(\nu_{n}\) there exists a probability distribution \(\{q_{i}\}_{i=1}^{d}\) such that for all channels \(\mathcal{E}\) and \(\mathcal{F}\): \[\frac{1}{n}D(\mathcal{E}^{\otimes n}(\nu_{n})\|\mathcal{F}^{\otimes n}(\nu_{n}))\leq\sum_{i=1}^{d}q_{i}D(\mathcal{E}(|i\rangle\!\langle i|)\|\mathcal{F}(|i\rangle\!\langle i|))=D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))\,, \tag{92}\] where \(\nu=\sum_{i}q_{i}\,|i\rangle\!\langle i|_{\mathcal{X}^{\prime}}\otimes|i\rangle\!\langle i|_{\mathcal{X}}\).
Hence, we can upper bound the last part of (90) with the single-letter expression \[\frac{1}{(1-\varepsilon)}\sup_{\nu\in\mathcal{D}(\mathcal{X}^{\prime}\mathcal{X})}\min_{\begin{subarray}{c}\mathcal{E}\in\mathcal{S}\\ \mathcal{F}\in\mathcal{T}\end{subarray}}D(\mathcal{E}(\nu)\|\mathcal{F}(\nu))+o(1) \tag{93}\] and the statement follows in the limit \(n\to\infty\), \(\varepsilon\to 0\). ## 5 Open Problems We have been able to provide new insight into the relation between adaptive and parallel channel discrimination strategies, by studying such strategies for composite channel hypotheses and demonstrating that there is a gap in the asymptotic setting. However, there are still many open questions regarding composite channel discrimination, as can be seen from the number of cells in Table 1 for which we cannot give a definitive answer. Here, we want to briefly describe some of these problems and elaborate on possible solutions. First of all, for classical composite hypotheses which are non-convex, we currently do not have an entropic expression for the optimal achievable rate of adaptive strategies; so far we have not even been able to prove that the worst-case i.i.d. upper bound cannot always be achieved.3 Intuitively though, we consider it to be unlikely that this bound is always achieved, and we are also not particularly hopeful that there will be a simple entropic formula for the adaptive exponent. This comes from imagining generalizations of Example 12: In our example, determining the index of the channel within the two sets was perfectly possible after only one use, and hence one was able to use the optimal input state for all subsequent channel uses. One could, however, think about examples where determining this index is not perfectly possible, and hence one is expected to have to pay a certain (asymptotically non-vanishing) number of channel uses to distinguish the individual elements of the sets and then prepare the best input state, which should make the upper bound of Proposition 10 not achievable in this case. This procedure of determining which channel in the set one seems to be provided with also becomes significantly more complex once one no longer has the symmetry between the sets \(\mathcal{S}\) and \(\mathcal{T}\) present in Example 12, and in the general case it is not obvious at all how one could capture in a simple entropic expression the intricacies of gaining knowledge about which elements in this set one might be given. Footnote 3: [20] provides an example where discriminating states is not possible with this upper bound. This, however, requires infinite-dimensional Hilbert spaces, which we do not consider here. Additionally, we would like to see if there is an advantage for adaptive strategies in the quantum composite i.i.d. case when the sets of channels \(\mathcal{S}_{1}\) and \(\mathcal{T}_{1}\) are convex (recall that we showed that this is not possible classically). Given that the regularization is necessary in general in the quantum case, and the sets \(\mathcal{S}_{n}\) and \(\mathcal{T}_{n}\) will not be convex even if \(\mathcal{S}_{1}\) and \(\mathcal{T}_{1}\) are, we consider it not unlikely that there will again be an asymptotic gap between adaptive and parallel strategies. Finally, we have only studied asymmetric error exponents in this work, even though it would of course also be very interesting to look at similar problems for symmetric error exponents and potentially also Hoeffding exponents.
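To make the single-letter formula (87) concrete, here is a small numerical sketch (ours, not from the paper): using the classical reduction (92), an input state amounts to a probability distribution \(q\) over the input symbols, so the parallel exponent can be grid-searched for two small, entirely hypothetical sets of binary channels.

```python
import numpy as np
from itertools import product

def kl(p, q):
    # Classical relative entropy D(p||q) in nats; assumes supp(p) within supp(q)
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

# Hypothetical finite sets S, T of classical channels on a binary alphabet,
# written as row-stochastic matrices (row i = output distribution on input i)
S = [np.array([[0.9, 0.1], [0.2, 0.8]])]
T = [np.array([[0.6, 0.4], [0.5, 0.5]]),
     np.array([[0.3, 0.7], [0.7, 0.3]])]

# Parallel exponent (87) via (92): max over q of min over (E, F) of
# sum_i q_i D(E(i)||F(i)); a grid over q suffices for a binary input
best = max(
    min(sum(q * kl(E[i], F[i]) for i, q in enumerate([q0, 1 - q0]))
        for E, F in product(S, T))
    for q0 in np.linspace(0, 1, 1001)
)
print(f"parallel Stein exponent: {best:.4f} nats")
```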
2306.08483
Simulation Study on Super-Resolution for Coded Aperture Gamma Imaging
Coded Aperture Imaging (CAI) has been proposed as an alternative collimation technique in nuclear imaging. To maximize spatial resolution small pinholes in the coded aperture mask are required. However, a high-resolution detector is needed to correctly sample the point spread function (PSF) to keep the Nyquist-Shannon sampling theorem satisfied. The disadvantage of smaller pixels, though, is the resulting higher Poisson noise. Thus, the aim of this paper was to investigate if sufficiently accurate CAI reconstruction is achievable with a detector which undersamples the PSF. With the Monte Carlo simulation framework TOPAS a test image with multiple spheres of different diameter was simulated based on the setup of an experimental gamma camera from previous work. Additionally, measured phantom data were acquired. The captured detector images were converted to low-resolution images of different pixel sizes according to the super-resolution factor $k$. Multiple analytical reconstruction methods and a Machine Learning approach were compared based on the contrast-to-noise ratio (CNR). We show that all reconstruction methods are able to reconstruct both the test image and the measured phantom data for $k \leq 7$. With a synthetic high-resolution PSF and upsampling the simulated low-resolution detector image by bilinear interpolation the CNR can be kept approximately constant. Results of this simulation study and additional validation on measured phantom data indicate that an undersampling detector can be combined with small aperture holes. However, further experiments need to be conducted.
Tobias Meißner, Werner Nahm, Jürgen Hesser, Nikolas Löw
2023-06-14T12:57:59Z
http://arxiv.org/abs/2306.08483v1
# Simulation Study on Super-Resolution for Coded Aperture Gamma Imaging ###### Abstract Coded Aperture Imaging (CAI) has been proposed as an alternative collimation technique in nuclear imaging. To maximize spatial resolution small pinholes in the coded aperture mask are required. However, a high-resolution detector is needed to correctly sample the point spread function (PSF) to keep the Nyquist-Shannon sampling theorem satisfied. The disadvantage of smaller pixels, though, is the resulting higher Poisson noise. Thus, the aim of this paper was to investigate if sufficiently accurate CAI reconstruction is achievable with a detector which undersamples the PSF. With the Monte Carlo simulation framework TOPAS a test image with multiple spheres of different diameter was simulated based on the setup of an experimental gamma camera from previous work. Additionally, measured phantom data were acquired. The captured detector images were converted to low-resolution images of different pixel sizes according to the super-resolution factor \(k\). Multiple analytical reconstruction methods and a Machine Learning approach were compared based on the contrast-to-noise ratio (CNR). We show that all reconstruction methods are able to reconstruct both the test image and the measured phantom data for \(k\leq 7\). With a synthetic high-resolution PSF and upsampling the simulated low-resolution detector image by bilinear interpolation the CNR can be kept approximately constant. Results of this simulation study and additional validation on measured phantom data indicate that an undersampling detector can be combined with small aperture holes. However, further experiments need to be conducted. coded aperture imaging, gamma imaging, image reconstruction, super-resolution, nuclear medicine. ## I Introduction Accurate localization and visualization of radioactive sources are an essential task in nuclear medicine [2], high-energy astrophysics [3] and in the monitoring of nuclear waste [4, 5]. Recently, small handheld gamma cameras for localizing sentinel lymph nodes in breast cancer patients have come under investigation [6, 7, 8, 9]. Due to the high-energy photons involved, refractive lenses cannot be used for producing an image of the scene; instead, parallel or pinhole collimators are employed to capture the necessary spatial information [2]. However, the size of the opening is usually subject to a balanced trade-off between the number of captured photons (photon efficiency) and the spatial resolution, as the former increases and the latter decreases with the size of the pinhole. A high photon efficiency is desired since the guiding principle in the medical domain is to keep the exposed radiation As Low As Reasonably Achievable (ALARA). Thus, photon flux is limited and achieving high quantum yield is of major importance. To improve the mentioned trade-off, Coded Aperture Imaging (CAI) has been introduced [10, 11]: A mask between object and detector consisting of a radiopaque material with pinholes encodes the directional information of incoming gamma rays. As Figure 1 shows, each pinhole in the mask generates a projection of the source image on the detector, resulting in a multitude of overlapping projections. Therefore, image reconstruction (also referred to as decoding) becomes necessary. If the distance between source and collimator is large and the extension in depth is small relative to the distance, CAI can be considered as an image-to-image mapping and is denoted as planar Coded Aperture Imaging [1].
In this paper, the captured detector image is denoted as \(p(x,y)\), and the original source image and its reconstruction as \(f(x,y)\) and \(\hat{f}(x,y)\), respectively. Fig. 1: The basic principle of planar Coded Aperture Imaging: A mask with pinholes projects the source image onto the detector, where multiple overlapping projections emerge. Decoding or image reconstruction is necessary to obtain the original source image. Figure modified from [1]. The term super-resolution refers to the process of combining several "low resolution, noisy, slightly shifted observations" [12] to reconstruct an image of the underlying high-resolution scene, as Figure 2 illustrates. Because the spatial resolution in CAI is mainly influenced by the mask's pinhole diameter [13], increasing the MURA rank and thus the number of pinholes while reducing their diameter would increase the spatial resolution. So far, the pinhole diameter has been chosen such that the utilized detector can properly sample the resulting PSF [8, 9]. To the best of the authors' knowledge, no research group has investigated the combination of small pinholes and a low-resolution detector. Therefore, the investigated hypothesis is as follows: Existing CAI reconstruction methods are capable of reconstructing point sources from an undersampling detector, and thus achieving super-resolution, at reasonable quality even though the detector cannot resolve the higher spatial resolution of the aperture. This is due to the shifted but overlapping projections caused by the coded aperture. ## II Methods ### _Simulating a coded aperture test image_ For simulating a test image the Monte Carlo simulation toolkit TOPAS [14], a wrapper library around Geant4 [15], is deployed. Unlike ray-casting simulations, TOPAS accounts for photon-mass interactions like scattering and mask penetration, and is therefore considered to be the gold standard in gamma imaging [16]. The geometrical components and dimensions were simulated according to the experimental gamma camera from Rozhkov et al. [16]. The main characteristics can be summarized as follows: A 2\(\times\)2 mosaicked, 1 mm thick Tungsten not-two-holes-touching (NTHT) MURA mask of rank 31 with pinholes of 0.34 mm in diameter (denoted as \(d\)) was placed 42 mm (\(a\)) in front of a 2 mm thick 256\(\times\)256 pixelated CdTe semiconductor detector coupled to a Timepix\({}^{\copyright}\) readout circuit. The detector has a side length of 14.1 mm and hence a single pixel size \(s=0.0551\) mm. The virtual object plane is 172 mm (\(b\)) in front of the mask plane, resulting in a field of view (FoV) of 57.75\(\times\)57.75 mm. The test image consists of three spherical sources with diameters \(d_{1}\), \(d_{2}\) and \(d_{3}\) of 1, 2 and 3 mm distributed within the FoV as Figure 8a shows. \(10^{9}\) gamma photons with a photon energy of 140.5 keV (corresponding to the photon peak of \({}^{99\text{m}}\)Tc, the most commonly used radiotracer in nuclear medicine [2]) were distributed to the three sources according to their area. Every photon hitting the detector was collected and stored in a so-called _phase space file_. In addition to the coded aperture, a single pinhole collimator with the same diameter was simulated to serve as a reference for the reconstructed images. The captured pinhole image was smoothed by Gaussian blurring with a \(\sigma\) of 2 pixels. The ground truth image was generated from the geometrical model and remains binary: 1 where a source is located and 0 everywhere else.
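As a quick sanity check on these numbers, the following sketch (ours, not from the paper) recomputes the pixel size, the magnification \(m=1+a/b\) (a formula quoted later in the text), and the field of view; the FoV relation \(\text{FoV}=14.1\,\text{mm}\cdot b/a\) is our reading of the pinhole-projection geometry.

```python
# Geometry of the simulated camera, values as quoted above (lengths in mm)
a, b = 42.0, 172.0        # mask-to-detector and object-to-mask distances
side, pixels = 14.1, 256  # detector side length and resolution

s = side / pixels         # single pixel size      -> 0.0551 mm
m = 1 + a / b             # magnification factor   -> 1.244
fov = side * b / a        # field of view per side -> 57.74 mm
print(f"s = {s:.4f} mm, m = {m:.3f}, FoV = {fov:.2f} x {fov:.2f} mm")
```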
### _Measured data from the experimental gamma-camera_ Captured images from an experimental gamma-camera also used in previous work [16, 17] were used to validate the effect of super-resolution on real-world measurement data. The camera set-up was the same as for the simulation of the test image. The phantom has the basic form of a cylinder with a height of 80 mm and 50 mm in diameter, where tubes along the vertical axis were filled with \({}^{99\text{m}}\)Tc. These three tubes have a diameter of 1.1 mm, and two of them are 15 mm long while the central one is 20 mm long. The total activity at the beginning of the measurements was 83 MBq. A depiction of the geometric computer model can be seen in Figure 3. The phantom was exposed to the gamma-camera for 2 min and afterwards rotated by 3\({}^{\circ}\). This way, a total of 120 images were captured. Outlier replacement as described in [17] was applied afterwards. ### _Generating low-resolution detector images_ To analyze the effect of different pixel sizes, low-resolution images of different resolutions were produced as follows: The captured photons from the phase space file and from the measured phantom data, respectively, were binned into images of different low resolution. The actual detector served as reference with a resolution of 256\(\times\)256 pixels, which corresponds to the resolution of the final reconstructed image. Therefore, the super-resolution factor \(k\) is introduced. It means that \(k\times k\) high-resolution pixels are reconstructed from a single low-resolution pixel. Note that the absolute detector size remains the same: 14.1\(\times\)14.1 mm. The single pixel size \(s\) changes proportionally to \(k\): \(s=k\cdot 14.1\,\text{mm}/256\). Finally, all low-resolution images were upsampled by bilinear interpolation to 256\(\times\)256 pixels in order to fit the synthetic high-resolution PSF and to fit into the CED-IN, respectively. The process of generating low-resolution images is shown in Figure 4 for \(k=8\). ### _Analytical reconstruction methods_ Three different methods for super-resolution reconstruction are analyzed and compared in this paper: MURA decoding [19], a convolutional Maximum Likelihood Expectation Maximization algorithm (MLEM) [20] and a convolutional encoder-decoder network (CED) from previous work [17]. Fig. 3: The utilized phantom with its three tubes (red) filled with \({}^{99\text{m}}\)Tc, of which 120 images were captured. Fig. 2: Super-resolution refers to the process of combining multiple low-quality images to reconstruct an image of the underlying high-resolution scene. Figure modified from [12]. MURA Decoding is the most commonly used reconstruction method. It consists of a single circular convolution of the detector image \(p(x,y)\) with the decoding pattern \(g(x,y)\): \[\hat{f}(x,y)=p(x,y)\circledast g(x,y) \tag{1}\] with "\(\circledast\)" denoting the circular convolution operator. All circular operations in this paper are carried out by periodically padding the second operand to twice its size, i.e. to 512\(\times\)512 pixels, and cropping the result to its central 256\(\times\)256 pixels. The decoding pattern \(g(x,y)\) is based on \(h(x,y)\) and its definition can be found in [18]: It is equivalent to changing all 0 to -1 and adding a positive pixel to the center of the PSF [18].
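To illustrate the two building blocks described above — accumulating \(k\times k\) pixels into a low-resolution detector image followed by bilinear upsampling, and MURA Decoding via a circular convolution — here is a minimal sketch (ours). It assumes \(k\) divides the detector resolution and realizes the circular convolution with FFTs, which is equivalent to the periodic-padding-and-cropping procedure described in the text.

```python
import numpy as np
from scipy.ndimage import zoom

def to_lowres_and_back(detector, k):
    """Bin k x k high-res pixels into one low-res pixel (photon counts add),
    then upsample back to the full grid by bilinear interpolation (order=1)."""
    h, w = detector.shape
    low = detector.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
    return zoom(low, k, order=1)

def mura_decode(p, g):
    """MURA Decoding, eq. (1): circular convolution of the detector image p
    with the decoding pattern g, realized in Fourier space."""
    return np.real(np.fft.ifft2(np.fft.fft2(p) * np.fft.fft2(g)))

# usage sketch: recon = mura_decode(to_lowres_and_back(p_highres, k=4), g)
```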
The MLEM algorithm works in iterations and is derived from a random Poisson process. It consists of a combination of forward and backward projections, where ten iterations were deployed in this paper: \[\hat{f}^{k+1}(x,y)=\hat{f}^{k}(x,y)\odot\left[\frac{p(x,y)}{\hat{f}^{k}(x,y)\circledast h(x,y)}\otimes h(x,y)\right] \tag{2}\] where "\(\odot\)" denotes point-wise multiplication and "\(\otimes\)" is circular cross-correlation. Instead of the real measured PSF with round pinhole projections, the two-holes-touching (THT) version of the PSF without gaps between neighboring pinholes is used for reconstruction, since it suppresses periodical noise [17]. Both the THT-PSF and its corresponding decoding pattern are of rectangular structure and a square of 8 bright pixels represents the position and bounding box of each projected pinhole. They also define the reconstructed resolution of 256\(\times\)256 pixels. Figure 5 depicts the measured PSF, the THT-PSF and the decoding pattern. Additionally, MURA Decoding with low-resolution THT-PSF was implemented, where the synthetic high-resolution THT-PSF was not used, but rather the down-sampled THT-PSFs emulating a low-resolution detector. Since these reconstructions come in low resolution, the reconstructed images were upsampled to 256\(\times\)256 pixels by bilinear interpolation. ### _Reconstruction by Machine Learning_ A Convolutional Encoder-Decoder (CED) is a widely used form of Convolutional Neural Network (CNN). A CED consists of trainable parameters that transform an input into an output image. However, this transformation is not derived from a mathematical description but by providing a sufficient amount of paired training images. First experiments were conducted on the application of CNNs to CAI reconstruction, but their validation relied exclusively on simulated and low-resolution images [21] or on visual comparison of only a few images [13]. While the two analytical reconstruction methods solely rely on the PSF of the gamma camera, which acts as a linear approximation of the imaging system, the CED-IN is in theory capable of more complex mappings [22]. Recent advances in Machine Learning in the field of image reconstruction [23, 24, 25] underline the potential of CEDs for CAI reconstruction. The CED used in this paper is denoted as CED-IN because it was trained with a convolutional simulation based on natural photographs from the ImageNet database [26]. Its architecture is presented in Figure 6, and for more information on the training process and data simulation the reader is referred to [17]. After reconstructing all images, the contrast-to-noise ratio (CNR) is calculated based on the reconstructed and the ground truth image. The binary ground truth image enables a separation of the reconstruction into the signal part \(S\) and background part \(B\). The following definition of CNR is employed [27]: \[\text{CNR}=\frac{\left|\bar{S}-\bar{B}\right|}{\sigma_{B}}\,, \tag{3}\] where \(\bar{S}\) denotes the mean intensity of the signal, \(\bar{B}\) the mean intensity and \(\sigma_{B}\) the standard deviation of the background. ### _Nyquist-Shannon sampling theorem_ The Nyquist-Shannon sampling theorem states that the sampling frequency of a pixelated representation must be larger than twice the maximum frequency of the periodic image [28].
Thus, when the smallest occurring structure is sampled by two pixels or less, an image is not represented unambiguously, which leads to aliasing and hence signal degradation [28]. Since the aforementioned analytical reconstruction methods MURA Decoding and MLEM consist of one or more convolutions of two discretized signals, a reconstruction without aliasing artefacts is only possible when both images are sampled with enough pixels. Thus, critical super-resolution factors \(\tilde{k}\) were determined both for the coded aperture test image and the THT-PSF \(h(x,y)\). The smallest point source of the test image is 1 mm wide and therefore much larger than the pinhole diameter: \(d_{1}\gg d\). Hence, the smallest structure on the detector caused by the small point source can be approximated by \(t=d_{1}\cdot m=1.244\) mm. For \(h(x,y)\) the smallest structure \(t\) is 8 pixels wide, i.e. \(t=8\cdot s\) (Figure 5). For the given gamma camera with its magnification factor \(m=(1+a/b)=1.244\), the smallest depicted structure \(t\) and the single pixel side length of \(s=0.0551\) mm, \(\tilde{k}\) can be defined as follows: \[\tilde{k}=\left\lfloor\frac{1}{2}\cdot\frac{t}{s}\right\rfloor \tag{4}\] where \(\left\lfloor\cdot\right\rfloor\) denotes rounding down to the next smallest integer value. Fig. 4: Pixels of the high-resolution detector image from the TOPAS simulation are accumulated (here with \(k=8\) into 32\(\times\)32 pixels) to form the low-resolution detector image. Afterwards this image is upsampled by bilinear interpolation to the high resolution of 256\(\times\)256 pixels. Fig. 5: Left: The measured point spread function (PSF) of the experimental gamma camera, where the pixel intensity represents photon counts. Center: The two-holes-touching (THT) version of the used rank 31 MURA mask \(h(x,y)\) fundamental for MURA Decoding and MLEM. Right: The respective decoding pattern \(g(x,y)\) for MURA Decoding. Note the additional positive square at the center of \(g(x,y)\)[18]. Both patterns were resized to 256\(\times\)256 pixels by nearest neighbor interpolation to maintain the original shape. Fig. 6: The convolutional encoder-decoder network architecture deployed in this paper. The top row represents the number of filters per layer and the bottom row the feature map size in pixels. Figure 7: The contrast-to-noise ratio (CNR) for reconstructions of different reconstruction methods depending on the super-resolution factor \(k\). ## III Results ### _Critical super-resolution factors_ The following critical super-resolution factors \(\tilde{k}\) were obtained from the Nyquist-Shannon sampling theorem: The THT-PSF \(h(x,y)\) must be sampled by at least 64\(\times\)64 pixels, leading to the following critical super-resolution factor: \(\tilde{k}_{\text{THT-PSF}}=4\). This means that the synthetic high-resolution THT-PSF in this paper is 4-times oversampled. However, the test image with the 1 mm point source results in a higher critical super-resolution factor of \(\tilde{k}_{1}=\lfloor 11.29\rfloor=11\). The tubes of the phantom captured by the experimental gamma-camera have a diameter of 1.1 mm, which is magnified to approximately 1.37 mm and thus 24.83 pixels. This results in a critical super-resolution factor for the measured data of \(\tilde{k}_{\text{measurement}}=\lfloor 12.42\rfloor=12\). ### _Results on the test image_ Figure 7a shows the CNR of the four reconstruction methods over the super-resolution factor \(k\).
The red dotted line at CNR \(=7.63\) denotes the smoothed image captured with a pinhole collimator, where no reconstruction was required. It is depicted centrally on the right-hand side together with the ground truth image and the coded aperture test image in Figure 8a. The left-hand side shows exemplary reconstructions for \(k=1,3,6,11\) and 16. Clearly visible, the CNRs of the reconstruction methods with the synthetic high-resolution THT-PSF increase until \(k=3\) and steadily decline afterwards. The CED-IN is an exception, where the CNR increases further until falling below its Fig. 8: Exemplary reconstructions of the analyzed reconstruction methods at different super-resolution factors \(k\). The CNR is printed in the top left corner of each image.
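For completeness, here is a compact sketch (ours) of the convolutional MLEM iteration (2) and the CNR metric (3); the FFT-based circular convolution/cross-correlation and the uniform initialization of \(\hat{f}\) are our assumptions, not implementation details specified by the paper.

```python
import numpy as np

def mlem(p, h, iters=10, eps=1e-12):
    # Convolutional MLEM, eq. (2): f <- f * [(p / (f (*) h)) (x) h], with
    # (*) circular convolution and (x) circular cross-correlation via FFTs
    H = np.fft.fft2(h)
    f = np.ones_like(p, dtype=float)
    for _ in range(iters):
        forward = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
        ratio = p / np.maximum(forward, eps)   # guard against division by zero
        back = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(H)))
        f *= back
    return f

def cnr(recon, gt):
    # Contrast-to-noise ratio, eq. (3), using the binary ground truth gt
    s, b = recon[gt > 0], recon[gt == 0]
    return abs(s.mean() - b.mean()) / b.std()
```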
2301.03571
Dipolar Spin Liquid Ending with Quantum Critical Point in a Gd-based Triangular Magnet
By performing experiment and model studies on a triangular-lattice dipolar magnet KBaGd(BO$_3$)$_2$ (KBGB), we find the highly frustrated magnet with a planar anisotropy hosts a strongly fluctuating dipolar spin liquid (DSL), which originates from the intriguing interplay between dipolar and Heisenberg interactions. The DSL constitutes an extended regime in the field-temperature phase diagram, which gets lowered in temperature as field increases and eventually ends with an unconventional quantum critical point (QCP) at $B_c\simeq 0.75$~T. Based on dipolar Heisenberg model calculations, we identify the DSL as a Berezinskii-Kosterlitz-Thouless (BKT) phase with emergent U(1) symmetry. Due to the tremendous entropy accumulation that can be related to the strong BKT and quantum fluctuations, unprecedented magnetic cooling effects are observed in the DSL regime and particularly near the QCP, making KBGB a superior dipolar coolant to commercial Gd-based refrigerants. We establish the phase diagram for triangular-lattice dipolar quantum magnets where emergent symmetry plays an essential role, and provide a basis and open an avenue for their applications in sub-Kelvin refrigeration.
Junsen Xiang, Cheng Su, Ning Xi, Zhendong Fu, Zhuo Chen, Hai Jin, Ziyu Chen, Zhao-Jun Mo, Yang Qi, Jun Shen, Long Zhang, Wentao Jin, Wei Li, Peijie Sun, Gang Su
2023-01-09T18:49:53Z
http://arxiv.org/abs/2301.03571v2
# Dipolar Spin Liquid Ending with Quantum Critical Point in a Gd-based Triangular Magnet ###### Abstract By performing experiment and model studies on a triangular-lattice dipolar magnet KBaGd(BO\({}_{3}\))\({}_{2}\) (KBGB), we find that the highly frustrated magnet with a planar anisotropy hosts a strongly fluctuating dipolar spin liquid (DSL), which originates from the intriguing interplay between dipolar and Heisenberg interactions. The DSL constitutes an extended regime in the field-temperature phase diagram, which gets lowered in temperature as field increases and eventually ends with an unconventional quantum critical point (QCP) at \(B_{c}\simeq 0.75\) T. Based on the dipolar Heisenberg model analysis, the DSL is identified as a Berezinskii-Kosterlitz-Thouless (BKT) phase with emergent U(1) symmetry, and the end QCP belongs to the 3D XY universality class. Due to the tremendous entropy accumulation that can be related to the strong BKT and quantum fluctuations, unprecedented magnetic cooling effects are observed in the DSL and particularly near the QCP, making KBGB a superior dipolar coolant to commercial Gd-based refrigerants. We establish the phase diagram for triangular-lattice dipolar quantum magnets where emergent symmetry plays an essential role, and provide a basis and open an avenue for their applications in sub-Kelvin refrigeration. _Introduction.--_ Triangular-lattice quantum antiferromagnets have raised great research interest recently due to the unusual quantum spin states and transitions therein [1; 2]. One prominent example is the quantum spin liquid (QSL) [3; 4; 5] and its possible materialization in organic compounds [6; 7; 8] and rare-earth triangular magnets [9; 10; 11; 12; 13; 14; 15; 16]. The intriguing spin frustration effects and two dimensionality of such systems imply that Berezinskii-Kosterlitz-Thouless (BKT) physics may appear at low temperatures. Indeed, the Co-based quantum antiferromagnet Na\({}_{2}\)BaCo(PO\({}_{4}\))\({}_{2}\) hosts persistent spin fluctuations [17; 18; 19; 20] down to very low temperature, and is proposed to realize spin supersolidity with prominent BKT phase fluctuations [21]. Besides, emergent symmetry has also been disclosed on the triangular lattice as a consequence of strong frustration, with a primary example being the rare-earth magnet TmMgGaO\({}_{4}\)[22; 23; 24; 25; 26; 27]. Recently, it has been theoretically proposed that the dipolar interactions can give rise to QSL in triangular-lattice quantum spin systems [29]. Lately, such dipolar systems have been realized in Yb-based triangular compounds [30; 31; 32; 33; 34]. However, the dipolar interactions are rather weak, and it is very challenging for conventional thermodynamic and spectroscopic measurements to probe the exotic spin states arising from dipolar interactions. On the contrary, the rare-earth dipolar magnets with even larger moments, _e.g._, Gd-based compounds with \(\mu_{\rm eff}\approx 8\mu_{B}\) and high spin \(S=7/2\), are much less explored in both experiment and theory. It is expected that the dipolar frustration effects are a priori more evident in these systems. Moreover, high-spin frustrated systems, especially those with spin-liquid-like behaviors [35], can possess large entropy density and cooling capacity, thus holding strong promise as excellent coolants for sub-Kelvin space applications [36; 37] and quantum computing [38].
In this work, we perform low-temperature thermodynamic and magnetocaloric measurements on single-crystal samples of the gadolinium borate KBaGd(BO\({}_{3}\))\({}_{2}\) (KBGB). The thermodynamic measurements suggest a dipolar spin liquid state with no conventional ordering but strong spin fluctuations, as reflected in the algebraic specific heat and imaginary dynamical susceptibility (\(\chi_{\rm ac}^{{}^{\prime\prime}}\)). We establish a dipolar Heisenberg model with both dipole-dipole and Heisenberg interactions for KBGB. Monte Carlo (MC) simulations well explain the experimental results and unveil exotic spin states and transitions in the phase diagram. In particular, the model simulations suggest a two-step melting of the 6-clock antiferromagnetic (AF) order [c.f., Fig. 1(c)] via two BKT transitions, between which a floating BKT phase appears with an emergent U(1) symmetry, well accounting for the experimental observations. Consequently, a giant magnetocaloric effect (MCE) is observed in the quasi-adiabatic demagnetization measurements. In particular, we find a clear dip in temperature at \(B_{c}\simeq 0.75\) T, i.e., near the quantum critical point (QCP) also with emergent U(1) symmetry. The obtained lowest temperature of 70 mK clearly surpasses that of the commercial refrigerant Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\) (GGG) under similar conditions, opening an avenue for exploring not only exotic spin states and transitions but also superior quantum coolants. _Crystal structure and effective model for KBaGd(BO\({}_{3}\))\({}_{2}\).--_ Centimeter-sized single crystals of KBGB were synthesized using the flux method as described in detail in the Supplementary Materials (SM) [28]. X-ray diffraction measurements indicate high quality of the single crystals, and confirm the trigonal structure [40; 41] with space group \(R\)-\(3m\) [_c.f._, Fig. 1(a)]. Magnetic Gd\({}^{3+}\) ions with 4\(f^{7}\) electron configuration (\(L=0,S=7/2\)) form a perfect triangular lattice [Fig. 1(b)], with a relatively high ionic density of 6.4 nm\({}^{-3}\). The direct dipolar interaction between the magnetic Gd\({}^{3+}\) ions has a characteristic energy \(E_{\rm dp}\sim 2\mu_{0}\mu_{\rm sat}^{2}/4\pi a^{3}\approx 0.05\) meV (with \(\mu_{\rm sat}\approx 8\)\(\mu_{\rm B}\)), which determines the low-temperature spin states in KBGB. To simulate the dipolar magnet, we consider the Hamiltonian \(H=J_{H}\sum_{\langle i,j\rangle_{\rm NN}}{\bf S}_{i}\cdot{\bf S}_{j}+J_{D}\sum_{i,j}[{\bf S}_{i}\cdot{\bf S}_{j}-3({\bf S}_{i}\cdot{\bf e}_{ij})({\bf S}_{j}\cdot{\bf e}_{ij})]/r_{ij}^{3}\), where \({\bf e}_{ij}\) (\(r_{ij}\)) refers to the unit vector (distance) between site \(i\) and \(j\) in the unit of the lattice constant \(a\). \(J_{H}\) and \(J_{D}\) refer to the nearest neighbor (NN) Heisenberg and dipole-dipole interactions, respectively. As the dipolar interactions show rapid (cubic) power-law decay with longer range interactions washed out, below we keep only NN terms \[H_{\rm DH}=\sum_{\langle i,j\rangle_{\rm NN}}J\,{\bf S}_{i}\cdot{\bf S}_{j}-D\,({\bf S}_{i}\cdot{\bf e}_{ij})({\bf S}_{j}\cdot{\bf e}_{ij}), \tag{1}\] where \(J=J_{H}+J_{D}\) is the NN isotropic coupling and \(D=3J_{D}\) refers to the dipolar anisotropic term. We perform MC simulations of the NN dipolar Heisenberg (DH) model on up to \(60\times 60\) triangular lattices [28], and find that the results fit the experimental data very well.
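To illustrate how such a simulation might look, here is a minimal classical Metropolis sketch of the NN dipolar Heisenberg model (1) (our own toy implementation, not the authors' code): spins are treated as classical unit vectors, a common approximation for the large spin \(S=7/2\), and the lattice size, temperature, and sweep count are arbitrary choices.

```python
import numpy as np

L = 12                        # linear size; the paper goes up to 60 x 60
J, D, T = 0.047, 0.080, 0.2   # couplings and temperature in Kelvin (k_B = 1)
rng = np.random.default_rng(0)

# Triangular lattice with periodic boundaries; six NN bonds per site
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
nn = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]
evec = {}                     # in-plane unit bond vectors e_ij (z-component 0)
for d in nn:
    v = d[0] * a1 + d[1] * a2
    evec[d] = np.append(v / np.linalg.norm(v), 0.0)

spins = rng.normal(size=(L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)

def site_energy(s, x, y):
    # Energy of spin s at site (x, y) from eq. (1), summed over six NN bonds
    e = 0.0
    for d in nn:
        sj = spins[(x + d[0]) % L, (y + d[1]) % L]
        eij = evec[d]
        e += J * (s @ sj) - D * (s @ eij) * (sj @ eij)
    return e

for sweep in range(200):      # Metropolis sweeps
    for idx in rng.permutation(L * L):
        x, y = divmod(int(idx), L)
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        dE = site_energy(new, x, y) - site_energy(spins[x, y], x, y)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[x, y] = new
```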
The determined coupling parameters from the fittings are \(J\simeq 47\) mK and \(D\simeq 80\) mK, which correspond to \(J_{D}\simeq 27\) mK and \(J_{H}\simeq 20\) mK, leading to a dipolar energy of about 660 mK. Such an estimate based on MC fittings is consistent with the direct interaction of \(E_{\rm dp}\approx 0.05\) meV evaluated above. _Magnetic specific heat, susceptibility, and dipolar spin liquid._-- In Fig. 2(a) we show the zero-field specific heat \(C_{m}\) measured down to 65 mK for KBGB. There exists a round peak at \(T^{*}\simeq 209\) mK, below which the system exhibits \(C_{m}\sim T^{2}\) with algebraic scaling, resembling that of two-dimensional (2D) Heisenberg or XY quantum spin models with U(1) symmetry [42; 43]. The thermodynamic measurements suggest a gapless, liquid-like and strongly fluctuating spin state, as if the dipolar planar anisotropy were absent. This is also reflected in the huge low-temperature specific heat in KBGB, which far exceeds that of the renowned Gd-based refrigerant GGG [37; 39; 44]. In Fig. 2(b), we apply out-of-plane fields (\(B\parallel c\)) to the compound, and find that the round \(C_{m}\) peaks move towards lower temperature with heights slightly reduced. This suggests that the spin liquid states constitute an extended phase that we dub the _dipolar spin liquid_ (DSL). As field further increases and exceeds about 0.75 T, the DSL behavior disappears [_c.f._, the contour plot of \(C_{m}\)/\(T\) in Fig. 3(b)], and the \(C_{m}\) peaks move instead to the high-temperature side with low-energy fluctuations quickly suppressed. Fig. 2(c) shows the isothermal magnetization measured at \(T\) = 0.4 K, where a clear magnetic anisotropy between the out-of-plane (\(\parallel c\) axis) and in-plane (\(\parallel a\)) directions is observed. This anisotropy can be clearly recognized in the different saturation magnetization moments, and the transition fields are also different along the two directions, _i.e._, about 1 T (0.5 T) along the \(c\) (\(a\)) axis. Likewise, the low-temperature dc susceptibility (\(\chi_{\rm dc}\)) exhibits a clear easy-plane anisotropy, see Fig. 2(d). Although the determined Landé factor \(g_{c}\simeq 2.49\) is slightly larger than \(g_{a}\simeq 2.36\), the intrinsic dipolar anisotropy leads to larger in-plane \(\chi_{\rm dc}\) (along the \(a\) and \(a^{*}\) axes) than that along the \(c\) axis. The negative Curie-Weiss temperatures fitted from the dc susceptibility reflect the AF nature, and the slightly different \(\theta_{a}\simeq-300\) mK and \(\theta_{a^{*}}\simeq-330\) mK reveal the planar anisotropy. This small but noticeable planar anisotropy between the \(a\) and \(a^{*}\) axes can be ascribed to the bond-dependent dipolar interaction [_c.f._, Eq. (1)]. The proposed DSL is further corroborated by the ac magnetic susceptibilities shown in Figs. 2(e,f). The real part \(\chi_{\rm ac}^{\prime}\) exhibits a frequency-independent maximum and remains large even below the characteristic temperature \(T^{*}\). The spin-glass scenario is therefore firmly excluded despite the K/Ba site mixing in the compound. Interestingly, the imaginary ac susceptibility \(\chi_{\rm ac}^{\prime\prime}(T)\), although being featureless for low frequencies \(\omega\lesssim 4\) kHz, shows a clear temperature-dependent behavior for higher frequencies in Fig. 2(f). Figure 1: (a) shows the crystal structure of KBaGd(BO\({}_{3}\))\({}_{2}\), and (b) the triangular-lattice layers of GdO\({}_{6}\) octahedra separated by the Ba/K layers with site mixing.
The grey arrows refer to the spins on sites \(i\) and \(j\), and the unit vector \({\bf e}_{ij}\) is also indicated. Dipole-dipole interactions are bond-dependent and follow the \(\bar{3}m\) site symmetry. (c)-(e) are histograms of the order parameter \(\Psi_{xy}\equiv\Psi_{x}+i\Psi_{y}\) for the 6-clock antiferromagnetic (AF) phase, emergent U(1) dipolar spin liquid (DSL), and paramagnetic (PM) phase [28]. Considering that \(\chi^{\prime\prime}(\omega)\) can be directly related to the dynamical correlation \(S(\omega)\) through the fluctuation-dissipation theorem, \(\chi^{\prime\prime}(\omega)\propto\frac{\omega}{T}S(\omega)\) (\(\omega\ll T\)), this clearly suggests the persistence of low-energy spin fluctuations even below \(T^{*}\) and supports the spin-liquid scenario. _Magnetocaloric effect and quantum critical point._-- In Fig. 3(a), we show the quasi-adiabatic demagnetization measurements (see details in SM [28]), where the lowest temperature \(T_{m}\) is achieved around the dip \(B_{c}\simeq 0.75\) T in the isentropic line and remains at very low temperature for \(B<B_{c}\) on the small-field side. In particular, it is found that KBGB clearly outperforms GGG in the obtained lowest temperature, _i.e._, \(T_{m}\simeq 70\) mK (KBGB) vs. 322 mK (GGG), for the same initial condition of \(T_{i}=2\) K and \(B_{i}=6\) T. In Fig. 3(b) we provide more of the isentropic lines from different initial conditions, and observe the highly asymmetric isentropes, which "level off" in the bright DSL regime with strong fluctuations reflected in large \(C_{m}/T\). Remarkably, strong cooling effects are also observed for very low, sub-Kelvin initial temperatures. In Fig. 3(b), we obtain a lowest \(T_{m}\simeq 33\) mK from initial \(T_{i}\simeq 95\) mK. Such an unprecedented MCE response strongly corroborates the existence of a QCP at \(B_{c}\simeq 0.75\) T, as is further substantiated by an evident peak-dip structure with sign change in the magnetic Grüneisen ratio \(\Gamma_{B}=\frac{1}{T}(\frac{\partial T}{\partial B})_{S}\)[45; 46; 47; 48], shown in the Fig. 3 inset, which has been widely used in characterizing QCPs for heavy fermions [49; 50; 51; 52; 53] and low-dimensional quantum spin systems [54; 55; 56; 57]. The peak height of \(\Gamma_{B}\) exceeds 4 times that of GGG, indicating a giant QCP cooling effect in KBGB. _Emergent symmetry in KBGB._-- According to the above measurements, we obtain the phase diagram of KBGB in Fig. 3(b). The two schematic dashed lines, enclosing the DSL with large \(C_{m}/T\) values, meet at a QCP where the lowest cooling temperature is reached. Besides the QCP, within the DSL regime we find persistent spin fluctuations and cooling effects whose origin is clarified through a model analysis below. In Figs. 2(c,d), we find that the anisotropic susceptibility and magnetization measured along the \(a\) and \(c\) axes can be well captured by the DH model. The slight deviation between model calculations and experimental in-plane susceptibilities at low temperature \(\lesssim 0.5\) K may be attributed to strong quantum fluctuations of planar order parameters. Besides, the model calculations find a specific heat peak at about 270 mK, which also gets suppressed as field increases, well resembling the experimental results in Figs. 2(a,b). As various phases with prominent features are well captured by MC simulations [28], we confirm that the DH model describes the dipolar magnet KBGB very well.
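To illustrate how \(\Gamma_{B}\) can be extracted from such isentropes, here is a small sketch (ours) on a synthetic \(T(B)\) trace; the toy curve below merely mimics a dip near \(B_{c}\) and is not the measured KBGB data.

```python
import numpy as np

def grueneisen(B, T):
    # Magnetic Grueneisen ratio Gamma_B = (1/T)(dT/dB)_S along an isentrope,
    # estimated by centered finite differences
    return np.gradient(T, B) / T

# Toy quasi-adiabatic trace with a temperature dip near B_c ~ 0.75 T
B = np.linspace(0.1, 4.0, 400)
T = 0.07 + 0.5 * (B - 0.75)**2 / (1 + (B - 0.75)**2)

gamma = grueneisen(B, T)
# The sign change of Gamma_B at the minimum of T(B) marks the putative QCP
print(f"Gamma_B changes sign near B = {B[np.argmin(T)]:.2f} T")
```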
To characterize the spin states in the phase diagram, we introduce the order parameter \(\Psi_{xy}\equiv me^{i\theta}=\sum_{j}e^{iQr_{j}}(m_{j}^{x}+im_{j}^{y})\), where \(j\) runs over the lattice sites and \(Q=\pm\frac{1}{2}a^{*},\pm\frac{1}{2}b^{*},\pm\frac{1}{2}(a^{*}-b^{*})\)[28]. Histograms of the complex order parameter \(\Psi_{xy}\) at various temperatures are shown in Figs. 1(c-e). At low temperature, the dipolar system exhibits a 6-clock AF order corresponding to \(\theta=0,\pm\pi/3,\pm 2\pi/3\), and \(\pi\)[28]. As temperature ramps up, the six points in the histogram prolong and merge into a circle with emergent U(1) symmetry, where the angle \(\theta\) can take an arbitrary value. As temperature increases further, the amplitude \(m\) eventually vanishes and the system enters the conventional PM phase. Recall that the 6-state clock model with a \(\cos{(6\theta)}\) anisotropic term undergoes two successive BKT transitions [58], between which the anisotropic term becomes an irrelevant perturbation; the intermediate DSL thus constitutes a BKT phase with emergent U(1) symmetry [59; 60; 61; 62]. For the zero-temperature QCP, as the clock term is dangerously irrelevant there [59], the transition occurs directly between the 6-clock AF and PM phases and belongs to the 3D XY universality class. The corresponding critical exponents \(z=1\) and \(\nu\simeq 0.66\)[63] can lead to a strongly diverging \(\Gamma_{B}\sim T^{-1/z\nu}\)[45], which may account for the very sharp peak observed in the Fig. 3(a) inset. Therefore, emergent symmetry constitutes a key for understanding the spin liquid and quantum criticality in KBGB. _Superior cooling performance.--_ Starting from \(T_{i}\) = 2 K, KBGB single crystals are observed to reach \(T_{m}\simeq 70\) mK [Fig. 3(a)], far surpassing other Gd-based refrigerants, _e.g._, GGG (322 mK) and GdLiF\({}_{4}\) (480 mK) [65]. Besides, powder samples can also achieve much lower \(T_{m}\) than that of GGG [28]. A long hold time \(t_{h}\) is also witnessed in KBGB. At an environment temperature of 2 K, it remains below 140 mK for \(t_{h}\approx 2\) h after the field is exhausted, which can be ascribed to the large heat absorption \(\Delta Q\) shown in the inset of Fig. 4(a). The isothermal entropy change \(\Delta S_{m}\) characterizes the cooling capacity of refrigerants. In Fig. 4(b), we compare \(\Delta S_{m}\) of KBGB with that of GGG, and find that KBGB has significantly larger \(\Delta S_{m}\) below 1 K [shaded regime in Fig. 4(b)], i.e., the temperature window of central interest for sub-Kelvin applications. Overall, the low cooling temperature \(T_{m}\), long hold time \(t_{h}\), and giant entropy change \(\Delta S_{m}\) suggest that KBGB is a superior quantum magnet coolant for cryogenics. _Discussions and outlook.--_ The pursuit of high entropy density and low ordering temperature involves two opposing factors that are hard to fulfill simultaneously for sub-Kelvin refrigerants. Here we find that the spin frustration and quantum criticality in the dipolar system come to the rescue.
The compound KBaGd(BO\({}_{3}\))\({}_{2}\), with high-density Gd\({}^{3+}\) ions crystallizing on a triangular lattice, is demonstrated to host a disordered and strongly fluctuating spin liquid down to very low temperature, offering enormous cooling capacity. This can be ascribed to the prominent BKT fluctuations and quantum criticality in this \(S=7/2\) dipolar magnet. Despite a planar anisotropy, U(1) symmetry nevertheless emerges in the compound as revealed by the DH model analysis. Although in the present study only NN terms are considered, inclusion of further neighboring couplings is believed not to change the conclusion here, as it maintains the universality class of BKT transitions in the planar dipolar models [62; 66]. The scenario of a DSL ending up with an emergent U(1) QCP may also be applicable to other dipolar quantum magnets. Recent progress in experimental studies reveals a series of rare-earth triangular quantum dipolar antiferromagnets, _e.g._, Ba\({}_{3}\)REB\({}_{3}\)O\({}_{9}\)/Ba\({}_{3}\)REB\({}_{3}\)O\({}_{18}\) (with RE a rare-earth ion) [32; 33] and ABaRE(BO\({}_{3}\))\({}_{2}\) (with A an alkali ion) [67; 68]. For example, it has been observed in Ba\({}_{3}\)YbB\({}_{3}\)O\({}_{9}\) that 80% of the entropy remains below 56 mK [31], despite a dipolar interaction of about 160 mK, suggesting that the DSL and unconventional QCP may also be relevant in the Yb-based dipolar compounds. This work, therefore, opens an avenue for hunting exotic spin states as well as superior quantum coolants in triangular dipolar magnets. Figure 3: (a) shows the quasi-adiabatic isentropes measured in KBGB under out-of-plane fields. The KBGB curve exhibits a clear dip at the lowest temperature \(T_{m}\simeq 70\) mK, much lower than that of GGG (\(T_{m}\simeq 322\) mK). Starting from \(T_{i}\simeq 95\) mK, KBGB can reach a remarkably low temperature \(T_{m}\simeq 33\) mK in the dip (blue dotted line). The inset shows the magnetic Grüneisen ratio \(\Gamma_{B}\) deduced from the curves in the main panel. (b) shows the phase diagram of KBGB with the \(C_{m}/T\) contour plot in the background. The bright regime with large spin fluctuations represents the DSL, with schematic dashed line boundaries, ending with a QCP at \(B_{c}\simeq 0.75\) T. Figure 4: (a) The quasi-adiabatic demagnetization cooling curves of the KBGB single-crystal sample (0.5 g), starting from two different initial conditions (\(T_{i}=4\) K, \(B_{i}=4\) T) and (\(T_{i}=2\) K, \(B_{i}=6\) T), and reaching lowest temperatures \(T_{m}\simeq 205\) mK and \(70\) mK, respectively. Parasitic heat loads are estimated to be 0.2 \(\mu\)W for the \(T_{i}=4\) K environment and 0.05 \(\mu\)W for \(T_{i}=2\) K. The inset shows the magnetic entropy under zero and 4 T fields, with the shaded area representing the absorbed heat \(\Delta Q=47.44\) J\(\cdot\)Kg\({}^{-1}\) in the hold process. (b) plots the entropy change results \(\Delta S_{m}\) vs. \(T\), for fields decreasing from various peak values to zero, compared to those of GGG [39; 64]. _Note added.--_ Upon finishing the present work, we became aware of a recent work [69] that also conducts an MCE study of KBGB, however with polycrystalline samples, where a strong cooling effect down to 121 mK is found. _Acknowledgements.--_ W.L. is indebted to Yuan Wan and Tao Shi for helpful discussions. W.J. and C.S. acknowledge the support from the beamline 1W1A of the Beijing Synchrotron Radiation Facility. This work was supported by the National Natural Science Foundation of China (Grant Nos.
12222412, 11834014, 11974036, 12047503, 12074023, 12074024, 12174387, and 12141002), National Key R & D Program of China (Grant No. 2018YFA0305800), Strategic Priority Research Program of CAS (Grant No. XDB28000000), and CAS Project for Young Scientists in Basic Research (Grant No. YSBR-057). We thank the HPC-ITP for the technical support and generous allocation of CPU time. This work was supported by the Synergetic Extreme Condition User Facility (SECUF).
2310.14994
Concentration analysis for elliptic critical equations with no boundary control: ground-state blow-up
We perform the a priori analysis of solutions to critical nonlinear elliptic equations on manifolds with boundary. The solutions are of minimizing type. The originality is that we impose no condition on the boundary, which leads us to assume $L^2-$concentration. We also analyze the effect of a non-homogeneous nonlinearity that results in the fast convergence of the concentration point.
Hussein Mesmar, Frédéric Robert
2023-10-23T14:46:59Z
http://arxiv.org/abs/2310.14994v1
Concentration analysis for elliptic critical equations with no boundary control: ground-state blow-up ###### Abstract. We perform the a priori analysis of solutions to critical nonlinear elliptic equations on manifolds with boundary. The solutions are of minimizing type. The originality is that we impose no condition on the boundary, which leads us to assume \(L^{2}-\)concentration. We also analyze the effect of a non-homogeneous nonlinearity that results in the fast convergence of the concentration point. _Dedicated to Yihong Du on the occasion of his 60th birthday_ ## 1. Introduction ### Context and main results Let \((M,g)\) be a Riemannian manifold of dimension \(n\geq 3\), with or without boundary \(\partial M\). When \(\partial M\neq\emptyset\), \(M\) denotes the interior of the manifold and \(\overline{M}\) denotes its closure, so that \(\overline{M}=M\cup\partial M\): in particular, \(M\) is open in \(\overline{M}\). We let \(a,f\in C^{0}(\overline{M})\) be functions and we consider \(u\in C^{2}(M)\) solution to \[\Delta_{g}u+au=fu^{2^{*}-1}\,;\,u>0\text{ in }M. \tag{1}\] where \(\Delta_{g}:=-\text{div}_{g}(\nabla)\) is the Laplacian with minus sign convention and \(2^{*}:=\frac{2n}{n-2}\) is critical for the Sobolev embeddings \(H_{1}^{2}(M)\hookrightarrow L^{2^{*}}(M)\). Here, the Sobolev space \(H_{1}^{2}(M)\) is the completion of \(\{u\in C^{\infty}(M)/\left\|u\right\|_{H_{1}^{2}}<\infty\}\) for the norm \(\|\cdot\|_{H_{1}^{2}}:=\|\nabla\cdot\|_{2}+\|\cdot\|_{2}\). In the case of a Euclidean smooth domain \(\Omega\subset\mathbb{R}^{n}\), then \(a,f\in C^{0}(\bar{\Omega})\) and we consider \(u\in C^{2}(\Omega)\) solution to \[\Delta u+au=fu^{2^{*}-1}\,;\,u>0\text{ in }\Omega. \tag{2}\] where \(\Delta:=-\text{div}(\nabla)\) is the Euclidean Laplacian. Due to the critical exponent \(2^{*}\), there might be families of solutions to (1) that are not relatively compact in \(C^{2}_{loc}(M)\). For instance, given \(x_{0}\in\mathbb{R}^{n}\) and \(\mu>0\), define the _Bubble_ as \[x\mapsto U_{\mu,x_{0}}(x):=\left(\frac{\mu}{\mu^{2}+\frac{c|x-x_{0}|^{2}}{n(n-2)}}\right)^{\frac{n-2}{2}}.\] Then for any domain \(\Omega\subset\mathbb{R}^{n}\), \(U_{\mu,x_{0}}\) is a solution to (2) when \(a\equiv 0\) and \(f\equiv c\). Moreover, if \(x_{0}\in\bar{\Omega}\), then \(\sup_{\Omega}U_{\mu,x_{0}}\to+\infty\) as \(\mu\to+\infty\). In the Riemannian context, for \(x_{0}\in M\) and \(\mu>0\), the _Bubble_ reads \[x\mapsto U^{(M,g)}_{\mu,x_{0}}(x):=\left(\frac{\mu}{\mu^{2}+\frac{cd_{g}(x,x_{0})^{2}}{n(n-2)}}\right)^{\frac{n-2}{2}}.\] Concerning terminology, we say that a family \((u_{\epsilon})_{\epsilon}\in C^{0}(M)\) blows up if \(\lim_{\epsilon\to 0}\|u_{\epsilon}\|_{\infty}=+\infty\). When \(\partial M=\emptyset\), the description of blowing-up families of (1) with bounded \(L^{2^{*}}-\)norm has been performed by Druet-Hebey-Robert [7]. The main result in [7] is that blowing-up families are controlled from above by \(\left(\sum_{i}U_{\mu_{i,\epsilon},x_{i,\epsilon}}\right)_{\epsilon}\), for given families \((x_{i,\epsilon})_{\epsilon}\in M\) and \((\mu_{i,\epsilon})_{\epsilon}\to 0\). With this control, it is possible to give information on the localization of the limits \(x_{i,\infty}:=\lim_{\epsilon\to 0}x_{i,\epsilon}\): see Druet [6]. This analysis extends to manifolds with boundary provided a boundary condition like Dirichlet (see Ghoussoub-Mazumdar-Robert [12]) or Neumann (see Druet-Robert-Wei [9]). See Premoselli [17] for a more recent point of view.
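As a sanity check of these normalizations (ours, not part of the original text), the following sketch verifies symbolically, in the radial variable and for the sample dimension \(n=4\), that the Bubble solves \(\Delta U=c\,U^{2^{*}-1}\) with the minus-sign Laplacian used here.

```python
import sympy as sp

n = 4  # sample dimension; the identity holds for every n >= 3
r = sp.symbols('r', positive=True)
mu, c = sp.symbols('mu c', positive=True)

# Bubble U_{mu,0} written radially, as defined above
U = (mu / (mu**2 + c * r**2 / (n * (n - 2))))**sp.Rational(n - 2, 2)

# Radial Laplacian with the minus-sign convention Delta = -div(grad)
lap = -(sp.diff(U, r, 2) + (n - 1) / r * sp.diff(U, r))

# Check Delta U - c * U^(2* - 1) == 0, where 2* - 1 = (n+2)/(n-2)
print(sp.simplify(lap - c * U**sp.Rational(n + 2, n - 2)))  # -> 0
```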
The first objective of the present work is to perform an analysis similar to [7] and [6] without condition on \(\partial M\neq\emptyset\). Tackling such generality requires an additional assumption: the relevant notion here is \(L^{2}-\)concentration, which already appeared in Djadli-Druet [5] (see (7) below). Our second objective is to analyze the effect of a nonconstant function \(f\) in (1). In the case of a single peak, concentration occurs at a critical point. We prove that when this critical point is nondegenerate, the family of concentration points converges very fast to its limit (see (6) below): this does not generally happen for a constant function \(f\). A similar control appears in Malchiodi-Mayer [16]. As was shown by Aubin [1], below a threshold, blow-up cannot occur. In this manuscript, we are considering solutions \((u_{\epsilon})\) that carry the minimal energy for blow-up, namely ground-state-type solutions. The minimal energy is given by the best constant in Sobolev embeddings: \[\frac{1}{K_{0}(n)}=\inf_{\varphi\in D^{2}_{1}(\mathbb{R}^{n})\setminus\{0\}}\frac{\int_{\mathbb{R}^{n}}|\nabla\varphi|^{2}\mathrm{d}X}{\left(\int_{\mathbb{R}^{n}}|\varphi|^{2^{*}}\mathrm{d}X\right)^{\frac{2}{2^{*}}}}, \tag{3}\] where \(D^{2}_{1}(\mathbb{R}^{n})\) is the completion of \(C^{\infty}_{c}(\mathbb{R}^{n})\) for the norm \(\varphi\mapsto\|\nabla\varphi\|_{2}\). Aubin [1] and Talenti [21] have computed this best constant and have shown that the extremals are exactly \(C\cdot U_{\mu,x_{0}}\) for \(C\neq 0\), \(\mu>0\) and \(x_{0}\in\mathbb{R}^{n}\). Our main theorem for ground-state solutions is the following: **Theorem 1**.: _Let \((M,g)\) be a smooth compact Riemannian manifold of dimension \(n\geq 4\) with nonempty boundary \(\partial M\neq\emptyset\). We fix \(f\in C^{2}(\overline{M})\) such that \(f>0\). We consider a family \((h_{\epsilon})_{\epsilon}\in C^{1}(\overline{M})\) such that there exists \(h\in C^{1}(\overline{M})\) for which \(\Delta_{g}+h\) is coercive and_ \[\lim_{\epsilon\to 0}h_{\epsilon}=h\text{ in }C^{1}(\overline{M}). \tag{4}\] _We let \((u_{\epsilon})_{\epsilon}\in C^{2}(\overline{M})\) be a family of solutions to_ \[\Delta_{g}u_{\epsilon}+h_{\epsilon}u_{\epsilon}=fu_{\epsilon}^{2^{*}-1}\text{ in }M. \tag{5}\] _Let \(x_{\epsilon}\in\overline{M}\) and \(\mu_{\epsilon}>0\) be such that_ \[u_{\epsilon}(x_{\epsilon})=\sup_{\overline{M}}u_{\epsilon}=\mu_{\epsilon}^{1-\frac{n}{2}}.\] _We assume that_ * \(u_{\epsilon}\to 0\) _in_ \(L^{2}(M)\)_,_ * \(\lim_{\epsilon\to 0}x_{\epsilon}=x_{0}\in M\) _is an interior point of_ \(M\)_,_ * _The solution has minimal-type energy, that is_ \[\lim_{\epsilon\to 0}\int_{M}fu_{\epsilon}^{2^{\star}}\,dv_{g}=\frac{1}{K_{0}(n)^{\frac{n}{2}}f(x_{0})^{\frac{n-2}{2}}}\] * _The Hessian_ \(\nabla^{2}f(x_{0})\) _is nondegenerate._ _Then \(x_{0}\) is a critical point of \(f\) and_ \[d_{g}(x_{\epsilon},x_{0})=o(\mu_{\epsilon})\text{ as }\epsilon\to 0, \tag{6}\] _and for all \(\omega\subset M\) such that \(\overline{\omega}\subset M\) and \(\delta_{0}>0\), there exists \(C(\omega,\delta_{0})>0\) such that_ \[u_{\epsilon}(x)\leq C(\omega,\delta_{0})\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+d_{g}(x,x_{0})^{2}}\right)^{\frac{n-2}{2}}+C(\omega,\delta_{0})\sup_{\partial B_{\delta_{0}}(x_{0})}u_{\epsilon}\] _for all \(x\in\omega\).
In addition, assuming that for all \(\delta>0\), we have that_ \[\lim_{\epsilon\to 0}\frac{\int_{M\setminus B(x_{0},\delta)}u_{\epsilon}^{2}\,dv_{g}}{\int_{M}u_{\epsilon}^{2}\,dv_{g}}=0\,\text{ for }\,n\in\{4,5,6\}, \tag{7}\] _then_ \[h(x_{0})=\frac{n-2}{4(n-1)}\left(\text{Scal}_{g}(x_{0})-\frac{n-4}{2}\cdot\frac{\Delta_{g}f(x_{0})}{f(x_{0})}\right), \tag{8}\] _where \(\text{Scal}_{g}\) is the scalar curvature of \((M,g)\)._ **Remark:**_Theorem 1 applies to the case of a bounded domain of \(\mathbb{R}^{n}\) endowed with the Euclidean metric \(g:=\text{Eucl}\). In this situation, \(M=\Omega\subset\mathbb{R}^{n}\) is a domain, \(\Delta_{g}=-\sum_{i}\partial_{ii}\), \(d_{g}(x,y)=|x-y|\) is the usual Euclidean distance for \(x,y\in\mathbb{R}^{n}\) and \(\text{Scal}_{g}=0\)._ The control (6) is remarkable since it does not hold when \(f\) is degenerate. Indeed, when \(f\equiv 1\) there is an abundance of blow-up profiles with various speeds of convergence of the \((x_{\epsilon})\)'s to their limit, see for instance Premoselli [18]. The restriction to dimensions \(n\geq 4\) is not surprising: indeed, by Corollary 6.4 in Druet-Hebey [8], (7) does not hold in general for \(n=3\). It has been known since Aubin and Schoen that for \(n=3\), blow-up cannot be characterized by local arguments and involves global quantities, such as the mass. In the general local context of Theorem 1, no information is known regarding the boundary, which prevents us from getting any global information. ### Application to supercritical problems with symmetries A natural application of Theorem 1 is in the context of manifolds invariant under a group of isometries. We consider a compact Riemannian manifold \((X,g)\) of dimension \(n\geq 3\), but without boundary (\(\partial X=\emptyset\)). The critical exponent \(2^{\star}\) can be improved by imposing invariance under the action of an isometry group. Let \(G\) be a compact subgroup of isometries of \((X,g)\): we say that a function \(u:X\to\mathbb{R}\) is \(G-\)invariant if \(u\circ\sigma=u\) for all \(\sigma\in G\). It follows from Hebey-Vaugon [13] that the critical exponent in this setting is \(2^{\star}(k):=\frac{2(n-k)}{n-k-2}\), where \(k:=\min_{x\in X}\dim\,Gx\) is assumed to satisfy \(1\leq k<n-2\). We refer to Hebey-Vaugon [13], Saintier [20] and Faget [10] for extensive considerations on problems invariant under isometries. In general, the quotient \(X/G\) is not a manifold of dimension \(n-k\). Following Saintier [20], we make the following assumption on \(G\): **Assumption (H):**_For any \(x_{0}\in X\) such that the orbit \(Gx_{0}\) is of dimension \(k=\min\limits_{x\in X}\dim\,Gx\geq 1\) and of volume \(V_{m}=\min_{x\in X}\{\text{Vol}_{g}(Gx)/\text{ dim }Gx=k\}\), there exist \(\delta>0\) and a closed subgroup \(G^{\prime}\) of \(Isom_{g}(X)\) such that:_ 1. \(G^{\prime}x_{0}=Gx_{0}\)_;_ 2. _For all_ \(x\in B_{\delta}(Gx_{0}):=\{y\in X/d_{g}(y;Gx_{0})<\delta\}\)_, then_ \(G^{\prime}x\) _is principal and_ \(G^{\prime}x\subset Gx\)_._ _In particular \(\dim\,G^{\prime}x=\dim\,Gx_{0}=k\), \(\forall x\in B_{\delta}(Gx_{0})\)._ This assumption ensures that \(B_{\delta}(Gx_{0})/G^{\prime}\) is a Riemannian manifold of dimension \(m:=n-k\) with a nontrivial boundary. In the sequel, for any \(p\in\mathbb{N}\), we define \(C^{p}_{G}(X)\) as the space of \(G-\)invariant functions of \(C^{p}(X)\). We prove the following in the spirit of Faget [11].
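Before stating the result, we record the elementary observation that the invariant exponent is indeed supercritical: since \(t\mapsto\frac{2t}{t-2}\) is decreasing for \(t>2\) and \(2<n-k<n\), we have \[2^{\star}(k)=\frac{2(n-k)}{n-k-2}>\frac{2n}{n-2}=2^{\star}.\] For instance, \(n=5\) and \(k=1\) give \(2^{\star}=\frac{10}{3}\) while \(2^{\star}(1)=4\).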
**Theorem 2**.: _Let \((X,g)\) be a compact Riemannian manifold of dimension \(n\) without boundary, and let \(G\) be a compact subgroup of isometries of \(X\) which satisfies Assumption \((H)\) and such that \(1\leq k<n-2\). Let \((h_{\epsilon})_{\epsilon}\in C^{1}_{G}(X)\) and \(h\in C^{1}_{G}(X)\) be such that \(\Delta_{g}+h\) is coercive and_ \[\lim\limits_{\epsilon\to 0}h_{\epsilon}=h>0\text{ in }C^{1}_{G}(X). \tag{9}\] _Let \((u_{\epsilon})_{\epsilon}\in C^{2}_{G}(X)\) be a family of solutions to_ \[\Delta_{g}u_{\epsilon}+h_{\epsilon}u_{\epsilon}=\lambda_{\epsilon}u_{\epsilon}^{2^{*}(k)-1}\,;\,u_{\epsilon}>0\text{ in }X,\;\int_{X}u_{\epsilon}^{2^{*}(k)}\,dv_{g}=1 \tag{10}\] _We assume that_ * \(u_{\epsilon}\to 0\) _strongly in_ \(L^{2}(X)\)_,_ * _The energy is of minimal type, that is_ (11) \[\lim\limits_{\epsilon\to 0}\lambda_{\epsilon}=\frac{V_{m}^{1-\frac{2}{2^{*}(k)}}}{K_{0}(n-k)},\text{ where }V_{m}:=\min\limits_{x\in X}\{\text{Vol}_{g}(Gx)/\text{ dim }Gx=k\}.\] * _For all points_ \(z_{0}\in X\) _such that_ \(\dim\,Gz_{0}=k\) _and_ \(\text{Vol}_{g}(Gz_{0})=V_{m}\)_, the function_ \[\left\{\begin{array}{ccc}\bar{v}:&B_{\delta}(Gz_{0})/G^{\prime}&\to&\mathbb{R}\\ &G^{\prime}x&\to&\text{Vol}_{g}(G^{\prime}x)\end{array}\right\}\text{ is nondegenerate at }Gz_{0}.\] _This last assumption makes sense due to Assumption (H). Let \((x_{\epsilon})_{\epsilon}\in X\) be such that \(u_{\epsilon}(x_{\epsilon})=\max_{X}u_{\epsilon}\) and define \(\mu_{\epsilon}^{-\frac{n-k-2}{2}}=u_{\epsilon}(x_{\epsilon})\). Then there exists \(x_{0}\in X\) with \(\dim\,Gx_{0}=k\) and \(\text{Vol}_{g}(Gx_{0})=V_{m}\) such that \(\lim_{\epsilon\to 0}x_{\epsilon}=x_{0}\) and_ \[d_{g}(x_{\epsilon},Gx_{0})=o(\mu_{\epsilon}). \tag{12}\] _Moreover, there exists \(C>0\) such that_ \[u_{\epsilon}(x)\leq C\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+d_{g}(x,Gx_{0})^{2}}\right)^{\frac{n-2}{2}}+\left\{\begin{array}{cc}o(\mu_{\epsilon})&\text{ if }n-k\geq 5\\ o\left(\mu_{\epsilon}\sqrt{\ln\frac{1}{\mu_{\epsilon}}}\right)&\text{ if }n-k=4\end{array}\right. \tag{13}\] _and_ \[h(x_{0})=\frac{n-k-2}{4(n-k-1)}\left(\text{Scal}_{\bar{g}}(\bar{x}_{0})+3\frac{\Delta_{\bar{g}}\bar{v}(\bar{x}_{0})}{\bar{v}(\bar{x}_{0})}\right)\text{ \ when }n-k\geq 4, \tag{14}\] _where \(\bar{g}\) is the metric on \(B_{\delta}(Gx_{0})/G^{\prime}\) such that the canonical projection \((B_{\delta}(Gx_{0}),g)\to(B_{\delta}(Gx_{0})/G^{\prime},\bar{g})\) is a Riemannian submersion._ ## 2. Pointwise control We consider \((u_{\epsilon})\in C^{2}(\overline{M})\), \((h_{\epsilon})\in C^{1}(\overline{M})\), \(h\in C^{1}(\overline{M})\), \(f\in C^{2}(\overline{M})\), \((x_{\epsilon})_{\epsilon}\in M\) and \((\mu_{\epsilon})_{\epsilon}\in(0,+\infty)\) as in the statement of Theorem 1. In the sequel, we let \(i_{g}(M,x)>0\) be the injectivity radius of \((M,g)\) at an interior point \(x\in M\). **Claim 1**.: _Set \(\delta\in(0,i_{g}(M,x_{0}))\) and define_ \[w_{\epsilon}(X):=\mu_{\epsilon}^{\frac{n-2}{2}}u_{\epsilon}(\exp_{x_{\epsilon}}(\mu_{\epsilon}X))\text{ for any }X\in B_{\frac{\delta}{\mu_{\epsilon}}}(0)\subset\mathbb{R}^{n}.\] _Then_ \[\lim_{\epsilon\to 0}w_{\epsilon}(X)=w(X)=\left(\frac{1}{1+\frac{f(x_{0})|X|^{2}}{n(n-2)}}\right)^{\frac{n-2}{2}}\text{ for all }X\in\mathbb{R}^{n}. \tag{15}\] _Moreover, the convergence holds in \(C^{2}_{loc}(\mathbb{R}^{n})\).
In addition,_ \[\lim_{R\to+\infty}\lim_{\epsilon\to 0}\int_{B_{x_{\epsilon}}(R\mu_{\epsilon})}fu_{\epsilon}^{2^{*}}\ dv_{g}=\frac{1}{K_{0}(n)^{\frac{n}{2}}f(x_{0})^{\frac{n-2}{2}}}.\] _In particular_ \[\lim_{R\to+\infty}\lim_{\epsilon\to 0}\int_{M\setminus B_{x_{\epsilon}}(R\mu_{\epsilon})}fu_{\epsilon}^{2^{*}}\ dv_{g}=0. \tag{16}\] **Proof of Claim 1:** We define the metric \(g_{\epsilon}:=\exp_{x_{\epsilon}}^{*}g(\mu_{\epsilon}\cdot)\) in \(B_{\frac{\delta}{\mu_{\epsilon}}}(0)\subset\mathbb{R}^{n}\). Since \(\mu_{\epsilon}\to 0\) as \(\epsilon\to 0\), we have \(g_{\epsilon}\to\xi\) in \(C^{2}_{loc}(\mathbb{R}^{n})\) as \(\epsilon\to 0\), where \(\xi\) is the Euclidean metric. The function \(w_{\epsilon}\) satisfies the equation \[\Delta_{g_{\epsilon}}w_{\epsilon}+\mu_{\epsilon}^{2}\tilde{h}_{\epsilon}w_{\epsilon}=\tilde{f}_{\epsilon}w_{\epsilon}^{2^{*}-1}\text{ in }B_{\frac{\delta}{\mu_{\epsilon}}}(0) \tag{17}\] where \(\tilde{h}_{\epsilon}(X)=h_{\epsilon}\left(\exp_{x_{\epsilon}}(\mu_{\epsilon}X)\right)\) and \(\tilde{f}_{\epsilon}(X)=f(\exp_{x_{\epsilon}}(\mu_{\epsilon}X))\) for all \(X\in B_{\frac{\delta}{\mu_{\epsilon}}}(0)\). Since \(0<w_{\epsilon}\leq w_{\epsilon}(0)=1\), there exists \(w\in C^{2}\left(\mathbb{R}^{n}\right)\) such that \(w_{\epsilon}\to w\) in \(C^{2}_{loc}(\mathbb{R}^{n})\) as \(\epsilon\to 0\) up to extraction. Passing to the limit in (17), we get that \[\Delta_{\xi}w=f(x_{0})w^{2^{*}-1}\text{ in }\mathbb{R}^{n},\,0\leq w\leq w(0)=1. \tag{18}\] It follows from Caffarelli-Gidas-Spruck [3] that \(w(X)=\left(1+\frac{f(x_{0})|X|^{2}}{n(n-2)}\right)^{-\frac{n-2}{2}}\) for all \(X\in\mathbb{R}^{n}\). The change of variable \(x=\exp_{x_{\epsilon}}(\mu_{\epsilon}X)\) yields \[\int_{B_{x_{\epsilon}}(R\mu_{\epsilon})}fu_{\epsilon}^{2^{*}}\ dv_{g}=\int_{B_{R}(0)}f(\exp_{x_{\epsilon}}(\mu_{\epsilon}X))w_{\epsilon}^{2^{*}}\mathrm{d}v_{g_{\epsilon}}.\] Therefore, \[\lim_{R\to+\infty}\lim_{\epsilon\to 0}\int_{B_{x_{\epsilon}}(R\mu_{\epsilon})}fu_{\epsilon}^{2^{*}}\ dv_{g} = \lim_{R\to+\infty}\lim_{\epsilon\to 0}\int_{B_{R}(0)}f(\exp_{x_{\epsilon}}(\mu_{\epsilon}X))w_{\epsilon}^{2^{*}}\mathrm{d}v_{g_{\epsilon}}\] \[= f(x_{0})\int_{\mathbb{R}^{n}}w^{2^{*}}\ dX=\frac{1}{K_{0}(n)^{\frac{n}{2}}f(x_{0})^{\frac{n-2}{2}}},\] where we have used that \(w\) is a solution to (18) and is an extremal for the Sobolev inequality (3). This proves Claim 1. **Claim 2**.: \(u_{\epsilon}\to 0\) _in \(C^{0}_{loc}(M\setminus\{x_{0}\})\)._ _Proof of the claim:_ It follows from (16) and \(f>0\) that for all \(\delta>0\), we have that \[\lim_{\epsilon\to 0}\int_{M\setminus B_{x_{0}}(\delta)}u_{\epsilon}^{2^{*}}\ dv_{g}=0.\] Let us fix \(\omega\subset M\) such that \(\overline{\omega}\subset M\setminus\{x_{0}\}\). We let \(\omega^{\prime}\) be an open set such that \(\bar{\omega}\subset\omega^{\prime}\) and \(\overline{\omega^{\prime}}\subset M\setminus\{x_{0}\}\). Let \(\eta\in C^{\infty}_{c}(\omega^{\prime})\) be such that \(\eta(x)=1\) for all \(x\in\omega\). Let us take \(l>1\) to be fixed later.
Integrating by parts as in Druet-Hebey ([8], Theorem 6.1), we get that \[\int_{M}\eta^{2}u_{\epsilon}^{l}\Delta_{g}u_{\epsilon}\,dv_{g} = \int_{M}\nabla(\eta^{2}u_{\epsilon}^{l})\nabla u_{\epsilon}\,dv_{g}=\int_{M}l\eta^{2}u_{\epsilon}^{l-1}|\nabla u_{\epsilon}|^{2}\,dv_{g}+\int_{M}\nabla\eta^{2}\nabla\frac{u_{\epsilon}^{l+1}}{l+1}\,dv_{g}\] \[= \frac{4l}{(l+1)^{2}}\int_{M}\eta^{2}|\nabla u_{\epsilon}^{\frac{l+1}{2}}|_{g}^{2}\,dv_{g}+\int_{M}\frac{\Delta_{g}\eta^{2}}{l+1}u_{\epsilon}^{l+1}\,dv_{g}\] Independently, for any \(v\in C^{1}(M)\), integrating also by parts, we get that \[\int_{M}(|\nabla(\eta v)|_{g}^{2}-\eta^{2}|\nabla v|_{g}^{2})\,dv_{g}=\int_{M}\eta v^{2}\Delta_{g}\eta\,dv_{g}.\] Plugging these integrals together yields \[\int_{M}|\nabla(\,\eta u_{\epsilon}^{\frac{l+1}{2}})|_{g}^{2}\ dv_{g}=\frac{(l+1)^{2}}{4l}\int_{M}\eta^{2}u_{\epsilon}^{l}\Delta_{g}u_{\epsilon}\,dv_{g}+\frac{l+1}{2l}\int_{M}\left(|\nabla\eta|_{g}^{2}+\frac{l-1}{l+1}\eta\Delta_{g}\eta\right)u_{\epsilon}^{l+1}\,dv_{g}\] Using equation (5) and Hölder's inequality, we then get that \[\int_{M}|\nabla(\,\eta u_{\epsilon}^{\frac{l+1}{2}})|_{g}^{2}\ dv_{g}\] \[=\frac{(l+1)^{2}}{4l}\int_{M}\eta^{2}fu_{\epsilon}^{l+2^{*}-1}\,dv_{g}-\frac{(l+1)^{2}}{4l}\int_{M}\eta^{2}h_{\epsilon}u_{\epsilon}^{l+1}\,dv_{g}\] \[+\frac{l+1}{2l}\int_{M}\left(|\nabla\eta|_{g}^{2}+\frac{l-1}{l+1}\eta\Delta_{g}\eta\right)u_{\epsilon}^{l+1}\,dv_{g}\] \[\leq\frac{(l+1)^{2}}{4l}\|f\|_{\infty}\left(\int_{M}\left(\eta u_{\epsilon}^{\frac{l+1}{2}}\right)^{2^{*}}\,dv_{g}\right)^{\frac{2}{2^{*}}}\left(\int_{\omega^{\prime}}u_{\epsilon}^{2^{*}}\,dv_{g}\right)^{1-\frac{2}{2^{*}}}+C\int_{\omega^{\prime}}u_{\epsilon}^{l+1}\,dv_{g}\] It follows from the Sobolev inequality that there exist \(C(\omega^{\prime}),B>0\) independent of \(\epsilon\) such that \[\left(\int_{\omega^{\prime}}\left(\eta u_{\epsilon}^{\frac{l+1}{2}}\right)^{2^{*}}\,dv_{g}\right)^{\frac{2}{2^{*}}}\leq C(\omega^{\prime})\left(\int_{\omega^{\prime}}|\nabla(\,\eta u_{\epsilon}^{\frac{l+1}{2}}\,)|_{g}^{2}\ dv_{g}+B\int_{\omega^{\prime}}\eta^{2}u_{\epsilon}^{l+1}\,dv_{g}\right)\] Combining these inequalities and using that \(\lim_{\epsilon\to 0}\int_{\omega^{\prime}}u_{\epsilon}^{2^{*}}\,dv_{g}=0\), so that the critical term can be absorbed into the left-hand side, we get \[\left(\int_{\omega}u_{\epsilon}^{\frac{(l+1)2^{*}}{2}}\,dv_{g}\right)^{\frac{2}{2^{*}}}\leq\left(\int_{\omega^{\prime}}\left(\eta u_{\epsilon}^{\frac{l+1}{2}}\right)^{2^{*}}\,dv_{g}\right)^{\frac{2}{2^{*}}}\leq C\int_{\omega^{\prime}}u_{\epsilon}^{l+1}\,dv_{g}\] for \(\epsilon>0\) small enough, where \(C\) is independent of \(\epsilon\). Taking \(1<l<2^{*}-1\), we then get that \(u_{\epsilon}\to 0\) in \(L^{q}(\omega)\) for some \(q>2^{*}\). Since \(u_{\epsilon}\) satisfies (5), it is classical that \(u_{\epsilon}\to 0\) in \(C^{0}_{loc}(\omega)\). This proves Claim 2. **Claim 3**.: _For all \(\omega\subset M\) such that \(\overline{\omega}\subset M\), there exists \(C(\omega)\) such that_ \[d_{g}(x,x_{\epsilon})^{\frac{n-2}{2}}u_{\epsilon}(x)\leq C(\omega)\text{ for all }\epsilon>0\text{ and }x\in\omega.
\tag{19}\] _Moreover,_ \[\lim_{R\to+\infty}\lim_{\epsilon\to 0}\sup_{x\in\omega\setminus B_{x_{\epsilon}}(R\mu_{\epsilon})}d_{g}(x,x_{\epsilon})^{\frac{n-2}{2}}|u_{\epsilon}(x)|=0 \tag{20}\] _Proof of the claim:_ We argue by contradiction and we let \((y_{\epsilon})_{\epsilon}\in\overline{\omega}\) be such that \[d_{g}(y_{\epsilon},x_{\epsilon})^{\frac{n-2}{2}}u_{\epsilon}(y_{\epsilon})=\sup_{x\in\overline{\omega}}d_{g}(x,x_{\epsilon})^{\frac{n-2}{2}}u_{\epsilon}(x)\to+\infty\text{ as }\epsilon\to 0.\] It follows from Claim 2 that \(\lim_{\epsilon\to 0}y_{\epsilon}=x_{0}\). Arguing as in Step 2 of Chapter 4 in Druet-Hebey-Robert [7] and using (16), we get (19). The second estimate (20) also follows from [7]. We now state and prove the main result of this section: **Proposition 1**.: _Let \(\delta>0\) be such that \(B_{2\delta}(x_{0})\subset M\). Under the assumptions of Theorem 1, we have that_ \[u_{\epsilon}(y_{\epsilon})=\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+\frac{f(x_{0})}{n(n-2)}d_{g}(x_{\epsilon},y_{\epsilon})^{2}}\right)^{\frac{n-2}{2}}(1+o(1))+O(\theta_{\epsilon})\text{ when }\lim_{\epsilon\to 0}y_{\epsilon}=x_{0}. \tag{21}\] _Moreover, there exists \(C(\delta)>0\) independent of \(\epsilon\) such that_ \[u_{\epsilon}(x) \leq C\frac{\mu_{\epsilon}^{\frac{n-2}{2}}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{n-2}}+C\theta_{\epsilon} \tag{22}\] \[|\nabla u_{\epsilon}|(x) \leq C\frac{\mu_{\epsilon}^{\frac{n-2}{2}}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{n-1}}+C\theta_{\epsilon} \tag{23}\] _for all \(x\in B_{\delta}(x_{0})\), where_ \[\theta_{\epsilon}:=\sup_{x\in\partial B_{\delta}(x_{0})}u_{\epsilon}(x)\to 0\text{ as }\epsilon\to 0. \tag{24}\] _Proof of Proposition 1:_ We let \(\nu\in(0,1)\) to be fixed later. We let \(\alpha_{0}>0\) be such that \(\Delta_{g}+\frac{h-\alpha_{0}}{1-\nu}\) is coercive on \(B_{2\delta}(x_{0})\), where \(h\) is as in (4): up to taking \(\delta>0\) small, this is always possible. We let \(\tilde{G}_{\nu}\) be the Green's function of \(\Delta_{g}+\frac{h-\alpha_{0}}{1-\nu}\) on \(B_{2\delta}(x_{0})\) with Dirichlet boundary condition. It follows from Robert [19] that there exist \(c_{1},c_{2}>0\) such that \[c_{1}d_{g}(x,y)^{2-n}\leq\tilde{G}_{\nu}(x,y)\leq c_{2}d_{g}(x,y)^{2-n}\text{ for all }x,y\in B_{\delta}(x_{0}),\,x\neq y. \tag{25}\] We define the operator \[u\mapsto L_{\epsilon}u:=\Delta_{g}u+h_{\epsilon}u-fu_{\epsilon}^{2^{*}-2}u,\] so that (5) reads \(L_{\epsilon}u_{\epsilon}=0\). A straightforward computation yields \[\frac{L_{\epsilon}\tilde{G}_{\nu}^{1-\nu}}{\tilde{G}_{\nu}^{1-\nu}}(x,x_{\epsilon}) = \alpha_{0}+h_{\epsilon}(x)-h(x)+\nu(1-\nu)\left|\frac{\nabla\tilde{G}_{\nu}}{\tilde{G}_{\nu}}\right|_{g}^{2}-fu_{\epsilon}^{2^{*}-2} \tag{26}\] By standard properties of Green's functions [19], there exist \(c_{1},\rho>0\) such that \[\frac{|\nabla_{g,x}\tilde{G}_{\nu}|_{g}}{\tilde{G}_{\nu}}(x,x_{\epsilon})\geq\frac{c_{1}}{d_{g}(x,x_{\epsilon})}\text{ for all }x\in B_{\rho}(x_{\epsilon})\setminus\{x_{\epsilon}\}. \tag{27}\] Since \(u_{\epsilon}\to 0\) in \(C^{0}_{loc}(M\setminus\{x_{0}\})\) and \(h_{\epsilon}\to h\) in \(C^{0}_{loc}(M\setminus\{x_{0}\})\), (26) yields \[L_{\epsilon}\tilde{G}^{1-\nu}_{\nu}\geq 0\text{ in }B_{2\delta}(x_{0})\setminus B_{\rho}(x_{0})\] Let \(R>0\) to be fixed later. It follows from (20) that \[d_{g}(x,x_{\epsilon})^{2}u_{\epsilon}^{2^{*}-2}(x)\leq\eta(R)\text{ for all }x\in B(x_{\epsilon},\rho)\setminus B(x_{\epsilon},R\mu_{\epsilon}),\] where \(\lim_{R\to+\infty}\eta(R)=0\).
Now, using \(h_{\epsilon}\to h\) in \(C^{0}(M)\), (26) and (27), for any \(x\in B(x_{\epsilon},\rho)\setminus B(x_{\epsilon},R\mu_{\epsilon})\), we get that \[\frac{L_{\epsilon}\tilde{G}^{1-\nu}_{\nu}}{\tilde{G}^{1-\nu}_{\nu}}(x,x_{\epsilon}) \geq\frac{\alpha_{0}}{2}+\nu(1-\nu)\frac{c_{1}^{2}}{d_{g}(x,x_{\epsilon})^{2}}-fu_{\epsilon}^{2^{*}-2}(x)\] \[\geq\frac{\alpha_{0}}{2}+\frac{\nu(1-\nu)c_{1}^{2}-\|f\|_{\infty}\eta(R)}{d_{g}(x,x_{\epsilon})^{2}}\geq 0\] for \(R>0\) large enough. Therefore, we get that \[L_{\epsilon}\tilde{G}^{1-\nu}_{\nu}(x,x_{\epsilon})\geq 0\text{ for all }x\in B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon}). \tag{28}\] We fix \(\nu_{1}\in(0,1)\). It follows from (25) and \(\|u_{\epsilon}\|_{\infty}=\mu_{\epsilon}^{1-n/2}\) that there exists \(c_{3}>0\) such that \[u_{\epsilon}(x)\leq c_{3}\mu_{\epsilon}^{\frac{n-2}{2}-\nu_{1}(n-2)}\tilde{G}^{1-\nu_{1}}_{\nu_{1}}(x,x_{\epsilon})\text{ for all }x\in\partial B_{R\mu_{\epsilon}}(x_{\epsilon}). \tag{29}\] Recall from (24) that \(\theta_{\epsilon}:=\sup_{x\in\partial B_{\delta}(x_{0})}u_{\epsilon}(x)\); it follows from Claim 2 that \(\lim_{\epsilon\to 0}\theta_{\epsilon}=0\). We fix \(\nu_{2}\in(0,1)\) and we consider the Green's function \(\tilde{G}_{\nu_{2}}\). It follows from (25) that there exists \(c_{4}>0\) such that \[u_{\epsilon}(x)\leq c_{4}\theta_{\epsilon}\tilde{G}^{1-\nu_{2}}_{\nu_{2}}(x,x_{\epsilon})\text{ for all }x\in\partial B_{\delta}(x_{0}). \tag{30}\] We define \[H_{\epsilon}(x):=c_{3}\mu_{\epsilon}^{\frac{n-2}{2}-\nu_{1}(n-2)}\tilde{G}^{1-\nu_{1}}_{\nu_{1}}(x,x_{\epsilon})+c_{4}\theta_{\epsilon}\tilde{G}^{1-\nu_{2}}_{\nu_{2}}(x,x_{\epsilon})\text{ for }x\in B_{2\delta}(x_{\epsilon})\setminus\{x_{\epsilon}\}.\] It follows from (28), (29) and (30) that \[\left\{\begin{array}{cc}L_{\epsilon}u_{\epsilon}=0\leq L_{\epsilon}H_{\epsilon}&\text{ in }B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})\\ 0<u_{\epsilon}\leq H_{\epsilon}&\text{ on }\partial\left(B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})\right)\end{array}\right.\] Since \(L_{\epsilon}H_{\epsilon}\geq 0\) and \(H_{\epsilon}>0\) in \(\overline{B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})}\), it follows from [2] that \[u_{\epsilon}\leq H_{\epsilon}\text{ in }B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon}).\] Using the pointwise control (25) and that \(\|u_{\epsilon}\|_{\infty}=\mu_{\epsilon}^{1-n/2}\), we get that for all \(\nu_{1},\nu_{2}\in(0,1)\), there exists \(C_{\nu_{1},\nu_{2}}>0\) such that \[u_{\epsilon}(x)\leq C_{\nu_{1},\nu_{2}}\left(\frac{\mu_{\epsilon}^{\frac{n-2}{2}-\nu_{1}(n-2)}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{(n-2)(1-\nu_{1})}}+\theta_{\epsilon}d_{g}(x,x_{\epsilon})^{(2-n)(1-\nu_{2})}\right) \tag{31}\] for all \(x\in B_{\delta}(x_{0})\). Our next step is to prove (21). We let \((y_{\epsilon})_{\epsilon}\in M\) be such that \(\lim_{\epsilon\to 0}y_{\epsilon}=x_{0}\). We first assume that \(d_{g}(x_{\epsilon},y_{\epsilon})=O(\mu_{\epsilon})\) as \(\epsilon\to 0\). Then (21) is a direct consequence of (15). From now on, we assume that \[\lim_{\epsilon\to 0}d_{g}(x_{\epsilon},y_{\epsilon})=0\text{ and }\lim_{\epsilon\to 0}\frac{d_{g}(x_{\epsilon},y_{\epsilon})}{\mu_{\epsilon}}=+\infty.\] We let \(G_{\epsilon}\) be the Green's function for \(\Delta_{g}+h_{\epsilon}\) in \(B_{\delta}(x_{0})\) with Dirichlet boundary condition. Since \(y_{\epsilon}\to x_{0}\), we have \(y_{\epsilon}\in B_{\delta/2}(x_{0})\) for \(\epsilon\) small.
Green's representation formula yields \[u_{\epsilon}(y_{\epsilon})=\int_{B_{\delta}(x_{0})}G_{\epsilon}(y_{\epsilon},x)(\Delta_{g}u_{\epsilon}+h_{\epsilon}u_{\epsilon})(x)\,dv_{g}(x)-\int_{\partial B_{\delta}(x_{0})}\partial_{\vec{n}}G_{\epsilon}(y_{\epsilon},z)u_{\epsilon}(z)\,d\sigma_{g}(z). \tag{32}\] It follows from Robert [19] that there exists \(c_{5}>0\) such that \[d_{g}(x,y)^{n-2}|G_{\epsilon}(x,y)|+d_{g}(x,y)^{n-1}|\nabla_{x}G_{\epsilon}(x,y)|\leq c_{5}\text{ for all }x,y\in B_{\delta}(x_{0}),\,x\neq y \tag{33}\] for all \(\epsilon>0\). Combining these estimates with equation (5) and (24), we get that \[u_{\epsilon}(y_{\epsilon}) = \int_{B_{R\mu_{\epsilon}}(x_{\epsilon})}G_{\epsilon}(y_{\epsilon},x)f(x)u_{\epsilon}^{2^{*}-1}(x)\,dv_{g}(x)+A_{\epsilon}(R)+B_{\epsilon} \tag{34}\] where \[|A_{\epsilon}(R)|\leq C\int_{B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})}d_{g}(x,y_{\epsilon})^{2-n}u_{\epsilon}^{2^{*}-1}(x)\,dv_{g}(x)\] \[\text{and }|B_{\epsilon}|\leq C\int_{\partial B_{\delta}(x_{0})}d_{g}(z,y_{\epsilon})^{1-n}u_{\epsilon}(z)\,d\sigma_{g}(z)\leq C\theta_{\epsilon}.\] We deal with the first term of (34). With a change of variable and (15), we get that \[\int_{B_{R\mu_{\epsilon}}(x_{\epsilon})}G_{\epsilon}(y_{\epsilon},x)f(x)u_{\epsilon}^{2^{*}-1}(x)\,dv_{g}(x)\] \[=\mu_{\epsilon}^{\frac{n-2}{2}}\int_{B_{R}(0)}G_{\epsilon}(y_{\epsilon},\exp_{x_{\epsilon}}(\mu_{\epsilon}X))f(\exp_{x_{\epsilon}}(\mu_{\epsilon}X))w_{\epsilon}^{2^{*}-1}(X)\,dv_{g_{\epsilon}}(X)\] It follows from [19] that for any \((z_{\epsilon})_{\epsilon}\in M\) such that \(\lim_{\epsilon\to 0}d_{g}(z_{\epsilon},x_{\epsilon})=0\), we have that \[\lim_{\epsilon\to 0}d_{g}(x_{\epsilon},z_{\epsilon})^{n-2}G_{\epsilon}(x_{\epsilon},z_{\epsilon})=\frac{1}{(n-2)\omega_{n-1}}.\] Since \(\mu_{\epsilon}=o(d_{g}(x_{\epsilon},y_{\epsilon}))\), we then get that \[\int_{B_{R\mu_{\epsilon}}(x_{\epsilon})}G_{\epsilon}(y_{\epsilon},x)f(x)u_{\epsilon}^{2^{*}-1}(x)\,dv_{g}(x)\] \[=\frac{f(x_{0})\mu_{\epsilon}^{\frac{n-2}{2}}}{(n-2)\omega_{n-1}d_{g}(x_{\epsilon},y_{\epsilon})^{n-2}}\left(\int_{B_{R}(0)}w^{2^{*}-1}(X)\,dX+o(1)\right) \tag{35}\] \[=\frac{f(x_{0})\mu_{\epsilon}^{\frac{n-2}{2}}}{(n-2)\omega_{n-1}d_{g}(x_{\epsilon},y_{\epsilon})^{n-2}}\left(\int_{\mathbb{R}^{n}}w^{2^{*}-1}(X)\,dX+o(1)+\eta(R)\right)\] where \(\lim_{R\rightarrow+\infty}\eta(R)=0\). With (18) and (15), we get that \[f(x_{0})\int_{\mathbb{R}^{n}}w^{2^{*}-1}(X)\,dX = \lim_{R\rightarrow+\infty}\int_{B(0,R)}\Delta w\,dX=\lim_{R\rightarrow+\infty}\int_{\partial B(0,R)}(-\partial_{\nu}w)\,d\sigma \tag{36}\] \[= \left(\frac{n(n-2)}{f(x_{0})}\right)^{\frac{n-2}{2}}(n-2)\omega_{n-1}.\] We now deal with \(A_{\epsilon}(R)\). Using the pointwise control (31), we get that \[|A_{\epsilon}(R)| \leq C_{\nu_{1},\nu_{2}}\int_{B_{\delta}(x_{0})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})}d_{g}(x,y_{\epsilon})^{2-n}\frac{\mu_{\epsilon}^{\frac{n+2}{2}-\nu_{1}(n+2)}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{(n+2)(1-\nu_{1})}}\,dv_{g}(x)\] \[+C_{\nu_{1},\nu_{2}}\theta_{\epsilon}^{2^{\star}-1}\int_{B_{\delta}(x_{0})}d_{g}(x,y_{\epsilon})^{2-n}d_{g}(x,x_{\epsilon})^{-(n+2)(1-\nu_{2})}\,dv_{g}(x)+C\theta_{\epsilon}.
\tag{37}\] It follows from Giraud's lemma (see Appendix A of [7] for instance) that for \(\alpha,\beta\in(0,n)\) such that \(\alpha+\beta>n\), there exists \(C>0\) such that \[\int_{B_{\delta}(x_{0})}d_{g}(y,x)^{\alpha-n}d_{g}(x,z)^{\beta-n}\,dv_{g}(x)\leq C\text{ for all }y,z\in B_{\delta}(x_{0}).\] Taking \(1-\nu_{2}>0\) close to \(0\), we then get that \[\theta_{\epsilon}^{2^{\star}-1}\int_{B_{\delta}(x_{0})}d_{g}(x,y_{\epsilon})^{2-n}d_{g}(x,x_{\epsilon})^{-(n+2)(1-\nu_{2})}\,dv_{g}(x)\leq C\theta_{\epsilon}^{2^{\star}-1}\leq C\theta_{\epsilon}. \tag{38}\] We now deal with the remaining term of (37). We split the domain \(B_{\delta}(x_{0})=D_{\epsilon}^{1}\cup D_{\epsilon}^{2}\) where \[D_{\epsilon}^{1}:=\{x\in B_{\delta}(x_{0})\text{ s.t. }d_{g}(x,y_{\epsilon})\geq d_{g}(x_{\epsilon},y_{\epsilon})/2\}\] \[\text{ and }D_{\epsilon}^{2}:=\{x\in B_{\delta}(x_{0})\text{ s.t. }d_{g}(x,y_{\epsilon})<d_{g}(x_{\epsilon},y_{\epsilon})/2\}.\] We fix \(R>0\). With the change of variable \(x:=\exp_{x_{\epsilon}}(\mu_{\epsilon}X)\), we get that \[\int_{D_{\epsilon}^{1}\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})}d_{g}(x,y_{\epsilon})^{2-n}\frac{\mu_{\epsilon}^{\frac{n+2}{2}-\nu_{1}(n+2)}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{(n+2)(1-\nu_{1})}}\,dv_{g}(x)\] \[\leq Cd_{g}(x_{\epsilon},y_{\epsilon})^{2-n}\int_{B_{2\delta}(x_{\epsilon})\setminus B_{R\mu_{\epsilon}}(x_{\epsilon})}\frac{\mu_{\epsilon}^{\frac{n+2}{2}-\nu_{1}(n+2)}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{(n+2)(1-\nu_{1})}}\,dv_{g}(x)\] \[\leq Cd_{g}(x_{\epsilon},y_{\epsilon})^{2-n}\mu_{\epsilon}^{\frac{n-2}{2}}\int_{\mathbb{R}^{n}\setminus B_{R}(0)}\frac{1}{(1+|X|)^{(n+2)(1-\nu_{1})}}\,dX\] \[\leq\eta(R)d_{g}(x_{\epsilon},y_{\epsilon})^{2-n}\mu_{\epsilon}^{\frac{n-2}{2}}\text{ where }\lim_{R\to+\infty}\eta(R)=0 \tag{39}\] when \(\nu_{1}<2/(n+2)\). Concerning the other integral, note that for all \(x\in D_{\epsilon}^{2}\), we have that \(d_{g}(x,x_{\epsilon})\geq d_{g}(x_{\epsilon},y_{\epsilon})/2\). Therefore \[\int_{D_{\epsilon}^{2}}d_{g}(x,y_{\epsilon})^{2-n}\frac{\mu_{\epsilon}^{\frac{n+2}{2}-\nu_{1}(n+2)}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{(n+2)(1-\nu_{1})}}\,dv_{g}(x)\] \[\leq C\frac{\mu_{\epsilon}^{\frac{n+2}{2}-\nu_{1}(n+2)}}{d_{g}(y_{\epsilon},x_{\epsilon})^{(n+2)(1-\nu_{1})}}\int_{d_{g}(x,y_{\epsilon})<d_{g}(x_{\epsilon},y_{\epsilon})/2}d_{g}(x,y_{\epsilon})^{2-n}\,dv_{g}(x)\] \[\leq C\frac{\mu_{\epsilon}^{\frac{n+2}{2}-\nu_{1}(n+2)}}{d_{g}(y_{\epsilon},x_{\epsilon})^{(n+2)(1-\nu_{1})}}d_{g}(x_{\epsilon},y_{\epsilon})^{2}\] \[\leq C\frac{\mu_{\epsilon}^{\frac{n-2}{2}}}{d_{g}(x_{\epsilon},y_{\epsilon})^{n-2}}\left(\frac{\mu_{\epsilon}}{d_{g}(y_{\epsilon},x_{\epsilon})}\right)^{2-\nu_{1}(n+2)}=o\left(\frac{\mu_{\epsilon}^{\frac{n-2}{2}}}{d_{g}(x_{\epsilon},y_{\epsilon})^{n-2}}\right) \tag{40}\] when \(\nu_{1}<2/(n+2)\). Putting (35), (36), (37), (38), (39) and (40) together yields \[u_{\epsilon}(y_{\epsilon})=\left(\frac{n(n-2)}{f(x_{0})}\right)^{\frac{n-2}{2}}\frac{\mu_{\epsilon}^{\frac{n-2}{2}}}{d_{g}(x_{\epsilon},y_{\epsilon})^{n-2}}(1+o(1))+O(\theta_{\epsilon})\] This yields (21) since \(d_{g}(x_{\epsilon},y_{\epsilon})/\mu_{\epsilon}\to+\infty\) as \(\epsilon\to 0\). When \(d_{g}(x_{\epsilon},y_{\epsilon})=o(1)\), (22) is a direct consequence of (21).
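For completeness, here is the elementary algebra behind the last step: when \(\mu_{\epsilon}=o(d_{g}(x_{\epsilon},y_{\epsilon}))\), factoring the denominator yields \[\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+\frac{f(x_{0})}{n(n-2)}d_{g}(x_{\epsilon},y_{\epsilon})^{2}}\right)^{\frac{n-2}{2}}=\left(\frac{n(n-2)}{f(x_{0})}\right)^{\frac{n-2}{2}}\frac{\mu_{\epsilon}^{\frac{n-2}{2}}}{d_{g}(x_{\epsilon},y_{\epsilon})^{n-2}}\left(1+\frac{n(n-2)\mu_{\epsilon}^{2}}{f(x_{0})d_{g}(x_{\epsilon},y_{\epsilon})^{2}}\right)^{-\frac{n-2}{2}},\] and the last factor is \(1+o(1)\) precisely because \(d_{g}(x_{\epsilon},y_{\epsilon})/\mu_{\epsilon}\to+\infty\).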
Since \(u_{\epsilon}\to 0\) in \(C^{0}_{loc}(M\setminus\{x_{0}\})\) and \(u_{\epsilon}>0\), it follows from Harnack's inequality that there exists \(c(\tau)>0\) such that \(u_{\epsilon}(x)\leq c(\tau)u_{\epsilon}(y)\) for all \(x,y\in B_{2\delta}(x_{0})\setminus B_{\tau}(x_{0})\). Therefore, if \((y_{\epsilon})_{\epsilon}\in B_{2\delta}(x_{0})\) is such that \(y_{\epsilon}\not\to x_{0}\), we have that \(u_{\epsilon}(y_{\epsilon})=O(\theta_{\epsilon})\). This proves (22) when \(x\) stays away from \(x_{0}\). Hence (21) holds in all cases, which yields (22). Concerning the gradient estimate, we differentiate Green's representation formula (32) to obtain \[\nabla u_{\epsilon}(y_{\epsilon})=\int_{B_{\delta}(x_{0})}\nabla_{y}G_{\epsilon}(y_{\epsilon},x)(\Delta_{g}u_{\epsilon}+h_{\epsilon}u_{\epsilon})(x)\,dv_{g}(x)-\int_{\partial B_{\delta}(x_{0})}\partial_{\vec{n}}\nabla_{y}G_{\epsilon}(y_{\epsilon},z)u_{\epsilon}(z)\,d\sigma_{g}(z).\] Using the pointwise control (33), we then get that \[|\nabla u_{\epsilon}(y_{\epsilon})|\leq C\int_{B_{\delta}(x_{0})}d_{g}(y_{\epsilon},x)^{1-n}u_{\epsilon}^{2^{*}-1}(x)\,dv_{g}(x)+C\int_{\partial B_{\delta}(x_{0})}d_{g}(y_{\epsilon},z)^{-n}u_{\epsilon}(z)\,d\sigma_{g}(z).\] We get (23) arguing as in the proof of (22). This proves Proposition 1. ## 3. Speed of convergence of \((x_{\epsilon})_{\epsilon}\) Let \(\Omega\subset\mathbb{R}^{n}\) be a smooth bounded domain. Let \(u\in C^{2}(\bar{\Omega})\), \(u>0\), and \(f\in C^{1}(\bar{\Omega})\) be functions and \(c\in\mathbb{R}\). Then for all \(z\in\mathbb{R}^{n}\), the Pohozaev identity writes \[\int_{\Omega}\left((x-z)^{i}\partial_{i}u+\frac{n-2}{2}u\right)\left(\Delta_{\xi}u-cfu^{2^{*}-1}\right)dx\] \[=\int_{\partial\Omega}\left[(x-z,\nu)\left(\frac{|\nabla u|_{\xi}^{2}}{2}-\frac{cfu^{2^{*}}}{2^{*}}\right)-\left((x-z)^{i}\partial_{i}u+\frac{n-2}{2}u\right)\partial_{\nu}u\right]\mathrm{d}\sigma\] \[+\frac{1}{2^{*}}\int_{\Omega}c\langle\nabla f(x),x-z\rangle_{\xi}u^{2^{*}}dx \tag{41}\] Differentiating with respect to \(z\), we get that for any \(j\in\{1,...,n\}\), \[-\int_{\Omega}\partial_{j}u\left(\Delta_{\xi}u-cfu^{2^{*}-1}\right)\mathrm{d}x\] \[=\int_{\partial\Omega}\left[-\nu_{j}\left(\frac{|\nabla u|_{\xi}^{2}}{2}-\frac{cfu^{2^{*}}}{2^{*}}\right)+\partial_{j}u\,\partial_{\nu}u\right]\mathrm{d}\sigma-\frac{c}{2^{*}}\int_{\Omega}\partial_{j}f(x)u^{2^{*}}\mathrm{d}x \tag{42}\] We refer to Ghoussoub-Robert [12] for a proof. We fix \(\delta\in(0,i_{g}(M,x_{0}))\). We define \[\hat{u}_{\epsilon}(X):=u_{\epsilon}\left(\exp_{x_{\epsilon}}(X)\right)\text{ for all }X\in B_{\delta}(0)\subset\mathbb{R}^{n}.\] Therefore, equation (5) rewrites \[\Delta_{\hat{g}_{\epsilon}}\hat{u}_{\epsilon}+\hat{h}_{\epsilon}\hat{u}_{\epsilon}=\hat{f}_{\epsilon}\hat{u}_{\epsilon}^{2^{*}-1}\text{ in }B_{\delta}(0),\] where \(\hat{h}_{\epsilon}(X):=h_{\epsilon}\left(\exp_{x_{\epsilon}}(X)\right)\) and \(\hat{f}_{\epsilon}(X):=f\left(\exp_{x_{\epsilon}}(X)\right)\) for all \(X\in B_{\delta}(0)\subset\mathbb{R}^{n}\), and \(\hat{g}_{\epsilon}:=\exp_{x_{\epsilon}}^{*}g\) is the pull-back of \(g\) via the exponential map. **Lemma 1**.: _Let \((\phi_{\epsilon})_{\epsilon}\in C^{0}(B_{\delta}(0))\) be such that_ \[\left\{\begin{array}{c}\lim_{\epsilon\to 0}\phi_{\epsilon}(0)=s\in\mathbb{R},\\ |\phi_{\epsilon}(X)-\phi_{\epsilon}(0)|\leq C|X|\mbox{ for all }X\in B_{\delta}(0)\mbox{ and }\epsilon>0.\end{array}\right.\] _We fix \(p\geq 0\) and \(q\geq 1\).
Then_ \[\int_{B_{\delta}(0)}\phi_{\epsilon}|X|^{p}\hat{u}_{\epsilon}^{q}\,dX\] \[=\left\{\begin{array}{cl}\mu_{\epsilon}^{n+p-\frac{q(n-2)}{2}}\left(s\int_{\mathbb{R}^{n}}|X|^{p}w^{q}\mathrm{d}X+o(1)\right)+O(\theta_{\epsilon}^{q})&\mbox{ if }(n-2)q>p+n,\\ \left(s\left(\frac{n(n-2)}{f(x_{0})}\right)^{q\frac{n-2}{2}}\omega_{n-1}+o(1)\right)\mu_{\epsilon}^{\frac{q(n-2)}{2}}\ln\left(\frac{1}{\mu_{\epsilon}}\right)+O(\theta_{\epsilon}^{q})&\mbox{ if }(n-2)q=p+n,\\ O(\mu_{\epsilon}^{q\frac{n-2}{2}})+O(\theta_{\epsilon}^{q})&\mbox{ if }(n-2)q<p+n\end{array}\right.\] _Moreover, for any family \((\delta_{\epsilon})_{\epsilon}\in(0,1)\) such that \(\lim_{\epsilon\to 0}\delta_{\epsilon}=\lim_{\epsilon\to 0}\frac{\mu_{\epsilon}}{\delta_{\epsilon}}=0\), we have that_ \[\int_{B_{\delta_{\epsilon}}(0)}\phi_{\epsilon}|X|^{p}\hat{u}_{\epsilon}^{q}\,dX=\mu_{\epsilon}^{n+p-\frac{q(n-2)}{2}}\left(s\int_{\mathbb{R}^{n}}|X|^{p}w^{q}\mathrm{d}X+o(1)\right)+O(\theta_{\epsilon}^{q})\mbox{ if }q>\frac{p+n}{n-2}. \tag{43}\] _Proof:_ We fix \(\nu>0\). It follows from (21) and (22) that there exists \(\alpha\in(0,\delta)\) such that \[\left|\hat{u}_{\epsilon}^{q}(X)-\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+\frac{f(x_{0})}{n(n-2)}|X|^{2}}\right)^{q\frac{n-2}{2}}\right|\leq\nu\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+\frac{f(x_{0})}{n(n-2)}|X|^{2}}\right)^{q\frac{n-2}{2}}+C\theta_{\epsilon}^{q}\] for all \(X\in B_{\alpha}(0)\). Note that for all \(\alpha\in(0,\delta)\), it follows from the Harnack inequality that \[\int_{B_{\delta}(0)\setminus B_{\alpha}(0)}\phi_{\epsilon}|X|^{p}\hat{u}_{\epsilon}^{q}\,dX=O(\theta_{\epsilon}^{q}).\] We then get that \[\left|\int_{B_{\delta}(0)}\phi_{\epsilon}|X|^{p}\hat{u}_{\epsilon}^{q}\,dX-\int_{B_{\alpha}(0)}\phi_{\epsilon}|X|^{p}\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+\frac{f(x_{0})}{n(n-2)}|X|^{2}}\right)^{q\frac{n-2}{2}}\,dX\right|\] \[\leq C\nu\int_{B_{\delta}(0)}|X|^{p}\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+|X|^{2}}\right)^{q\frac{n-2}{2}}\,dX+C\theta_{\epsilon}^{q}\] We then get Lemma 1 when \(q(n-2)<n+p\). With the change of variable \(X=\mu_{\epsilon}Y\), we get that \[\int_{B_{\alpha}(0)}\phi_{\epsilon}|X|^{p}\left(\frac{\mu_{\epsilon}}{\mu_{\epsilon}^{2}+\frac{f(x_{0})}{n(n-2)}|X|^{2}}\right)^{q\frac{n-2}{2}}dX\] \[=\mu_{\epsilon}^{n+p-q\frac{n-2}{2}}\int_{B_{\alpha/\mu_{\epsilon}}(0)}\phi_{\epsilon}(\mu_{\epsilon}Y)|Y|^{p}\left(\frac{1}{1+\frac{f(x_{0})}{n(n-2)}|Y|^{2}}\right)^{q\frac{n-2}{2}}\,dY\] \[=\mu_{\epsilon}^{n+p-q\frac{n-2}{2}}\left\{\begin{array}{cl}s\int_{\mathbb{R}^{n}}|Y|^{p}w^{q}(Y)\,dY+o(1)&\mbox{ if }q(n-2)>p+n\\ s\left(\frac{n(n-2)}{f(x_{0})}\right)^{q\frac{n-2}{2}}\omega_{n-1}\ln\left(\frac{1}{\mu_{\epsilon}}\right)+o\left(\ln\frac{1}{\mu_{\epsilon}}\right)&\mbox{ if }q(n-2)=p+n\end{array}\right.\] The proof of (43) is similar, taking \(\alpha:=\delta_{\epsilon}\). Putting these estimates together yields Lemma 1. We now prove (6). We fix \(l\in\{1,...,n\}\).
We define \[\delta_{\epsilon}:=\mu_{\epsilon}^{\frac{1}{n-1}}.\] Pohozaev's identity (42) applied to \(\hat{u}_{\epsilon}\) reads \[A_{\epsilon}=-B_{\epsilon}+C_{\epsilon}-D_{\epsilon} \tag{44}\] with \[A_{\epsilon}:=-\frac{1}{2^{\star}}\int_{B_{\delta_{\epsilon}}(0)}\partial_{l}\hat{f}_{\epsilon}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X\] \[B_{\epsilon}:=\int_{\partial B_{\delta_{\epsilon}}(0)}-\left[\nu_{l}\left(\frac{|\nabla\hat{u}_{\epsilon}|^{2}}{2}-\frac{\hat{f}_{\epsilon}\hat{u}_{\epsilon}^{2^{\star}}}{2^{\star}}\right)+\partial_{l}\hat{u}_{\epsilon}\partial_{\nu}\hat{u}_{\epsilon}\right]\mathrm{d}\sigma\] \[C_{\epsilon}:=\int_{B_{\delta_{\epsilon}}(0)}\partial_{l}\hat{u}_{\epsilon}\hat{h}_{\epsilon}\hat{u}_{\epsilon}\mathrm{d}X\text{ and }D_{\epsilon}:=\int_{B_{\delta_{\epsilon}}(0)}\partial_{l}\hat{u}_{\epsilon}(\Delta_{\xi}\hat{u}_{\epsilon}-\Delta_{\hat{g}_{\epsilon}}\hat{u}_{\epsilon})\mathrm{d}X\] We estimate these terms separately. It follows from (22) and (23) that \[B_{\epsilon}=O\left(\mu_{\epsilon}\left(\left(\frac{\mu_{\epsilon}}{\delta_{\epsilon}}\right)^{n-3}+\frac{\mu_{\epsilon}^{n-1}}{\delta_{\epsilon}^{n+1}}+\frac{\delta_{\epsilon}^{n-1}}{\mu_{\epsilon}}\theta_{\epsilon}^{2}\right)\right)=o(\mu_{\epsilon})\text{ as }\epsilon\to 0.\] Concerning \(C_{\epsilon}\), integrating by parts, we have that \[C_{\epsilon}=\int_{B_{\delta_{\epsilon}}(0)}\partial_{l}\hat{u}_{\epsilon}\hat{h}_{\epsilon}\hat{u}_{\epsilon}\mathrm{d}X=-\int_{B_{\delta_{\epsilon}}(0)}\partial_{l}\frac{\hat{h}_{\epsilon}}{2}\hat{u}_{\epsilon}^{2}\mathrm{d}X+\int_{\partial B_{\delta_{\epsilon}}(0)}\hat{h}_{\epsilon}\frac{\hat{u}_{\epsilon}^{2}}{2}\overline{\nu}_{l}\mathrm{d}\sigma\] With (22) and Lemma 1, we then get that \[C_{\epsilon}=O\left(\mu_{\epsilon}\left(o(1)+\left(\frac{\mu_{\epsilon}}{\delta_{\epsilon}}\right)^{n-3}+\frac{\delta_{\epsilon}^{n-1}}{\mu_{\epsilon}}\theta_{\epsilon}^{2}\right)\right)=o(\mu_{\epsilon})\text{ as }\epsilon\to 0.\] We now estimate \(D_{\epsilon}\). We write \[-(\Delta_{\hat{g}_{\epsilon}}-\Delta_{\xi})=(\hat{g}_{\epsilon}^{ij}-\delta^{ij})\partial_{ij}-\hat{g}_{\epsilon}^{ij}\hat{\Gamma}_{ij}^{k}(\hat{g}_{\epsilon})\partial_{k}\] where the \(\hat{\Gamma}_{ij}^{k}\)'s are the Christoffel symbols of the metric \(\hat{g}_{\epsilon}\). The following lemma is classical in such problems: **Lemma 2**.: _Let \(\Omega\) be a smooth domain of \(\mathbb{R}^{n}\). For any \(i,j,k\in\{1,...,n\}\), let us consider \(a^{ijk}\in C^{1}(\mathbb{R}^{n})\). We assume that \(a^{ijk}=a^{jik}\) for all \(i,j,k\in\{1,...,n\}\). Then for all \(u\in C^{2}(\mathbb{R}^{n})\), we have that_ \[\int_{\Omega}a^{ijk}\partial_{ij}u\partial_{k}u\,dx = \int_{\Omega}\left(-\partial_{l}a^{lij}+\frac{1}{2}\partial_{l}a^{ijl}\right)\partial_{i}u\partial_{j}u\,dx\] \[+\int_{\partial\Omega}\left(-\frac{1}{2}a^{ijl}\vec{\nu}_{l}+a^{lji}\vec{\nu}_{l}\right)\partial_{i}u\partial_{j}u\,d\sigma\] _where \(\vec{\nu}\) is the outer normal vector at \(\partial\Omega\) and Einstein's summation convention has been used._ The proof is by integration by parts; it goes back to Hebey-Vaugon [14] and can also be found in Cheikh-Ali [4].
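As an elementary sanity check of Lemma 2, take \(n=1\), \(\Omega=(0,1)\) and a constant coefficient \(a^{111}=a\): the interior term on the right-hand side vanishes, the boundary term equals \(\left(-\frac{1}{2}a+a\right)\vec{\nu}_{1}(u^{\prime})^{2}\) with \(\vec{\nu}_{1}=1\) at \(x=1\) and \(\vec{\nu}_{1}=-1\) at \(x=0\), and indeed \[\int_{0}^{1}a\,u^{\prime\prime}u^{\prime}\,dx=\frac{a}{2}\int_{0}^{1}\left((u^{\prime})^{2}\right)^{\prime}dx=\frac{a}{2}\left(u^{\prime}(1)^{2}-u^{\prime}(0)^{2}\right).\]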
It follows from Lemma 2 that \[\int_{B_{\delta_{\epsilon}}(0)}(\hat{g}_{\epsilon}^{ij}-\delta^{ij})\partial_{ij}\hat{u}_{\epsilon}\partial_{l}\hat{u}_{\epsilon}\,dx = \int_{B_{\delta_{\epsilon}}(0)}\left(-\partial_{m}\hat{g}_{\epsilon}^{mj}\delta_{j,l}+\frac{1}{2}\partial_{l}\hat{g}_{\epsilon}^{ij}\delta_{m,l}\right)\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,dx\] \[+\int_{\partial B_{\delta_{\epsilon}}(0)}\left(-\frac{1}{2}(\hat{g}_{\epsilon}^{ij}-\delta^{ij})\delta_{m,l}\vec{\nu}_{m}+(\hat{g}_{\epsilon}^{mj}-\delta^{mj})\delta_{i,l}\vec{\nu}_{l}\right)\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,d\sigma\] Using (23) to control the boundary terms, we get that \[D_{\epsilon} = \int_{B_{\delta_{\epsilon}}(0)}\left(-\partial_{m}\hat{g}_{\epsilon}^{mj}\delta_{j,l}+\frac{1}{2}\partial_{m}\hat{g}_{\epsilon}^{ij}\delta_{m,l}\right)\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,dx\] \[-\int_{B_{\delta_{\epsilon}}(0)}\hat{g}_{\epsilon}^{ij}\Gamma_{ij}^{k}(\hat{g}_{\epsilon})\partial_{k}\hat{u}_{\epsilon}\partial_{l}\hat{u}_{\epsilon}\,dX+\underbrace{O\left(\mu_{\epsilon}\left(\left(\frac{\mu_{\epsilon}}{\delta_{\epsilon}}\right)^{n-3}+\frac{\delta_{\epsilon}^{n-1}}{\mu_{\epsilon}}\theta_{\epsilon}^{2}\right)\right)}_{=o(\mu_{\epsilon})}\] as \(\epsilon\to 0\). Since \[\Gamma_{ij}^{k}(\hat{g}_{\epsilon})=\frac{1}{2}\hat{g}_{\epsilon}^{km}\left(\partial_{i}(\hat{g}_{\epsilon})_{jm}+\partial_{j}(\hat{g}_{\epsilon})_{im}-\partial_{m}(\hat{g}_{\epsilon})_{ij}\right)\] and \(\hat{g}_{\epsilon}\) is normal at \(0\) (that is, \(\partial_{m}(\hat{g}_{\epsilon})_{ij}(0)=0\) for all \(i,j,m\in\{1,...,n\}\)), we then get that there exist \(a_{ij\alpha}\in\mathbb{R}\), \(i,j,\alpha\in\{1,...,n\}\), such that \[D_{\epsilon} = \int_{B_{\delta_{\epsilon}}(0)}a_{ij\alpha}X^{\alpha}\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,dx+O\left(\int_{B_{\delta_{\epsilon}}(0)}|X|^{2}|\nabla\hat{u}_{\epsilon}|^{2}\,dX\right)+o(\mu_{\epsilon})\] With (23), we get that \[\int_{B_{\delta_{\epsilon}}(0)}|X|^{2}|\nabla\hat{u}_{\epsilon}|^{2}\,dX \leq C\int_{B_{\delta_{\epsilon}}(0)}|X|^{2}\frac{\mu_{\epsilon}^{n-2}}{(\mu_{\epsilon}+|X|)^{2(n-1)}}\,dX+C\theta_{\epsilon}^{2}\delta_{\epsilon}^{n}\] \[\leq C\mu_{\epsilon}^{2}\int_{B_{\delta_{\epsilon}/\mu_{\epsilon}}(0)}\frac{|X|^{2}\,dX}{(1+|X|)^{2(n-1)}}+C\theta_{\epsilon}^{2}\delta_{\epsilon}^{n}=o(\mu_{\epsilon})\] With (23), given \(R>0\), using (46) and \(n>3\), we have that \[\left|\int_{B_{\delta_{\epsilon}}(0)\setminus B_{R\mu_{\epsilon}}(0)}X^{\alpha}\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,dx\right|\leq C\int_{B_{\delta_{\epsilon}}(0)\setminus B_{R\mu_{\epsilon}}(0)}\frac{\mu_{\epsilon}^{n-2}|X|\,dX}{(\mu_{\epsilon}+|X|)^{2(n-1)}}+C\theta_{\epsilon}^{2}\delta_{\epsilon}^{n}\] \[\leq C\mu_{\epsilon}\int_{\mathbb{R}^{n}\setminus B_{R}(0)}\frac{|Y|\,dY}{(1+|Y|)^{2(n-1)}}+C\theta_{\epsilon}^{2}\delta_{\epsilon}^{n}\leq\eta(R)\mu_{\epsilon}+o(\mu_{\epsilon})\] where \(\lim_{R\rightarrow+\infty}\eta(R)=0\).
Using the change of variable \(X=\mu_{\epsilon}Y\), the convergence (15) and the radial symmetry of \(w\), we have that \[\int_{B_{R\mu_{\epsilon}}(0)}X^{\alpha}\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,dx = \mu_{\epsilon}\int_{B_{R}(0)}Y^{\alpha}\partial_{i}w_{\epsilon}\partial_{j}w_{\epsilon}\,dY\] \[= \mu_{\epsilon}\left(\int_{B_{R}(0)}Y^{\alpha}\partial_{i}w\partial_{j}w\,dY+o(1)\right)=o(\mu_{\epsilon})\text{ since }n>3.\] Therefore, we get that \(\int_{B_{\delta_{\epsilon}}(0)}X^{\alpha}\partial_{i}\hat{u}_{\epsilon}\partial_{j}\hat{u}_{\epsilon}\,dx=o(\mu_{\epsilon})\), and then \[D_{\epsilon}=o(\mu_{\epsilon})\text{ for }n\geq 4.\] We now deal with \(A_{\epsilon}\). With a Taylor expansion of \(f\), we get \[A_{\epsilon} = -\frac{1}{2^{\star}}\int_{B_{\delta_{\epsilon}}(0)}\partial_{l}\hat{f}_{\epsilon}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X\] \[= -\frac{1}{2^{\star}}\partial_{l}\hat{f}_{\epsilon}(0)\int_{B_{\delta_{\epsilon}}(0)}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X-\frac{1}{2^{\star}}\partial_{lj}\hat{f}_{\epsilon}(0)\int_{B_{\delta_{\epsilon}}(0)}X^{j}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X\] \[+O\left(\int_{B_{\delta_{\epsilon}}(0)}|X|^{2}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X\right)\] Arguing as above, we get that \[\int_{B_{\delta_{\epsilon}}(0)}|X|^{2}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X=o(\mu_{\epsilon})\text{ and }\int_{B_{\delta_{\epsilon}}(0)}X^{j}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X=o(\mu_{\epsilon})\text{ as }\epsilon\to 0.\] With (21), we get that there exists \(C_{0}>0\) such that \[\int_{B_{\delta_{\epsilon}}(0)}\hat{u}_{\epsilon}^{2^{\star}}\,\mathrm{d}X=C_{0}+o(1)\text{ as }\epsilon\to 0.\] Therefore, we get that \[A_{\epsilon}=\left(-\frac{C_{0}}{2^{\star}}+o(1)\right)\partial_{l}\hat{f}_{\epsilon}(0)+o(\mu_{\epsilon})\text{ as }\epsilon\to 0.\] Putting the estimates of \(A_{\epsilon}\), \(B_{\epsilon}\), \(C_{\epsilon}\) and \(D_{\epsilon}\) into (44) yields \[\partial_{l}\hat{f}_{\epsilon}(0)=o(\mu_{\epsilon})\text{ as }\epsilon\to 0\text{ for all }l\in\{1,...,n\}. \tag{45}\] Passing to the limit, we get that \(\nabla f(x_{0})=0\). We now express \(\partial_{l}\hat{f}_{\epsilon}(0)\) more precisely. We write \[\hat{f}_{\epsilon}(X)=f\circ\exp_{x_{\epsilon}}(X)=\tilde{f}\circ\varphi(X_{\epsilon},X)\text{ for }X\in B_{\delta}(0),\] where \(\tilde{f}:=f\circ\exp_{x_{0}}\) and \(\varphi(Z,X):=\exp_{x_{0}}^{-1}\circ\exp_{\exp_{x_{0}}(Z)}(X)\) for \(X,Z\in\mathbb{R}^{n}\) small, and we set \(X_{\epsilon}:=\exp_{x_{0}}^{-1}(x_{\epsilon})\). Since \(\nabla\tilde{f}(0)=0\), we get that \[\partial_{l}(f\circ\exp_{x_{\epsilon}})(0)=\frac{\partial^{2}(f\circ\exp_{x_{0}})}{\partial x_{l}\partial x_{j}}(0)X_{\epsilon}^{j}+o(|X_{\epsilon}|).\] Therefore, with (45), we get that \[\frac{\partial^{2}(f\circ\exp_{x_{0}})}{\partial x_{l}\partial x_{j}}(0)X_{\epsilon}^{j}=o(|X_{\epsilon}|)+o(\mu_{\epsilon})\text{ as }\epsilon\to 0\text{ for all }l=1,...,n.\] Since \(\nabla^{2}f(x_{0})\) is nondegenerate, we then get that \(|X_{\epsilon}|=o(\mu_{\epsilon})\), in other words \(d_{g}(x_{\epsilon},x_{0})=o(\mu_{\epsilon})\) as \(\epsilon\to 0\). This proves (6). **Lemma 3**.: _Under the assumptions of Theorem 1, we have that_ \[\theta_{\epsilon}=\left\{\begin{array}{ll}o(1)&\text{ if }n\geq 7,\\ o(\mu_{\epsilon})&\text{ if }n\in\{5,6\},\\ o\left(\mu_{\epsilon}\sqrt{\ln(\tfrac{1}{\mu_{\epsilon}})}\right)&\text{ if }n=4.\end{array}\right. \tag{46}\] _Proof:_ The case \(n\geq 7\) is simply (24).
It follows from (22) that \[\int_{B_{\delta}(x_{0})}u_{\epsilon}^{2}\,dv_{g} \leq C\int_{B_{2\delta}(x_{\epsilon})}\frac{\mu_{\epsilon}^{n-2}}{(\mu_{\epsilon}+d_{g}(x,x_{\epsilon}))^{2(n-2)}}\,dv_{g}+C\theta_{\epsilon}^{2}\] \[\leq C\mu_{\epsilon}^{2}\int_{B_{2\delta/\mu_{\epsilon}}(0)}\frac{1}{(1+|X|)^{2(n-2)}}\,dX+C\theta_{\epsilon}^{2}\] \[\leq C\left\{\begin{array}{ll}\mu_{\epsilon}^{2}&\mbox{ if }n\geq 5\\ \mu_{\epsilon}^{2}\ln(\frac{1}{\mu_{\epsilon}})&\mbox{ if }n=4\end{array}\right.\] Equation (5) rewrites \(\Delta_{g}u_{\epsilon}+(h_{\epsilon}-fu_{\epsilon}^{2^{*}-2})u_{\epsilon}=0\) in \(M\). Since \(u_{\epsilon}\to 0\) in \(C^{0}_{loc}(M\setminus\{x_{0}\})\) and \(u_{\epsilon}>0\), it follows from Harnack's inequality that there exists \(c>0\) such that \(u_{\epsilon}(x)\leq cu_{\epsilon}(y)\) for all \(x,y\in B_{2\delta}(x_{0})\setminus B_{\delta/3}(x_{0})\). Therefore, with the definition (24) of \(\theta_{\epsilon}\), we get that \[\int_{B_{2\delta}(x_{0})\setminus B_{\delta}(x_{0})}u_{\epsilon}^{2}\,dv_{g}\geq c^{-2}\theta_{\epsilon}^{2}.\] When \(4\leq n\leq 6\), it follows from the \(L^{2}-\)concentration assumption (7) that \[\int_{B_{2\delta}(x_{0})\setminus B_{\delta}(x_{0})}u_{\epsilon}^{2}\,dv_{g}\leq\int_{M\setminus B_{\delta}(x_{0})}u_{\epsilon}^{2}\,dv_{g}=o\left(\int_{B_{\delta}(x_{0})}u_{\epsilon}^{2}\,dv_{g}\right)\mbox{ as }\epsilon\to 0.\] Putting these inequalities together, we get (46). This proves Lemma 3. \(\Box\) ## 4. Interaction with the scalar curvature: proof of (8) This part is strongly inspired by Cheikh-Ali [4]. We define \[\delta_{\epsilon}:=\left\{\begin{array}{ll}\mu_{\epsilon}^{\frac{2}{n-2}}&\mbox{ if }n\geq 7\\ \delta&\mbox{ if }n\in\{4,5,6\}.\end{array}\right.\] Writing the Pohozaev identity (41) for \(\hat{u}_{\epsilon}\), which satisfies (5), we get that \[A_{\epsilon}+B_{\epsilon}=C_{\epsilon}+D_{\epsilon} \tag{47}\] where \[B_{\epsilon} :=\int_{\partial B_{\delta_{\epsilon}}(0)}\left[(X,\nu)\left(\frac{|\nabla\hat{u}_{\epsilon}|_{\xi}^{2}}{2}-\frac{\hat{f}_{\epsilon}\hat{u}_{\epsilon}^{2^{*}}}{2^{\star}}\right)-\left(X^{l}\partial_{l}\hat{u}_{\epsilon}+\frac{n-2}{2}\hat{u}_{\epsilon}\right)\partial_{\nu}\hat{u}_{\epsilon}\right]\mathrm{d}\sigma\] \[C_{\epsilon} :=-\int_{B_{\delta_{\epsilon}}(0)}\left(X^{l}\partial_{l}\hat{u}_{\epsilon}+\frac{n-2}{2}\hat{u}_{\epsilon}\right)\hat{h}_{\epsilon}\hat{u}_{\epsilon}\mathrm{d}X\] \[D_{\epsilon} :=-\int_{B_{\delta_{\epsilon}}(0)}\left(X^{l}\partial_{l}\hat{u}_{\epsilon}+\frac{n-2}{2}\hat{u}_{\epsilon}\right)(\Delta_{\hat{g}_{\epsilon}}\hat{u}_{\epsilon}-\Delta_{\xi}\hat{u}_{\epsilon})\mathrm{d}X\] \[A_{\epsilon} :=\frac{1}{2^{\star}}\int_{B_{\delta_{\epsilon}}(0)}(\nabla\hat{f}_{\epsilon},X)\hat{u}_{\epsilon}^{2^{*}}\mathrm{d}X\] Following Cheikh-Ali [4] and using the pointwise controls (21), (22), (23) and the control (46) on \((\theta_{\epsilon})_{\epsilon}\) when \(n\in\{4,5,6\}\), we get that \[B_{\epsilon}=\left\{\begin{array}{cl}o(\mu_{\epsilon}^{2})&\mbox{ if }n\geq 5\\ o\left(\mu_{\epsilon}^{2}\ln\frac{1}{\mu_{\epsilon}}\right)&\mbox{ if }n=4.
\end{array}\right.\mbox{ as }\epsilon\to 0,\] \[C_{\epsilon}=\left\{\begin{array}{ll}h(x_{0})\mu_{\epsilon}^{2}\ln\left(\frac{1}{\mu_{\epsilon}}\right)\left(\frac{8}{f(x_{0})}\right)^{2}\omega_{3}+o(\mu_{\epsilon}^{2}\ln\frac{1}{\mu_{\epsilon}})&\mbox{ if }n=4\\ h(x_{0})\mu_{\epsilon}^{2}\int_{\mathbb{R}^{n}}w^{2}\mathrm{d}X+o(\mu_{\epsilon}^{2})&\mbox{ if }n\geq 5\end{array}\right.\] \[D_{\epsilon}=\left\{\begin{array}{ll}-\mu_{\epsilon}^{2}\ln\left(\frac{1}{\mu_{\epsilon}}\right)\frac{1}{6}\mathrm{Scal}_{g}(x_{\epsilon})\left(\frac{8}{f(x_{0})}\right)^{2}\omega_{3}+o(\mu_{\epsilon}^{2}\ln\frac{1}{\mu_{\epsilon}})&\mbox{ if }n=4\\ -\mu_{\epsilon}^{2}\frac{n-2}{4(n-1)}\mathrm{Scal}_{g}(x_{\epsilon})\int_{\mathbb{R}^{n}}w^{2}\mathrm{d}X+o(\mu_{\epsilon}^{2})&\mbox{ if }n\geq 5\end{array}\right.\] We are then left with estimating \(A_{\epsilon}\). With a Taylor expansion of \(\hat{f}_{\epsilon}\), we get that \[A_{\epsilon} = \frac{1}{2^{\star}}\int_{B_{\delta_{\epsilon}}(0)}\partial_{i}\hat{f}_{\epsilon}(0)X^{i}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X+\frac{1}{2^{\star}}\partial_{ij}\hat{f}_{\epsilon}(0)\int_{B_{\delta_{\epsilon}}(0)}X^{i}X^{j}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X\] \[+O\left(\int_{B_{\delta_{\epsilon}}(0)}|X|^{3}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X\right)\] Since \(\hat{f}_{\epsilon}:=f\circ\mathrm{exp}_{x_{\epsilon}}\) and \(\nabla f(x_{0})=0\), we get that \(\nabla\hat{f}_{\epsilon}(0)=O(d_{g}(x_{\epsilon},x_{0}))\). With (6), we then get that \(\nabla\hat{f}_{\epsilon}(0)=o(\mu_{\epsilon})\). It follows from Lemma 1 that \(\int_{B_{\delta_{\epsilon}}(0)}|X|^{3}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X=o(\mu_{\epsilon}^{2})\) and \(\int_{B_{\delta_{\epsilon}}(0)}|X|\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X=O(\mu_{\epsilon})\). Therefore, we get that \[A_{\epsilon}=\frac{1}{2^{\star}}\partial_{ij}\hat{f}_{\epsilon}(0)\int_{B_{\delta_{\epsilon}}(0)}X^{i}X^{j}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X+o(\mu_{\epsilon}^{2})\] Arguing as in the proof of Lemma 1, we get \[\int_{B_{\delta_{\epsilon}}(0)}X^{i}X^{j}\hat{u}_{\epsilon}^{2^{\star}}\mathrm{d}X=\mu_{\epsilon}^{2}\int_{\mathbb{R}^{n}}X^{i}X^{j}w^{2^{\star}}\mathrm{d}X+o(\mu_{\epsilon}^{2})\mbox{ when }n\geq 4.\] Since \(w\) is radially symmetric, we get that \(\int_{\mathbb{R}^{n}}X^{i}X^{j}w^{2^{\star}}\mathrm{d}X=\frac{\delta_{ij}}{n}\int_{\mathbb{R}^{n}}|X|^{2}w^{2^{\star}}\mathrm{d}X\). Since \(\hat{g}_{\epsilon}\) is normal at \(0\), we have that \(\Delta_{g}f(x_{\epsilon})=-\sum_{i}\partial_{ii}\hat{f}_{\epsilon}(0)\), which yields \[A_{\epsilon}=-\frac{1}{2^{\star}n}\Delta_{g}f(x_{0})\mu_{\epsilon}^{2}\int_{\mathbb{R}^{n}}|X|^{2}w^{2^{\star}}\mathrm{d}X+o(\mu_{\epsilon}^{2})\quad\mbox{ if }n\geq 4\] By Jaber [15] we have that \[\frac{\int_{\mathbb{R}^{n}}|X|^{2}w^{2^{\star}}\mathrm{d}X}{\int_{\mathbb{R}^{n}}w^{2}\mathrm{d}X}=\frac{n^{2}(n-4)}{4(n-1)f(x_{0})}\mbox{ for }n\geq 5.\] Putting the expressions of \(A_{\epsilon}\), \(B_{\epsilon}\), \(C_{\epsilon}\) and \(D_{\epsilon}\) in (47) and letting \(\epsilon\to 0\) yields (8). This ends the proof of Theorem 1. ## 5. Application to a super-critical problem: proof of Theorem 2 We follow the notations and assumptions of Theorem 2.
We consider a family \((u_{\epsilon})_{\epsilon>0}\in C_{G}^{2}(X)\) of \(G-\)invariant solutions to the problem \[\Delta_{g}u_{\epsilon}+h_{\epsilon}u_{\epsilon}=\lambda_{\epsilon}u_{\epsilon}^{2^{\star}(k)-1}\,,\,\int_{X}u_{\epsilon}^{2^{\star}(k)}\mathrm{d}v_{g}=1\,,\,\|u_{\epsilon}\|_{2}\to 0\mbox{ as }\epsilon\to 0 \tag{48}\] where \((h_{\epsilon})_{\epsilon>0}\in C_{G}^{1}(X)\) is such that there exists \(h\in C_{G}^{1}(X)\) for which (9) holds and \((\lambda_{\epsilon})_{\epsilon}\) is such that (11) holds. **Claim 4**.: _There exists \(x_{0}\in X\) such that_ \[\lim_{\epsilon\to 0}\int_{B_{\delta}(Gx_{0})}u_{\epsilon}^{2^{\star}(k)}\,dv_{g}=1\mbox{ for all }\delta>0. \tag{49}\] _Proof:_ We fix a point \(z_{0}\in X\). We choose \(\eta_{0}\in C^{\infty}(\mathbb{R})\) such that \(\eta_{0}(t)=1\) for \(t\leq 1\) and \(\eta_{0}(t)=0\) for \(t\geq 2\). Given \(\delta>0\), we define \(\eta(x):=\eta_{0}(\frac{d_{g}(Gz_{0},x)}{\delta})\) for all \(x\in X\). For \(\delta>0\) small enough, we have that \(\eta\in C^{\infty}_{G}(X)\). Multiplying (10) by \(\eta^{2}u_{\epsilon}^{l}\) for some \(l>1\) and integrating over \(X\), we get that \[\int_{X}\eta^{2}u_{\epsilon}^{l}\Delta_{g}u_{\epsilon}\mathrm{d}v_{g}+\int_{X}\eta^{2}h_{\epsilon}u_{\epsilon}^{l+1}\mathrm{d}v_{g}=\lambda_{\epsilon}\int_{X}\eta^{2}u_{\epsilon}^{l+2^{\star}(k)-1}\mathrm{d}v_{g} \tag{50}\] As in the proof of Claim 2, we get that \[\int_{X}|\nabla(\,\eta u_{\epsilon}^{\frac{l+1}{2}}\,)|_{g}^{2}\ dv_{g}=\frac{(l+1)^{2}}{4l}\int_{X}\eta^{2}u_{\epsilon}^{l}\Delta_{g}u_{\epsilon}\,dv_{g}+\frac{l+1}{2l}\int_{X}\left(|\nabla\eta|_{g}^{2}+\frac{l-1}{l+1}\eta\Delta_{g}\eta\right)u_{\epsilon}^{l+1}\,dv_{g}\] Using (50) and Hölder's inequality, we get \[\int_{X}|\nabla(\,\eta u_{\epsilon}^{\frac{l+1}{2}}\,)|_{g}^{2}\ dv_{g}\leq C\int_{X}u_{\epsilon}^{l+1}\,dv_{g}\] \[+\frac{(l+1)^{2}}{4l}\lambda_{\epsilon}\left(\int_{X}\left(\eta u_{\epsilon}^{\frac{l+1}{2}}\right)^{2^{\star}(k)}\,dv_{g}\right)^{\frac{2}{2^{\star}(k)}}\left(\int_{B_{2\delta}(Gz_{0})}u_{\epsilon}^{2^{\star}(k)}\,dv_{g}\right)^{1-\frac{2}{2^{\star}(k)}} \tag{51}\] It follows from Faget [10] that for all \(\alpha>0\), there exists \(B>0\) such that, for all \(\epsilon\), \[\left(\int_{X}\left(\eta u_{\epsilon}^{\frac{l+1}{2}}\right)^{2^{\star}(k)}\,dv_{g}\right)^{\frac{2}{2^{\star}(k)}}\leq\frac{K_{0}(n-k)(1+\alpha)}{V_{m}^{1-\frac{2}{2^{\star}(k)}}}\int_{X}|\nabla(\,\eta u_{\epsilon}^{\frac{l+1}{2}}\,)|_{g}^{2}\ dv_{g}+B\int_{X}\eta^{2}u_{\epsilon}^{l+1}\,dv_{g}\] where \(K_{0}(n-k)\) is as in (3) and \(V_{m}=\min_{x\in X}\{Vol_{g}(Gx)/\dim\,Gx=k\}\). By combining this inequality with (51), we obtain: \[\left(\int_{X}\left(\eta u_{\epsilon}^{\frac{l+1}{2}}\right)^{2^{\star}(k)}\,dv_{g}\right)^{\frac{2}{2^{\star}(k)}}\chi_{\epsilon}\leq C\|u_{\epsilon}\|_{l+1}^{l+1} \tag{52}\] where \[\chi_{\epsilon}:=1-\frac{(l+1)^{2}}{4l}\lambda_{\epsilon}\frac{K_{0}(n-k)(1+\alpha)}{V_{m}^{1-\frac{2}{2^{\star}(k)}}}\left(\int_{B_{2\delta}(Gz_{0})}u_{\epsilon}^{2^{\star}(k)}\,dv_{g}\right)^{1-\frac{2}{2^{\star}(k)}}\] Assume that, up to extraction, \[\lim_{\epsilon\to 0}\int_{B_{2\delta}(Gz_{0})}u_{\epsilon}^{2^{\star}(k)}\,dv_{g}<1.\] Using (11), there exists \(1<l<2^{\star}(k)-1\) such that \(\chi_{\epsilon}\geq\beta>0\) for all \(\epsilon>0\), up to taking \(\alpha\) small.
As \(u_{\epsilon}\to 0\) in \(L^{l+1}(X)\) since \(2<l+1<2^{\star}(k)\) (by interpolation with the bounded \(L^{2^{\star}(k)}-\)norm), with (52), we then get that \(\lim_{\epsilon\to 0}\int_{B_{\delta}(Gz_{0})}u_{\epsilon}^{\frac{l+1}{2}2^{\star}(k)}\,dv_{g}=0\). With similar arguments, we get that for all \(\delta^{\prime}<\delta\), \(u_{\epsilon}\to 0\) in \(L^{q}(B_{\delta^{\prime}}(Gz_{0}))\) for all \(q\geq 1\). It then follows from (10) and elliptic theory that \(u_{\epsilon}\to 0\) in \(C^{0}(B_{\delta^{\prime}}(Gz_{0}))\). Since \(\int_{X}u_{\epsilon}^{2^{\star}(k)}\,dv_{g}=1\) and \(X\) is compact, the existence of \(x_{0}\in X\) such that (49) holds follows. This proves the claim. **Claim 5**.: _We have that \(\dim\,Gx_{0}=k\) and \(\text{Vol}_{g}(Gx_{0})=V_{m}\)._ _Proof:_ We follow Faget [10]. Assume that \(\dim\,Gx_{0}>k\). Then there exists \(\delta>0\) such that \(\dim\,Gx\geq k_{1}>k\) for all \(x\in B_{2\delta}(Gx_{0})\). It then follows from Hebey-Vaugon [13] that \(H^{2}_{1,G}(B_{\delta}(Gx_{0}))\hookrightarrow L^{p}(B_{\delta}(Gx_{0}))\) is compact for \(1\leq p<2^{\star}(k_{1})\). Since \(2^{\star}(k)<2^{\star}(k_{1})\) and \(u_{\epsilon}\to 0\) in \(L^{2}(X)\), we get that \(u_{\epsilon}\to 0\) strongly in \(L^{2^{*}(k)}(B_{\delta}(Gx_{0}))\), contradicting (49). Therefore \(\dim\,Gx_{0}=k\). It follows from Faget (formula (8) in [10]) that for all \(\alpha>0\), there exists \(\delta_{\alpha}>0\) such that \[\left(\int_{B_{\delta_{\alpha}}(Gx_{0})}|v|^{2^{*}(k)}\,dv_{g}\right)^{\frac{2}{2^{*}(k)}}\leq(1+\alpha)\frac{K_{0}(n-k)}{\operatorname{Vol}_{g}(Gx_{0})^{1-\frac{2}{2^{*}(k)}}}\int_{B_{\delta_{\alpha}}(Gx_{0})}|\nabla v|^{2}\,dv_{g}\] for all \(v\in C^{1}_{G}(B_{\delta_{\alpha}}(Gx_{0}))\) with compact support in \(B_{\delta_{\alpha}}(Gx_{0})\). Let us fix \(\eta_{\alpha}\in C^{\infty}_{G}(X)\) with compact support in \(B_{\delta_{\alpha}}(Gx_{0})\) and such that \(0\leq\eta_{\alpha}\leq 1\) and \(\eta_{\alpha}(x)=1\) for \(d_{g}(x,Gx_{0})<\delta_{\alpha}/2\). We then get that \[\left(\int_{B_{\delta_{\alpha}/2}(Gx_{0})}u_{\epsilon}^{2^{*}(k)}\,dv_{g}\right)^{\frac{2}{2^{*}(k)}} \leq \left(\int_{B_{\delta_{\alpha}}(Gx_{0})}(\eta_{\alpha}u_{\epsilon})^{2^{*}(k)}\,dv_{g}\right)^{\frac{2}{2^{*}(k)}}\] \[\leq (1+\alpha)\frac{K_{0}(n-k)}{\operatorname{Vol}_{g}(Gx_{0})^{1-\frac{2}{2^{*}(k)}}}\int_{X}|\nabla(\eta_{\alpha}u_{\epsilon})|^{2}\,dv_{g} \tag{53}\] Integrating by parts and using \(\|u_{\epsilon}\|_{2}\to 0\), we get that \[\int_{X}|\nabla(\eta_{\alpha}u_{\epsilon})|_{g}^{2}\,dv_{g}=\int_{X}\eta_{\alpha}^{2}|\nabla u_{\epsilon}|_{g}^{2}\,dv_{g}+\int_{X}\eta_{\alpha}(\Delta_{g}\eta_{\alpha})u_{\epsilon}^{2}\,dv_{g}\leq\int_{X}|\nabla u_{\epsilon}|_{g}^{2}\,dv_{g}+o(1) \tag{54}\] Multiplying (48) by \(u_{\epsilon}\), integrating and using again \(\|u_{\epsilon}\|_{2}\to 0\), we get that \[\lambda_{\epsilon}=\int_{X}\lambda_{\epsilon}u_{\epsilon}^{2^{*}(k)}\,dv_{g}=\int_{X}|\nabla u_{\epsilon}|^{2}\,dv_{g}+\int_{X}h_{\epsilon}u_{\epsilon}^{2}\,dv_{g}=\int_{X}|\nabla u_{\epsilon}|^{2}\,dv_{g}+o(1). \tag{55}\] Putting together (53), (54), (55) and (49), we get that \[1\leq(1+\alpha)\frac{K_{0}(n-k)}{\operatorname{Vol}_{g}(Gx_{0})^{1-\frac{2}{2^{*}(k)}}}\lambda_{\epsilon}+o(1).\] Using (11), letting \(\epsilon\to 0\) and then \(\alpha\to 0\) yields \(\operatorname{Vol}_{g}(Gx_{0})\leq V_{m}\). Therefore \(\operatorname{Vol}_{g}(Gx_{0})=V_{m}\) and the claim is proved.
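Before taking quotients, we record two elementary exponent identities, both immediate from \(2^{\star}(k)=\frac{2(n-k)}{n-k-2}\) and used repeatedly below: \[1-\frac{2}{2^{\star}(k)}=\frac{2}{n-k}\quad\text{ and }\quad\frac{2^{\star}(k)}{2^{\star}(k)-2}=\frac{n-k}{2}.\] In particular, with (11), \(\lim_{\epsilon\to 0}\lambda_{\epsilon}^{\frac{2^{\star}(k)}{2^{\star}(k)-2}}=\frac{V_{m}}{K_{0}(n-k)^{\frac{n-k}{2}}}\).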
**Claim 6**.: _The following \(L^{2}-\)concentration holds:_ \[\lim_{\epsilon\to 0}\frac{\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}}{\int_{X}u_{\epsilon}^{2}\,dv_{g}}=0\text{ for }n-k\geq 4. \tag{56}\] We prove the claim by arguing as in Djadli-Druet [5]. We have that \[\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\leq\left(\sup_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}\right)\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}\,dv_{g}\] Since \(u_{\epsilon}\to 0\) in \(C^{0}_{loc}(X\setminus Gx_{0})\), Harnack's inequality yields \(c>0\) such that \[\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g} \leq c\inf_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}\,dv_{g}\] \[\leq c\left(\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\right)^{\frac{1}{2}}\int_{X}u_{\epsilon}\,dv_{g}\leq c\|u_{\epsilon}\|_{2}\int_{X}u_{\epsilon}\,dv_{g}\] Integrating (10) yields \(\int_{X}h_{\epsilon}u_{\epsilon}\,dv_{g}=\lambda_{\epsilon}\int_{X}u_{\epsilon}^{2^{*}(k)-1}\,dv_{g}\). It follows from (9) that there exists \(\beta>0\) such that \(h_{\epsilon}\geq\beta\) for all \(\epsilon>0\). Therefore, since \((\lambda_{\epsilon})_{\epsilon}\) is bounded, we get that \[\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\leq c\beta^{-1}\|u_{\epsilon}\|_{2}\|u_{\epsilon}\|_{2^{\star}(k)-1}^{2^{\star}(k)-1}.\] If \(n-k\geq 6\), we have \(2^{\star}(k)-1\leq 2\). Using Hölder's inequality, we have \(\|u_{\epsilon}\|_{2^{\star}(k)-1}^{2^{\star}(k)-1}\leq c\|u_{\epsilon}\|_{2}^{2^{\star}(k)-1}\). Since \(u_{\epsilon}\to 0\) in \(L^{2}(X)\), we get that \[\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\leq c\|u_{\epsilon}\|_{2}^{2^{\star}(k)}=o(\|u_{\epsilon}\|_{2}^{2})\] If \(4\leq n-k\leq 5\), we have \(2^{\star}(k)-1=2\omega+(1-\omega)2^{\star}(k)\) with \(\omega=\frac{n-k-2}{4}>0\). Hölder's inequality yields \(\|u_{\epsilon}\|_{2^{\star}(k)-1}^{2^{\star}(k)-1}\leq\|u_{\epsilon}\|_{2}^{2\omega}\|u_{\epsilon}\|_{2^{\star}(k)}^{(1-\omega)2^{\star}(k)}\). As a result, \[\int_{X\setminus B_{\delta}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\leq\|u_{\epsilon}\|_{2^{\star}(k)-1}^{2^{\star}(k)-1}\|u_{\epsilon}\|_{2}\leq\|u_{\epsilon}\|_{2}^{1+2\omega}\|u_{\epsilon}\|_{2^{\star}(k)}^{(1-\omega)2^{\star}(k)}=o(\|u_{\epsilon}\|_{2}^{2}).\] This proves the claim. We are now in position to take the quotient. Since \(\dim\,Gx_{0}=k\), we choose \(\delta>0\) and \(G^{\prime}\subset G\) as in Assumption \((H)\). Then \(M:=B_{\delta}(Gx_{0})/G^{\prime}\) is a manifold of dimension \(n-k\) that is endowed with the metric \(\bar{g}\) on \(B_{\delta}(Gx_{0})/G^{\prime}\) such that the projection \((B_{\delta}(Gx_{0}),g)\to(B_{\delta}(Gx_{0})/G^{\prime},\bar{g})\) is a Riemannian submersion. We define \(\bar{u}_{\epsilon}\in C^{2}(M)\), \(\bar{h}_{\epsilon}\in C^{1}(M)\) and \(\bar{v}\in C^{2}(M)\) such that \[\bar{u}_{\epsilon}(\bar{x})=u_{\epsilon}(x)\,,\,\bar{h}_{\epsilon}(\bar{x})=h_{\epsilon}(x)\text{ and }\bar{v}(\bar{x})=\operatorname{Vol}_{g}(G^{\prime}x)\text{ for all }x\in B_{\delta}(Gx_{0}).\] Let us first rewrite equation (48) as in Saintier [20]. Let \(\bar{\varphi}\in C^{\infty}_{c}(M)\) be a function on \(M=B_{\delta}(Gx_{0})/G^{\prime}\). Define \(\varphi(x):=\bar{\varphi}(\bar{x})\) for all \(x\in B_{\delta}(Gx_{0})\): as one checks, \(\varphi\in C^{\infty}_{c}(B_{\delta}(Gx_{0}))\) and is \(G-\)invariant.
It then follows from (48) that

\[\int_{X}(\nabla u_{\epsilon},\nabla\varphi)_{g}\,dv_{g}+\int_{X}h_{\epsilon}u_{\epsilon}\varphi\,dv_{g}=\lambda_{\epsilon}\int_{X}u_{\epsilon}^{2^{\star}(k)-1}\varphi\,dv_{g}.\]

We define \(\tilde{g}:=\bar{v}^{\frac{2}{n-k-2}}\bar{g}\). Since \(u_{\epsilon},\varphi\) are \(G\)-invariant, we get that

\[\int_{X}(\nabla u_{\epsilon},\nabla\varphi)_{g}\,dv_{g}=\int_{B_{\delta}(Gx_{0})/G^{\prime}}\bar{v}(\bar{x})(\nabla\bar{u}_{\epsilon},\nabla\bar{\varphi})_{\bar{g}}\,dv_{\bar{g}}=\int_{B_{\delta}(Gx_{0})/G^{\prime}}(\nabla\bar{u}_{\epsilon},\nabla\bar{\varphi})_{\tilde{g}}\,dv_{\tilde{g}}.\]

Performing the same computations for the remaining terms, setting \(\tilde{h}_{\epsilon}:=\bar{v}^{-\frac{2}{n-k-2}}\bar{h}_{\epsilon}\), \(\tilde{f}:=\bar{v}^{-\frac{2}{n-k-2}}\) and \(\tilde{u}_{\epsilon}:=\lambda_{\epsilon}^{\frac{1}{2^{\star}(k)-2}}\bar{u}_{\epsilon}\), we get that

\[\Delta_{\tilde{g}}\tilde{u}_{\epsilon}+\tilde{h}_{\epsilon}\tilde{u}_{\epsilon}=\tilde{f}\tilde{u}_{\epsilon}^{2^{\star}(k)-1}\text{ in }M.\]

We deal with the \(L^{2^{\star}(k)}\)-norm. The definitions of \(\tilde{f}\) and \(\tilde{g}\) and (49) yield

\[\lim_{\epsilon\to 0}\int_{M}\tilde{f}\tilde{u}_{\epsilon}^{2^{\star}(k)}\,dv_{\tilde{g}}=\lim_{\epsilon\to 0}\lambda_{\epsilon}^{\frac{2^{\star}(k)}{2^{\star}(k)-2}}=\frac{1}{K_{0}(n-k)^{\frac{n-k}{2}}\tilde{f}(\bar{x}_{0})^{\frac{n-k-2}{2}}}.\]

Concerning the \(L^{2}\)-concentration, it follows from (56) that for any \(r<\delta\) and \(n-k\geq 4\), we have that

\[\int_{B_{\delta}(Gx_{0})\setminus B_{r}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\leq\int_{X\setminus B_{r}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}=o\left(\int_{B_{r}(Gx_{0})}u_{\epsilon}^{2}\,dv_{g}\right).\]

Taking the quotient, we get that

\[\int_{M\setminus B_{r}(\bar{x}_{0})}\tilde{u}_{\epsilon}^{2}\,dv_{\tilde{g}}=o\left(\int_{B_{r}(\bar{x}_{0})}\tilde{u}_{\epsilon}^{2}\,dv_{\tilde{g}}\right)\text{ when }n-k\geq 4,\]

which yields the \(L^{2}\)-concentration on \(M\). We apply Theorem 1. Taking \((x_{\epsilon})_{\epsilon}\subset X\) such that \(\|u_{\epsilon}\|_{\infty}=u_{\epsilon}(x_{\epsilon})=\mu_{\epsilon}^{-\frac{n-k-2}{2}}\), we get (12) and (13). Equation (8) rewrites as

\[\tilde{h}(\bar{x}_{0})=\frac{n-k-2}{4(n-k-1)}\left(\mathrm{Scal}_{\tilde{g}}(\bar{x}_{0})-\frac{n-k-4}{2}\cdot\frac{\Delta_{\tilde{g}}\tilde{f}(\bar{x}_{0})}{\tilde{f}(\bar{x}_{0})}\right)\]

where \(\tilde{h}:=\lim_{\epsilon\to 0}\tilde{h}_{\epsilon}\). Using the invariance of the conformal Laplacian, that is

\[\Delta_{\tilde{g}}\varphi+\frac{m-2}{4(m-1)}\mathrm{Scal}_{\tilde{g}}\varphi=\omega^{-\frac{m+2}{m-2}}\left(\Delta_{\bar{g}}(\omega\varphi)+\frac{m-2}{4(m-1)}\mathrm{Scal}_{\bar{g}}\,\omega\varphi\right)\]

for any \(\varphi\in C^{2}(M)\), where \(\tilde{g}=\omega^{\frac{4}{m-2}}\bar{g}\) and \(m=n-k\), we get (14). This proves Theorem 2.

**Acknowledgement:** The initial version of this article required the \(L^{2}\)-concentration (7) for all dimensions \(n\geq 4\). The authors are grateful to the anonymous referee who noticed that this concentration could be bypassed for \(n\geq 7\).
2310.05641
Mathematical problems and solutions of the Ninth International Olympiad in Cryptography NSUCRYPTO
Every year the International Olympiad in Cryptography Non-Stop University CRYPTO (NSUCRYPTO) offers mathematical problems for university and school students and, moreover, for professionals in the area of cryptography and computer science. The main goal of NSUCRYPTO is to draw the attention of students and young researchers to modern cryptography and to raise awareness about open problems in the field. We present problems of NSUCRYPTO'22 and their solutions. There are 16 problems on the following topics: ciphers, cryptosystems, protocols, e-money and cryptocurrencies, hash functions, matrices, quantum computing, S-boxes, etc. They vary from easy mathematical tasks that could be solved by school students to open problems that deserve separate discussion and study. So, in this paper, we consider several open problems on three-pass protocols, public and private key pairs, modifications of the discrete logarithm problem, cryptographic permutations and quantum circuits.
V. A. Idrisova, N. N. Tokareva, A. A. Gorodilova, I. I. Beterov, T. A. Bonich, E. A. Ishchukova, N. A. Kolomeec, A. V. Kutsenko, E. S. Malygina, I. A. Pankratova, M. A. Pudovkina, A. N. Udovenko
2023-10-09T11:52:00Z
http://arxiv.org/abs/2310.05641v1
# Mathematical problems and solutions of the Ninth International Olympiad in Cryptography NSUCRYPTO

###### Abstract

Every year the International Olympiad in Cryptography Non-Stop University CRYPTO (NSUCRYPTO) offers mathematical problems for university and school students and, moreover, for professionals in the area of cryptography and computer science. The main goal of NSUCRYPTO is to draw the attention of students and young researchers to modern cryptography and to raise awareness about open problems in the field. We present problems of NSUCRYPTO'22 and their solutions. There are 16 problems on the following topics: ciphers, cryptosystems, protocols, e-money and cryptocurrencies, hash functions, matrices, quantum computing, S-boxes, etc. They vary from easy mathematical tasks that could be solved by school students to open problems that deserve separate discussion and study. So, in this paper, we consider several open problems on three-pass protocols, public and private key pairs, modifications of the discrete logarithm problem, cryptographic permutations and quantum circuits.

**Keywords:** cryptography, ciphers, protocols, number theory, S-boxes, quantum circuits, matrices, hash functions, interpolation, cryptocurrencies, postquantum cryptosystems, Olympiad, NSUCRYPTO.

## 1 Introduction

**Non-Stop University CRYPTO (NSUCRYPTO)** is a unique international competition for professionals, school and university students, providing various problems on theoretical and practical aspects of modern cryptography, see [16]. The main goal of the Olympiad is to draw the attention of young researchers not only to fascinating competitive tasks, but also to sophisticated and tough scientific problems at the intersection of mathematics and cryptography. That is why each year the list of tasks contains several open problems that require rigorous study and would deserve a separate publication if solved. Since NSUCRYPTO is held via the Internet, everybody can easily take part in it. The rules of the Olympiad, the archive of problems and solutions, and much more can be found at the official website [17].

The first Olympiad was held in 2014; since then, more than 3000 students and specialists from almost 70 countries have taken part in it. The Program committee now includes 22 members from cryptographic groups all over the world. The main organizers and partners are Cryptographic Center (Novosibirsk), Mathematical Center in Akademgorodok, Novosibirsk State University, KU Leuven, Tomsk State University, Belarusian State University, Kovalevskaya North-West Center of Mathematical Research and Kryptonite. This year 37 participants in the first round and 27 teams in the second round from 14 countries became the winners (see the list [18]).

This year we proposed 16 problems to participants, and 5 of them were entirely open or included some open questions. In total, there were 623 participants from 36 countries. Following each Olympiad, we publish scientific articles with detailed solutions and an analysis of the solutions proposed by the participants, including advances on unsolved problems; see [1, 2, 7, 8, 9, 10, 11, 14].

## 2 An overview of open problems

One of the main characteristics of the Olympiad is that unsolved scientific problems are proposed to the participants in addition to problems with known solutions. All 31 open problems offered since the first NSUCRYPTO can be found at [19]. Some of these problems have been of great interest to cryptographers and mathematicians for many years.
These are such problems as "APN permutation" (2014), "Big Fermat numbers" (2016), "Boolean hidden shift and quantum computings" (2017), "Disjunct Matrices" (2018), and others. Even though a problem may be marked as open, and therefore require a lot of hard work to advance, some of the problems we suggested were solved or partially solved by our participants during the Olympiad. For example, the problems "Algebraic immunity" (2015), "Sylvester matrices" (2018) and "Miller -- Rabin revisited" (2020) were solved completely. Also, partial solutions were suggested for the problems "Curl27" (2019), "Bases" (2020), "Quantum error correction" (2021) and "s-Boolean sharing" (2021). Moreover, some researchers continue to work on solutions even after the Olympiad is over. For example, the authors of [13] proposed a complete solution for the problem "Orthogonal arrays" (2018). Partial solutions for another open problem, "A secret sharing" (2014), were presented in [5, 6], and a recursive algorithm for finding the solution was proposed in [4]. This year, two open problems were solved during the Olympiad. These are the problems "Public keys for e-coins" (see Problem 4.10) and "Quantum entanglement" (see Problem 4.16).

## 3 Problem structure of the Olympiad

There were 16 problems stated during the Olympiad; some of them were included in both rounds (Tables 1, 2). Section A of the first round consisted of six problems, while Section B of the first round consisted of eight problems. The second round was composed of eleven problems; five of them included unsolved questions (awarded special prizes).

\begin{tabular}{|l|l|c|} \hline N & Problem title & Max score \\ \hline \hline 1 & Numbers and points & 4 \\ \hline 2 & Wallets & 4 \\ \hline 3 & A long-awaited event & 4 \\ \hline 4 & Hidden primes & 4 \\ \hline 5 & Face-to-face & 4 \\ \hline 6 & Crypto locks & 4 + open problem \\ \hline \end{tabular}

Table 1: Problems of the first round, Section A

## 4 Problems and their solutions

In this section, we formulate all the problems of the 2022 Olympiad and present their detailed solutions; in some particular cases we also pay attention to solutions proposed by the participants.

### Problem "Numbers and points"

#### 4.1.1 Formulation

Decrypt the message in Fig. 1.

#### 4.1.2 Solution

There is a board made up of numbers and dots on the right half of Fig. 1. One cell is highlighted in red. The path along which the sensible plaintext is encrypted begins with it (Fig. 2). The ciphertext has a "number -- number -- dot" pattern. The ciphertext is the following:

\[21\.\ 42\.\ 24\.\ 15\.\ 33\.\ 14\.\]

The table in the left half of Fig. 1 refers to the Polybius square. Each letter is represented by its coordinates in the grid. Comparing the numbers from the ciphertext with the coordinates of the letters in the Polybius square, we get:

\[\text{F. R. (I/J). E. N. D.}\]

Picking I from (I/J), we get the sensible plaintext **FRIEND**. The problem looked simple, but there was only one complete solution, proposed by the team of Robin Jadoul (Belgium), Esrever Yu (Taiwan) and Jack Pope (United Kingdom).
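The decryption is easy to replay in code. Below is a minimal Python sketch; the grid layout (a standard Polybius square with I and J sharing one cell, read row by row) and the coordinate order (row first, then column) are our assumptions from Fig. 1.

```python
# Decode the "Numbers and points" ciphertext with a Polybius square.
# Standard 5x5 layout; the cell (2, 4) holds the merged I/J letter.
ROWS = ["ABCDE", "FGHIK", "LMNOP", "QRSTU", "VWXYZ"]  # 'I' here stands for I/J

ciphertext = "21. 42. 24. 15. 33. 14."
pairs = ciphertext.replace(".", " ").split()
plaintext = "".join(ROWS[int(p[0]) - 1][int(p[1]) - 1] for p in pairs)
print(plaintext)  # -> FRIEND (picking I from the shared I/J cell)
```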
\begin{table}
\begin{tabular}{|c|l|c|} \hline N & Problem title & Max score \\ \hline \hline 1 & CP problem & open problem \\ \hline 2 & Interpolation with errors & 8 \\ \hline 3 & HAS01 & 8 \\ \hline 4 & Weaknesses of the PHIGFS & 8 \\ \hline 5 & Super dependent S-box & 6 + open problem \\ \hline 6 & Quantum entanglement & 6 + open problem \\ \hline 7 & Numbers and points & 4 \\ \hline 8 & Bob’s symbol & 8 \\ \hline 9 & Crypto locks & 4 + open problem \\ \hline 10 & Public keys for e-coins & open problem \\ \hline 11 & A long-awaited event & 4 \\ \hline \end{tabular}
\end{table}

Table 2: Problems of the second round

Figure 1: The illustration for the problem “Numbers and points”

### 4.2 Problem "Wallets"

#### 4.2.1 Formulation

Bob has a wallet with 2022 NSUcoins. He decided to open a lot of new wallets and spread his NSUcoins among them. The platform that operates his wallets can distribute the content of any wallet between 2 newly generated ones, charging 1 NSUcoin commission and removing the initial wallet. He created a lot of new wallets, but suddenly noticed that all of his wallets contain exactly 8 NSUcoins each. Bob called the platform and told them that there might be a mistake. How did he notice that?

#### 4.2.2 Solution

Suppose that there were \(n\) such operations, so we had \(n+1\) wallets. Since 1 NSUcoin is charged for each operation, the total commission is equal to \(n\). Therefore, we have \(2022-n=8(n+1)\) and \(2014=9n\), which is impossible since \(n\) is a natural number. The most accurate and detailed solution was sent by Egor Desyatkov (Russia).

### 4.3 Problem "A long-awaited event"

#### 4.3.1 Formulation

Bob received from Alice the secret message L78V8LC7GBEYEE informing him about some important event. It is known that Alice used an alphabet with 37 characters: from A to Z, from 0 to 9 and a space. Each of the letters is encoded as follows:

\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline A & B & C & D & E & F & G & H & I & J & K & L & M & N & O & P & Q & R & S \\ \hline 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 \\ \hline \hline T & U & V & W & X & Y & Z & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & SPACE \\ \hline 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 & 36 \\ \hline \end{tabular}

For the encryption, Alice used a function \(f\) such that \(f(x)=ax^{2}+bx+c\pmod{37}\) for some integers \(a,b,c\), and \(f\) satisfies the property

\[f(x-y)-2f(x)f(y)+f(1+xy)=1\pmod{37}\ \ \text{for any integers}\ x,y.\]

Decrypt the message that Bob has received.

#### 4.3.2 Solution

Let \(y=0\):

\[f(x)-2f(x)f(0)+f(1)=1\pmod{37},\]

\[f(x)(1-2f(0))=1-f(1)\pmod{37}.\]

Since \(f\) is not a constant function, both sides of the equation above must vanish, so \(f(0)=19\pmod{37}\) and \(f(1)=1\pmod{37}\). From this we obtain that \(c=19\). Let \(y=-1\):

\[f(1+x)+f(1-x)=1+2f(x)f(-1)\pmod{37}.\]

Figure 2: The path along which the sensible plaintext is encrypted

By replacing \(x\mapsto(-x)\) we get

\[f(1-x)+f(1+x)=1+2f(-x)f(-1)\pmod{37}.\]

The left-hand sides of the last two expressions are equal, therefore \(f(x)=f(-x)\pmod{37}\), that is, \(f\) is an even function, provided \(f(-1)\neq 0\pmod{37}\). We can check the last condition by putting \(x=0\), \(y=1\) into the initial relation on \(f\), which yields \(f(-1)=1\neq 0\pmod{37}\). Therefore, \(f(x)=f(-x)\pmod{37}\) for any integer \(x\), hence \(b=0\). From \(f(1)=1\pmod{37}\) we reveal the value of the coefficient \(a\), which is equal to \(19\).
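Collecting the pieces (\(c=19\), \(b=0\), \(a=19\)), the derivation can be double-checked numerically; here is a minimal Python sketch that verifies the functional equation and inverts the cipher:

```python
# f(x) = 19*(x^2 + 1) mod 37; verify the defining relation and decrypt.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 "  # values 0..36, space = 36

def f(x):
    return 19 * (x * x + 1) % 37

# The relation holds identically modulo 37.
assert all((f(x - y) - 2 * f(x) * f(y) + f(1 + x * y)) % 37 == 1
           for x in range(37) for y in range(37))

# f is 2-to-1 (f(x) = f(-x)); collect the preimages of every value.
pre = {}
for x in range(37):
    pre.setdefault(f(x), []).append(x)

for ch in "L78V8LC7GBEYEE":
    print(ch, [ALPHABET[x] for x in pre[ALPHABET.index(ch)]])
# Choosing the readable branch letter by letter yields NSUCRYPTO 2022.
```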
Thus, we have \(f(x)=19\big{(}x^{2}+1\big{)}\pmod{37}\); to recover the plaintext we use the inverse expression \(x=\pm\sqrt{2f(x)+36}\pmod{37}\), and for every symbol of the ciphertext we choose the appropriate variant of the corresponding symbol of the plaintext:

\[\text{L78V8LC7GBEYEE}\,\hookrightarrow\,\text{NSUCRYPTO 2022}.\]

The only correct solution was sent by William Zhang (United Kingdom).

### 4.4 Problem "Hidden primes"

#### 4.4.1 Formulation

The Olympiad team rented an office at the Business Center, room 1-342, on 1691th street, for the NSUCRYPTO-2022 competition, for 0 nsucoins (good deal!). Mary from the team wanted to create a task for the competition and she needed to pick three numbers for this task. She used to find inspiration in the numbers around her and various equations with them. After some procedure she found three prime numbers! It is interesting that when Mary added the smallest number to the largest one and divided the sum by the third number, the result was also a prime number. Could you guess the numbers she found?

#### 4.4.2 Solution

We may assume from the problem statement that Mary used some numbers around her and some equations with them in order to find these three numbers. We may also get from the description that she used only one procedure to find these hidden numbers.

Figure 3: The illustration for the problem “Hidden primes”

So, all three numbers are connected by some procedure in which the numbers around Mary are used, and from the phrase "various equations" we can assume that there exists some equation with these numbers as coefficients. There were 5 numbers around Mary: 1, -342, 1691, -2022 and 0. In addition, analyzing the picture (see Fig. 3), you can see a curve, cubes with the 4 letters a, b, c, d, and a cube with 0. The curve resembles the graph of a cubic function, and the letters on the cubes look like coefficients of a cubic function. The cube with 0 gives a hint to use a cubic equation. Let us substitute the numbers from the problem statement into the cubic equation. Solving the equation \(x^{3}-342x^{2}+1691x-2022=0\), we find the roots 2, 3, 337. All three numbers are prime and satisfy the condition from the statement: \((2+337)/3=113\), where 113 is also a prime number. The best solutions were proposed independently by Konstantin Romanov (Russia), Vasiliy Kadykov (Russia) and Sergey Zabolotskiy (Russia).

### 4.5 Problem "Face-to-face"

#### Formulation

Alice picked a new pin code (4 pairwise distinct digits from \(\{1,2,\ldots,9\}\)) for her credit card such that all digits have the same parity and are arranged in increasing order. Bob and Charlie wanted to guess her pin code. Alice said that she can give each of them a hint, but face-to-face only. Bob alone came to Alice, and she told him that the sum of her pin code digits is equal to the number of light bulbs in the living room chandelier. Bob answered that there was still not enough information for him to guess the code, and left. After that, Charlie alone came to Alice, and she told him that if we find the product of all pin code digits and then sum up the digits of this product, the resulting number would be equal to the number of books on the shelf. Charlie also answered that there was still not enough information for him to guess the code, and left. Unfortunately, Eve was eavesdropping in the next apartment and, after Charlie had left, she immediately found out Alice's pin code, even though she had never seen the chandelier or the bookshelf. Could you find the pin code too?
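The elimination argument in the solution below is easy to replay mechanically; a minimal Python sketch (plain brute force over the six admissible codes):

```python
from itertools import combinations

# Candidate codes: 4 distinct same-parity digits in increasing order.
codes = list(combinations([1, 3, 5, 7, 9], 4)) + list(combinations([2, 4, 6, 8], 4))

def s(c):   # Bob's hint: the sum of the digits
    return sum(c)

def t(c):   # Charlie's hint: the digit sum of the product of the digits
    p = 1
    for d in c:
        p *= d
    return sum(int(ch) for ch in str(p))

# "Not enough information" = the hinted value is shared by >= 2 candidates.
bob_stuck = [c for c in codes if sum(1 for x in codes if s(x) == s(c)) > 1]
charlie_stuck = [c for c in codes if sum(1 for x in codes if t(x) == t(c)) > 1]

print(sorted(set(bob_stuck) & set(charlie_stuck)))  # -> [(1, 3, 7, 9)]
```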
#### Solution

Let \(P\) be the pin code. Since all the digits of \(P\) have the same parity and are arranged in increasing order, we have only six options:

\begin{tabular}{|c|c|c|c|} \hline Pin code \(P\) & The sum of digits & The product of digits & The sum of product digits \\ \hline 1357 & 16 & 105 & 6 \\ \hline 1359 & 18 & 135 & 9 \\ \hline 1379 & 20 & 189 & 18 \\ \hline 1579 & 22 & 315 & 9 \\ \hline 2468 & 20 & 384 & 15 \\ \hline 3579 & 24 & 945 & 18 \\ \hline \end{tabular}

Since Bob could not guess the code, the sum of digits must allow at least two options for the code, so we have that \(P\in\{1379,2468\}\). Since Charlie could not guess the code either, we have the same situation for the sum of product digits, and it follows that \(P\in\{1359,1579,1379,3579\}\). Therefore, the pin code is equal to 1379. The best solutions of this problem were sent by Henning Seidler (Germany), Himanshu Sheoran (India) and Phuong Hoa Nguyen (France).

### 4.6 Problem "Crypto locks"

#### Formulation

Alice and Bob are wondering about the creation of a new version of the Shamir three-pass protocol. They have several ideas about it. The Shamir three-pass protocol was developed more than 40 years ago. Recall it. Let \(p\) be a big prime number. Let Alice take two secret numbers \(c_{A}\) and \(d_{A}\) such that \(c_{A}d_{A}=1\bmod(p-1)\). Bob takes numbers \(c_{B}\) and \(d_{B}\) with the same property. If Alice wants to send a secret message \(m\) to Bob, where \(m\) is an integer number \(1<m<p-1\), then she calculates \(x_{1}=m^{c_{A}}\bmod p\) and sends it to Bob. Then Bob computes \(x_{2}=x_{1}^{c_{B}}\bmod p\) and forwards it back to Alice. On the third step, Alice finds \(x_{3}=x_{2}^{d_{A}}\bmod p\) and sends it to Bob. Finally, Bob recovers \(m\) as \(x_{3}^{d_{B}}\bmod p\) according to Fermat's little theorem. One may think of the action of \(c_{A}\) and \(d_{A}\) on the message as locking and unlocking, see Fig. 4. Alice and Bob decided to change the scheme by using symmetric encryption and decryption procedures instead of locking and unlocking with \(c_{A}\), \(c_{B}\), \(d_{A}\) and \(d_{B}\).

* Propose some simple symmetric ciphers that could be used in such a scheme. What properties are required of them? Should Alice and Bob use the same cipher (with their own different keys) or not?
* Problem for a special prize! Could you find symmetric ciphers that make the modified scheme as secure as before? Please give your reasons and proofs.

#### 4.6.2 Solution

**Q1**. Assume that Alice and Bob use functions \(\mathrm{Enc}_{A}\), \(\mathrm{Dec}_{A}\) and \(\mathrm{Enc}_{B}\), \(\mathrm{Dec}_{B}\) for encryption and decryption, respectively. Suppose that Alice wants to send the message \(m\); then the three-pass protocol will look as follows:

\(\bullet\) Alice calculates \(\mathrm{Enc}_{A}(m,k_{A})\), where \(k_{A}\) is her secret key, and sends it to Bob;

\(\bullet\) Bob computes \(\mathrm{Enc}_{B}(\mathrm{Enc}_{A}(m,k_{A}),k_{B})\), where \(k_{B}\) is his secret key, and forwards it to Alice;

\(\bullet\) Finally, Alice computes \(\mathrm{Dec}_{A}(\mathrm{Enc}_{B}(\mathrm{Enc}_{A}(m,k_{A}),k_{B}),k_{A})\) and sends it to Bob.

In order for Bob to recover \(m\), the following property must hold:

\[\mathrm{Dec}_{B}(\mathrm{Dec}_{A}(\mathrm{Enc}_{B}(\mathrm{Enc}_{A}(m,k_{A}),k_{B}),k_{A}),k_{B})=m.\]

The most common approach was to use encryption functions that commute with each other.
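Modular exponentiation, as in the original Shamir protocol above, is the textbook example of such commuting encryptions; a small self-contained Python sketch of the three passes (toy parameters, for illustration only):

```python
from math import gcd
from random import randrange

p = 2**61 - 1                     # a Mersenne prime; toy size for illustration

def keypair():
    # Pick (c, d) with c*d = 1 mod (p-1): exponentiation by d undoes c.
    while True:
        c = randrange(3, p - 1)
        if gcd(c, p - 1) == 1:
            return c, pow(c, -1, p - 1)

cA, dA = keypair()                # Alice's lock/unlock exponents
cB, dB = keypair()                # Bob's lock/unlock exponents

m = 20221231                      # the secret message, 1 < m < p - 1
x1 = pow(m, cA, p)                # pass 1: Alice locks
x2 = pow(x1, cB, p)               # pass 2: Bob adds his lock
x3 = pow(x2, dA, p)               # pass 3: Alice removes her lock
assert pow(x3, dB, p) == m        # Bob removes his lock and reads m
```

The two "locks" commute because the exponents multiply modulo \(p-1\); this is exactly the property the participants were asked to reproduce with symmetric ciphers.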
With commuting encryptions, if Alice wants to send a secret message \(m\) to Bob, she calculates \(x=m\circ k_{A}\) and sends it to Bob. Then Bob computes \(x_{2}=x\circ k_{B}\) and forwards it back to Alice. On the third step, Alice finds \(x_{3}=x_{2}\circ k_{A}^{-1}\) and sends it to Bob. Finally, the commutative property of the operation \(\circ\) allows Bob to recover \(m\) as \(x_{3}\circ k_{B}^{-1}\).

**Remark 1**: Note that if Eve can intercept all three messages, then she can obtain \(m\) whenever she is able to compute \(x_{2}^{-1}\), since \(x\circ x_{3}\circ x_{2}^{-1}=m\). As a result, all schemes that use ciphers with only the XOR operation (the most common suggestion by the participants) have this weakness.

Regarding **Q2**, one interesting idea found by a few participants is to use products of matrices for encryption and decryption, with the additional condition that the matrix \(M\) associated with the message \(m\) is singular. That additional condition appears as a countermeasure against the attack described in Remark 1. However, such schemes require additional security analysis. Another interesting idea, suggested by the team of Himanshu Sheoran, Gyumin Roh and Yo Iida (India, South Korea, Japan), was to base the scheme on permutations that commute with each other. Note that a three-pass cryptographic protocol with a similar idea was presented in [3].

Figure 4: The illustration for the problem “Crypto locks”

### Problem "Matrix and reduction"

#### 4.7.1 Formulation

Alice used an alphabet with 30 characters: the letters from A to Z and the symbols 0, 1, ",", "!". Each of the letters is encoded as follows:

\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline A & B & C & D & E & F & G & H & I & J & K & L & M & N & O \\ \hline 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline \hline P & Q & R & S & T & U & V & W & X & Y & Z & 0 & 1 &, &! \\ \hline 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 \\ \hline \end{tabular}

**Encryption.** The plaintext is divided into consecutive subwords of length 4 that are encrypted independently via the same encryption \((2\times 2)\)-matrix \(F\) with elements from \(\mathbb{Z}_{30}\). For example, let the \(j\)-th subword be WORD and the encryption matrix \(F\) be equal to

\[F=\begin{pmatrix}11&9\\ 11&10\end{pmatrix}.\]

The matrix that corresponds to WORD is denoted by \(P_{j}\), and the matrix \(C_{j}\) that corresponds to the result of the encryption of WORD is calculated as follows:

\[C_{j}=F\cdot P_{j}=\begin{pmatrix}11&9\\ 11&10\end{pmatrix}\cdot\begin{pmatrix}22&17\\ 14&3\end{pmatrix}=\begin{pmatrix}8&4\\ 22&7\end{pmatrix}\pmod{30},\]

that is, the \(j\)-th subword of the ciphertext is IWEH. Eve has intercepted a ciphertext that was transmitted from Alice to Bob:

CYPHXWQE!WNKHZOZ

Also, she knows that the third subword of the plaintext is FORW. Will Eve be able to restore the original message?

#### 4.7.2 Solution

The third subword of the plaintext is FORW:

\[P_{3}=\texttt{FORW}=\begin{pmatrix}5&17\\ 14&22\end{pmatrix}\pmod{30}.\]

The ciphertext corresponding to it:

\[C_{3}=\texttt{!WNK}=\begin{pmatrix}29&13\\ 22&10\end{pmatrix}\pmod{30}.\]

Since \(C_{3}=F\cdot P_{3}\), where \(F\) is the encryption matrix, the matrix for the decryption could have the following form:

\[D=P_{3}\cdot C_{3}^{-1}.\]

But \(\det\bigl{(}C_{3}\bigr{)}=4\pmod{30}\) and \(\gcd(4,30)\neq 1\), so such a matrix does not exist modulo 30. Hence, we carry out the following calculations by reduction modulo 15.
Let \(\overline{P_{3}}=P_{3}\pmod{15}\), \(\overline{C_{3}}=C_{3}\pmod{15}\) and \(\overline{F}=F\pmod{15}\). We have

\[\overline{F}^{-1}=\overline{P_{3}}\cdot\bigl{(}\overline{C_{3}}\bigr{)}^{-1}=\begin{pmatrix}9&2\\ 4&9\end{pmatrix}\pmod{15},\]

consequently,

\[D=\begin{pmatrix}9&2\\ 4&9\end{pmatrix}+15F_{0}\pmod{30},\]

where \(F_{0}\) is a \(2\times 2\) binary matrix. We have \(D\cdot C_{3}=P_{3}\), or

\[\overline{F}^{-1}\cdot\begin{pmatrix}29&13\\ 22&10\end{pmatrix}+15\cdot F_{0}\cdot\begin{pmatrix}29&13\\ 22&10\end{pmatrix}=\begin{pmatrix}5&17\\ 14&22\end{pmatrix}\pmod{30}.\]

Since \(\overline{F}^{-1}\cdot C_{3}=\begin{pmatrix}5&17\\ 14&22\end{pmatrix}\pmod{30}\) and \(15\cdot F_{0}\cdot C_{3}=F_{0}\cdot\begin{pmatrix}15&15\\ 0&0\end{pmatrix}\pmod{30}\), we finally obtain

\[\begin{pmatrix}5&17\\ 14&22\end{pmatrix}+F_{0}\cdot\begin{pmatrix}15&15\\ 0&0\end{pmatrix}=\begin{pmatrix}5&17\\ 14&22\end{pmatrix}\pmod{30}.\]

If we set \(F_{0}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\), then this equation forces \(a=c=0\), and among the remaining candidates only the values \(b=1\), \(d=0\) give us the sensible answer GOODLUCKFORWIN!!. The best solutions for this problem were sent by Pieter Senden (Belgium) and Sergey Zabolotskiy (Russia).

### 4.8 Problem "Reversing a gate"

#### Formulation

Daniel continues to study quantum circuits. A controlled NOT (CNOT) gate is the most complex quantum gate from the universal set of gates required for quantum computation. This gate acts on two qubits and makes the following transformation:

\[\ket{00}\rightarrow\ket{00},\quad\ket{01}\rightarrow\ket{01},\quad\ket{10}\rightarrow\ket{11},\quad\ket{11}\rightarrow\ket{10}.\]

This gate is clearly asymmetric: the first qubit is the control one, and the second is the target one. In circuit form, CNOT maps \(\ket{x}\ket{y}\) to \(\ket{x}\ket{x\oplus y}\) for \(x,y\in\mathbb{F}_{2}\).

**The problem.** Help Daniel to design a circuit that reverses the CNOT gate, i.e. maps \(\ket{x}\ket{y}\) to \(\ket{x\oplus y}\ket{y}\). It makes the following procedure: \(\ket{00}\rightarrow\ket{00},\;\ket{01}\rightarrow\ket{11},\;\ket{10}\rightarrow\ket{10},\;\ket{11}\rightarrow\ket{01}\). To do this, you should modify the original CNOT gate without re-ordering the qubits, by adding some single-qubit gates from the following ones:

\begin{tabular}{|l|c|l|} \hline Pauli-X gate & \(\ket{x}\mapsto\ket{x\oplus 1}\) & acts on a single qubit in the state \(\ket{x}\), \(x\in\{0,1\}\) \\ \hline Pauli-Z gate & \(\ket{x}\mapsto(-1)^{x}\ket{x}\) & acts on a single qubit in the state \(\ket{x}\), \(x\in\{0,1\}\) \\ \hline Hadamard gate & \(\ket{x}\mapsto\frac{\ket{0}+(-1)^{x}\ket{1}}{\sqrt{2}}\) & acts on a single qubit in the state \(\ket{x}\), \(x\in\{0,1\}\) \\ \hline \end{tabular}

**Remark.** Let us briefly formulate the key points of quantum circuits. A qubit is a two-level quantum mechanical system whose state \(\ket{\psi}\) is the superposition of basis quantum states \(\ket{0}\) and \(\ket{1}\). The superposition is written as \(\ket{\psi}=\alpha_{0}\ket{0}+\alpha_{1}\ket{1}\), where \(\alpha_{0}\) and \(\alpha_{1}\) are complex numbers, called amplitudes, such that \(|\alpha_{0}|^{2}+|\alpha_{1}|^{2}=1\). The amplitudes \(\alpha_{0}\) and \(\alpha_{1}\) have the following physical meaning: after the measurement of a qubit which has the state \(\ket{\psi}\), it will be observed in the state \(\ket{0}\) with probability \(|\alpha_{0}|^{2}\) and in the state \(\ket{1}\) with probability \(|\alpha_{1}|^{2}\).
In order to operate with multi-qubit systems, we consider the bilinear operation \(\otimes:\ket{x},\ket{y}\rightarrow\ket{x}\otimes\ket{y}\) on \(x,y\in\{0,1\}\), which is defined on pairs \(\ket{x},\ket{y}\) and extended by bilinearity to the space of all linear combinations of \(\ket{0}\) and \(\ket{1}\). When we have two qubits in states \(\ket{\psi}\) and \(\ket{\varphi}\) correspondingly, the state of the whole system of these two qubits is \(\ket{\psi}\otimes\ket{\varphi}\). In general, for two qubits we have \(\ket{\psi}=\alpha_{00}\ket{0}\otimes\ket{0}+\alpha_{01}\ket{0}\otimes\ket{1}+\alpha_{10}\ket{1}\otimes\ket{0}+\alpha_{11}\ket{1}\otimes\ket{1}\). The physical meaning of the complex numbers \(\alpha_{ij}\) is the same as for one qubit, so we have the essential restriction \(|\alpha_{00}|^{2}+|\alpha_{01}|^{2}+|\alpha_{10}|^{2}+|\alpha_{11}|^{2}=1\). We use the shorter notation \(\ket{a}\otimes\ket{b}\equiv\ket{ab}\). In order to verify your circuits, you can use various quantum circuit simulators, for example, see [15].

#### 4.8.2 Solution

The desired circuit applies a Hadamard gate to each qubit, then the original CNOT, then again a Hadamard gate to each qubit. Indeed, for any \(x,y\in\mathbb{F}_{2}\), with initial state \(\left|x\right\rangle\left|y\right\rangle\) we have

\[\left|\psi_{1}\right\rangle =\left(\frac{\left|0\right\rangle+(-1)^{x}\left|1\right\rangle}{\sqrt{2}}\right)\left(\frac{\left|0\right\rangle+(-1)^{y}\left|1\right\rangle}{\sqrt{2}}\right)=\frac{\left|00\right\rangle+(-1)^{y}\left|01\right\rangle+(-1)^{x}\left|10\right\rangle+(-1)^{x\oplus y}\left|11\right\rangle}{2},\]

\[\left|\psi_{2}\right\rangle =\frac{\left|00\right\rangle+(-1)^{y}\left|01\right\rangle+(-1)^{x}\left|11\right\rangle+(-1)^{x\oplus y}\left|10\right\rangle}{2}=\left(\frac{\left|0\right\rangle+(-1)^{x\oplus y}\left|1\right\rangle}{\sqrt{2}}\right)\left(\frac{\left|0\right\rangle+(-1)^{y}\left|1\right\rangle}{\sqrt{2}}\right),\]

\[\left|\psi_{3}\right\rangle =\left|x\oplus y\right\rangle\left|y\right\rangle,\]

where \(\ket{\psi_{1}}\), \(\ket{\psi_{2}}\), \(\ket{\psi_{3}}\) denote the states after the first Hadamard layer, the CNOT, and the final Hadamard layer, respectively. The best solutions were sent by Daniel Popescu (Romania), Yo Iida (Japan) and David Marton (Hungary).

### 4.9 Problem "Bob's symbol"

#### 4.9.1 Formulation

Bob learned the Goldwasser-Micali cryptosystem at university. Now, he is thinking about functions over finite fields that are similar to the Jacobi symbol. He chose a function \(B_{n}:\mathbb{F}_{2^{n}}\rightarrow\mathbb{F}_{2}\) (Bob's symbol) defined as follows for any \(a\in\mathbb{F}_{2^{n}}\):

\[B_{n}(a)=\begin{cases}1,&\text{if $a=x^{2}+x$ for some $x\in\mathbb{F}_{2^{n}}$},\\ 0,&\text{otherwise}.\end{cases}\]

Bob knows that finite fields may have subfields. Indeed, it is well known that \(\mathbb{F}_{2^{k}}\) is a subfield of \(\mathbb{F}_{2^{n}}\) if and only if \(k\mid n\). Bob wants to exclude the elements of subfields. In other words, he considers the restriction of \(B_{n}\) to the set

\[\widehat{\mathbb{F}}_{2^{n}}=\mathbb{F}_{2^{n}}\setminus\bigcup_{k\mid n,\,k\neq n}\mathbb{F}_{2^{k}}.\]

Here, by \(\mathbb{F}_{2^{n}}\setminus\mathbb{F}_{2^{k}}\) we mean the removal from \(\mathbb{F}_{2^{n}}\) of the elements forming the field of order \(2^{k}\). Finally, Bob is interested in the sets

\[B_{n}^{0}=\{y\in\widehat{\mathbb{F}}_{2^{n}}:B_{n}(y)=0\}\quad\text{and}\quad B_{n}^{1}=\{y\in\widehat{\mathbb{F}}_{2^{n}}:B_{n}(y)=1\}.\]

* Help Bob to find \(|B_{n}^{0}|/|B_{n}^{1}|\) if \(n\) is odd.
* Help Bob to find \(|B_{n}^{0}|\) and \(|B_{n}^{1}|\) for an arbitrary \(n\).
#### 4.9.2 Solution

Let us define

\[B(\mathbb{F}_{2^{n}})=\{x\in\mathbb{F}_{2^{n}}:B_{n}(x)=0\},\text{ i.e. }B_{n}^{0}=\widehat{\mathbb{F}}_{2^{n}}\cap B(\mathbb{F}_{2^{n}}).\]

First we prove the following lemma.

**Lemma 1**: Let \(k\mid n\). Then

\[\left|\mathbb{F}_{2^{k}}\cap B(\mathbb{F}_{2^{n}})\right|=\begin{cases}\frac{1}{2}|\mathbb{F}_{2^{k}}|,&\text{if $n/k$ is odd},\\ 0,&\text{otherwise}.\end{cases}\]

_Proof._ Let us consider the function \(G(x)=x^{2}+x=x(x+1)\), where \(x\in\mathbb{F}_{2^{k}}\). First, \(G(x)=G(x+1)\). Secondly, \(x^{2}+x+a\), \(a\in\mathbb{F}_{2^{k}}\), has at most \(2\) roots. It means that \(G\) is a two-to-one function. Therefore, there are exactly \(2^{k-1}\) distinct \(a\) such that \(x^{2}+x\neq a\) for any \(x\in\mathbb{F}_{2^{k}}\). Next, for any such \(a\) the polynomial \(x^{2}+x+a\) is irreducible over \(\mathbb{F}_{2^{k}}\). It means that it has a root \(q\) in the quadratic extension \(\mathbb{F}_{2^{2k}}\) of \(\mathbb{F}_{2^{k}}\), i.e. \(a=q^{2}+q\). If \(n/k\) is even, \(\mathbb{F}_{2^{2k}}\) is a subfield of \(\mathbb{F}_{2^{n}}\), i.e. \(q\in\mathbb{F}_{2^{n}}\). Thus, \(|\mathbb{F}_{2^{k}}\cap B(\mathbb{F}_{2^{n}})|=0\). If \(n/k\) is odd, then \(\mathbb{F}_{2^{2k}}\) is not a subfield of \(\mathbb{F}_{2^{n}}\). Moreover, \(\mathbb{F}_{2^{2k}}\cap\mathbb{F}_{2^{n}}=\mathbb{F}_{2^{k}}\). It means that any such root \(q\) does not belong to \(\mathbb{F}_{2^{n}}\), i.e. \(|\mathbb{F}_{2^{k}}\cap B(\mathbb{F}_{2^{n}})|=2^{k-1}\).

Now we are ready to answer the questions. Let \(n=m2^{t}\), where \(m\) is odd. We define

\[f_{t}(d)=|\widehat{\mathbb{F}}_{2^{d2^{t}}}\cap B(\mathbb{F}_{2^{n}})|\text{ and }g_{t}(d)=\frac{1}{2}2^{d2^{t}},\]

where \(d\mid m\). This means that \(|B^{0}_{n}|=|B^{0}_{m2^{t}}|=f_{t}(m)\). At the same time, the definition of \(\widehat{\mathbb{F}}_{2^{n}}\) gives us that

\[\sum_{d\mid n}|\widehat{\mathbb{F}}_{2^{d}}\cap B(\mathbb{F}_{2^{n}})|=|\mathbb{F}_{2^{n}}\cap B(\mathbb{F}_{2^{n}})|.\]

According to Lemma 1 and the notation above,

\[\sum_{d\mid n}|\widehat{\mathbb{F}}_{2^{d}}\cap B(\mathbb{F}_{2^{n}})|=\sum_{d\mid m}|\widehat{\mathbb{F}}_{2^{d2^{t}}}\cap B(\mathbb{F}_{2^{n}})|=\sum_{d\mid m}f_{t}(d)\text{ and }\]

\[|\mathbb{F}_{2^{n}}\cap B(\mathbb{F}_{2^{n}})|=|\mathbb{F}_{2^{m2^{t}}}\cap B(\mathbb{F}_{2^{n}})|=\frac{1}{2}|\mathbb{F}_{2^{m2^{t}}}|=g_{t}(m).\]

Hence,

\[g_{t}(m)=\sum_{d\mid m}f_{t}(d)\text{ holds for any integers }m\geqslant 1\text{ and }t\geqslant 0.\]

According to the Möbius inversion formula,

\[f_{t}(m)=\sum_{d\mid m}\mu(d)g_{t}(m/d)=\frac{1}{2}\sum_{d\mid m}\mu(d)2^{(m/d)2^{t}}.\]

Recall that \(\mu(d)=0\) if \(d\) is not square-free (there is an integer \(u\geqslant 2\) such that \(u^{2}\mid d\)); otherwise, it is equal to \(1\) (\(-1\) resp.) if \(d\) has an even (odd resp.) number of prime factors. As a result,

\[|B^{0}_{n}|=\frac{1}{2}\sum_{d\mid m}\mu(d)2^{n/d}.\]

Also, \(|B^{1}_{n}|=|\widehat{\mathbb{F}}_{2^{n}}|-|B^{0}_{n}|\). We only need to note that

\[|\widehat{\mathbb{F}}_{2^{n}}|=\sum_{d\mid n}\mu(d)2^{n/d}.\]

This can be easily proven using

\[2^{n}=|\mathbb{F}_{2^{n}}|=\sum_{d\mid n}|\widehat{\mathbb{F}}_{2^{d}}|\]

together with the Möbius inversion formula. Finally, we can see that \(|B^{0}_{n}|=|B^{1}_{n}|=\frac{1}{2}|\widehat{\mathbb{F}}_{2^{n}}|\) for odd \(n\), which means that the answer for Q1 is \(1\). In fact, it directly follows from Lemma 1 and the definition of \(\widehat{\mathbb{F}}_{2^{n}}\).
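The closed forms are straightforward to evaluate; a short Python sketch (with a hand-rolled Möbius function) computing \(|B_{n}^{0}|\) and \(|B_{n}^{1}|\), which in particular confirms the ratio \(1\) for odd \(n\):

```python
def mobius(d):
    """Möbius function via trial division."""
    result, q = 1, 2
    while q * q <= d:
        if d % q == 0:
            d //= q
            if d % q == 0:      # square factor => mu(d) = 0
                return 0
            result = -result
        q += 1
    return -result if d > 1 else result

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def bob_counts(n):
    m = n
    while m % 2 == 0:           # n = m * 2^t with m odd
        m //= 2
    f_hat = sum(mobius(d) * 2**(n // d) for d in divisors(n))   # |F-hat_{2^n}|
    b0 = sum(mobius(d) * 2**(n // d) for d in divisors(m)) // 2  # |B_n^0|
    return b0, f_hat - b0       # (|B_n^0|, |B_n^1|)

for n in range(2, 10):
    print(n, *bob_counts(n))    # for odd n the two counts coincide
```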
Many teams provided the correct answers in the second round using similar ideas: Himanshu Sheoran, Gyumin Roh, Yo Iida (India), Mikhail Kudinov, Denis Nabokov, Alexey Zelenetskiy (Russia), Stepan Davydov, Anastasia Chichaeva, Kirill Tsaregorodtsev (Russia), Mikhail Borodin, Vitaly Kiryukhin, Andrey Rybkin (Russia), Kristina Geut, Sergey Titov, Dmitry Ananichev (Russia), Pham Minh, Dung Truong Viet (Vietnam) and Alexander Belov (Russia).

### Problem "Public keys for e-coins"

#### 4.10.1 Formulation

Alice has \(n\) electronic coins that she would like to spend via some public service \(S\) (bank). The service applies some asymmetric algorithm of encryption \(E(,)\) and decryption \(D(,)\) in its work. Namely, for the pair of public and private keys \((PK,SK)\) and for any message \(m\) it holds: if \(c=E(m,PK)\), then \(m=D(c,SK)\), and vice versa: if \(c^{\prime}=E(m,SK)\), then \(m=D(c^{\prime},PK)\).

To spend her money, Alice generates a sequence of public and private key pairs \((PK_{1},SK_{1}),\dots,\)\((PK_{n},SK_{n})\) and sends the sequence of public keys \(PK_{1},\dots,PK_{n}\) to the service \(S\). By this she authorizes the service \(S\) to control her \(n\) coins. If Alice would like to spend the coin with number \(i\) in Bob's shop, she just gives the secret key \(SK_{i}\) to Bob and informs him of the number \(i\). To get the coin with number \(i\), Bob sends to the service \(S\) three parameters: the number \(i\), some non-secret message \(m\), and its electronic signature \(c^{\prime}=E(m,SK_{i})\). The service \(S\) checks whether the signature \(c^{\prime}\) corresponds to the message \(m\), i.e. whether the equality \(m=D(c^{\prime},PK_{i})\) holds. If it is so, the service accepts the signature, gives the coin with number \(i\) to Bob and marks it as "spent".

**Problem for a special prize**! Propose a _modification of this scheme_ related to the generation of public and private key pairs. Namely, is it possible for Alice not to send the sequence of public keys \(PK_{1},\dots PK_{n}\) to the service \(S\), but to send only some initial information sufficient for generating all necessary public keys on the service's side? Suppose that Alice sends to the service \(S\) only some initial key \(PK\) (denote it also as \(PK_{0}\)), some function \(f\) and a set of parameters \(T\) such that \(PK_{i+1}=f(PK_{i},T)\) for all \(i\geqslant 0\). Propose your variant of this function \(f\) and the set \(T\). Think also about what asymmetric cryptosystem could be used in such a scheme.

**Requirements for the solution.** Knowing \(PK\), \(f\) and \(T\), it must be impossible to find any private key \(SK_{i}\), where \(i=1,\dots,n\). It should be impossible to recover \(SK_{i}\) even if the secret keys \(SK_{1},\dots,SK_{i-1}\) are also known, or even if all other secret keys are known (a stronger condition).

#### 4.10.2 Solution

The problem was solved by two teams and partially solved by three teams.

One of the best partial solutions was proposed by the team of Viet-Sang Nguyen, Nhat Linh Le Tan and Phuong Hoa Nguyen from France. It is based on the principles of elliptic-curve cryptography and hash functions. The main idea is to define the keys \(SK_{i}\) as a sequence of numbers related to each other with the help of HMAC-SHA256. Public keys can be easily generated by the server \(S\). The main disadvantage of the scheme is described by the authors: the server \(S\) should keep the point \(PK_{0}\) secret, and Bob should do the same with \(SK_{i}\).
The problem is that if there is some data leakage, then all coins of Alice will be lost. So, potential complicity of the server and Bob forms a crucial danger for Alice.

An interesting idea was proposed by Himanshu Sheoran (India), Gyumin Roh (South Korea) and Yo Iida (Japan). It is based on the combination of two pairs of RSA keys. With one pair it is proposed to sign messages from Bob to the server; with the other one Alice generates her private keys to give them to Bob. The solution was accepted as partial since the security of this scheme should be considered in more detail.

A very nice partial solution was proposed by Robin Jadoul (Belgium), Esrever Yu (Taiwan) and Jack Pope (United Kingdom). The authors describe an identity-based signature scheme with message recovery based on the RSA hardness assumption. The main idea is to generate public and private keys from the corresponding master keys by application of cryptographic hash functions (four functions are used).

An original attempt to solve the problem was made by Alexander Bakharev, Rinchin Zapanov and Denis Bykov (Russia). They applied an RSA-like technique and considered private keys of the form \(SK_{i}=PK_{i}^{-1}\mod\phi(n)\), where \(n=pq\) and the prime numbers \(p\), \(q\) are known to Alice only, as well as \(\phi(n)\). Public keys are formed as consecutive prime numbers: \(PK_{i+1}\) is the next prime number after \(PK_{i}\). But the security of this scheme is still in question since the public keys are closely related; it should be analyzed.

We have accepted two complete solutions. One of them was proposed by the team of G. Teseleanu, P. Cotan and L. Constantin-Sebastian from the Institute of Mathematics of the Romanian Academy (in the first round, a partial solution was proposed by G. Teseleanu). An RSA-like technique is applied in the solution. Private and public keys are connected as \((SK_{i})^{2}=PK_{i}\mod N\), while public keys are generated via some PRNG from the fixed master key \(K\) and the number \(i\). Only Alice can produce private keys since she knows the prime factors \(p\) and \(q\), where \(N=pq\).

Another accepted solution was proposed by Ivan Ioganson, Zhan-Mishel Dakuo and Andrei Golovanov from Saint Petersburg ITMO University (Russia). Ideas of an ID-based signature scheme are used in it. Public and private keys are generated from the corresponding master keys \(PK_{0}\) and \(SK_{0}\). The principles of the Diffie-Hellman protocol on finite groups are applied. Namely, private keys are generated as \(SK_{i}=SK_{0}*H(i)\), whereas the public keys used by the server are combinations of \(PK_{0}=SK_{0}*P\) and the numbers \(i\), where \(P\) is a generator element of the group. It is hard to recover \(SK_{i}\) from the information on the server and from \(SK_{1},\ldots,SK_{i-1},SK_{i+1},\ldots,SK_{n}\) if the hash function \(H\) is of a good cryptographic quality.

### Problem "CP Problem"

#### 4.11.1 Formulation

Let \(\mathbb{G}=\langle g\rangle\) be a group of prime order \(q\), where \(\kappa\) is the bit length of \(q\). Let us consider two known modifications of the discrete logarithm problem over \(\mathbb{G}\), namely the \(s\)-DLOG problem and the \(\ell\)-OMDL problem. Both of them are believed to be difficult.

\(s\)**-DLOG problem** (with parameter \(s\in\mathbb{N}\))

\begin{tabular}{l l} Unknown values: & \(x\) is chosen uniformly at random from \(\mathbb{Z}_{q}^{*}\). \\ Known values: & \(g^{x},g^{x^{2}},\ldots,g^{x^{s}}\). \\ Access to oracles: & no. \\ The task: & to find \(x\).
\\ \end{tabular}

\(\ell\)**-OMDL (One-More Discrete Log) problem** (with parameter \(\ell\in\mathbb{N}\))

\begin{tabular}{l l} Unknown values: & \(x_{1},x_{2},\ldots,x_{\ell+1}\) are chosen uniformly at random from \(\mathbb{Z}_{q}^{*}\). \\ Known values: & \(g^{x_{1}},g^{x_{2}},\ldots,g^{x_{\ell+1}}\). \\ Access to oracles: & at most \(\ell\) queries to \(O_{1}\) that on input \(y\in\mathbb{G}\) returns \(x\) such that \(g^{x}=y\). \\ The task: & to find \(x_{1},x_{2},\ldots,x_{\ell+1}\). \\ \end{tabular}

Consider one more problem that is close to the \(s\)-DLOG and \(\ell\)-OMDL problems:

\((k,t)\)**-CP (Chaum--Pedersen) problem** (with parameters \(k,t\in\mathbb{N}\))

\begin{tabular}{l l} Unknown values: & \(x_{1},x_{2},\ldots,x_{t+1}\) are chosen uniformly at random from \(\mathbb{Z}_{q}^{*}\). \\ Known values: & \(g^{x_{1}},g^{x_{2}},\ldots,g^{x_{t+1}}\). \\ Access to oracles: & at most \(k\) queries to \(O_{1}\) that on input \((i,z)\in\{1,\ldots,t+1\}\times\mathbb{G}\) returns \(z^{x_{i}}\), and at most \(t\) queries to \(O_{2}\) that on input \((\alpha_{1},\ldots,\alpha_{t+1})\in\mathbb{Z}_{q}^{t+1}\) returns \(\alpha_{1}x_{1}+\ldots+\alpha_{t+1}x_{t+1}\). \\ The task: & to find \(x_{1},x_{2},\ldots,x_{t+1}\). \\ \end{tabular}

It is easy to see that if there exists a polynomial (in \(\kappa\)) algorithm that solves the \(s\)-DLOG problem, then there exists a polynomial algorithm that solves the \((s-1,t)\)-CP problem for any \(t\in\mathbb{N}\).

**Problem for a special prize!** Prove or disprove the following conjecture: if there exists a polynomial algorithm that solves the \((k,t)\)-CP problem, then there exists a polynomial algorithm that solves at least one of the \(s\)-DLOG and \(\ell\)-OMDL problems, where \(k,t,s,\ell\) are upper bounded by a polynomial in \(\kappa\).

#### 4.11.2 Solution

Unfortunately, there were no advances on this problem among the participants, so the conjecture is still open.

### Problem "Interpolation with Errors"

#### 4.12.1 Formulation

Let \(n=2022\) and let \(\mathbb{Z}_{n}\) be the ring of integers modulo \(n\). Given \(x_{i},y_{i}\in\mathbb{Z}_{n}\) for \(i\in\{1,\ldots,324\}\), find monic polynomials

\[f(x) =x^{16}+\alpha_{15}x^{15}+\ldots+\alpha_{1}x+\alpha_{0},\]
\[g(x) =x^{16}+\beta_{15}x^{15}+\ldots+\beta_{1}x+\beta_{0}\]

of degree \(d=16\) with coefficients from \(\mathbb{Z}_{n}\) such that the relation

\[y_{i}=\frac{f(x_{i})}{g(x_{i})}=\frac{x_{i}^{16}+\alpha_{15}x_{i}^{15}+\ldots+\alpha_{1}x_{i}+\alpha_{0}}{x_{i}^{16}+\beta_{15}x_{i}^{15}+\ldots+\beta_{1}x_{i}+\beta_{0}}\]

holds for at least \(90\) of the indices \(i\in\{1,\ldots,324\}\).

**Note.** The coefficients \(\beta_{0},\ldots,\beta_{15}\) are such that the denominator of the above fraction is invertible for all possible values of \(x_{i}\in\mathbb{Z}_{n}\). It can be assumed that they are sampled uniformly at random from all such sets of values. Furthermore, the error positions and error values can also be assumed to be sampled uniformly at random. The attachment (see [20]) contains a CSV file with \(324\) triplets \((i,x_{i},y_{i})\).

#### 4.12.2 Solution

First, note that \(n=2022=2\cdot 3\cdot 337\). Therefore, the problem can be solved for the moduli \(2,3,337\) independently, and the solutions can then be recombined using the Chinese Remainder Theorem (CRT). Furthermore, for moduli \(2\) and \(3\), there are only a few possible polynomials (modulo the relations \(x^{2}=x\) modulo \(2\) and \(x^{3}=x\) modulo \(3\)).
The best candidate polynomial pair modulo \(6\) (ignoring equivalent forms) satisfies \(125\) of the \(324\) pairs \((x_{i},y_{i})\), while the next best one satisfies only \(109\). Note that the expected value is \(90+(324-90)/6=129\) (\(90\) correct pairs plus one sixth of the wrong pairs satisfying the relation modulo \(6\) by chance), so it is safe to assume that the best candidate is correct.

We can now consider the problem modulo \(337\), where we know that the \(90\) correct pairs must be among those \(125\) correct pairs observed modulo \(6\). Denote the set of those \(125\) remaining indices by \(I\). Note that the relation can be rewritten as

\[y_{i}\cdot g(x_{i})-f(x_{i})=0,\]

or, more explicitly,

\[\left(y_{i}\cdot\sum_{j=0}^{15}\beta_{j}x_{i}^{j}\right)-\Big{(}\sum_{j=0}^{15}\alpha_{j}x_{i}^{j}\Big{)}+\left(y_{i}x_{i}^{d}-x_{i}^{d}\right)=0. \tag{1}\]

The target problem can now be formulated as the problem of decoding a linear code over the finite field \(GF(337)\). Indeed, let the generator matrix \(G\) be given by the columns

\[(-1,-x_{i},-x_{i}^{2},\ldots,-x_{i}^{15},\ \ y_{i},y_{i}\cdot x_{i},y_{i}\cdot x_{i}^{2},\ldots,y_{i}\cdot x_{i}^{15})\]

for all chosen indices \(i\in I\), let the target vector \(v\) be given by

\[v=(y_{i}x_{i}^{d}-x_{i}^{d})_{i\in I},\]

and consider the "solution" vector

\[s=(\alpha_{0},\ldots,\alpha_{15},\beta_{0},\ldots,\beta_{15}).\]

It is easy to verify that the codeword \(s\times G\) differs from \(-v\) in at most \(125-90\) places, i.e., has at most \(35\) errors. Indeed, the vector \(s\times G\) computes the contribution of the first two clauses of Equation (1), whereas \(v\) defines the third clause, and the three clauses sum to zero on correct data pairs.

Note that \(G\) defines a \([125,32]\) code, i.e., a \(32\)-dimensional code of length \(125\). A random such code has expected minimum distance about 82 (given by the Gilbert-Varshamov bound), so the solution (with 35 errors, less than half of the distance) should likely be unique (modulo 337). A very basic yet efficient method for linear code decoding is the so-called "pooled Gauss" method: choose \(k=32\) random coordinates of the codeword and assume that they are error-free, which allows recovering the full codeword by solving a linear system. Alternatively, SageMath includes an implementation of the Lee-Brickell method, which is slightly faster. The decoding should take less than 30 minutes using the basic method.

**Remark:** due to equivalent polynomial fractions modulo 2 and modulo 3, the overall solution is not unique (but there are only a few candidates).

### 4.13 Problem "HAS01"

#### Formulation

Bob is a beginner cryptographer. He read an article about the new hash function HAS01 (see a description in [12]). Bob decided to implement the HAS01 function in order to use it for checking the integrity of messages being forwarded. However, he was inattentive and made a mistake during the implementation. In the function \(f_{1}\), he did not notice the prime sign in the variable \(a^{\prime}\) and used \(a\) instead, which made the following update rule recursive:

**for**\(i=0\) to 7 **do** **for**\(j=0\) to 6 **do** (the body of the loop is as in [12], with \(a\) in place of \(a^{\prime}\))

**Q1**: Prove that Bob's version of the hash function is cryptographically weak.

**Q2**: Find a collision to the following message (given in hexadecimal format): 316520393820336220323620343720316320373820386520.

The test set value for the original HAS01 hash function is given in [21]. The test set value for Bob's implementation is given in [22].
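As the solution below explains, the bug makes the digest independent of the three most significant bits of the first byte of each of the first three rows; for a 24-byte message laid out in 8-byte rows, these are message bytes 0, 8 and 16 (an assumption consistent with the examples in the solution). Under that assumption, the colliding messages can be enumerated directly:

```python
# Enumerate the 2^9 - 1 = 511 collisions implied by Bob's bug
# (assumption: bytes 0, 8, 16 are a00, a10, a20 and their top 3 bits are ignored).
msg = bytearray.fromhex("316520393820336220323620343720316320373820386520")

collisions = []
for m0 in range(8):
    for m1 in range(8):
        for m2 in range(8):
            c = bytearray(msg)
            for pos, top in ((0, m0), (8, m1), (16, m2)):
                c[pos] = (top << 5) | (c[pos] & 0x1F)   # replace the top 3 bits
            if c != msg:
                collisions.append(bytes(c).hex())

print(len(collisions))   # 511
print(collisions[-1])    # f165203938203362e032362034372031e320373820386520
```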
#### Solution

**Q1** In the case where Bob makes the mistake and uses formulas with recursion, it turns out that for the first byte of each row (a00, a10, a20, a30, a40, a50, a60, a70), the most significant three bits do not affect the formation of the digest. Therefore, the function is not collision resistant, and it is easy to pick many different values that produce the same hash value.

**Q2** According to the formulas, the most significant three bits of the first byte of each row do not affect the formation of the hash value. However, the original message fills only the first three rows of the original matrix. Therefore, changing the upper three bits in the bytes a00, a10, a20 will allow you to get the same hash values. Hence, for the given value 316520393820336220323620343720316320373820386520, you can get \(2^{9}-1=511\) collisions. For example:

316520393820336220323620343720316320373820386520;

F16520393820336220323620343720316320373820386520;

31652039382033622032362034372031E320373820386520;

and so on.

It should be noted that most of those participants who tried to solve this problem were able to get the correct answer and determine a collision. Separately, it is worth noting that the team of Mikhail Borodin, Vitaly Kiryukhin and Andrey Rybkin (Russia) not only answered the questions of the task correctly, but also considered the issues of a possible vulnerability of the HAS01-512 algorithm.

### Problem "Weaknesses of the PHIGFS"

#### 4.14.1 Formulation

A young cryptographer Philip designs a family of lightweight block ciphers based on a 4-line type-2 Generalized Feistel scheme (GFS) with a better diffusion effect. Its block is divided into four \(m\)-bit subblocks, \(m\geqslant 1\). For a better diffusion effect, Philip decides to use a \((4\times 4)\)-matrix \(A\) over \(\mathbb{F}_{2^{m}}\) instead of a standard subblock shift register in each round. The family PHIGFS\({}_{\ell}(A,b)\) is parameterized by a non-linear permutation \(b\colon\mathbb{F}_{2^{m}}\to\mathbb{F}_{2^{m}}\), the matrix \(A\) and the number of rounds \(\ell\geqslant 1\). The one-round keyed transformation of PHIGFS\({}_{\ell}(A,b)\) is a permutation \(g_{k}\) on \(\mathbb{F}_{2^{m}}^{4}\) defined as:

\[g_{k}(x_{3},x_{2},x_{1},x_{0})=A\cdot(x_{3},x_{2}\oplus b(x_{3}\oplus k_{1}),x_{1},x_{0}\oplus b(x_{1}\oplus k_{0}))^{T},\]

where \(x_{0},x_{1},x_{2},x_{3}\in\mathbb{F}_{2^{m}}\), and \(k=(k_{1},k_{0})\) is a \(2m\)-bit round key, \(k_{0},k_{1}\in\mathbb{F}_{2^{m}}\).
The \(\ell\)-round encryption function \(f_{k^{(1)},\ldots,k^{(\ell)}}\colon\mathbb{F}_{2^{m}}^{4}\to\mathbb{F}_{2^{m}}^{4}\) under a key \((k^{(1)},\ldots,k^{(\ell)})\in\mathbb{F}_{2^{2m}}^{\ell}\) is given by

\[f_{k^{(1)},\ldots,k^{(\ell)}}(\mathbf{x})=g_{k^{(\ell)}}\ldots g_{k^{(1)}}(\mathbf{x})\text{ for all }\mathbf{x}\in\mathbb{F}_{2^{m}}^{4}.\]

For effective implementation and security, Philip chooses two binary matrices \(A^{\prime},A^{\prime\prime}\) with the maximum branch number among all binary matrices of size 4, where

\[A^{\prime}=\left(\begin{array}{cccc}1&1&0&1\\ 1&0&1&1\\ 0&1&1&1\\ 1&1&1&0\end{array}\right),\ A^{\prime\prime}=\left(\begin{array}{cccc}0&1&1&1\\ 1&1&1&0\\ 1&1&0&1\\ 1&0&1&1\end{array}\right).\]

For approval, he shows the cipher to his friend Antony, who claims that \(A^{\prime},A^{\prime\prime}\) are bad choices because the ciphers PHIGFS\({}_{\ell}(A^{\prime},b)\), PHIGFS\({}_{\ell}(A^{\prime\prime},b)\) are insecure against distinguisher attacks for all \(b\colon\mathbb{F}_{2^{m}}\to\mathbb{F}_{2^{m}}\), \(\ell\geqslant 1\).

Help Philip to analyze the cipher PHIGFS\({}_{\ell}(A,b)\). Namely, for any \(b\colon\mathbb{F}_{2^{m}}\to\mathbb{F}_{2^{m}}\) and any \(\ell\geqslant 1\), show that PHIGFS\({}_{\ell}(A,b)\) has

* \(\ell\)-round differential sets with probability 1;
* \(\ell\)-round impossible differential sets;

for the following cases: **Q1**\(A=A^{\prime}\); and **Q2**\(A=A^{\prime\prime}\). In each case, construct these nontrivial differential sets and prove the corresponding property.

**Remark.** Let us recall the following definitions.

* Let \(\delta,\varepsilon\in\mathbb{F}_{2^{n}}\) be fixed nonzero input and output differences. The _differential probability_ of \(s\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2^{n}}\) is defined as \[p_{\delta,\varepsilon}(s)=2^{-n}\cdot\left|\{\alpha\in\mathbb{F}_{2^{n}}|s(\alpha\oplus\delta)\oplus s(\alpha)=\varepsilon\}\right|.\]
* If \(s\colon\mathbb{F}_{2^{n}}\times K\to\mathbb{F}_{2^{n}}\) depends on a key space \(K\), then the _differential probability_ of \(s\) is defined as \[p_{\delta,\varepsilon}(s)=|K|^{-1}\sum_{k\in K}p_{\delta,\varepsilon}(s_{k}),\] where \(s(x,k)=s_{k}(x)\), \(x\in\mathbb{F}_{2^{n}}\), \(k\in K\).
* Let \(\Omega,\Delta\subseteq\mathbb{F}_{2^{n}}\backslash\{0\}\) and \(\Omega,\Delta\) are nonempty. If \(p_{\delta,\varepsilon}(s)=0\) for any \(\delta\in\Omega,\ \varepsilon\in\Delta\), then \((\Omega,\Delta)\) are _impossible differential sets_. But if \[\sum_{\delta\in\Omega,\varepsilon\in\Delta}p_{\delta,\varepsilon}(s)=1,\] then \((\Omega,\Delta)\) are _differential sets with probability 1_. We call \((\Omega,\Delta)\) trivial (impossible) differential sets if \(\Omega\in\{\varnothing,\mathbb{F}_{2^{n}}\backslash\{0\}\}\) or \(\Delta\in\{\varnothing,\mathbb{F}_{2^{n}}\backslash\{0\}\}\).

#### 4.14.2 Solution

Throughout the solution, \(\delta,\varepsilon\) denote fixed nonzero input and output differences, and \(p_{\delta,\varepsilon}(s)\) is the differential probability of a (possibly keyed) map \(s\), as defined in the remark above.
The pair \((\delta,\varepsilon)\) represents a differential, denoted by \(\delta{\longrightarrow}^{s}\varepsilon\). For the \(\ell\)-round encryption function \(f\), we will sometimes write \(\delta{\longrightarrow}_{\ell}\varepsilon\) instead of \(\delta{\longrightarrow}^{f}\varepsilon\) to emphasize the number of rounds \(\ell\). For \(\delta\in\mathbb{F}_{2^{m}}\) and \(b:\mathbb{F}_{2^{m}}\rightarrow\mathbb{F}_{2^{m}}\), we denote

\[\Delta_{\delta}(b)=\left\{b(\alpha\oplus\delta)\oplus b(\alpha)\,|\,\alpha\in\mathbb{F}_{2^{m}}\right\}.\]

Note that \(g_{k}\) is the composition of a transformation \(v_{k}:\mathbb{F}_{2^{m}}^{4}\rightarrow\mathbb{F}_{2^{m}}^{4}\) and the matrix \(A\) over \(\mathbb{F}_{2^{m}}\), where

\[v_{k}(x_{3},x_{2},x_{1},x_{0})=(x_{3},x_{2}\oplus b(x_{3}\oplus k_{1}),x_{1},x_{0}\oplus b(x_{1}\oplus k_{0}))\,,\]

\[g_{k}(\mathbf{x})=A\cdot(v_{k}(\mathbf{x}))^{T},\ \mathbf{x}\in\mathbb{F}_{2^{m}}^{4}.\]

**Case I.**\(A=A^{\prime}\). Let \(\varepsilon\in\mathbb{F}_{2^{m}}\),

\[W(\varepsilon)=\left\{(\alpha_{3},\alpha_{2},\alpha_{1},\alpha_{0})\in\mathbb{F}_{2^{m}}^{4}\,|\,\alpha_{3}\oplus\alpha_{1}=\varepsilon\right\}\setminus\left\{(0,0,0,0)\right\}.\]

**Theorem 1**. Let \(\ell\) be any positive integer, \(\varepsilon\in\mathbb{F}_{2^{m}}\). Then the \(\ell\)-round differential sets \(W(\varepsilon){\longrightarrow}_{\ell}W(\varepsilon)\) of PHIGFS\({}_{\ell}(A^{\prime},b)\) hold with probability \(1\).

**Proof.** Note that for any \((x_{3},x_{2},x_{1},x_{0})\in\mathbb{F}_{2^{m}}^{4}\) we have the following equality:

\[A^{\prime}\cdot(x_{3},x_{2},x_{1},x_{0})^{T}=(x_{3}\oplus x_{2}\oplus x_{0},x_{3}\oplus x_{1}\oplus x_{0},x_{2}\oplus x_{1}\oplus x_{0},x_{3}\oplus x_{2}\oplus x_{1})^{T}.\]

Let us consider any nonzero \((\delta,\lambda,\omega)\in\mathbb{F}_{2^{m}}^{3}\) and any round key \(k\in\mathbb{F}_{2^{m}}^{2}\). Note that \(v_{k}\) maps a difference

\[(\delta,\lambda,\delta\oplus\varepsilon,\omega)\in W(\varepsilon)\text{ to a difference }\left(\delta,\lambda^{(1)},\delta\oplus\varepsilon,\omega^{(1)}\right)\in W(\varepsilon)\]

for any

\[\lambda^{(1)}\in\Delta_{\delta}(b)\oplus\lambda,\;\omega^{(1)}\in\Delta_{\delta\oplus\varepsilon}(b)\oplus\omega.\]

Then

\[A^{\prime}\cdot\left(\delta,\lambda^{(1)},\delta\oplus\varepsilon,\omega^{(1)}\right)^{T}=\left(\omega^{(1)}\oplus\delta\oplus\lambda^{(1)},\omega^{(1)}\oplus\varepsilon,\omega^{(1)}\oplus\delta\oplus\lambda^{(1)}\oplus\varepsilon,\lambda^{(1)}\oplus\varepsilon\right)^{T}.\]

Thus, \(g_{k}\) encrypts the difference

\[(\delta,\lambda,\delta\oplus\varepsilon,\omega)\in W(\varepsilon)\text{ to the difference }\left(\delta^{(1)},\lambda^{(2)},\delta^{(1)}\oplus\varepsilon,\omega^{(2)}\right)\in W(\varepsilon),\]

where

\[\delta^{(1)}=\lambda^{(1)}\oplus\delta\oplus\omega^{(1)},\;\lambda^{(2)}=\omega^{(1)}\oplus\varepsilon,\;\omega^{(2)}=\lambda^{(1)}\oplus\varepsilon.\]

Therefore,

\[P\left\{W(\varepsilon){\longrightarrow}^{g}W(\varepsilon)\right\}=1.\]

By induction on the number of rounds \(\ell\), we straightforwardly get

\[P\left\{W(\varepsilon){\longrightarrow}_{\ell}W(\varepsilon)\right\}=1.\]

\(\square\)

**Corollary 1.** For any number of rounds \(\ell\geqslant 1\), \((W(\varepsilon),W(\delta))\) is a pair of impossible \(\ell\)-round differential sets for any distinct \(\varepsilon,\delta\in\mathbb{F}_{2^{m}}\). The proof follows from Theorem 1. \(\Box\)

**Case II.**\(A=A^{\prime\prime}\). Let

\[W=\left\{(0,\delta,\delta,\theta)\,|\,(\delta,\theta)\in\mathbb{F}_{2^{m}}^{2}\setminus\{(0,0)\}\right\}.\]

**Theorem 2**. Let \(\ell\) be any positive integer.
**Case II.** \(a=a_{2}\). Let \[W=\left\{(0,\delta,\delta,\theta)\,|\,(\delta,\theta)\in\mathbb{F}_{2^{m}}^{2}\setminus\{(0,0)\}\right\}.\] **Theorem 2**. Let \(l\) be any positive integer. Then the \(l\)-round differential sets \(W{\longrightarrow_{l}}W\) of \(\mathrm{PHIGFS}_{l}(a_{2},b)\) hold with probability \(1\). **Proof.** Note that for any \((x_{3},x_{2},x_{1},x_{0})\in\mathbb{F}_{2^{m}}^{4}\) we have \[a_{2}(x_{3},x_{2},x_{1},x_{0})^{T}=(x_{3}\oplus x_{2}\oplus x_{1},x_{3}\oplus x_{2}\oplus x_{0},x_{3}\oplus x_{1}\oplus x_{0},x_{2}\oplus x_{1}\oplus x_{0})^{T}.\] Let us consider any nonzero \((\delta,\theta)\in\mathbb{F}_{2^{m}}^{2}\) and any round key \(k\in\mathbb{F}_{2^{m}}^{2}\). Note that \(v_{k}\) maps a difference \[(0,\delta,\delta,\theta)\in W\text{ to a difference }\left(0,\delta,\delta,\theta^{(1)}\right)\in W\] for some \(\theta^{(1)}\in\Delta_{\delta}(b)\oplus\theta.\) Then \[a_{2}\left(0,\delta,\delta,\theta^{(1)}\right)=\left(0,\theta^{(1)}\oplus\delta,\theta^{(1)}\oplus\delta,\theta^{(1)}\right).\] Thus, \(g_{k}\) encrypts the difference \[(0,\delta,\delta,\theta)\in W\text{ to the difference }\left(0,\delta^{(1)},\delta^{(1)},\theta^{(1)}\right)\in W,\] where \(\delta^{(1)}=\theta^{(1)}\oplus\delta\). Therefore, \[P\left\{W{\longrightarrow^{g}}W\right\}=1.\] By induction on the number of rounds \(l\), we straightforwardly get \[P\left\{W{\longrightarrow_{l}}W\right\}=1.\] \(\Box\) **Corollary 2.** For any number of rounds \(l\geqslant 1\), \((W,W^{\prime})\) are a pair of impossible \(l\)-round differential sets for any nonempty \(W^{\prime}\subseteq\mathbb{F}_{2^{m}}^{4}\setminus(W\cup\{0\})\). The proof follows from Theorem 2. \(\Box\) We would like to mention the solution of Gabriel Tulba-Lecu, Ioan Dragomir and Mircea-Costin Preoteasa (Romania). ### 4.15 Problem "Super dependent S-box" #### 4.15.1 Formulation Harry wants to find a super dependent S-box for his new cipher. He decided to use a permutation that is strictly connected with every one of its variables, and he tries to estimate the number of such permutations. A vectorial Boolean function \(F(x)=(f_{1}(x),f_{2}(x),\ldots,f_{n}(x))\), where \(x\in\mathbb{F}_{2}^{n}\), is a _permutation_ on \(\mathbb{F}_{2}^{n}\) if it is a one-to-one mapping on the set \(\mathbb{F}_{2}^{n}\). Its coordinate function \(f_{k}(x)\) (that is, a Boolean function from \(\mathbb{F}_{2}^{n}\) to \(\mathbb{F}_{2}\)) _essentially depends_ on the variable \(x_{j}\) if there exist values \(b_{1},b_{2},\ldots,b_{j-1},b_{j+1},\ldots,b_{n}\in\mathbb{F}_{2}\) such that \[f_{k}\left(b_{1},b_{2},\ldots,b_{j-1},0,b_{j+1},\ldots,b_{n}\right)\neq f_{k}\left(b_{1},b_{2},\ldots,b_{j-1},1,b_{j+1},\ldots,b_{n}\right).\] In other words, the essential dependence of a function \(f\) on the variable \(x_{j}\) means the presence of \(x_{j}\) in the algebraic normal form of \(f\) (the unique representation of a function in the basis of binary operations AND, XOR, and constants \(0\) and \(1\)). **An example.** Let \(n=3\). Then the Boolean function \(f(x_{1},x_{2},x_{3})=x_{1}x_{2}\oplus x_{3}\) essentially depends on all its variables, but \(g(x_{1},x_{2},x_{3})=x_{1}x_{2}\oplus x_{2}\oplus 1\) essentially depends only on \(x_{1}\) and \(x_{2}\). **The problem.** Find the number of permutations on \(\mathbb{F}_{2}^{n}\) such that all their coordinate functions essentially depend on all \(n\) variables, namely **Q1**: Solve the problem for \(n=2,3\). **Q2**: **Problem for a special prize!** Solve the problem for arbitrary \(n\). #### 4.15.2 Solution Let us denote the number of super-dependent S-boxes in \(n\) variables by \(S(n)\).
We can represent \(F\) as \(F(x)=(f_{1}(x),\ldots,f_{n}(x))\), where \(x\in\mathbb{F}_{2}^{n}\) and \(f_{1},\ldots,f_{n}\) are Boolean functions in \(n\) variables (i.e. functions of the form \(\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}\)). Recall that \(F\) is a permutation if and only if every component function \(b_{1}f_{1}(x)\oplus\ldots\oplus b_{n}f_{n}(x)\), \(b\in\mathbb{F}_{2}^{n}\setminus\{0\}\), is balanced (i.e. it takes the values zero and one on the same number of arguments). Most of the solutions provided by the participants contain an answer to Q1; as a rule, an exhaustive search was used. The correct answer to Q1 is the following: \(S(2)=0\) and \(S(3)=24576\). At the same time, some progress has been made on Q2. A short description of these results is below. The team of Mikhail Kudinov, Denis Nabokov and Alexey Zelenetskiy (Russia) used the inclusion-exclusion principle and provided lower and upper bounds for \(S(n)\). Their ideas were the following. Let \(H(k)\) be the set of balanced functions \(f\colon\mathbb{F}_{2}^{k}\to\mathbb{F}_{2}\) that essentially depend on all their variables \(x_{1},\ldots,x_{k}\). Then, \[|H(n)|=C_{2^{n}}^{2^{n-1}}-\sum_{k=0}^{n-1}C_{n}^{k}|H(k)|,\] where \(C_{n}^{k}\) is a binomial coefficient. Next, let us define for any \(i\in\{1,\ldots,n\}\) the sets \[A_{i}=\{\text{a permutation }F(x)=(f_{1}(x),\ldots,f_{n}(x))\text{ on }\mathbb{F}_{2}^{n}:f_{i}\notin H(n)\}.\] It means that the number of super-dependent S-boxes is the following: \[S(n)=2^{n}!-|A_{1}\cup\ldots\cup A_{n}|.\] It is not difficult to see that \(|A_{i_{1}}\cap\ldots\cap A_{i_{k}}|=|A_{1}\cap\ldots\cap A_{k}|\) for any \(1\leqslant k\leqslant n\) and any \(k\)-element set \(\{i_{1},\ldots,i_{k}\}\subseteq\{1,\ldots,n\}\). The inclusion-exclusion principle gives us that \[S(n)=2^{n}!+\sum_{k=1}^{n}(-1)^{k}C_{n}^{k}|A_{1}\cap\ldots\cap A_{k}|.\] The cardinalities of intersections can be calculated in the following way: \[|A_{1}\cap\ldots\cap A_{k}|=2^{n}!\frac{d(n,k)}{\prod_{i=0}^{k-1}(C_{2^{n-i}}^{2^{n-i-1}})^{2^{i}}},\] where \(d(n,k)\) is the number of tuples \((f_{1},\ldots,f_{k})\) consisting of Boolean functions in \(n\) variables such that \(f_{1},\ldots,f_{k}\notin H(n)\) and \(b_{1}f_{1}\oplus\ldots\oplus b_{k}f_{k}\) is balanced for any \(b\in\mathbb{F}_{2}^{k}\setminus\{0\}\). It is not easy to calculate \(d(n,k)\). However, there is a trivial estimation \(d(n,k)\geqslant C_{2^{n-1}}^{2^{n-2}}\). Also, \[|A_{1}|=2^{n}!\frac{C_{2^{n}}^{2^{n-1}}-|H(n)|}{C_{2^{n}}^{2^{n-1}}}.\] This can be used to estimate \(S(n)\): \[2^{n}!-n|A_{1}|\leqslant S(n)\leqslant 2^{n}!-|A_{1}|.\] The team of Stepan Davydov, Anastasiia Chichaeva and Kirill Tsaregorodtsev (Russia) proposed interesting ideas as well. They noticed that \(2^{n}\mid S(n)\), implemented Monte-Carlo simulations for \(n=4\) and \(n=5\), and showed that \(\lim_{n\to\infty}\frac{S(n)}{2^{n}!}=1\). Also, the team pointed out a subclass of super-dependent S-boxes such that even the component functions of its representatives essentially depend on all the variables. The team of Mikhail Borodin, Vitaly Kiryukhin and Andrey Rybkin (Russia) calculated that \(S(4)=19344102217728=24\cdot 16\cdot 50375266192\). They used the fact that adding any binary vector from \(\mathbb{F}_{2}^{n}\) to a super-dependent S-box in \(n\) variables and rearranging its output bits yields a super-dependent S-box as well. In other words, \(n!\cdot 2^{n}\mid S(n)\) holds. (A brute-force check of the Q1 values is sketched below.)
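The Q1 values can be reproduced with a short exhaustive search. The Python sketch below is our own illustration (not any particular team's code); it is feasible only for \(n\leqslant 3\), since already for \(n=4\) there are \(16!\approx 2\cdot 10^{13}\) permutations.

```python
from itertools import permutations

def essentially_depends(F, n, k, j):
    """Does coordinate function f_k of the permutation F essentially depend on input bit j?"""
    return any(((F[x] >> k) & 1) != ((F[x ^ (1 << j)] >> k) & 1)
               for x in range(1 << n))

def S(n):
    """Count permutations of F_2^n all of whose coordinate functions
    essentially depend on all n variables."""
    return sum(
        all(essentially_depends(F, n, k, j) for k in range(n) for j in range(n))
        for F in permutations(range(1 << n))
    )

print(S(2), S(3))  # expected: 0 24576
```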
Note that some other participants also mentioned this kind of classification (for instance, in the solution above); however, the team of Borodin, Kiryukhin and Rybkin exploited the fact most successfully. ### 4.16 Problem "Quantum entanglement" #### 4.16.1 Formulation The Nobel Prize in Physics in 2022 was awarded to researchers who experimentally investigated quantum _entanglement_. One of their studies was devoted to a Greenberger-Horne-Zeilinger state \(\left|GHZ\right\rangle=\frac{1}{\sqrt{2}}(\left|000\right\rangle+\left|111\right\rangle)\), which is an entangled state of three qubits. This state can be created using the following quantum circuit (a Hadamard gate on the first qubit followed by two CNOT gates): After the measurement, the probability to find the system described by \(\left|GHZ\right\rangle\) in the state \(\left|000\right\rangle\) or in the state \(\left|111\right\rangle\) is equal to \(1/2\). When we make measurements in quantum physics, we are able to perform _post-selection_. For example, if we post-select the events when the first qubit was in the state \(\left|0\right\rangle\), the second and the third qubits will also be found in the state \(\left|0\right\rangle\) for sure; this is actually what entanglement means. We also see that the post-selection destroys the entanglement of the two remaining qubits. * But what will happen if we post-select the events when the 1st qubit is in the Hadamard state \(\left|+\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle+\left|1\right\rangle)\)? How can we perform this kind of post-selection if the result of each measurement of a qubit state can be only \(0\) or \(1\) and we can only post-select these events? Will the two remaining qubits be entangled after post-selection? Design the circuit which will provide an answer. * **Problem for a special prize!** There are two different classes of three-qubit entanglement. One of them is \[\left|GHZ\right\rangle=\frac{1}{\sqrt{2}}(\left|000\right\rangle+\left|111\right\rangle),\] and the other is \[\left|W\right\rangle=\frac{1}{\sqrt{3}}(\left|001\right\rangle+\left|010\right\rangle+\left|100\right\rangle).\] Discuss possible ideas of how the difference between these states can be found with the usage of post-selection and measurement. Don't forget that you need to verify entanglement for both types of states! **Remark.** Let us briefly formulate the key points of quantum circuits. A qubit is a two-level quantum mechanical system whose state \(\left|\psi\right\rangle\) is a superposition of the basis quantum states \(\left|0\right\rangle\) and \(\left|1\right\rangle\). The superposition is written as \(\left|\psi\right\rangle=\alpha_{0}\left|0\right\rangle+\alpha_{1}\left|1\right\rangle\), where \(\alpha_{0}\) and \(\alpha_{1}\) are complex numbers, called amplitudes, that satisfy \(\left|\alpha_{0}\right|^{2}+\left|\alpha_{1}\right|^{2}=1\). The amplitudes \(\alpha_{0}\) and \(\alpha_{1}\) have the following physical meaning: after the measurement of a qubit which has the state \(\left|\psi\right\rangle\), it will be observed in the state \(\left|0\right\rangle\) with probability \(\left|\alpha_{0}\right|^{2}\) and in the state \(\left|1\right\rangle\) with probability \(\left|\alpha_{1}\right|^{2}\). Note that we can measure a qubit, initially given in the state \(\left|\psi\right\rangle=\alpha_{0}\left|0\right\rangle+\alpha_{1}\left|1\right\rangle\), in another basis, for example the Hadamard basis \(\left|+\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle+\left|1\right\rangle)\) and \(\left|-\right\rangle=\frac{1}{\sqrt{2}}(\left|0\right\rangle-\left|1\right\rangle)\).
In order to do this, we consider the state in the form \(\left|\psi\right\rangle=\alpha_{0}^{\prime}\left|+\right\rangle+\alpha_{1}^{\prime}\left|-\right\rangle\), where the complex amplitudes \(\alpha_{0}^{\prime},\alpha_{1}^{\prime}\) have the same physical meaning as \(\alpha_{0}\) and \(\alpha_{1}\). Then we can calculate the probability that the qubit will be in the state \(\left|+\right\rangle\) or \(\left|-\right\rangle\) after the measurement and consider the process of post-selection in this case. In order to operate with multi-qubit systems, we consider the bilinear operation \(\otimes\colon\left|x\right\rangle,\left|y\right\rangle\rightarrow\left|x\right\rangle\otimes\left|y\right\rangle\), \(x,y\in\left\{0,1\right\}\), which is defined on pairs \(\left|x\right\rangle,\left|y\right\rangle\) and extended by bilinearity to the space of all linear combinations of \(\left|0\right\rangle\) and \(\left|1\right\rangle\). When we have two qubits in states \(\left|\psi\right\rangle\) and \(\left|\varphi\right\rangle\) correspondingly, the state of the whole system of these two qubits is \(\left|\psi\right\rangle\otimes\left|\varphi\right\rangle.\) In general, for two qubits we have \(\left|\psi\right\rangle=\alpha_{00}\left|0\right\rangle\otimes\left|0\right\rangle+\alpha_{01}\left|0\right\rangle\otimes\left|1\right\rangle+\alpha_{10}\left|1\right\rangle\otimes\left|0\right\rangle+\alpha_{11}\left|1\right\rangle\otimes\left|1\right\rangle.\) The physical meaning of the complex numbers \(\alpha_{ij}\) is the same as for one qubit, so we have the essential restriction \(\left|\alpha_{00}\right|^{2}+\left|\alpha_{01}\right|^{2}+\left|\alpha_{10}\right|^{2}+\left|\alpha_{11}\right|^{2}=1\). We use the shorter notation \(\left|a\right\rangle\otimes\left|b\right\rangle\equiv\left|ab\right\rangle\). By induction, this construction is extended to the case of three and more qubits. Mathematically, the entanglement of an \(n\)-qubit state means that we cannot write this state in the form \(\left|\psi\right\rangle=\left|\varphi_{1}\right\rangle\otimes\left|\varphi_{2}\right\rangle\), where \(\left|\varphi_{1}\right\rangle\) and \(\left|\varphi_{2}\right\rangle\) are some states of \(m\) and \(n-m\) qubits, correspondingly. In order to verify your circuits, you can use different quantum circuit simulators, for example, see [15]. #### 4.16.2 Solution The first question. The circuit for the creation of the Greenberger-Horne-Zeilinger state \(|GHZ\rangle\) is the standard one (a Hadamard gate on the first qubit followed by two CNOT gates). First, we need to post-select the events when the first qubit is in the Hadamard state \(|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\). For this purpose, we apply a Hadamard gate to the first qubit prior to its measurement; since \(H|+\rangle=|0\rangle\), post-selecting the measurement outcome \(0\) is then equivalent to post-selecting \(|+\rangle\). The state \(|GHZ\rangle\) can be written as \[|GHZ\rangle=\frac{|000\rangle+|111\rangle}{\sqrt{2}}=|+\rangle\,\frac{(|00\rangle+|11\rangle)}{2}+|-\rangle\,\frac{(|00\rangle-|11\rangle)}{2},\] where \(|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}\). It means that if we select the first qubit in the state \(|+\rangle\), the other qubits will be in the entangled Bell state \(|\Phi^{+}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\). This state can be detected using a CNOT gate followed by a Hadamard gate, which completes the circuit.
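This first-question argument is easy to confirm with a direct statevector computation. The following Python/NumPy sketch is our own illustration (not part of the original solution): it builds \(|GHZ\rangle\), projects the first qubit onto \(|+\rangle\), and checks that the remaining pair is left exactly in \(|\Phi^{+}\rangle\).

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# |GHZ> = (|000> + |111>)/sqrt(2); qubit order: first tensor factor = first qubit
GHZ = (np.kron(np.kron(ket0, ket0), ket0) +
       np.kron(np.kron(ket1, ket1), ket1)) / np.sqrt(2)

# Post-selection: project the first qubit onto |+> and renormalize
proj = np.kron(np.outer(plus, plus), np.eye(4))
post = proj @ GHZ
post /= np.linalg.norm(post)

bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # |Phi+>
print(np.allclose(post, np.kron(plus, bell)))  # True: qubits 2 and 3 remain entangled
```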
The second question, which was supposed to be an open problem, was solved during the Olympiad by the team of Viet-Sang Nguyen, Nhat Linh Le Tan and Phuong Hoa Nguyen (France). Here we provide their solution. If we measure any qubit of the state \(|GHZ\rangle\) and know the result of the measurement, the state of the two remaining qubits immediately becomes known to us. Thus, the state of the whole system of 3 qubits is an entangled one, but the state of the two remaining qubits after the measurement of any qubit is separable. When we measure the first qubit of the state \(|W\rangle\), the result is 0 with probability 2/3 and 1 with probability 1/3. When the first qubit is measured as 1, the system collapses to the separable state \(|00\rangle\) and hence is not entangled anymore. However, when the first qubit is measured as 0, the remaining two qubits end up in a maximally entangled two-qubit state. Given the measurement of one qubit as \(|1\rangle\), we can deduce the information of the other two because there is a correlation in the information between the qubits. Thus, \(|W\rangle\) is an entangled quantum state of three qubits. In contrast to \(|GHZ\rangle\), measuring one qubit of \(|W\rangle\) leaves the two remaining qubits in an entangled state with probability 2/3, whereas the system in \(|GHZ\rangle\) collapses to a separable state after the measurement of any qubit. The post-selection procedure for the state \(|GHZ\rangle\) was discussed in the first question, so the same technique can be applied to the state \(|W\rangle\). Since this state contains residual entanglement after the measurement of a qubit, we can post-select the third qubit in the state \(|0\rangle\) to attain a Bell state of the remaining qubits. In the circuit preparing \(|W\rangle\), the \(R_{y}(\theta)\) gate is a single-qubit rotation through the angle \(\theta=2\arccos(1/\sqrt{3})\) (radians) around the \(y\)-axis. The state \(\ket{W}\) has the following representation: \[\ket{W}=\frac{1}{\sqrt{3}}\big{(}\ket{001}+\ket{010}+\ket{100}\big{)}=\frac{1}{\sqrt{6}}\big{(}\ket{00+}+\ket{01+}+\ket{10+}-\ket{00-}+\ket{01-}+\ket{10-}\big{)}.\] If we can post-select the state \(\ket{+}\) for the third qubit, we have \[\frac{1}{\sqrt{3}}\big{(}\ket{00+}+\ket{01+}+\ket{10+}\big{)}=\frac{1}{\sqrt{3}}\big{(}\ket{00}+\ket{01}+\ket{10}\big{)}\otimes\ket{+},\] which is equivalent to a system of two entangled qubits similar to \(\ket{W}\) and an independent qubit in the state \(\ket{+}\). There is a correlation between the two remaining qubits in this system: if we measure 1 in one qubit, the other must be 0. Hence, we have entanglement between the two qubits. (The corresponding circuit realizes this system: the third qubit in the state \(\ket{+}\) and two entangled qubits.) In conclusion, when measuring one qubit of the state \(\ket{W}\), the state of the other two qubits can remain entangled, but after the measurement of any qubit of the state \(\ket{GHZ}\), the states of the remaining qubits become known. When post-selecting the Hadamard state \(\ket{+}\), both the \(\ket{W}\) and \(\ket{GHZ}\) states return an outcome equivalent to a separate qubit in the state \(\ket{+}\) and an entangled state of two qubits.
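These measurement statistics for \(|W\rangle\) are easy to verify numerically as well. The Python/NumPy sketch below (again our own illustration) measures the first qubit of \(|W\rangle\) in the computational basis and tests entanglement of the remaining pair via the Schmidt rank of its \(2\times 2\) coefficient matrix.

```python
import numpy as np

ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
def basis(b1, b2, b3):
    return np.kron(np.kron(ket[b1], ket[b2]), ket[b3])

W = (basis(0, 0, 1) + basis(0, 1, 0) + basis(1, 0, 0)) / np.sqrt(3)

for outcome in (0, 1):
    proj = np.kron(np.outer(ket[outcome], ket[outcome]), np.eye(4))
    post = proj @ W
    p = float(np.vdot(post, post).real)       # probability of this outcome
    pair = post.reshape(2, 4)[outcome]        # state of qubits 2 and 3
    pair = pair / np.linalg.norm(pair)
    # Schmidt rank 1 <=> separable; rank 2 <=> entangled
    rank = np.linalg.matrix_rank(pair.reshape(2, 2), tol=1e-12)
    print(outcome, round(p, 4), "entangled" if rank == 2 else "separable")
# prints: 0 0.6667 entangled   and   1 0.3333 separable
```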
We would also like to mention the participants who made progress on the solution: the team of Gabriel Tulba-Lecu, Mircea-Costin Preoteasa and Ioan Dragomir (Romania), the team of Mikhail Kudinov, Denis Nabokov and Alexey Zelenetskiy (Russia), the team of Himanshu Sheoran, Gyumin Roh and Yo Iida (India, South Korea, Japan), and the team of Donat Akos Koller, Csaba Kiss and Marton Marits (Hungary).

## 5 Acknowledgement

The authors are grateful to Andrey Nelyubin, Yuliya Maksimlyuk, Irina Khilchuk, Darya Zyubina and Valeria Kochetkova for useful discussions and various help. The work of the first, second, third, fifth, seventh and eighth authors was supported by the Mathematical Center in Akademgorodok under the agreement No. 075-15-2022-282 with the Ministry of Science and Higher Education of the Russian Federation. The work of the ninth author was supported by the Kovalevskaya North-West Centre of Mathematical Research under the agreement No. 075-02-2023-934 with the Ministry of Science and Higher Education of the Russian Federation. The work is also supported by Novosibirsk State University and Kryptonite.
2303.04467
The evolution of cooperation and diversity by integrated indirect reciprocity
Indirect reciprocity is one of the major mechanisms for the evolution of cooperation in human societies. There are two types of indirect reciprocity: upstream and downstream. Cooperation in downstream reciprocity follows the pattern, 'You helped someone, and I will help you'. The direction of cooperation is reversed in upstream reciprocity, which instead follows the pattern, 'You helped me, and I will help someone else'. In reality, these two types of indirect reciprocity often occur in combination. However, upstream and downstream reciprocity have mostly been studied theoretically in isolation. Here, we propose a new model that integrates both types. We apply the standard giving-game framework of indirect reciprocity and analyze the model by means of evolutionary game theory. We show that the model can result in the stable coexistence of altruistic reciprocators and free riders in well-mixed populations. We also found that considering inattention in the assessment rule can strengthen the stability of this mixed equilibrium, even resulting in a global attractor. Our results indicate that the cycles of forwarding help and rewarding help need to be established for creating and maintaining diversity and inclusion in a society.
Tatsuya Sasaki, Satoshi Uchida, Isamu Okada, Hitoshi Yamamoto
2023-03-08T09:37:16Z
http://arxiv.org/abs/2303.04467v1
## The evolution of cooperation and diversity by integrated indirect reciprocity ## Abstract Indirect reciprocity is one of the major mechanisms for the evolution of cooperation in human societies. There are two types of indirect reciprocity: upstream and downstream. Cooperation in downstream reciprocity follows the pattern, 'You helped someone, and I will help you'. The direction of cooperation is reversed in upstream reciprocity, which instead follows the pattern, 'You helped me, and I will help someone else'. In reality, these two types of indirect reciprocity often occur in combination. However, upstream and downstream reciprocity have mostly been studied theoretically in isolation. Here, we propose a new model that integrates both types. We apply the standard giving-game framework of indirect reciprocity and analyze the model by means of evolutionary game theory. We show that the model can result in the stable coexistence of altruistic reciprocators and free riders in well-mixed populations. We also found that considering inattention in the assessment rule can strengthen the stability of this mixed equilibrium, even resulting in a global attractor. Our results indicate that the cycles of forwarding help and rewarding help need to be established for creating and maintaining diversity and inclusion in a society. ## Introduction Reciprocal cooperation is an indispensable part of sustainable societies. Even nearly half a century after Trivers' seminal work (1) on reciprocal altruism, the exploration of game-theoretical models for the evolution of cooperation through reciprocity remains at the forefront of evolutionary biology and the social sciences. Because helping is costly, self-interested individuals will free ride on others, so unconditional cooperation is unlikely to evolve. Therefore, the standard paradigm in the evolution of cooperation is a type of cooperation that is conditional on the degree of the other party's cooperativeness, as in reciprocal cooperation. To succeed in competition with free riders, cooperative reciprocators require enough cognitive capacity to effectively process information for discriminating non-free riders from free riders. When the interaction consists of iterated rounds between the same pair of individuals, reciprocity is often in the form of direct reciprocity (1-3). Direct reciprocity is expressed as follows: A helps B, and then B helps A. Direct reciprocity requires memorising what the co-player and oneself did for each other in the past rounds of the iteration. In the absence of such iterations--as in the case of generalized exchange (4)--reciprocity should be indirect (5-9). Indirect reciprocity extends closed pairwise interactions to relationships involving external third parties. Implementing indirect reciprocity thus requires knowing what the players involved did to others, or had done to them by others, in the past, such as by observing the co-player directly or by using reputation systems. There are two types of indirect reciprocity: upstream and downstream (7). Downstream reciprocity, on the one hand, can be expressed as follows: B helps C, and then A helps B (Fig. 1**b**). In other words, the response to B helping C was not C helping B directly but B being helped by a third party A, who observed B helping C and consequently evaluated B positively; this led to B being helped by A or by another party who was influenced by B's positive evaluation--for instance, through gossip or reputation (10-12).
This is called 'rewarding reputation' (13). Therefore, downstream reciprocity uses reputation to identify partners with whom to cooperate. The motivation for such a reputational mechanism in downstream reciprocity is thus often described as follows: 'If I help you, then I will be deemed good, and then someone will help me'. This is called 'reputational giving' (14). Upstream reciprocity, which is expressed as A helps B, and then B helps C (Fig. 1**a**), is characterised by the logic of not choosing the partners with whom to cooperate. This differs from the logic behind downstream reciprocity, which is based on conditional cooperation. Upstream reciprocity is a chain of altruistic behaviors (15), called 'paying it forward' (16-18), that is driven by forces such as gratitude (13,18-22) or a sense of indebtedness (22), rather than by the expectation of direct or indirect reward. In the eyes of a third party, however, emotional behavior can be viewed as a kind of reputational one, and vice versa. These motivations for reciprocity can easily be intertwined with each other when evaluated. Although upstream and downstream reciprocity are commonly observed behaviors in experimental settings and field research (23-27), evolutionary game theory predicts that natural selection can favor downstream reciprocity but not upstream reciprocity, for which no supportive mechanism exists (7,28-30). Notably, it is also common for different types of reciprocal mechanisms to be applied in tandem to promote cooperation (14). Baker and Bulkley (13) suggested that rewarding reputation and paying it forward can reinforce each other as complementary mechanisms. A recent experimental study reported that in situations in which downstream reciprocators can provide help as a reward, those who pay it forward can become more likely to forward the help received (22). However, upstream and downstream reciprocity have been studied theoretically mostly in isolation, and the impact of their interplay on the evolution of cooperation is still unknown. This is the riddle to be solved. Here, we present a new model that integrates both types of indirect reciprocity. By using the model, we will show that stable coexistence between reciprocal altruists and free riders can be achieved by a method based only on indirect reciprocity, without incorporating other mechanisms such as direct or spatial reciprocity [29]. Specifically, we attempted to implement the virtuous cycle of paying it forward and rewarding reputation [13], as follows (see Fig. 1**c**). Let B be the modeled integrated reciprocator, who can act as either an upstream or downstream reciprocator. First, assume that D helps E; witnessing this, the integrated reciprocator B deems D good and rewards them by helping them as a downstream reciprocator. Furthermore, if A is another integrated reciprocator who already deemed B good, they will try to reward B by helping them as well. Then, B will forward the help received to someone else (C) as an upstream reciprocator. This, again, may lead to B being rewarded by another witnessing integrated reciprocator. After that, the reactive cycle of forwarding and rewarding among integrated reciprocators may continue in the same way. It should be recalled that the chain of unconditional helping by upstream reciprocators is easily terminated when facing a free rider [29]. Re-activating the chain of helping then requires waiting for the chance arrival of a new one.
In contrast, helping in our model is expected to revive more readily because of the intervention of selective rewarding, as depicted above. Considering the interplay between forwarding and rewarding, as such, is a suitable first step towards a comprehensive study of the interplay of upstream and downstream reciprocity. In the next section, we will model the integrated reciprocator by incorporating these forwarding and rewarding behaviors into an action rule for individuals and then analyze the model by means of evolutionary game theory. ## Results **The setup.** We build the model on the basis of the giving game in a well-mixed population. We assume that, given any interaction event, two players are randomly selected from the population and then interact with each other in only one round. Who plays the role of the donor or recipient is determined by a coin toss. To simplify the analysis, we assume that in each round, a player acts as both donor and recipient [31]. When acting as a donor, each player is offered an option to help (C) or not (D). Helping leads to a benefit \(b\) for the recipient and a cost \(c\) for the donor, with \(b>c>0\). Not helping has no effect on either the donor or the recipient. Thus, the interaction yields an example of the well-known prisoner's dilemma game [2]. We also consider the probability of failing to implement an intended action--whether or not to help--denoted by \(\epsilon\) [32]. We then apply the standard framework to study the evolution of indirect reciprocity on the basis of the giving game [33-35]. The player's strategy is described using an action rule and an assessment rule. The action rule prescribes whether a player helps or not. After every round, each player acting as the donor is assigned a binary image of 'good' (G) or 'bad' (B) by following the assessment rule. Note that the player's image when acting as the recipient is assumed to remain unchanged. In this study, we consider public assessment, under which a representative observer monitors each game, enforces the assessment rule for updating images, and broadcasts information about the population. We allow each player to know the co-player's information regarding actions and images perfectly. **Integrated reciprocators stepping forward.** To study the interplay of upstream and downstream reciprocity, we establish the circulation of forwarded and rewarded help, as in Fig. 1**c**. In this study, we examine integrated reciprocators that help conditionally on the integrated action rule (Table 1**a**), as follows. Those who received help in the previous round will help a potential recipient, irrespective of the recipient's image, and those who did not receive help in the previous round will help a potential recipient only if the recipient's image is good. In what follows, we analyze a minimalistic setting in which each individual can choose one of three strategies: unconditional cooperator (X), unconditional defector (Y), and integrated reciprocator (Z). Unconditional cooperators and defectors always intend to help and not to help, respectively. The three strategies' relative frequencies are denoted by \(x\), \(y\), and \(z\), respectively, with \(x+y+z=1\). We assume that in the learning process, strategies that earn a higher payoff are more likely to be imitated in the population. We study this simple process by means of replicator dynamics [(36)] (see Materials and Methods for details).
In what follows, we present the results of the baseline model (Model I) and the tuned one (Model II). **Model I: Stable coexistence of the good and the bad.** We first developed Model I by considering the simplest assessment rule: those who help are deemed good, and those who do not are deemed bad (Table 1**b**). This is just the well-known _scoring_ rule [(37-39)]. As shown in Fig. 2, Model I can stabilize an intermediate level of cooperation in a mixed state of reciprocators and defectors (at P in Fig. 2**a,b**). In maintaining the coexistence, the unconditional forwarding of help by the upstream-reciprocation part of the action rule (the upper row of Table 1**a**) can be exploited by defectors, but this is compensated for by the conditional rewarding of its downstream-reciprocation part (the bottom row of Table 1**a**). In this way, the riddle of the evolution of upstream reciprocity is resolved within indirect reciprocity. Figure 2 shows more details of the evolution of the three strategies. We can see that the phase portraits have a continuum of fixed points in the interior of the simplex \(\Delta=\left\{(x,y,z)\colon x+y+z=1\right\}.\) Particularly interesting are the dimorphic dynamics between integrated reciprocators and defectors, seen along the edge YZ given by \(x=0\). In the case without errors (Fig. 2**a**), edge YZ generally consists of segment RZ, the basin of attractor P, and segment YR, a continuum of boundary fixed points. Attractor P: \(z=z_{0}\) is given by \[z_{0}=\frac{b-2c}{b-c} \tag{1}\] The location of attractor P asymptotically approaches node Z (\(z=1\)) as the benefit-cost ratio, \(b/c\), increases. At attractor P the population average of the probability to help is \(-z_{0}^{2}+3z_{0}-1\). We see that curve PQ, a continuum of interior fixed points connecting points P and Q, divides the simplex. Turning to the other boundaries, the dynamics between integrated reciprocators and cooperators along edge XZ are neutral, and the dynamics between cooperators and defectors along edge XY are dominated by defectors. In the long run, therefore, random fluctuations can lead the population to the vicinity of node Y, the 100%-state of defectors. In the case with errors (Fig. 2**b**), an attractor P and also a repeller Q can appear along edge YZ. While the continuum of boundary fixed points disappears, the continuum of interior fixed points, PQ, remains. The dynamics between reciprocators and cooperators become dominated by the former. Besides these changes, the evolutionary fate of the population in the long run remains similar, converging even more definitely to the 100%-defector state (see Materials and Methods for details). While Model I succeeds in inducing the attractor between reciprocators and defectors, the equilibrium induced is not asymptotically stable [(36)] against the invasion of cooperators. Therefore, regardless of the presence or absence of errors, random perturbations will drive the population away from the coexistence state in the long run. This is similar to the evolution of indirect reciprocity by _scoring_ (32,40,41). The lack of stability of the coexistence state can be understood as follows. The definition of goodness in Model I is based only on whether one helps or not, thus giving rise to the infamous problem of 'unjustified defection' [(9, 42, 43)] when reciprocators refuse to help those who are deemed bad. In this case, the image of such reciprocators becomes bad, and their chance of being rewarded by other reciprocators decreases.
When such a chain reaction of unjustified defection and image downgrading occurs, the advantage of being a reciprocator rather than a cooperator is lost. **Model II: Robustness against the invasion of cooperators.** To strengthen the stability of the coexistence state, we propose Model II with a tuned assessment rule (Table 1**c**). Under the new rule, only those who implement upstream reciprocity deserve to be rewarded by those who follow the action rule. That is, Model II better captures the virtuous cycle of forwarding and rewarding cooperation. Indeed, for donors who received help in the previous round, those who help are deemed good and those who do not are deemed bad; for donors who received no help in the previous round, the image remains unchanged (denoted as K in Table 1**c**) whether they help or not in the current round. The new assessment rule is a sort of _staying_ rule (44,45) and is designed to focus rewarding on upstream reciprocation. Model II can result in the coexistence of reciprocators and defectors that does not allow cooperators to invade. In fact, in striking contrast to Model I, the dynamics for Model II have no interior equilibria, whether with or without errors. Fig. 3 shows that all the interior orbits converge to the boundary of the simplex, particularly edge YZ. This follows from the fact that if the rate of the implementation error, \(\epsilon\), is sufficiently small, integrated reciprocators are better off than cooperators (that is, \(P_{\mathrm{Z}}-P_{\mathrm{X}}>0\) holds). In the case without errors (Fig. 3**a**), along edge YZ there exists a unique fixed point, P: \(z=z_{0}\), with the same coordinates as in Model I, and node Y is a saddle. At the attractor P the cooperation rate (the probability to do C) over the population is given by \(-z_{0}^{3}+2z_{0}^{2}\). The dynamics on the other edges, XZ and XY, remain unchanged from Model I. It follows that P is in fact the global attractor. Turning to the case with errors (Fig. 3**b**), edge YZ can exhibit an attractor P and a repeller Q; thus, the population dynamics can be bistable, evolving either to the mixed state P or to the 100%-defector state Y (see Materials and Methods for details). The stability of the attractor P against the invasion of cooperators can be understood as follows. Suppose that an integrated reciprocator received no help in the previous round; even if they then refuse to help a co-player with a bad image, the reciprocator's image does not change, owing to the _staying_ element (K) of the assessment rule in Model II (Table 1**c**). Thus, the occurrence of unjustified defection is prevented. This means that a reciprocator with a good image can keep that image and thus continue to deserve to be rewarded by other reciprocators. ## Discussion This study is a point of departure into an uncharted region in the field of the evolution of indirect reciprocity. Theories that explain the evolution of upstream reciprocity have so far used models that combine other mechanisms, such as direct or spatial reciprocity, while excluding downstream reciprocity. Our model shows that it is possible to establish a global attractor that can sustain a high level of upstream reciprocation, even without assuming errors, by integrating it with downstream reciprocation. Surprisingly, in the attractor, the integrated reciprocators considered can coexist with all-out defectors while deterring the intrusion of unconditional cooperators.
Indeed, none of the previous models of indirect reciprocity resulted in the stable coexistence of altruistic reciprocators and free riders for the harsh prisoner's dilemma game in well-mixed populations. Instead, finding an attractor between conditional and unconditional cooperators has been intensively studied [(33,46-50)]. The results can be compared with what happens in the evolution of four strategies: unconditional cooperator, unconditional defector, upstream reciprocator, and downstream reciprocator. Indeed, our study shows that the replicator dynamics for the four strategies can only result in a bistable fate of the population, as in the evolution of downstream reciprocity. The state space is divided into two distinct regions by a continuum of stable and unstable fixed points, given by \(z=c/(1-\epsilon)b\) (with \(c/(1-\epsilon)b<1\)), that is, the planar set (Fig. 4). Random perturbations thus lead the population to end up in the 100%-defector state. This reveals that the simple extension of the strategy space to upstream reciprocators, as such, has no effect on improving the stability of cooperation (see Materials and Methods for details). Here, let us discuss the role of errors. It has been established that conditional strategies that attempt to establish cooperation can be eroded by unconditional cooperation strategies--ironically, once full cooperation is established. In a fully cooperative regime, conditional cooperators cannot be distinguished from unconditional ones and are thus seen as neutral mutants. Once unconditional cooperators spread to some extent, the invasion of all-out defectors will take place. Hence, most models of the evolution of conditional cooperation have considered errors that lead conditional cooperators to refuse to help unconditional cooperators. In this way, conditional cooperators can be better off than unconditional cooperators. Considering errors has hitherto been essential in stabilising conditional cooperation [(48,51-53)]. The need for errors to maintain cooperation has thus been a sort of necessary evil in the evolution of indirect reciprocity. In this regard, our results are not based on an artificial extrapolation of error factors. The significance of this study is that it demonstrates that the coexistence of altruists and free riders can be endogenously established through an evolutionary process. The asymptotic stability of this coexistence is a merit that stems from integrating upstream and downstream reciprocity, not from considering downstream reciprocity in isolation. One important issue we have left out is that the stable coexistence established in Model II can become unstable against the invasion of 'pure' upstream reciprocity (Table 2**a**). Pure upstream reciprocators are those who can free ride on the costly rewarding by the integrated reciprocators. To deal with this issue, a conceivable countermeasure would be updating the assessment rule so as to downgrade pure upstream reciprocators. We also remark on another type of free rider: those who only employ downstream reciprocity (Table 2**b**) and thus free ride on the costly unconditional forwarding of help. Our analysis suggests that the coexistence established can be stable against the invasion of pure downstream reciprocators (see Materials and Methods for details).
Other pressing issues to address include the systematic exploration of integrated assessment rules, extension to negative reciprocity or paying forward greed [(54,55)], combination of upstream reciprocity with more complex downstream reciprocity, such as the leading eight norms [(56,57)], application of private assessment in norm ecosystems [(58-64)], and further integration with other types of reciprocity [(29,65,66)] or sanctioning systems [(67,68)]. In human societies, the coexistence of individuals with diverse degrees of cooperativeness is commonly observed, and maintaining inclusion and diversity is one of the key factors for sustainable development. While previous research on the evolution of cooperation by reciprocity has mostly focused on exclusively establishing the monomorphic state with full cooperation, little has been revealed about the conditions under which a polymorphic state with high and low cooperation can evolve (69-71). We believe that this study can pave the way for further research to facilitate and strengthen social diversity. ## Materials and Methods ### Evolutionary dynamics and image dynamics. We will analyze the model by means of evolutionary game theory and investigate the replicator dynamics for the set of strategies considered. We thus assume an infinitely large population and its slow evolution, such that the composition of the population can be assumed to remain unchanged over consecutive rounds. The replicator dynamics are given, in general, by \(ds/dt=s(P_{S}-P)\), in which \(s\) denotes the relative frequency of individuals who employ strategy \(S\), \(P_{S}\) the expected payoff per round for strategy \(S\) (\(P_{S}\) is determined after playing an infinitely large number of rounds), and \(P\) the average payoff over the population, given by \(\sum_{S}sP_{S}\). As the first step, let us investigate the dynamics for the three strategies: unconditional cooperator (X), unconditional defector (Y), and integrated reciprocator (Z). We denote their relative frequencies by \(x\), \(y\), and \(z\), respectively. Thus, \(x+y+z=1\) and \(P=xP_{\mathrm{X}}+yP_{\mathrm{Y}}+zP_{\mathrm{Z}}\). We also describe the relative frequency of those who have a good image within each strategy subpopulation by \(g_{S}\) with \(S\in\{\mathrm{X},\mathrm{Y},\mathrm{Z}\}\). We denote the frequency of the good over the whole population by \(g=xg_{\mathrm{X}}+yg_{\mathrm{Y}}+zg_{\mathrm{Z}}\). Here, we introduce a minimalistic framework that can deal with the interplay of upstream and downstream reciprocity by using the following generalized first-order action and assessment rules. The generalized first-order assessment rule is given by the following matrix: \[\begin{array}{ccc}\text{in/out}&\text{give C}&\text{give D}\\ \text{received C}&g(\mathrm{C},\mathrm{C})&g(\mathrm{C},\mathrm{D})\\ \text{received D}&g(\mathrm{D},\mathrm{C})&g(\mathrm{D},\mathrm{D})\end{array} \tag{2}\] in which each element \(g(a,b)\) denotes the probability that the focal player, who received action \(a\) in the previous round and then gives action \(b\) in the current round, with \(a,b\in\{\mathrm{C},\mathrm{D}\}\), is deemed good.
This matrix is a function of what the focal player does and of what was done to the focal player, and it thus covers first-order assessment rules such as _scoring_ (Table 1**b**). In the equilibrium state (attained by starting from the state in which all have a good image) the frequency of the good for each strategy should satisfy the following: \[g_{S}=\sum_{a,b\in\{\mathrm{C},\mathrm{D}\}}u_{S}(a)\,v_{S}(b)\,g(a,b)=u_{S}(\mathrm{C})v_{S}(\mathrm{C})g(\mathrm{C},\mathrm{C})+u_{S}(\mathrm{C})v_{S}(\mathrm{D})g(\mathrm{C},\mathrm{D})+u_{S}(\mathrm{D})v_{S}(\mathrm{C})g(\mathrm{D},\mathrm{C})+u_{S}(\mathrm{D})v_{S}(\mathrm{D})g(\mathrm{D},\mathrm{D}), \tag{3}\] in which \(u_{S}(i)\) and \(v_{S}(i)\) denote the probabilities that the focal player with strategy \(S\) receives action \(i\) and gives action \(i\), respectively, in a given round. Thus, \(u_{S}(\mathrm{D})=1-u_{S}(\mathrm{C})\) and \(v_{S}(\mathrm{D})=1-v_{S}(\mathrm{C})\). Then, we give the generalized first-order action rule by the following matrix: \[\begin{array}{ccc}&\text{Good}&\text{Bad}\\ \text{received C}&p_{S}(\mathrm{C},\mathrm{G})&p_{S}(\mathrm{C},\mathrm{B})\\ \text{received D}&p_{S}(\mathrm{D},\mathrm{G})&p_{S}(\mathrm{D},\mathrm{B})\end{array} \tag{4}\] in which each element \(p_{S}(a,i)\) denotes the probability that the focal player, who received action \(a\in\{\mathrm{C},\mathrm{D}\}\) in the previous round and then faces a recipient with image \(i\in\{\mathrm{G},\mathrm{B}\}\), implements action C as a potential donor in the current round. This framework can cover the fundamental action rules: integrated reciprocity (Table 1a), upstream reciprocity (Table 2a), and downstream reciprocity (Table 2b). By using the notations in Eqn (4), the probability that a donor with strategy \(S\) implements C to (or helps) a recipient with strategy \(T\) is given by \[u(S,T)=\sum_{a\in\{\mathrm{C},\mathrm{D}\},\,i\in\{\mathrm{G},\mathrm{B}\}}u_{S}(a)\,g_{T}(i)\,p_{S}(a,i)=u_{S}(\mathrm{C})g_{T}(\mathrm{G})p_{S}(\mathrm{C},\mathrm{G})+u_{S}(\mathrm{C})g_{T}(\mathrm{B})p_{S}(\mathrm{C},\mathrm{B})+u_{S}(\mathrm{D})g_{T}(\mathrm{G})p_{S}(\mathrm{D},\mathrm{G})+u_{S}(\mathrm{D})g_{T}(\mathrm{B})p_{S}(\mathrm{D},\mathrm{B}), \tag{5}\] in which \(g_{T}(\mathrm{G}):=g_{T}\) and thus \(g_{T}(\mathrm{B})=1-g_{T}(\mathrm{G})\). This yields that \[u_{S}(\mathrm{C})=\sum_{S^{\prime}}s^{\prime}\,u(S^{\prime},S), \tag{6}\] and \[v_{S}(\mathrm{C})=\sum_{S^{\prime}}s^{\prime}\,u(S,S^{\prime}). \tag{7}\] Therefore, for the minimalistic setting with the strategy space \(\{\mathrm{X},\mathrm{Y},\mathrm{Z}\}\), we have \[\begin{array}{l}u_{\mathrm{X}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+z[u_{\mathrm{Z}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{Z}}(\mathrm{D})(g_{\mathrm{X}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{X}}(\mathrm{B})\epsilon)],\\ u_{\mathrm{Y}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+z[u_{\mathrm{Z}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{Z}}(\mathrm{D})(g_{\mathrm{Y}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{Y}}(\mathrm{B})\epsilon)],\\ u_{\mathrm{Z}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+z[u_{\mathrm{Z}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{Z}}(\mathrm{D})(g_{\mathrm{Z}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{Z}}(\mathrm{B})\epsilon)],\end{array} \tag{8}\] and \[\begin{array}{l}v_{\mathrm{X}}(\mathrm{C})=1-\epsilon,\\ v_{\mathrm{Y}}(\mathrm{C})=\epsilon,\\ v_{\mathrm{Z}}(\mathrm{C})=u_{\mathrm{Z}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{Z}}(\mathrm{D})[g(1-\epsilon)+(1-g)\epsilon].\end{array} \tag{9}\] By solving Eqs.
(3,8,9), we can obtain \(g_{S}(\mathrm{G})\), \(u_{S}(\mathrm{C})\), and \(v_{S}(\mathrm{C})\) for each point \((x,y,z)\) of the state space \(\Delta\). We assume that the image dynamics in Eqs. (3,5) are so fast that the replicator dynamics can be determined by the expected payoffs, which depend on \(u_{S}(\mathrm{C})\) and \(v_{S}(\mathrm{C})\) in the equilibrium state of the image dynamics. We also assume that the image dynamics start from a situation in which all individuals have a good image. The expected payoffs for the strategies are given by \[P_{S}=bu_{S}(\mathrm{C})-cv_{S}(\mathrm{C}). \tag{10}\] **Model I.** From the assessment rule that those who help are deemed good (Table 1b), we have \[g_{S}=v_{S}. \tag{11}\] Thus, substituting Eq. (11) into Eq. (8) yields \[\begin{array}{l}P_{\mathrm{Z}}-P_{\mathrm{Y}}=(g_{\mathrm{Z}}(\mathrm{G})-g_{\mathrm{Y}}(\mathrm{G}))[P_{\mathrm{X}}-P_{\mathrm{Y}}],\\ P_{\mathrm{Z}}-P_{\mathrm{X}}=(g_{\mathrm{Z}}(\mathrm{G})-g_{\mathrm{X}}(\mathrm{G}))[P_{\mathrm{X}}-P_{\mathrm{Y}}],\end{array} \tag{12}\] in which \[P_{\mathrm{X}}-P_{\mathrm{Y}}=bz(1-u_{\mathrm{Z}}(\mathrm{C}))-c \tag{13}\] holds. The zero set of \(P_{\mathrm{X}}-P_{\mathrm{Y}}\) as a function of \((x,y,z)\) provides a continuum of fixed points for the replicator dynamics in the interior of the two-dimensional state space \(\Delta\). This is what the interior curve PQ describes in Figs. 2a,b. We first focus on the case without errors (Fig. 2a). From Eqs. (12,13), we have on edge YZ that for segment ZR with \((3-\sqrt{5})/2<z\leq 1\), the fraction of the good converges to \[g_{\mathrm{Z}}(\mathrm{G})=-\frac{z^{2}-3z+1}{z}, \tag{14}\] or otherwise, for segment RY with \(0\leq z<(3-\sqrt{5})/2\), to \(g_{\mathrm{Z}}=0\). Hence, at the attractor P the fraction of the good over the whole population (that is, the frequency of those who cooperate) is \(g=(1-z_{0})g_{\mathrm{Y}}(\mathrm{G})+z_{0}g_{\mathrm{Z}}(\mathrm{G})=-z_{0}^{2}+3z_{0}-1\). Substituting Eq. (14) into Eq. (13) yields the zero set of Eq. (13) on segment ZR, which is given by \(z_{0}=(b-2c)/(b-c)\) in Eqn (1). It follows that the point P with \(z=z_{0}\) is an attractor whose basin is ZR. On the other side, segment RY consists exclusively of fixed points, along which \(g_{\mathrm{Z}}=g_{\mathrm{Y}}=0\) yields \(P_{\mathrm{Z}}=P_{\mathrm{Y}}=0\). Turning to the dynamics along edge XZ, we have \(g_{\mathrm{Z}}=g_{\mathrm{X}}=1\), and thus \(P_{\mathrm{Z}}=P_{\mathrm{X}}\). Hence, it follows that the dynamics of reciprocators and cooperators are neutral. On edge XY, it is obvious that \(z=0\) yields \(P_{\mathrm{X}}-P_{\mathrm{Y}}=-c<0\) and thus that defectors dominate cooperators. We then examine the case with errors (Fig. 2b). Using numerical simulations, we see that an attractor P and also a repeller Q can appear along edge YZ in general. Since the error rate is non-zero, the fraction of the good among reciprocators, \(g_{\mathrm{Z}}(\mathrm{G})\), always takes a non-zero value; similarly, it never attains its full value of one. As a result, no continuum of boundary fixed points appears along the boundary of the state space. In contrast to this, Eqs. (12,13) hold irrespective of the presence or absence of errors, and thus a continuum of interior fixed points remains. When considering neutral drift or random perturbations, in particular in the case with errors, the population in the long run will converge to the 100%-defector state (node Y). Interestingly, the global dynamics for Model I bear some similarity to those for _scoring_ [31].
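As a concrete illustration of how Eqs. (3,8,9,11) are solved in practice, here is a minimal Python sketch (ours, not the authors' code) that iterates the Model I image dynamics from the all-good initial state and returns the expected payoffs of Eq. (10); the values of \(b\), \(c\) and the iteration count are arbitrary, and convergence to the branch described above relies on starting from the all-good state, as assumed in the text.

```python
def model1_payoffs(x, y, z, b=5.0, c=1.0, eps=0.0, iters=10000):
    """Fixed point of the Model I image dynamics (Eqs. 3, 8, 9 with g_S = v_S)."""
    gX = gY = gZ = 1.0          # image dynamics start from 'all good'
    uZ = 1.0
    # probability that a Z-donor who received D helps a recipient with image prob. g
    reward = lambda g: g * (1 - eps) + (1 - g) * eps
    for _ in range(iters):
        gbar = x * gX + y * gY + z * gZ
        uX = x*(1-eps) + y*eps + z*(uZ*(1-eps) + (1-uZ)*reward(gX))
        uY = x*(1-eps) + y*eps + z*(uZ*(1-eps) + (1-uZ)*reward(gY))
        uZnew = x*(1-eps) + y*eps + z*(uZ*(1-eps) + (1-uZ)*reward(gZ))
        vX, vY = 1 - eps, eps
        vZ = uZ*(1-eps) + (1-uZ)*reward(gbar)
        gX, gY, gZ, uZ = vX, vY, vZ, uZnew   # scoring: g_S = v_S (Eq. 11)
    return b*uX - c*vX, b*uY - c*vY, b*uZ - c*vZ

# On edge YZ with eps = 0, the payoffs of Y and Z should equalize at z0 = (b-2c)/(b-c):
b, c = 5.0, 1.0
z0 = (b - 2*c) / (b - c)
PX, PY, PZ = model1_payoffs(0.0, 1 - z0, z0, b=b, c=c)
print(PZ - PY)   # approximately 0 at the attractor P (up to iteration tolerance)
```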
**Model II.** By the _staying_ element in the assessment rule (Table 1**c**), in the equilibrium state of the image dynamics, we have the following equations: \[\begin{array}{l}g_{\mathrm{X}}(\mathrm{G})=u_{\mathrm{X}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{X}}(\mathrm{D})g_{\mathrm{X}}(\mathrm{G}),\\ g_{\mathrm{Y}}(\mathrm{G})=u_{\mathrm{Y}}(\mathrm{C})\epsilon+u_{\mathrm{Y}}(\mathrm{D})g_{\mathrm{Y}}(\mathrm{G}),\\ g_{\mathrm{Z}}(\mathrm{G})=u_{\mathrm{Z}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{Z}}(\mathrm{D})g_{\mathrm{Z}}(\mathrm{G}),\end{array} \tag{15}\] which obviously lead to the following constant values: \[g_{\mathrm{X}}(\mathrm{G})=1-\epsilon,\ g_{\mathrm{Y}}(\mathrm{G})=\epsilon,\ \text{and}\ g_{\mathrm{Z}}(\mathrm{G})=1-\epsilon. \tag{16}\] In striking contrast to Model I, the replicator dynamics for Model II have no interior equilibrium in the state space, and we thus can see that all interior orbits will converge to the boundary of the state space (Figs. 3**a,b**). Indeed, the payoff difference between reciprocators and cooperators is given by \[P_{\mathrm{Z}}-P_{\mathrm{X}}=c(1-u_{\mathrm{Z}}(\mathrm{C}))(1-g_{\mathrm{Z}}(\mathrm{G}))(1-2\epsilon), \tag{17}\] in which \((1-u_{\mathrm{Z}}(\mathrm{C}))(1-g_{\mathrm{Z}}(\mathrm{G}))\neq 0\) holds in the interior of the state space, yielding \(P_{\mathrm{Z}}-P_{\mathrm{X}}>0\) for sufficiently small errors with \(\epsilon<1/2\). Next, let us check the dynamics between reciprocators and defectors along edge YZ. For \(x=0\), we have that \[P_{\mathrm{Z}}-P_{\mathrm{Y}}=bz(1-u_{\mathrm{Z}}(\mathrm{C}))(1-2\epsilon)-c[u_{\mathrm{Z}}(\mathrm{C})+u_{\mathrm{Z}}(\mathrm{D})g_{\mathrm{Z}}(\mathrm{G})], \tag{18}\] and, furthermore, in the case without errors (\(\epsilon=0\)), \[P_{\mathrm{Z}}-P_{\mathrm{Y}}=-z^{2}(b-c)+z(b-2c). \tag{19}\] Thus, for \(b>2c\), point P with \(z=z_{0}\), the same value as in Eq. (1), becomes an attractor along edge YZ. We note also that the dynamics along edges XZ and XY remain unchanged from those for Model I. For these reasons, in the case without errors, it follows that all interior orbits will converge to P and thus that P is the global attractor (Fig. 3**a**). By using Eq. (9), the probability that reciprocators give C, \(v_{\mathrm{Z}}(\mathrm{C})\), equals \(z_{0}(2-z_{0})\), and thus its population average is \(z_{0}^{2}(2-z_{0})\). In the case with errors, it turns out that an attractor P and a repeller Q appear simultaneously on edge YZ. Hence, the replicator dynamics have only two local attractors, P and node Y. As a result, the global dynamics are bistable: the population will converge to either P or Y (Fig. 3**b**).
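For a quick numerical check of Eq. (19), the sketch below (ours; the values of \(b\), \(c\) and the step size are arbitrary) integrates the replicator dynamics restricted to edge YZ in the error-free case and confirms convergence to the attractor at \(z_{0}=(b-2c)/(b-c)\).

```python
b, c, dt = 5.0, 1.0, 0.01
z = 0.95                                    # initial share of reciprocators (x = 0)
for _ in range(20000):
    adv = -z**2 * (b - c) + z * (b - 2*c)   # P_Z - P_Y on edge YZ, Eq. (19)
    z += dt * z * (1 - z) * adv             # replicator dynamics restricted to the edge
print(z, (b - 2*c) / (b - c))               # both approximately 0.75: convergence to P
```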
**Stability of the attractor P against the invasion of pure downstream reciprocators in Model II.** Here, we prove, in the case without errors, that a rare mutant of pure downstream reciprocators (W) is worse off than the resident population consisting of defectors (Y) and reciprocators (Z). Consider pure downstream reciprocators (PDR) who employ the action rule in Table 2**b** and the assessment rule in Table 1**c**. We first note that \(g_{\mathrm{W}}(\mathrm{G})=u_{\mathrm{W}}(\mathrm{C})g+u_{\mathrm{W}}(\mathrm{D})g_{\mathrm{W}}(\mathrm{G})\), and thus \(g_{\mathrm{W}}(\mathrm{G})=z\) on edge YZ. Using this, we calculate the probability for PDR to receive C as \(u_{\mathrm{W}}(\mathrm{C})=(1-z)\cdot 0+z(u_{\mathrm{Z}}(\mathrm{C})+u_{\mathrm{Z}}(\mathrm{D})g_{\mathrm{W}}(\mathrm{G}))=z(u_{\mathrm{Z}}(\mathrm{C})+u_{\mathrm{Z}}(\mathrm{D})z)\), and the probability for PDR to give C as \(v_{\mathrm{W}}(\mathrm{C})=(1-z)\cdot 0+zg_{\mathrm{Z}}=z\). Similarly, the probability for integrated reciprocators to receive C is given by \(u_{\mathrm{Z}}(\mathrm{C})=(1-z)\cdot 0+z(u_{\mathrm{Z}}(\mathrm{C})+u_{\mathrm{Z}}(\mathrm{D})g_{\mathrm{Z}}(\mathrm{G}))=z\), so that \(u_{\mathrm{W}}(\mathrm{C})=z^{2}(2-z)<u_{\mathrm{Z}}(\mathrm{C})\). The probability for integrated reciprocators to give C is \(v_{\mathrm{Z}}(\mathrm{C})=u_{\mathrm{Z}}(\mathrm{C})+u_{\mathrm{Z}}(\mathrm{D})g=z(2-z)\), which exceeds \(v_{\mathrm{W}}(\mathrm{C})=z\); nevertheless, the payoff difference \[P_{\mathrm{Z}}-P_{\mathrm{W}}=b(u_{\mathrm{Z}}(\mathrm{C})-u_{\mathrm{W}}(\mathrm{C}))-c(v_{\mathrm{Z}}(\mathrm{C})-v_{\mathrm{W}}(\mathrm{C}))=bz(1-z)^{2}-cz(1-z)=z(1-z)[b(1-z)-c]\] is positive at the attractor P, since \(b(1-z_{0})-c=c^{2}/(b-c)>0\). That is, the mutant PDR, with \(P_{\mathrm{W}}=bu_{\mathrm{W}}(\mathrm{C})-cv_{\mathrm{W}}(\mathrm{C})\), is not selected for among the residents at P. **Cooperator, defector, upstream reciprocator, and downstream reciprocator.** We also explore the evolution of four strategies: unconditional cooperator, unconditional defector, upstream reciprocator, and downstream reciprocator. A downstream reciprocator intends to help a recipient if the recipient helped someone else in the previous round; if the recipient did not help, the downstream reciprocator intends not to help (Table 2**b**). An upstream reciprocator intends to help a recipient, irrespective of the recipient's image, if the upstream reciprocator received help in the previous round; otherwise, the upstream reciprocator intends not to help (Table 2**a**). We denote by \(x\), \(y\), \(v\), and \(w\) the relative frequencies of unconditional cooperator (X), unconditional defector (Y), upstream reciprocator (V), and downstream reciprocator (W), respectively. Thus, \(x+y+v+w=1\) and \(P=xP_{\mathrm{X}}+yP_{\mathrm{Y}}+vP_{\mathrm{V}}+wP_{\mathrm{W}}\). The frequency of the good over the whole population is given by \(g=xg_{\mathrm{X}}+yg_{\mathrm{Y}}+vg_{\mathrm{V}}+wg_{\mathrm{W}}\). Then, as in Eq. (8), we have the following equations that define \(u_{S}(\mathrm{C})\) recursively: \[\begin{array}{l}u_{\mathrm{X}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+v[u_{\mathrm{V}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{V}}(\mathrm{D})\epsilon]+w[g_{\mathrm{X}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{X}}(\mathrm{B})\epsilon],\\ u_{\mathrm{Y}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+v[u_{\mathrm{V}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{V}}(\mathrm{D})\epsilon]+w[g_{\mathrm{Y}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{Y}}(\mathrm{B})\epsilon],\\ u_{\mathrm{V}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+v[u_{\mathrm{V}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{V}}(\mathrm{D})\epsilon]+w[g_{\mathrm{V}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{V}}(\mathrm{B})\epsilon],\\ u_{\mathrm{W}}(\mathrm{C})=x(1-\epsilon)+y\epsilon+v[u_{\mathrm{V}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{V}}(\mathrm{D})\epsilon]+w[g_{\mathrm{W}}(\mathrm{G})(1-\epsilon)+g_{\mathrm{W}}(\mathrm{B})\epsilon].\end{array} \tag{20}\] By solving Eqs. (3,9,20), we can obtain \(g_{S}(\mathrm{G})\), \(u_{S}(\mathrm{C})\), and \(v_{S}(\mathrm{C})\). Substituting these into Eq. (10) allows us to calculate the payoffs and thus the replicator dynamics. For Model II, \(v_{S}\), the probability that a player with strategy \(S\) gives C, is given by: \[\begin{array}{l}v_{\mathrm{X}}(\mathrm{C})=1-\epsilon,\\ v_{\mathrm{Y}}(\mathrm{C})=\epsilon,\\ v_{\mathrm{V}}(\mathrm{C})=u_{\mathrm{V}}(\mathrm{C})(1-\epsilon)+u_{\mathrm{V}}(\mathrm{D})\epsilon,\\ v_{\mathrm{W}}(\mathrm{C})=g(1-\epsilon)+(1-g)\epsilon.\end{array} \tag{21}\] Fig. 4 describes the evolution of the four strategies by the replicator dynamics. Fig. 4**a** shows the boundary dynamics on each face. On the X-Y-V face, defectors dominate. For the other three faces (X-Y-W, X-W-V, and Y-W-V), there can exist a continuum of interior fixed points if \(c/(1-\epsilon)b<1\).
We see also that the edge dynamics between downstream and upstream are neutral. Therefore, the random shock can bring the population eventually to node Y, which is the homogeneous state for defectors. Fig. 4**b** shows the interior dynamics. If \(c/(1-\epsilon)b<1\), there exists an intersection of the plane and the 3D simplex \(\Delta_{4}=\{(x,y,v,w)\colon x+y+v+w=1\}\). Otherwise, there is no interior fixed point in \(\Delta_{4}\). Fig. 4**b** shows the intersection consists of stable and unstable fixed points. Depending on the initial conditions, the population may first evolve to a stable point within the planar continuum of fixed points. Whatever the initial conditions, the random perturbation can still lead the population to finally converge to node Y. ## Acknowledgments This work was supported by JSPS KAKENHI Grant Numbers JP19H02376 (I.O, H.Y), JP20K20651 (I.O), JP21H01568 (I.O, H.Y), JP21KK0027 (I.O, H.Y), JP22H03906 (H.Y, I.O), and JP19K21570 (H.Y). The funders have/had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. We are grateful to A. Brannstrom, U. Dieckmann, Y. Nakai, H. Ohtsuki, and N. Takahashi for their comments.
2310.03230
The squish map and the $\text{SL}_2$ double dimer model
A plane partition, whose 3D Young diagram is made of unit cubes, can be approximated by a ``coarser" plane partition, made of cubes of side length 2. Indeed, there are two such approximations obtained by ``rounding up" or ``rounding down" to the nearest cube. We relate this coarsening (or downsampling) operation to the squish map introduced by the second author in earlier work. We exhibit a related measure-preserving map between the dimer model on the honeycomb graph, and the $\text{SL}_2$ double dimer model on a coarser honeycomb graph; we compute the most interesting special case of this map, related to plane partition $q$-enumeration with 2-periodic weights. As an application, we specialize the weights to be certain roots of unity, obtain novel generating functions (some known, some new, and some conjectural) that $(-1)$-enumerate certain classes of pairs of plane partitions according to how their dimer configurations interact.
Leigh Foster, Benjamin Young
2023-10-05T00:54:34Z
http://arxiv.org/abs/2310.03230v2
# The squish map and the \(\mathrm{SL}_{2}\) double dimer model ###### Abstract A plane partition, whose 3D Young diagram is made of unit cubes, can be approximated by a "coarser" plane partition, made of cubes of side length 2. Indeed, there are two such approximations obtained by "rounding up" or "rounding down" to the nearest cube. We relate this coarsening (or downsampling) operation to the squish map introduced by the second author in earlier work. We exhibit a related measure-preserving map between the dimer model on the honeycomb graph, and the \(\mathrm{SL}_{2}\) double dimer model on a coarser honeycomb graph; we compute the most interesting special case of this map, related to plane partition \(q\)-enumeration with 2-periodic weights. As an application, we specialize the weights to be certain roots of unity, obtain novel generating functions (some known, some new, and some conjectural) that \((-1)\)-enumerate certain classes of pairs of plane partitions according to how their dimer configurations interact. ## 1 Introduction Anyone who has played with cubical building blocks in their youth has, at some point, constructed a \(2\times 2\times 2\) cube out of eight \(1\times 1\times 1\) cubes. Later in life, some of us (including the authors) went on to study _plane partitions_, which are nothing more than stable piles of cubes in the corner of a large room. These two experiences tell us that there ought to be _some_ relation between plane partitions made out of the little cubes, and plane partitions made out of the bigger ones. Or, going the other way: what if we take a picture of a plane partition and "downsample" it, by approximating its \(1\times 1\times 1\) cubes by roughly one eighth as many \(2\times 2\times 2\) cubes, as best we can (see Figure 1). How much information do we lose by doing this? We propose to answer this question through an analysis of the _squish map_ - a map originally studied in [10], as a means of proving combinatorial theorem about plane partition enumeration. Here, we prove that the squish map is a measure-preserving transformation between instances of the single and double dimer models on honeycomb graphs. Thus, the loops of the double dimer model indicate where information is lost in the "downsampling" process. ### Definitions and literature survey A _plane partition_ is an infinite matrix of nonnegative integers, all of which are zero sufficiently far from the origin, which are weakly decreasing both in rows or in columns (we do not typically draw the zeros). Equivalently, interpreting the numbers in a plane partition as \(z\) coordinates, one can represent a plane partition as a stack of unit cubes in the corner of a room - this is precisely the relationship between ordinary integer partitions and their Young diagrams, but one dimension higher. If the plane partition's cubes fit inside an \(x\times y\times z\) box, then it is said to be a _boxed \(x\times y\times z\) plane partition_. MacMahon [14] proved that the generating function for boxed \(x\times y\times z\) plane partitions is \[\prod_{i=1}^{\infty}\prod_{j=1}^{\infty}\frac{1-q^{i+j+c-1}}{1-q^{i+j-1}}\] which, in the limit \(x,y,z\to\infty\) gives the famous generating function for plane partitions, \[\prod_{i\geq 1}\left(\frac{1}{1-q^{i}}\right)^{i}.\] Boxed plane partitions are in bijection with tilings of an \(x\times y\times z\) hexagon graph, and hence with the dimer model on an certain graph. 
This result (as far as we know) is folklore, and we don't know a good reference; see Figure 6 for an illustration. We call the graph in question \(H_{x,y,z}\), the \(x\times y\times z\) honeycomb graph (see Figure 2). Let \(D_{x,y,z}\) denote the set of perfect matchings (otherwise known as _dimer configurations_) on \(H_{x,y,z}\). Furthermore, let \(DD_{x,y,z}\) be the set of _double dimer configurations_ on \(H_{x,y,z}\). That is, an element \(m\in D_{x,y,z}\) is an induced \(1\)-regular subgraph of \(H_{x,y,z}\) - every vertex of \(H_{x,y,z}\) is in exactly one edge of \(D_{x,y,z}\), whereas an element \(m\in DD_{x,y,z}\) is an induced \(2\)-regular subgraph (with the slightly unusual convention that doubled edges are allowed). See Figure 4. We require Kenyon's \(\mathrm{SL}_{2}(\mathbb{C})\)-weighted double dimer model [13]: in addition to an edge weight, one assigns a \(2\times 2\) matrix \(M_{e}\in\mathrm{SL}_{2}(\mathbb{C})\) to each edge \(e\); the partition function involves a product of traces of such matrices taken around a loop. This collection of matrices is called a _connection_ on the graph, by analogy with differential geometry, since its contribution to the partition function is the monodromy around closed paths in the graph. When all \(M_{e}\) are the identity matrix, all loops contribute \(\mathrm{Tr}(I)=2\), so the model reduces to two independent copies of the ordinary dimer model, but this is typically a much more general and subtle model. There are standard, determinantal tools for computing single and double dimer partition functions. For the single dimer model, one uses [15] and [16]; for lattice paths, the references are [12] and [17]. Kenyon [13] introduces a matrix, analogous to Kasteleyn's matrix, whose determinant computes the partition function of this model. We do not need either determinant for this paper. The map we introduce here, _the squish map_, is a map from \(D_{2x,2y,2z}\) to \(DD_{x,y,z}\). One can (in principle) discover everything there is to know about it by drawing \(H_{2x,2y,2z}\) with distorted edge lengths (see Figure 5). It was introduced in [11]. Our result here is that the squish map is Figure 1: A plane partition, viewed as a stack of boxes - and then “downsampled” by a factor of two, rounding up to the nearest full box. The plane partitions are \(\pi\) and \(\pi_{\max}\), respectively, from Example 8 in Section 3.3. measure-preserving, for particular choices of the parameters of the single and double dimer models. Indeed, we are even able to prove a subtle enumerative result about the enumeration of downsampled plane partitions, hinted at in [13]: If we let \(x,y,z\to\infty\), then there is a closed-form generating function which is preserved by the squish map. It is a certain 4-variable generating function for "\(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) - colored plane partitions" studied in [13]; in it, a cube at position \((i,j,k)\) gets a color according to the parities of \((i-k)\) and \((j-k)\); the four variables keep track of how many cubes of each color there are. The squish map allows us to see this generating function as marking certain statistics on a downsampled plane partition. This generating function was studied in [13] under a certain specialization; we both study it in full generality here, including the original specialization and a few intriguing others. ### Acknowledgements We would like to thank Richard Kenyon and Sunil Chhita for helpful conversations. 
## 2 The single and double dimer models **Definition 1**.: The _weight_ of a graph is an assignment \(\nu:E\to\mathbb{R}_{\geq 0}\) of real numbers onto each edge of the graph, where \(E\) is the set of edges in a graph \(G\). **Definition 2**.: Consider a perfect matching \(m\) on \(G\). The _weight of \(m\)_ is defined to be \(w(m)=\prod_{e\in m}w(e)\). **Example 3**.: The following weighting reproduces the \(Q^{\text{(number of boxes)}}\) statistic on plane partitions, up to an overall power of \(Q\). This, as far as the authors know, is a "folklore" idea for which we do not know a good reference. Assign a weight of 1 to one horizontal edge in each column of hexagons. Above each horizontal edge with weight 1, assign weights \(Q,Q^{2},Q^{3},\ldots\), and below weights \(Q^{-1},Q^{-2},\ldots\). Non-horizontal edges get weight 1. Then the overall weight of the perfect matching in Figure 2 is \(Q^{-6}\). Figure 2: A perfect matching (dimer configuration) on \(H_{6,2,4}\), with a monochromatic weighting on the horizontal edges The following definition of the \(\mathrm{SL}_{2}(\mathbb{C})\) double dimer model is due to Kenyon [16]: **Definition 4**.: Let \(G=(V,E)\) be a bipartite graph with a _scalar weight_\(w:E\to\mathbb{C}\) as well as an \(\mathrm{SL}_{2}\)_connection_: a map \(\Gamma:E\to SL_{2}(\mathbb{C})\). Then the contribution of a double dimer configuration \(DD\) is defined to be \[\left(\prod_{e\in m}w(e)\right)\times\prod_{\text{closed loops }L}\mathrm{Tr} \left(\prod_{e\in L}\Gamma(e)\right).\] ## 3 The Squish Map ### Coordinates on the honeycomb grid The honeycomb graph is the tiling of the plane by hexagons; the center of each hexagon is a point in the dual triangular lattice. A convenient way to give coordinates to the triangular lattice is to draw it on the plane normal to \((1,1,1)\in\mathbb{R}^{3}\). This plane has an orthonormal basis \[\vec{x}=\frac{1}{\sqrt{2}}(-1,1,0)\qquad\vec{y}=\frac{1}{\sqrt{6}}\left(-1,-1,2\right)\] which we won't really use, but we _will_ orient our pictures according to it, and we will use words like up, left, bottom, horizontal, etc... in the conventional way with respect to this basis. Figure 4: A double dimer configuration – an overlaying of the right and the left single dimer configurations, that consists of closed loops and doubled edges Figure 3: The two possible configurations of a single hexagon loop in the double dimer model The set \(\mathbb{Z}^{3}\subseteq\mathbb{R}^{3}\) projects onto a copy of the triangular lattice in this plane. Draw the honeycomb graph \(G\) in such a way that the lattice points are the centers of the hexagons, then label each lattice point (and thus each hexagon) with any of the lattice points that project to its center. For instance, the hexagon at the origin has the labels \((0,0,0),\pm(1,1,1),\pm(2,2,2)\), etc. Call this particular embedding of the honeycomb graph \(H\). In the \(\vec{x},\vec{y}\) coordinates above, the edge common to the hexagon at \((0,0,0)\) and the hexagon at \((0,0,1)\) is horizontal. There is also a second embedding of the honeycomb graph on to the plane which is of interest to us, which we shall glibly call \(2H\), and it is obtained by projecting the double-sized lattice \((2\mathbb{Z})\times(2\mathbb{Z})\times(2\mathbb{Z})\) to a (bigger) triangular lattice in the plane, and taking the planar dual. The hexagons of \(2H\) have even coordinates \((2i,2j,2k)\). 
### Degenerating H to 2H The squish map can be defined entirely combinatorially, but for visualization purposes it is extremely helpful to first define a continuous degeneration \(H(t):[0,1]\to\mathbb{R}^{2}\) such that \(H(0)=H\) and \(H(1)=2H\). This degeneration is shown in Figure 5 and first appeared in [21]. To define \(H(t)\), write \(H=(V,E)\), and let \(H^{ev}=(V^{ev},E^{ev})\) be the subgraph of \(H\) consisting of all hexagons whose centers have even coordinates; we say that \(H^{ev}\) are the "even hexagons". Let \(P=H\setminus H^{ev}\). Then the graph \((V,P)\) is a disconnected union of "propellers" (or \(K_{1,3}\)'s or "claws", depending on what dialect of graph theory you speak). Each propeller has a central vertex \(v\) and three leaves \(x,y,z\). For \(t\in[0,1]\), define \(x(t)=tv+(1-t)x\), so that \(x(t)\) is a point on the edge joining \(v\) to \(x\). Define \(y(t)\) and \(z(t)\) similarly, and make the same definitions at each other propeller. The graph embedding \(H(t)\) is obtained by drawing the vertex \(a\) at position \(a(t)\). For \(0\leq t<1\), \(H(t)\) is an embedding of \(H\); indeed, \(H(t)\) is an explicit homotopy equivalence. At \(t=1\), however, we have \(v=x=y=z\), and thus \(H(1)\) degenerates to an embedding of \(2H\). ### The Squish map \(H(t)\) defines a 2-to-1 map \(Sq\) from \(E^{ev}\) to the edges of \(2H\): given an edge \(e\) of an even hexagon in \(H\), find the corresponding edge in \(H(0)\), and let \(Sq(e)\) be the corresponding edge in \(H(1)\). **Definition 5**.: The squish map \(Sq:D(H)\to DD(2H)\) sends a dimer configuration \(m\) on \(H\) to a double dimer configuration \(Sq(m)\) on \(2H\) as follows: let \(m^{ev}=m\cap E^{ev}\); then \(Sq(m)=Sq(m^{ev})\). Indeed, when drawing \(m\) on the graph \(H(t)\) for \(t<1\), visualize what the squish map is doing: propeller edges of \(m\) get shorter and shorter, while the doubled edges get longer and closer together. This process is shown in Figure 5. Figure 5: The degeneration H(t) We now relate the squish map to the \(1\times 1\times 1\) and \(2\times 2\times 2\) stacked cubes mentioned in the introduction. We can use the language of plane partitions to visualize a perfect matching in the single dimer model as a stack of cubes in the corner of a room, as in Figure 6. Then putting our matching through the squish map 'downsamples' the plane partition: we would use \(1/8\) as many \(2\times 2\times 2\) bricks instead. For example, to build a \(2\times 2\times 4\) prism we can use \(16\) single cubes, or two larger \(2\times 2\times 2\) cubes. **Theorem 6**.: _Consider a single dimer configuration \(\mathcal{S}\) sent through the squish map resulting in a double dimer configuration \(\mathcal{D}\) made of loops and doubled edges. In each \(2\times 2\) section of the plane partition for \(\mathcal{S}\), let \(\pi_{\text{min}}\left(\begin{array}{c|c}m&n\\ \hline o&p\end{array}\right)=\left\lfloor\frac{\min(m,n,o,p)}{2}\right\rfloor\), and \(\pi_{\text{max}}\left(\begin{array}{c|c}m&n\\ \hline o&p\end{array}\right)=\left\lceil\frac{\max(m,n,o,p)}{2}\right\rceil\)._ _These are the minimal and maximal plane partitions that, when overlayed, give us the double dimer configuration \(\mathcal{D}\)._ A few natural questions one might ask at this point would be: what single dimer configurations squish to a given double dimer configuration? 
In the language of building blocks, how can we tell which larger \(2\times 2\times 2\) brick configurations overlay with one another to give us a particular double dimer model? How do we recover that information if we were only given the \(1\times 1\times 1\) smaller blocks? **Example 7**.: Consider the two plane partitions given by Figure 6. Both of these configurations become a loop within a loop (as in Figure 7) when sent through the squish map. If we picture the plane partitions as boxes in a room, the minimal configuration that squishes to the loop within a loop is the diagram on the left of Figure 6, and the maximal one is the diagram on the right. (Note that these are just two out of a possible 23,364 configurations that squish to the same loop-within-a-loop. See Theorem 20.) To determine which possible two overlayed \(2\times 2\times 2\) single dimer Figure 6: Two single dimer configurations shown as boxed plane partitions configurations give the same double dimer configuration, we start by downsampling either plane partition. For this example, we will work through the process on the minimum diagram and partition above, though the process will work on either partition (or any of the other \(23,362\) in between). We want to round down to get the first single dimer configuration, \(\pi_{min}\), and round up to get the second single dimer configuration, \(\pi_{max}\). To do this rounding process, start by sectioning the plane partition into \(2\times 2\) grids. \begin{tabular}{|c|c|c|c|} \hline 3 & 3 & 3 & 0 \\ \hline 3 & 2 & 1 & 0 \\ \hline 3 & 1 & 1 & 0 \\ \hline 0 & 0 & 0 & 0 \\ \hline \end{tabular} Each \(2\times 2\) section will become one entry in the downsampled plane partition. When we round down, we want to count how many _complete_\(2\times 2\times 2\) blocks exist in each section. (You can also think about this as removing smaller \(1\times 1\times 1\) cubes one-at-a-time until you are only left with \(2\times 2\times 2\) blocks.) Thus, we have \begin{tabular}{|c|c|c|} \hline 1 & 0 & as the first single dimer configuration. \\ \hline 0 & 0 & 0 \\ \hline \end{tabular} Now to find the second single dimer configuration, we want to round up to the nearest larger cube. (You can also think about this as adding smaller cubes until we get to the fill the \(2\times 2\times 2\) cube.) So the maximal configuration in its entirety becomes rewritten as \begin{tabular}{|c|c|c|} \hline 2 & 2 \\ \hline 2 & 1 \\ \hline \end{tabular} Now we have a pair of plane partitions that, when overlayed, become the same double dimer configuration we had as the result under the squish map. In Figure 7, the yellow (lighter colored) configuration corresponds to \begin{tabular}{|c|c|c|} \hline 1 & 0 & 0 \\ \hline 0 & 0 \\ \hline \end{tabular} and the blue (darker colored) configuration corresponds to Figure 7: A loop within a loop in the double dimer model **Example 8**.: For the plane partition \[\begin{array}{|c|c|c|c|}\hline 8&8&6&5\\ \hline 7&6&6&5\\ \hline 6&4&3&3\\ \hline 5&4&3&3\\ \hline 4&3&3&2\\ \hline 3&3&2&1\\ \hline 2&2&1&1\\ \hline 1&1&1&0\\ \hline\end{array}\] then \[\pi_{min}=\begin{array}{|c|c|c|}\hline 3&2\\ \hline 2&1\\ \hline 1&0\\ \hline 0&0\\ \hline\end{array}\qquad\text{and}\qquad\pi_{max}=\begin{array}{|c|c|c|} \hline 4&3\\ \hline 3&2\\ \hline 2&2\\ \hline 1&1\\ \hline\end{array}\] **Remark 9**.: _Consider a plane partition that could already have been made out of \(2\times 2\times 2\) boxes. We can downsample in a straightforward manner. 
Each \(2\times 2\) region in the plane partition contains all the same even entry, so the entry in \(\pi_{min}\) would be half that number, as it would be for \(\pi_{max}\). Since all four entries were the same, then we have no rounding to do, so \(\pi_{min}=\pi_{max}\). Thus when we overlay the single dimer configurations corresponding to \(\pi_{min}\) and \(\pi_{max}\), we get all doubled edges in the double dimer configuration._ Proof of Theorem 6.: Consider a given plane partition \(\pi\) that may not break down nicely into \(2\times 2\times 2\) cubes, where \(\pi\) has \(k\) boxes. Then \(Sq(\pi)=\pi_{min}\sqcup\pi_{max}\) as double dimer configurations. We proceed by induction on \(k\). The base case is \(k=0\), which is straightforward by the remarks above: \(\pi,\pi_{\min},\pi_{\max}\) and all the corresponding single or double dimer configurations are minimal. Suppose that \(\pi\) has \(k+1\) boxes. Delete a box from \(\pi\) to create a \(\pi^{\prime}\) that has \(k\) boxes. Then via the induction hypothesis we have that \(Sq(\pi^{\prime})=\pi^{\prime}_{min}\sqcup\pi^{\prime}_{max}\). If we add the \((k+1)\)th box back in, we land in one of three cases. In the first case, we have started a new \(2\times 2\times 2\) cube by adding that cube in the \((2i,2j,2k)\) position. Then, when viewed as a plane partition, \(\pi_{max}(i,j)=\pi^{\prime}_{max}(i,j)+1\), while the minimum configuration stays the same (\(\pi_{min}(i,j)=\pi^{\prime}_{min}(i,j)\)). Then under the squish map we get that \(Sq(\pi)=Sq(\pi^{\prime})\), except at the hexagon located at \((i,j,k)\), where the perfect matching around said hexagon changes from the left of Figure 8 to the right. In the second case, we have added the cube into the last empty slot of a \(2\times 2\times 2\) larger cube, so the new cube has gone into position \((2i+1,2j+1,2k+1)\). Then the maximum plane partition remains the same, so \(\pi_{max}(i,j)=\pi^{\prime}_{max}(i,j)\), but \(\pi_{min}(i,j)=\pi^{\prime}_{min}(i,j)+1\). So under the squish map we have that \(Sq(\pi)=Sq(\pi^{\prime})\), except at \((i,j,k)\), where the matching around the hexagon changes from the right of Figure 8 to the left. Finally, the last case occurs when we add a cube anywhere else; i.e. adding this smaller cube neither completes a \(2\times 2\times 2\) box nor is the first cube in an otherwise empty \(2\times 2\times 2\) box. Here we have that \(\pi_{min}=\pi^{\prime}_{min}\) and \(\pi_{max}=\pi^{\prime}_{max}\). In this case we have no change under the squish map, so \(Sq(\pi)=Sq(\pi^{\prime})\) exactly. ### Transfer matrix approach Now that we have the squish map defined on graphs and plane partitions, we want to be able to take a weight function in the single dimer model and push it through the squish map to give us an \(\mathrm{SL}_{2}\) connection and a scalar weight function in the double dimer model. To do this, we first consider a perfect matching on the single dimer model. Once the graph has been squished, we can consider walking along a path around a given loop in the now-double dimer model. Given a starting vertex, each path consists of a series of right and left turns at each new vertex encountered until we once again reach the starting vertex. These turns are given labels \(L\) and \(R\) for left and right assigned to each vertex in the path. We want to interpret \(L\) and \(R\) as \(2\times 2\) transfer matrices for keeping track of the loops contribution to the dimer model. This was the strategy of [10], which used different matrices. 
To determine what particular matrices \(L\) and \(R\) should be, we consider a walk along the path snippet given by Figure 9. We begin at the bottom edge and then turn left at the next vertex, so we could step onto either edge \(z\) or \(w\). If \(x\) is an edge in the perfect matching, then \(z\) cannot also be a matched edge, so the only next step could be \(w\). Similarly, if \(y\) is in the perfect matching, then a left turn onto the next matched edge could include either \(z\) or \(w\). Figure 8: adding or removing a box We represent this with the vectors \[\begin{bmatrix}x\\ 0\end{bmatrix}\qquad\begin{bmatrix}0\\ y\end{bmatrix}\] as our two possible starting locations. Then we need the first vector to map only to edge \(w\), and the second vector to map to both edges \(z\) and \(w\), so we get the following maps \[\begin{bmatrix}x\\ 0\end{bmatrix}\mapsto\begin{bmatrix}0\\ w\end{bmatrix}\qquad\begin{bmatrix}0\\ y\end{bmatrix}\mapsto\begin{bmatrix}z\\ w\end{bmatrix}.\] A similar scenario happens for \(R\), so we then find the \(2\times 2\) matrix to make the above maps hold, getting that \[L=\begin{bmatrix}1&1\\ 1&0\end{bmatrix}\qquad\text{ and }\qquad R=\begin{bmatrix}0&1\\ 1&1\end{bmatrix}\] **Remark 10**.: _Note that instead of defining \(L\) and \(R\) as above, we could also have defined \(L\) to be its current inverse, and similar for \(R\). So instead we would have had_ \[L=\begin{bmatrix}0&1\\ 1&-1\end{bmatrix}\qquad\text{ and }\qquad R=\begin{bmatrix}-1&1\\ 1&0\end{bmatrix}.\] _Sometimes it may be more convenient to define \(L\) and \(R\) this way (with the negative signs), such as if we had specialized the edge weights to be \(\pm 1\). We ultimately wanted to use the version with only positive entries, however, to (hopefully) make it more clear to the reader how things are working._ ### From transfer matrices to the \(\mathrm{SL}_{2}\) connection We can now use the matrices \(L\) and \(R\) to compute the contribution of a closed loop under the squish map from the single to the double dimer model by taking the trace. However, we're supposed to have \(2\times 2\) matrices, with determinant \(1\) associated to the _edges_ of \(H(i,j,k)\), not the vertices. We use the following process to include the left and right turn information in the edges of the graph as matrix weights. To begin with, we associate general \(2\times 2\) edge-weight matrices \(A,B\), and \(C\) to be placed on each horizontal, north-east, and north-west matched edge, respectively, as in Figure 10. \[A=\begin{bmatrix}a&0\\ 0&a^{-1}\end{bmatrix},\qquad B=\begin{bmatrix}b^{-1}&0\\ 0&b\end{bmatrix},\qquad C=\begin{bmatrix}c&0\\ 0&c^{-1}\end{bmatrix}\] Figure 9: Left: the possible \(x,y,z\), and \(w\) edge weights over a left-hand turn. Right: A perfect matching on a squished hexagon lattice. To handle the process of moving \(L\) and \(R\) from the vertices to the edges, we need some way to include the information for turns in an unknown path. Then we want to come up with new matrices \(\alpha\), \(\beta\), and \(\gamma\) that include the information from \(A\), \(B\), and \(C\), but that also encode the information in the turns. For example, the move \(A\to B\) is always left turn, and the move \(A\to C\) is always a right turn. So we somehow want the new \(\beta\alpha\) to include the same information as \(BLA\) and \(\gamma\alpha\) to encode the information from \(CRA\). 
**Definition 11**.: Let \(J=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}.\) Then \(\alpha\), \(\beta\), and \(\gamma\) are defined as follows: \[\alpha:=iAJ=\begin{bmatrix}ia&0\\ 0&\frac{1}{ia}\end{bmatrix}\] \[\beta:=iRBLJ=\begin{bmatrix}i\,b&0\\ i\,b+\frac{i}{b}&-\frac{i}{b}\end{bmatrix}\] \[\gamma:=-iLCRJ=\begin{bmatrix}-\frac{i}{c}&i\,c+\frac{i}{c}\\ 0&ic\end{bmatrix}.\] **Theorem 12**.: _The single dimer model on the \(2x\times 2y\times 2z\) hexagon lattice with \(A\), \(B\), and \(C\) weights (as in Figure 10) gives rise to the same partition function as that on the double dimer model on the \(x\times y\times z\) hexagon graph with scalar weight of 1 everywhere and connection given by \(\alpha\), \(\beta\), and \(\gamma\) (as in Figure 11)._ Figure 10: Matrix weights \(A\), \(B\), and \(C\) on the single dimer model. Proof.: From the examples above, Let's consider the right turn \(A\to C\). In our matrices, we write this as \(\gamma\alpha\), which is \((-iLCRJ)(iAJ)=LCRA\), which does indeed include the information \(CRA\), as we desired. There are 12 total ways to step from one edge to another, whose equivalent products are given below. \[\begin{aligned} \text{Left Turns}&\text{Right Turns}\\ \beta\alpha&=-RBLA&\alpha\beta=-JARBLJ\\ \gamma\beta&=-LCLBLJ&\beta\gamma=RBCRJ\\ \alpha^{-1}\gamma&=-JA^{-1}LCRJ&\gamma \alpha^{-1}=-LCRA^{-1}\\ \beta^{-1}\alpha^{-1}&=-RB^{-1}LA^{-1}& \alpha^{-1}\beta^{-1}=-JA^{-1}RB^{-1}LJ\\ \gamma^{-1}\beta^{-1}&=-LC^{-1}LB^{-1}LJ& \beta^{-1}\gamma^{-1}=RB^{-1}RC^{-1}RJ\\ \alpha\gamma^{-1}&=-JALC^{-1}RJ&\gamma ^{-1}\alpha=-LC^{-1}RA\end{aligned}\] Assume we have a valid path of length \(n\) with matrix string \(\chi_{n}\chi_{n-1}\dots\chi_{1}\). Then this string is equivalent to one of the form \[X_{n}M_{n}T_{n-1}M_{n-1}\dots T_{2}M_{2}T_{1}M_{1}X_{1},\] where each \(M\in\{A,B,C,A^{-1},B^{-1},C^{-1}\}\) is one of the original edge matrices, \(T_{i}\in\{R,L\}\) is a turning matrix \(L\) or \(R\), and the \(X_{i}\)-terms on either end are the extras above used for bookkeeping (the \(LJ\), \(RJ\), \(L\), \(R\), or \(J\) 'bookkeeping' terms on the left or right of each full string). If we were to take another step along this path, then that \((n+1)\)th step would involve left-multiplying our string by \(\chi_{n+1}\), so we now have \(\chi_{n+1}\chi_{n}\dots\chi_{1}=\chi_{n+1}\chi_{n})\chi_{n-1}\dots\chi_{1}\). But the \(\chi_{n+1}\chi_{n}\) is a valid construction of two edge-weight matrices with a left or right turn in the middle (possibly with an \(X_{n+1}\) or \(X_{n}\) bookkeeping term), which means it is one of the 12 turns explicitly computed above. So \(\chi_{n+1}\chi_{n}\dots\chi_{1}\) gives us a string that is equivalent to using \(A\), \(B\), and \(C\) with left- and right-turn matrices. **Example 13**.: Consider the path around a single hexagon with these new weights. If we begin at the bottom edge and then travel counterclockwise around the loop, then we perform matrix Figure 11: The \(\alpha\), \(\beta\), and \(\gamma\) connection on the squished hexagon lattice multiplication (reading right to left) in the order we reach the edges, getting \(\gamma^{-1}\beta^{-1}\alpha^{-1}\gamma\beta\alpha\). This product gives us the total connection for the path. 
Now if we compute this matrix out for the left-turn version, we get \[\gamma^{-1}\beta^{-1}\alpha^{-1}\gamma\beta\alpha =(iLC^{-1}RJ)(-iRB^{-1}LJ)(iA^{-1}J)(-iLCRJ)(iRBLJ)(iAJ)\] \[=i^{6}LC^{-1}RJRB^{-1}LJA^{-1}JLCRJRBLJAJ\] \[=-LC^{-1}LB^{-1}LA^{-1}iLCLBLA\] \[=\begin{bmatrix}\frac{a^{2}b^{2}c^{4}+a^{2}b^{2}c^{2}+a^{2}b^{2}c^{ 4}+b^{4}c^{4}+a^{2}b^{2}c^{2}+b^{4}c^{2}+b^{2}c^{4}+3b^{2}c^{2}+b^{2}c^{2}+1}{b^ {2}c^{2}}&\frac{\left(a^{2}b^{2}c^{2}+b^{2}c^{2}+b^{2}+1\right)\left(c^{2}+1 \right)}{a^{2}b^{2}c^{2}}\\ \frac{\left(a^{2}b^{2}c^{2}+b^{2}c^{2}+1\right)\left(b^{2}+1\right)}{b^{2}c^{2 }}&\frac{a^{2}b^{2}c^{2}+b^{2}c^{2}+b^{2}+c^{2}+1}{a^{2}b^{2}c^{2}}\end{bmatrix}\] and the trace of this matrix is \(a^{2}b^{2}c^{2}+a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2}+a^{2}+b^{2}+c^{2}+\frac{1}{a^{ 2}}+\frac{1}{b^{2}}+\frac{1}{c^{2}}+\frac{1}{a^{2}b^{2}}+\frac{1}{b^{2}c^{2}}+ \frac{1}{a^{2}b^{2}}+\frac{1}{b^{2}c^{2}}+\frac{1}{a^{2}b^{2}c^{2}}+4\) which has 18 terms (including repeated terms). So, in particular, we know that there are only 20 \(2\times 2\times 2\) boxed plane partitions: the minimal one, the maximal one, and 18 others. Then using the trace of our matrix, we have accounted for all 18 terms that correspond to the 18 perfect matchings which get squished by the squish map to a single loop. These are the correct weights for plane partitions with 2-periodic weights. ## 4 Generating Functions and specializations Note that the previous sections have used periodic edge weights and connections. Here we generalize to cover several natural weight functions for plane partition enumeration. ### Arbitrary Weights We first consider an _arbitrary_ nonzero weight function \(w:G\to\mathbb{C}^{*}\). Note that every vertex \(v\) of \(G\) is a part of some propeller: either \(v\) is the center vertex of a propeller, or not (in which case, only one of the edges incident to \(v\) is part of the propeller). We modify the weight function \(w\) by performing a so-called _gauge transformation_: for each vertex \(v\) which is not the center of a propeller; suppose that the edge \(e\) connects \(v\) to the center of the propeller. Divide the weights of all of \(v\)'s incident edges by \(w(e)\). This operation changes the dimer model partition function only by an overall constant, which we can ignore at least in the case where \(G\) is a finite subgraph of the honeycomb graph. Assume without loss of generality that this has been done. Then, given two edges \(e_{1},e_{2}\) which get sent to \(e\) under the squish map, let \(w(e)=\sqrt{w(e_{1})w(e_{2})}\), the geometric mean of the weights of \(e_{1}\) and \(e_{2}\). Define a new weighting on \(G\) by \[\widetilde{w}(e_{1})=\frac{w(e_{1})}{w(e)},\qquad\widetilde{w}(e_{2})=\frac{w(e _{2})}{w(e)}.\] This weighting \(\widetilde{w}\) has the property that \(\widetilde{w}(e_{1})=\widetilde{w}(e_{2})^{-1}\), so we can push it through the squish map as before. ### Periodic weights on plane partitions One has to be careful with computing the weight of a perfect matching \(m\) on any infinite graph, such as \(H\). There are obviously infinitely many edges in such a graph, so one would need to consider issues of convergence before multiplying all of the weights together to find the weight of a perfect matching. There is then a separate convergence issue when trying to sum over \(m\). However, our main interest is in plane partitions, which are in bijection with certain perfect matchings on \(H\). 
Plane partitions come with a natural generating function, computed by MacMahon, as stated in Section 1.1. We now define \[M(a,q)=\prod_{i\geq 1}\left(\frac{1}{1-aq^{i}}\right)^{i}\] for weighted plane partitions [10]. Indeed, there is a particular weighting which is uniquely relevant on the entire honeycomb lattice: the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) periodic weighting. We assign the weights _to the lattice points_ as follows: \[w(i,j,k)=\begin{cases}q&\text{ if }i-j\equiv 0,i-k\equiv 0\pmod{2},\\ r&\text{ if }i-j\equiv 0,i-k\equiv 0\pmod{2},\\ s&\text{ if }i-j\equiv 0,i-k\equiv 0\pmod{2},\\ t&\text{ if }i-j\equiv 0,i-k\equiv 0\pmod{2}.\end{cases}\] There are a variety of edge weight functions on the dimer model which correspond to this; one which is well behaved with respect to the squish map is (partially) shown in Figure 12. We previously defined the weight function on a finite portion of the honeycomb graph (as in the monochromatic weighting on Figure 2). To extend this weighting to the entire plane, set \(p^{3}=q\) and include constants \(k_{1}\), \(k_{2}\), and \(k_{3}\). Choose each of \(k_{1}\), \(k_{2}\), and \(k_{3}\) so that the desired region of the plane has the appropriate edge weights. (Generally we choose this in such a way that the edge weights near the center of the region in question are all of low degree.) This is comparable to choosing the weight "1" on a particular place in a given column of horizontal edges from Example 3. The result will be that each'strip' of hexagons (in each vertical, northeast/southwest, or northwest/southeast diagonal, as in figure 12) will have a weight of 1 in the desired location, with increasing and decreasing powers traveling either direction away from the center. We can decompose this weighting into two pieces, as shown on the bottom of Figure 12. The first piece becomes the \(\mathrm{SL}_{2}\) connection, as described above and seen in the lower right of the figure. The second piece becomes the scalar weights, which is shown in the lower left. For further details on various weight functions of the honeycomb graph, see [23]. **Theorem 14**.: _Using the weight function given in Figure 2 and a monodromy given by placing \(\alpha\) on every northeast/southwest edge, \(\beta\) on every northwest/southeast edge, and \(\gamma\) on every horizontal (east/west) edge, then the squish map is measure preserving._ **Corollary 15**.: _Using \(\alpha\), \(\beta\), and \(\gamma\) as described in Theorem 14, we can find the generating function for a given \(m\times n\times o\) boxed double dimer plane partition._ **Remark 16**.: _As seen in [11], we can compute the probability of a given edge being present in the single dimer model by taking the determinant of a certain Kasteleyn matrix. When we use this procedure in conjunction with the squish map, we can now determine the probability that a given edge will appear as a doubled edge in the double dimer model. If said doubled edge is present, then its preimage under the squish map would have two edges mapping to the doubled edge, so the overall probability would be the product of the two individual probabilities that each edge would be present in the single dimer model._ Define \(Q:=qrst\), and \(\widetilde{M}(x,y):=M(x,y)M(x^{-1},y)\). 
The generating function for plane partitions in which \(q,r,s,t\) mark the boxes in the above locations was computed in [23] (where it was denoted \(Z_{\mathbb{Z}_{2}\times\mathbb{Z}_{2}}\)); it is \[Z_{Q}=M(1,Q)^{4}\frac{\widetilde{M}(rs,Q)\widetilde{M}(st,Q)\widetilde{M}(tr,Q)}{ \widetilde{M}(-r,Q)\widetilde{M}(-s,Q)\widetilde{M}(-t,Q)\widetilde{M}(-rst,Q)}. \tag{1}\] Under the specialization \(r=s=t=q\) (and hence \(Q=q^{4}\)) we recover MacMahon's generating function for plane partitions by a slightly delicate manipulation of formal power series. In [10] it is observed that under the specialization \(r=s=t=-1\), the above generating function specializes to \(M(1,-q)\), and this latter statement is proven using the squish map (without the \(\mathrm{SL}_{2}\) double dimer model). **Corollary 17** (to Theorem 14).: _The partition function for the \(\mathrm{SL}_{2}\) double dimer model is \(Z_{Q}\) from equation 1._ ## 5 Examples We now work out two examples in which we squish boxed \(2\times 2\times 2\) and \(2\times 2\times 4\) plane partitions, with boxes weighted by \(w(i,j,k)\). These do not have nice closed-form generating functions akin to MacMahon's generating function, but we can nonetheless evaluate them with our techniques. We also investigate the specializations of our formula at primitive roots of unity, recovering and extending the results of [10], as well as a new conjecture. Figure 12: Top: 2-periodic weights for the single dimer model. Bottom: \(\mathrm{SL}_{2}\) connection and scalar weight for the corresponding double dimer model. **Example 18** (\(2\times 2\times 2\)).: Consider the path around a single hexagon that is the result of a particular single dimer configuration sent through the squish map, or a \(2\times 2\times 2\) boxed plane partition in the double dimer model. The two configurations on \(H_{2,2,2}\) that do not give rise to a loop under the squish map are pictured in Figure 13. The right graphic has a total weight of \(1\), and the image on the left has a total weight of \(Q^{2}\). Then the remaining \(18\) partitions are represented by the polynomial \(1+q+qr+qs+qt+qrs+qrt+qst+4qrst+qr^{2}st+qrs^{2}t+qrst^{2}+qr^{2}s^{2}t+qr^{2} st^{2}+qrs^{2}t^{2}+qr^{2}s^{2}t^{2}+qr^{2}s^{2}t^{2}+qr^{2}s^{2}t^{2}+q^{2}s^{2}t^{2}+q^{2} r^{2}s^{2}t^{2}+q^{2}r^{2}s^{2}t^{2}\). We want to draw a comparison between \(Z_{Q}\) and the matrix model under the squish map, so consider the trace of the product of terms corresponding to the single hexagon loop. \[\frac{a^{4}b^{4}c^{4}+a^{4}b^{4}c^{2}+a^{4}b^{2}c^{4}+a^{2}b^{4}c^{4}+a^{4}b^{ 2}c^{2}+a^{2}b^{4}c^{2}+a^{2}b^{2}c^{4}+4a^{2}b^{2}c^{2}+a^{2}b^{2}+a^{2}c^{2} +b^{2}c^{2}+a^{2}+b^{2}+c^{2}+1}{a^{2}b^{2}c^{2}}\] Note that this is only the connection in the \(\mathrm{SL}_{2}(\mathbb{C})\) double dimer model, not also the weights. So let the weight on every horizontal edge be \(Q^{\left(\text{height of that edge}\right)}=(qa^{2}b^{2}c^{2})^{\left(\text{height of that edge}\right)}\), as in Figure 2. 
Then the total contribution (the connection and the weight) of that single hexagon is \[q^{a^{2}b^{2}c^{2}\cdot\left(\frac{a^{4}b^{4}c^{4}+a^{4}b^{4}c^{2}+a^{4}b^{2}c^ {4}+a^{2}b^{4}c^{4}+a^{4}b^{2}c^{2}+a^{2}b^{4}c^{2}+a^{2}b^{2}c^{4}+a^{2}b^{2} c^{2}+a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2}+a^{2}+b^{2}+c^{2}+1}{a^{2}b^{2}c^{2}}\right)}\] \[=q(a^{4}b^{4}c^{4}+a^{4}b^{4}c^{2}+a^{4}b^{2}c^{4}+a^{2}b^{4}c^{4}+a^{4}b^{2}c^ {2}+a^{2}b^{4}c^{2}+a^{2}b^{2}c^{4}+4a^{2}b^{2}c^{2}+a^{2}b^{2}+a^{2}c^{2}+b^{ 2}c^{2}+a^{2}+b^{2}+c^{2}+1)\] Now consider a mapping from \(Z_{Q}\) to the matrices, where \(a^{2}=r,b^{2}=s\), and \(c^{2}=t\). Then \(Q=qrst=qa^{2}b^{2}c^{2}\). Then making the replacements in the statement above, we get \[q\left(r^{2}s^{2}t^{2}+r^{2}s^{2}t+r^{2}st^{2}+rs^{2}t^{2}+r^{2}st+rs^{2}t+rst ^{2}+4rst+rs+st+rt+r+s+t+1\right)\] Then this matches what we got for \(Z_{Q}\) when we include the \(1\) and \(Q^{2}\) terms from the two non-loop configurations. **Example 19** (\(2\times 2\times 4\)).: For another example, we consider the case with two hexagons, or a Figure 13: The only two configurations of a \(2\times 2\times 2\) single dimer model that do not squish to a single hexagon loop \(2\times 2\times 4\) boxed plane partition. Our generating function is \[GF_{2\times 2\times 4} =\ q^{4}r^{4}s^{4}t^{4}+q^{3}r^{4}s^{4}t^{4}+q^{3}r^{4}s^{4}t^{3}+q^{ 3}r^{4}s^{3}t^{4}+q^{3}r^{3}s^{4}t^{4}+q^{3}r^{3}s^{4}t^{3}+q^{3}r^{3}s^{4}t^{3}+ q^{2}r^{4}s^{4}t^{3}\] \[\ \ +q^{3}r^{3}s^{3}t^{4}+q^{2}r^{4}s^{4}t^{2}+4q^{3}r^{3}s^{3}t^{ 3}+q^{2}r^{4}s^{3}t^{3}+q^{2}r^{3}s^{4}t^{3}+q^{3}r^{3}s^{3}t^{2}+q^{2}r^{4}s^{ 3}t^{2}+q^{2}r^{3}s^{4}t^{2}\] \[\ \ +q^{3}r^{3}s^{2}t^{3}+q^{3}r^{2}s^{3}t^{3}+3q^{2}r^{3}s^{3}t^{ 3}+q^{3}r^{3}s^{2}t^{2}+q^{3}r^{2}s^{3}t^{2}+4q^{2}r^{3}s^{3}t^{2}+q^{3}r^{2}s^ {2}t^{3}\] \[\ \ +2q^{2}r^{3}s^{2}t^{3}+2q^{2}r^{2}s^{3}t^{3}+q^{2}r^{3}s^{3}t +q^{2}r^{2}s^{2}t^{2}+3q^{2}r^{3}s^{2}t^{2}+3q^{2}r^{2}s^{3}t^{2}+3q^{2}r^{2}s^ {2}t^{3}\] \[\ \ +q^{2}r^{3}s^{2}t+q^{2}r^{2}s^{3}t+9q^{2}r^{2}s^{2}t^{2}+q^{2} r^{2}st^{3}+q^{2}rs^{2}t^{3}+3q^{2}r^{2}s^{2}t+3q^{2}r^{2}st^{2}+3q^{2}rs^{2}t^{2}\] \[\ \ +qr^{2}s^{2}t^{2}+q^{2}rst^{3}+2q^{2}r^{2}st+2q^{2}rs^{2}t+qr^ {2}s^{2}t+4q^{2}rst^{2}+qr^{2}st^{2}+qrs^{2}t^{2}+3q^{2}rst\] \[\ \ +qr^{2}st+qrs^{2}t+q^{2}rt^{2}+q^{2}st^{2}+qrst^{2}+q^{2}rt+q^ {2}st+4qrst\] \[\ \ +q^{2}t^{2}+qrs+q^{2}t+qrt+qst+qr+qs+qt+q+1.\] This generating function, however, contains all 105 of the terms given by a \(2\times 2\times 4\) four-colored boxed plane partition. To compare with the matrix version, we need to be careful which terms squish to which configurations. See Figure 14. Note that the top row of configurations in Figure 14 has no loops (only doubled edges), so there is no contribution from the \(\mathrm{SL}_{2}(\mathbb{C})\) connection. So we get a contribution only from the weight, and we get 1 for the configuration on the left, \(Q^{2}\) for the configuration in the middle, and \(Q^{4}\) for the configuration on the right. Since there is only one configuration in the single dimer model that squishes to each of these they must each account for only one term in the generating function \(GF_{2\times 2\times 4}\), namely 1, \(q^{2}r^{2}s^{2}t^{2}\), and \(q^{4}r^{4}s^{4}t^{4}\). Now we examine the second line of Figure 14. From the \(2\times 2\times 2\) (Example 18) case, we see that we get 18 terms in the generating function for a single loop. 
So each of the first two figures in the second line correspond to 18 terms in the generating function, but to determine which terms Figure 14: A representation of of each possible result of pushing the \(2\times 2\times 4\) single dimer configuration through the squish map we need to include the \(Q\)-weights. So the lower left figure has two horizontal edges, the lower getting a weight of \(1\) and the upper getting a weight of \(Q\). So this diagram corresponds with the terms \(Q\)(the single hexagon generating function seen previously) \(=Q(r^{2}s^{2}t^{2}+r^{2}s^{2}t+r^{2}st^{2}+rs^{2}t+rs^{2}t+rst^{2}+rst+rst+rst+ rt+s+t+1)\). Then the lower-middle diagram gets a weight of \(Q^{2}\cdot Q=Q^{3}\), so its generating function is \(Q^{3}(r^{2}s^{2}t^{2}+r^{2}s^{2}t+r^{2}st^{2}+rs^{2}t^{2}+r^{2}st+rs^{2}t+rst^{ 2}+4rst+rs+st+rt+s+t+1)\). Then we have the \(105\) monomials from \(GF_{2\times 2\times 4}\), and subtract off the three monomials corresponding to the no-loop configurations, and the \(2\cdot 18\) monomials just described corresponding to single hexagon loop configurations, and we are left with the following \(66\) terms: \((r^{4}s^{4}t^{2}+r^{4}s^{4}t+r^{4}s^{3}t^{2}+r^{3}s^{4}t^{2}+r^{4}s^{3}t+r^{3} s^{4}t+3r^{3}s^{3}t^{2}+4r^{3}s^{3}t+2r^{3}s^{2}t^{2}+2r^{2}s^{3}t^{2}+3r^{3}s^{ 3}+3r^{3}s^{2}t+3r^{2}s^{3}t+3r^{2}s^{2}t^{2}+r^{3}s^{2}+r^{2}s^{3}+8r^{2}s^{2}t +r^{2}st^{2}+rs^{2}t^{2}+3r^{2}s^{2}+3r^{2}st+3rs^{2}t+rst^{2}+2r^{2}s+2rs^{2}+ 4rst+3rs+rt+st+r+s+t+1)q^{2}t\), which corresponds to a loop around two hexagons, as in the lower right of Figure 14. To compare with the corresponding contribution of the \(\mathrm{SL}_{2}\) double dimer model, we first take the matrix product of a monodromy around both hexagons, then take the trace and multiply by the weight \((qa^{2}b^{2}c^{2})^{2}\). \[(qa^{2}b^{2}c^{2})^{2}\cdot\left(\frac{1}{a^{4}b^{4}c^{2}}\cdot \left[a^{8}b^{8}c^{4}+a^{8}b^{8}c^{2}+a^{8}b^{6}c^{4}+a^{6}b^{8}c^{4}+a^{8}b^{6 }c^{2}+a^{6}b^{8}c^{2}\right.\right.\] \[\left.\left.+3a^{6}b^{6}c^{4}+4a^{6}b^{6}c^{2}+2a^{6}b^{4}c^{4}+2a ^{4}b^{6}c^{4}+a^{6}b^{6}+3a^{6}b^{4}c^{2}+3a^{4}b^{6}c^{2}+3a^{4}b^{4}c^{4}\right.\right.\] \[\left.\left.+a^{6}b^{4}+a^{4}b^{6}+8a^{4}b^{4}c^{2}+a^{4}b^{2}c^{4 }+a^{2}b^{4}c^{4}+3a^{4}b^{4}+3a^{4}b^{2}c^{2}\right.\right.\] \[\left.+3a^{2}b^{4}c^{2}+a^{2}b^{2}c^{4}+2a^{4}b^{2}+2a^{2}b^{4}+4 a^{2}b^{2}c^{2}+3a^{2}b^{2}+a^{2}c^{2}+b^{2}c^{2}+a^{2}+b^{2}+c^{2}+1\right]\right)\] Then make the replacements using \(a^{2}=r\), \(b^{2}=s\), and \(c^{2}=t\). The resulting polynomial is \((r^{4}s^{4}t^{2}+r^{4}s^{4}t+r^{4}s^{3}t^{2}+r^{3}s^{4}t^{2}+r^{4}s^{3}t+r^{3} s^{4}t+3r^{3}s^{3}t^{2}+4r^{3}s^{3}t+2r^{3}s^{2}t^{2}+2r^{2}s^{3}t^{2}+r^{3}s^{3}+3r^{3}s^{2}t+3r ^{2}s^{3}t+3r^{2}s^{2}t^{2}+r^{3}s^{2}t^{2}+r^{2}s^{3}+r^{2}s^{3}+r^{2}s^{3}+r^ {2}s^{3}+r^{2}s^{3}+r^{2}s^{3}+r^{2}s^{3}+r^{2}s^{3}+r^{2}s^{3}+8r^{2}s^{2}t+r^{ 2}st^{2}+rs^{2}t^{2}+3r^{2}s^{2}+3r^{2}st+3rs^{2}t+rst^{2}+2r^{2}s+2rs^{2}+4rst+ 3rs+rt+st+r+s+t+1)q^{2}t\), which matches exactly with the \(66\) terms from naively enumerating the plane partitions with the same weights. ### Specializations In an attempt to find a simple application of these techniques to plane partition enumeration, we have noticed the following curious phenomenon. Consider the specialization \(a=b=c=\omega\), a primitive \(n\)th root of unity for various \(n\), and let \(L\) be a loop which appears in the \(\mathrm{SL}_{2}\) double dimer model, after performing the squish map. 
**Theorem 20**.: _If \(n=1\) or \(n=2\), then the contribution of \(L\) is the number of double dimer configurations that contribute to that loop._ **Example 21**.: Consider the shape from Figure 15 (one of the six snake tiles). The path around this tile is \(\gamma^{-1}(\beta^{-1}\alpha^{-1}\gamma\alpha^{-1})^{2}\gamma(\beta\alpha\gamma^ {-1}\alpha)^{2}\), and if we calculate out that matrix product and then specialize to \(a=b=c=\omega\), a first or second root of unity, we get the matrix \[\begin{bmatrix}337&576\\ 1152&1969\end{bmatrix}.\] The trace of this matrix is \(2306\), corresponding with the \(2306\) possible double dimer configurations that squish to that particular snake tile. Proof.: The first and second roots of unity case essentially acts as a reduction from the \(\alpha\), \(\beta\), and \(\gamma\) version of the matrices to the original \(L\) and \(R\) vertex turn matrices. If we calculate the path around a shape as _turns_ instead of using edge connections \(\alpha\), \(\beta\), and \(\gamma\), then we get the same resulting trace of the matrix product. In example 21, we would have the turn sequence (reading right-to-left since it corresponds to a matrix product) \(LLLRRLLLRLLLRRLLLRL\), and if we use the originally-defined \(L\) and \(R\) matrices, we would get \(\begin{bmatrix}1969&576\\ 1152&337\end{bmatrix}\), which has a trace of 2306, as in the example. **Theorem 22**.: _If \(n=4\) then the contribution of \(L\) is 2._ Proof.: Note that this is the same contribution as if the connection were trivial (the matrix associated to every edge of \(G\) being identity matrix). When \(n=4\), then each matrix \(\alpha\), \(\beta\), and \(\gamma\) is equivalent to \(-I\), where \(I\) is the \(2\times 2\) identity matrix. Then since any possible loop in the graph must have even length, the resulting product around a loop is \(I\). **Theorem 23**.: _If \(n=8\), then the result of multiplying the connection around \(L\) is \(\pm\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\), where the sign tracks the parity of the number of hexagons contained in \(L\). Note that this is still true regardless of our choice of basepoint._ Proof.: The trace of a loop around a single hexagon when specialized to eighth roots of unity is \(\begin{bmatrix}-1&0\\ 0&-1\end{bmatrix}=-1\cdot\begin{bmatrix}1&0\\ 0&1\end{bmatrix}=-I\). Then assume that the path around a region containing \(n\) hexagons is \((-1)^{n}\cdot I\). Consider a region \(R\) with \(n\) hexagons, as in our assumption above. Attach one additional hexagon on the boundary of \(R\) in a way that does not affect the simply-connectedness of \(R\), and call this new region \(R^{\prime}\). Choose your basepoint to be the spot right after the added hexagon. Then the boundary of \(R\) is the same until the new hexagon. Upon reaching the new hexagon, take the old path of \(R\), then loop all the way around the new hexagon (still in the counterclockwise direction). Note that this will traverse the most recent one to five edges from \(R\), but now in the opposite direction. Finally, the boundary of \(R^{\prime}\) also includes the boundary of the new hexagon. So our whole path is that of \(R\cdot\{\text{path around a single hexagon}\}\), which is \((-1)^{n}\cdot I\cdot(-1)\cdot I=(-1)^{n+1}\cdot I\). For the next conjecture we need some terminology from Conway-Lagarias [10] about tiling regions in \(H\) with tiles made of unions of hexagons. 
A _bone_ is the union of three collinear adjacent hexagons in \(H\); a _stone_ is the union of three hexagons in \(H\) which all share a common vertex. A _signed tiling of \(L\)_ is a collection of tiles, each with a weight of \(+1\) or \(-1\), covering \(L\) in such a way that the total contribution at each hexagon inside \(L\) is 1, and the total contribution of each hexagon outside \(L\) is zero. We also define one more tile, the _snake_, which is a union of four hexagons in an "S" shape (see Figure 15). **Conjecture 24**.: _If \(n=3\) or \(n=6\), then the contribution of \(L\) is 0 unless there exists a signed tiling of \(L\) by stones, bones and snakes - in which case, the contribution is \((-1)^{s}\cdot I\), with \(s\) being the number of stones used in the signed tiling._ When \(n=3\) or \(n=6\), then the contribution of a loop around a bone or snake is \(I\). So if we inductively add tiles to a region (as we did in the proof of Theorem 23 for individual hexagons), we get \(\pm I\), where the parity depends only on the number of stone tiles used in the tiling. So to prove the conjecture, one needs only show that the monodromy of all non-signed-tilable \(L\) is 0. We expect that after this specialization our \(\mathrm{SL}_{2}\) connection is strongly related to the character of one of the Conway-Lagarias tiling groups, so the proof will involve computing this character, as Figure 16: All of the tiles we use for \(\omega\) a twelfth root of unity. The top row is all three orientation of bones, the second row both orientation of stones, and the bottom two rows all six orientations of the snake tile. well as a map from said tiling group to a subgroup generated by the matrices \(\alpha\), \(\beta\), and \(\gamma\). Note that a contribution of \(0\) means that double dimer configuration will not contribute at all to the partition function, so this should give an interesting generating function for "good pairs" of plane partitions. ## 6 Concluding Remarks Here we explain the process of finding \(\alpha\), \(\beta\), and \(\gamma\). Perhaps our discovery process is of interest to the reader that has made it this far. Let \(M\) (for move) be the matrix \(M=L^{-1}R\). Then \(M^{-1}=R^{-1}L\). Our first attempt to create matrices \(\alpha\), \(\beta\), and \(\gamma\) involved simply conjugating \(A,B,C\) with different powers of \(M\), noting that \(M^{3}=-I\); based on empirical studies of the shortest possible loop, six left turns around a single hexagon, this looked close to the correct definition. We defined \(\alpha_{1}:=A\), \(\beta_{1}:=M^{-1}BM\), and \(\gamma_{1}:=M^{-2}BM^{2}\). These matrices worked for our main goal - terms in the trace of the resulting matrix count how many perfect matchings squish to a particular loop, with the correct weights. However, that polynomial had alternating signs depending on the total degree of each term. Our next step was to correct for the signs by replacing \(a\) with \(ai\), \(b\) with \(bi\), and \(c\) with \(ci\) in the original \(A\), \(B\) and \(C\) matrices. Let \(A^{\prime}\), \(B^{\prime}\), and \(C^{\prime}\) be the corresponding \(A\), \(B\), and \(C\) matrices after making the above replacements. We wanted all of the signs to be positive because the matrix product of a path around a loop should give rise to a probability measure on the graph, as in [16]. After including the is we had a trace that has all positive values. Then, \(\alpha_{2}:=A^{\prime}\), \(\beta_{2}:=M^{-1}B^{\prime}M\), and \(\gamma_{2}:=M^{-2}C^{\prime}M^{2}\). 
We had one final problem, however: we needed to accomplish the specialization \(a\mapsto ai\), _etc_, using linear algebra. We did this using the matrix \(J\) as defined above. We can now rewrite \(A^{\prime}=iJA\), \(B^{\prime}=-iJB\), and \(C^{\prime}=iJC\). Finally, to correct for the signs we mentioned, we conjugate each \(\alpha_{2}\), \(\beta_{2}\), and \(\gamma_{2}\) with \(J\). So (finally) we have that \(\alpha_{3}:=JA^{\prime}J\), \(\beta_{3}:=JM^{-1}B^{\prime}MJ\), and \(\gamma_{3}:=JM^{-2}C^{\prime}M^{2}J\). After rewriting and simplifying, we arrive at the final definition of \(\alpha\), \(\beta\), and \(\gamma\) as stated in Definition 11. Figure 17: The single dimer model before and after the squish map showing various closed loops, including two snakes, a bone, and some other loops with positive contributions under Conjecture 24
2309.01544
On the evolution of a stellar system in the context of the virial equation
The virial equation is used to clarify the nature of the dynamic evolution of a stellar system. Compared to the kinetic equation, it gives a deeper but incomplete description of the process of relaxation to a quasi-stationary state, which here means the fulfillment of the virial theorem. Analysis shows that the time to reach the virial equlibrium state $T_v$ is about two to three dozen dynamic time periods $T_d$. Namely, during $T_v$ the virial ratio, the mean harmonic radius, and the root-mean-square radius of the system fluctuate, and then the first two characteristics stabilize near their equilibrium values, while the root-mean-square radius continues to grow (possibly ad infinitum). This indicates a fundamentally different behavior of the moment of inertia of the system relative to the center of gravity and its potential energy, leading to the formation of a relatively small equilibrium core and an extended halo.
V. Yu. Terebizh
2023-09-04T11:55:39Z
http://arxiv.org/abs/2309.01544v1
# On the evolution of a stellar system ###### Abstract The virial equation is used to clarify the nature of the dynamic evolution of a stellar system. Compared to the kinetic equation, it gives a deeper but incomplete description of the process of relaxation to a quasi-stationary state, which here means the fulfillment of the virial theorem. Analysis shows that the time to reach the virial equlibrium state \(T_{v}\) is about two to three dozen dynamic time periods \(T_{d}\). Namely, during \(T_{v}\) the virial ratio, the mean harmonic radius, and the root-mean-square radius of the system fluctuate, and then the first two characteristics stabilize near their equilibrium values, while the root-mean-square radius continues to grow (possibly ad infinitum). This indicates a fundamentally different behavior of the moment of inertia of the system relative to the center of gravity and its potential energy, leading to the formation of a relatively small equilibrium core and an extended halo. Key words: Stellar dynamics (1596) ## 1 Introduction Theoretical considerations, reinforced in recent years by extensive numerical simulations, show that the dynamical evolution of a stellar system in its own gravitational field is characterized by three basic time scales. The shortest of them, the _dynamic time_\(T_{d}\sim(G\rho)^{-1/2}\), where \(G\) is the gravitational constant and \(\rho\) - the mass-average density of the system, is associated with large-scale motions of matter in the early stages of system evolution. This parameter is also called the _crossing time_, because the return time of a body that has fallen from the surface of a homogeneous ball of density \(\rho\) into a hole passing along its diameter is equal to \((3\pi/G\rho)^{1/2}\). According to Jeans (1915, 1919), the subsequent relaxation of the system to a quasi-stationary state in the smoothed, so-called _regular_ gravitational field is described by the _collisionless Boltzmann equation_ for the distribution function \(f({\bf r},{\bf v},t)\) in 6-dimensional phase space, \[\frac{\partial f}{\partial t}+{\bf v}\,\frac{\partial f}{\partial{\bf r}}- \frac{\partial\Phi}{\partial{\bf r}}\,\frac{\partial f}{\partial{\bf v}}=0, \tag{1}\] supplemented by the Poisson equation for the conjoint potential \(\Phi({\bf r},t)\) [Henon 1982; Binney & Tremaine 2008]. The definition of quasi-stationary state is often associated with the _virial equation_, which is valid for a set of \(N\) gravitating points in the center of mass coordinate system: \[\frac{1}{2}\,\frac{d^{2}J(t)}{dt^{2}}=2K(t)+W(t), \tag{2}\] where \[J(t)=\sum_{1}^{N}m_{i}{\bf r}_{i}^{2},\] \[K(t)=\sum_{1}^{N}m_{i}{\bf v}_{i}^{2}/2,\quad\mbox{and} \tag{3}\] \[W(t)=-G\,\sum_{i=1}^{N-1}\,\sum_{j=i+1}^{N}\,\frac{m_{i}m_{j}}{|{\bf r}_{i}-{ \bf r}_{j}|}\] are, respectively, the moment of inertia of the system, its kinetic and potential energies, whereas \(m_{i}\), \({\bf r}_{i}(t)\) and \({\bf v}_{i}(t)\) are the mass, radius-vector and the speed of the \(i\)-th star.1 The total mass \(M=\sum m_{i}\) of the star system and its total energy \(E\) are assumed to be given. If the motions occur in a limited region of space, then, averaging Eq. (2) over time, we obtain the equality \(2\langle K\rangle+\langle W\rangle=0\), called the _virial theorem_ (Landau & Lifshitz 1976). Stellar systems are not closed in space, but it is assumed that after some time a quasi-stationary state is reached, in which the left side of Eq. 
(2) becomes negligible, so, marking the parameters in quasi-stationary state with asterisks, we can take \[2K_{*}+W_{*}=0. \tag{4}\] Within the framework of the approach discussed here, one should understand the quasi-stationary state as the _virial equilibrium state_ (VES). The characteristic time interval for reaching VES will be denoted as \(T_{v}\). Some models of internal rearrangement of a system evolving from the virial to a true quasi-stationary equilibrium were studied by Levin, Pakter & Rizzato (2008) and Benetti et al. (2014). Finally, the third stage, the relaxation of the system's core towards the Maxwell-Boltzmann state, takes an even longer time \(T_{r}\). For half a century it was believed that this process is due solely to the _irregular_ gravitational field of the system, which is defined as the difference between the real and smoothed fields (Ambartsumian 1938; Chandrasekhar 1942; Spitzer 1987). Since the spatial density of stars in galaxies is low, the main contribution to the process is made by pair collisions (close passages) of stars; it is taken into account by the non-zero _collisional term_ on the right side of Eq. (1). An explicit representation of this term for systems with Coulomb or gravitational interaction was given by Landau (1937). Research in recent decades has associated relaxation with a more efficient process of _dynamic chaos_ (Gurzadyan & Savvidy 1984, 1986); the corresponding relaxation time \(T_{r}\simeq N^{1/3}T_{d}\) (according to Rastorguev & Sementsov 2006, the exponent is \(1/5\)). Thermodynamic equilibrium is never reached, if only because of the long-range nature of the gravitational force (Lynden-Bell 1967; Levin, Pakter & Rizzato 2008; Levin et al. 2014; Benetti et al. 2014); in addition, the openness of the system manifests itself in its outer parts. As regards the evolution at the second of the stages mentioned above, the nature of the observed fast "Maxwellization" of galaxies in a regular field remained unclear for a long time. The revival of research in this direction was initiated by Henon (1964) and Lynden-Bell (1967); the latter proposed an appropriate stochastic mechanism, as he called it, _violent relaxation_. In the current understanding, this implies the importance of collective processes in systems with long-range interaction (Shu 1978; Levin, Pakter & Rizzato 2008; Levin et al. 2013; Gurzadyan & Kocharyan 2009). On the other hand, numerical simulations, from the studies of van Albada (1982) up to the recent calculations of Halle, Colombi & Peirani (2019) and Sylos Labini & Capuzzo-Dolcetta (2020), gradually clarify the commensurate role of radial instabilities and internal density fluctuations that lead to the formation of local substructures of increasing size. Unlike \(T_{d}\) and \(T_{r}\), no explicit representation of \(T_{v}\) in terms of the integral parameters of the system has been found so far, especially since it depends on the initial state. Quantification is hindered by the extreme complexity of combining the kinetic and Poisson equations. In this connection, the virial equation attracts more attention. 
It should be placed at a deeper level of description than the kinetic equation in any of its forms, because the latter is inevitably formulated with finite accuracy, while the virial equation is due only to the fundamental fact that the potential energy in the gravitational interaction of a pair of point-like bodies is inversely proportional to the distance between them, i.e., it is a _homogeneous function_ of coordinates of degree \(-1\). Among other things, the virial equation is valid for any number of interacting points, small or large, while the accuracy of Eq. (1) drops as \(N\) decreases. For \(N\gg 1\), the collisionless Boltzmann equation is consistent with the virial equation in the sense that Eq. (2), in its continuous version, can be derived from Eq. (1). Since the total energy of an isolated system \(E=K(t)+W(t)\) is conserved in time, Eq. (2) is usually written as \[\frac{1}{2}\,\frac{d^{2}J(t)}{dt^{2}}=2E-W(t). \tag{5}\] For a gravitationally bound system, a necessary (but not sufficient) stability condition is \(E<0\) (Chandrasekhar 1942); we will assume that this condition is satisfied. Equation (5) includes two unknown functions of time, and therefore, by itself, does not allow a complete description of even the integral properties of the system. However, it can be hoped that it will provide some information about _the character_ of evolution and the corresponding time intervals. The present paper is devoted to elucidating this possibility. ## 2 Signs of different evolution of two characteristic radii of the system Usually the degree of proximity to the state of virial equilibrium is given by the value of the _virial ratio_ \[V(t)\equiv\frac{2K(t)}{|W(t)|}=2\left[1-\frac{E}{W(t)}\right],\qquad 0\leq V<2. \tag{6}\] In view of Eq. (4), the values of the kinetic and potential energies in VES are \[K_{*}=-E,\qquad W_{*}=2E, \tag{7}\] so the equilibrium virial ratio \(V_{*}=1\). As the time scale in VES, we take \(T_{*}\equiv\Re_{*}/v_{*}\), the time needed to cross the mean harmonic radius \(\Re_{*}\) of the system at the characteristic velocity \(v_{*}\). The values of the latter two follow from Eq. (7) and the definitions \[K_{*}\equiv\frac{1}{2}Mv_{*}^{2},\qquad W_{*}\equiv-\frac{GM^{2}}{2\Re_{*}}\,. \tag{8}\] In this way we get: \[v_{*}=\left(\frac{2|E|}{M}\right)^{1/2},\qquad\Re_{*}=\frac{GM^{2}}{4|E|},\qquad T_{*}=\frac{G}{4}\left(\frac{M^{5}}{2|E|^{3}}\right)^{1/2}. \tag{9}\] The literature uses similar definitions with minor differences in numerical coefficients; the values adopted here are convenient for what follows. Notice that all these parameters are determined by the values of \(M\) and \(E\). We can write the last of Eqs. (9) as \(T_{*}=(3/2\pi G\rho_{*})^{1/2}\), where the characteristic density \(\rho_{*}\equiv 3M/4\pi\Re_{*}^{3}\); as expected, the time scale \(T_{*}\) turns out to be of the order of dynamic time \(T_{d}\). Since the estimates of \(v_{*}\) and \(\Re_{*}\) can be found directly from observations, the first two of formulas (9) make it possible to estimate the total mass and energy of the system (Bahcall & Tremaine 1981; Binney & Tremaine 2008). The discussed picture of evolution is illustrated in Fig. 1 (see also Fig. 6.1 in the Ciotti (2021) monograph). It is convenient to pass in Eq. 
(5) to the sought quantities of a single physical nature - the root-mean-square radius \(R(t)\) and the mean harmonic radius \(\Re(t)\), which are defined as follows: \[\begin{array}{c}R^{2}(t)\equiv\frac{1}{M}\sum_{1}^{N}m_{i}\mathbf{r}_{i}^{2},\\ \\ \Re^{-1}(t)\equiv\frac{2}{M^{2}}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N}\frac{m_{i}m_{j}}{|\mathbf{r}_{i}-\mathbf{r}_{j}|},\end{array} \tag{10}\] so that \[J(t)=MR^{2}(t),\qquad W(t)=-\frac{GM^{2}}{2\,\Re(t)}\,, \tag{11}\] and Eq. (5) takes the form: \[\frac{d}{dt}\left(R\ \frac{dR}{dt}\right)=\frac{GM}{2\,\Re(t)}-\frac{2|E|}{M}\,. \tag{12}\] Finally, introducing, with the help of Eq. (9), the dimensionless variables \[\tau\equiv t/T_{*},\qquad x\equiv\Re/\Re_{*},\qquad\mbox{and}\quad y\equiv R/\Re_{*}, \tag{13}\] we reduce the virial equation to the dimensionless form with all unit coefficients: \[\frac{d}{d\tau}\left(y\,\frac{dy}{d\tau}\right)=\frac{1}{x}-1, \tag{14}\] while the definition (6) for the virial ratio becomes \[V(\tau)=2-x(\tau),\qquad 0<x(\tau)\leq 2. \tag{15}\] We emphasize an important fact that has not been discussed before: _The mean harmonic radius of a system with a negative total energy does not exceed twice the equilibrium value._ The assertion follows from the definition \(\Re/\Re_{*}\equiv 2|E|/|W|\) and the inequalities \(K>0\), \(|E|<|W|\). Clear evidence of this is that the lines \(V=\) const in Fig. 1 do not intersect the line corresponding to the given value \(E\) when const \(\geq 2\). Figure 1: In the \((K,W)\) plane, the system evolves along the straight line \(K(t)+W(t)=E\), approaching on average a _virial equilibrium state_ (VES). The thin lines correspond to fixed values of the virial ratio \(V\). The letters \(C\) and \(H\) denote “cold” and “hot” configurations, which are characterized by the values \(K<|E|\) and \(K>|E|\), respectively. In view of the above, the approximation to virial equilibrium over time means that the harmonic radius of the system \(\Re(t)\), changing in a relatively narrow range of values \((0,2\mathfrak{R}_{*})\), tends to its equilibrium value (9), i.e., \(x(\tau)\to 1\), the second derivative of \(R^{2}(t)\) tends to zero, while the RMS radius \(R(t)\) tends to a finite or infinite value. To show the theoretical possibility of the described scenario, we, anticipating the discussion in the next section, present in Fig. 2 the evolution of the harmonic and root-mean-square radii for a given behavior of the first radius, while the change in the second radius is calculated according to the virial equation (14). Specifically, the initial values \(x(0)=1.5\) and \(y(0)=2.0\) were set, and it was assumed that \(x(\tau)\) tends to 1, experiencing an exponential decay and oscillations with a period \(2\pi T_{*}\) (see Eq. (23) below). Note that \(y^{2}(\tau)\) grows linearly as \(\tau\to\infty\). Figure 2: Change in the RMS radius of the system \(y(\tau)\) (solid line) for a given behavior of the harmonic radius \(x(\tau)\) (dash-dotted line). Supporting this model is the practical constancy of the half-mass radius \(R_{h}\) in time when calculating the evolution of isolated stellar systems (Spitzer, 1987). Within the framework of King's models, which describe well the internal structure of globular clusters, the mean harmonic radius \(\mathfrak{R}\simeq 2.5R_{h}\) and, therefore, also changes little (private communication, A. Rastorguev, 2023). Thus, the ratio of radii \[q(\tau)\equiv\frac{R}{\mathfrak{R}}=\frac{y(\tau)}{x(\tau)} \tag{16}\] can vary within wide limits. 
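The Fig. 2 experiment can be made concrete by integrating Eq. (14) numerically for a prescribed \(x(\tau)\). The minimal Python sketch below uses the stated initial values \(x(0)=1.5\) and \(y(0)=2.0\); the exponential decay rate of \(x(\tau)\) and the initial slope \(y'(0)=0\) are illustrative assumptions, since the text fixes only the oscillation period and the limit \(x(\tau)\to 1\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Prescribed harmonic radius x(tau): decays exponentially to 1 while
# oscillating with period 2*pi (time in units of T_*).  The rate 0.1
# is an assumed value; the text does not specify it.
def x_of_tau(tau):
    return 1.0 + 0.5 * np.exp(-0.1 * tau) * np.cos(tau)  # x(0) = 1.5

# Eq. (14): d/dtau (y dy/dtau) = 1/x - 1.  With u = y^2/2 this becomes
# the linear equation u'' = 1/x - 1.
def rhs(tau, state):
    u, du = state
    return [du, 1.0 / x_of_tau(tau) - 1.0]

y0, dy0 = 2.0, 0.0                      # y(0) = 2.0; y'(0) assumed 0
sol = solve_ivp(rhs, (0.0, 100.0), [0.5 * y0**2, y0 * dy0],
                dense_output=True, rtol=1e-9, atol=1e-12)

tau = np.linspace(0.0, 100.0, 6)
y = np.sqrt(2.0 * sol.sol(tau)[0])
print(np.round(y, 3))   # y^2 grows roughly linearly at late times
```

The substitution \(u=y^{2}/2\) turns Eq. (14) into \(u''=1/x-1\), so the RMS radius itself never has to be differentiated directly.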
It is useful to estimate the parameter \(q\) for several continuous (for simplicity) density distributions that differ significantly from each other. Table 1 lists the corresponding data for systems with central symmetry. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **No.** & \(\rho(r)/\rho_{0}\) & \(M/\rho_{0}a^{3}\) & \(\bar{R}/a\) & \(R/a\) & \(\Re/a\) & \(q\equiv R/\Re\) \\ \hline 1 & \(1,r\leq a;\ 0,r>a\) & \(4\pi/3\) & \(3/4\) & \(\sqrt{3/5}\) & 5/6 & \(2(3/5)^{3/2}\simeq 0.9295\) \\ \hline 2 & \(\exp(-r/a)\) & \(8\pi\) & \(3\) & \(2\sqrt{3}\) & \(16/5\) & \(5\sqrt{3}/8\simeq 1.0825\) \\ \hline 3 & \(\exp(-r^{2}/a^{2})\) & \(\pi^{3/2}\) & \(2/\sqrt{\pi}\) & \(\sqrt{3/2}\) & \(\sqrt{\pi/2}\) & \(\sqrt{3/\pi}\simeq 0.9772\) \\ \hline 4 & \((1+r^{2}/a^{2})^{-5/2}\) & \(4\pi/3\) & \(2\) & \(\infty\) & \(16/3\pi\) & \(\infty\) \\ \hline 5 & \((1+r^{2}/a^{2})^{-2}\) & \(\pi^{2}\) & \(\infty\) & \(\infty\) & \(\pi\) & \(\infty\) \\ \hline \end{tabular} \end{table} Table 1: Characteristics of systems with central symmetry for various spatial density distributions \(\rho(r)\). The following designations are accepted: \(\rho_{0}\) – central density; \(M\) – total mass; \(\bar{R}\), \(R\) and \(\Re\) – respectively, mean, mean square and mean harmonic radii. These distributions can be conditionally considered as instantaneous (but not sequential) states of a star cluster evolving in accordance with the kinetic and Poisson equations. To calculate the values given in the table, note that for a continuous distribution, the mass \(M(r)\) inside a sphere of radius \(r\) and the potential energy \(W\) are defined by the formulas \[\begin{array}{l}M(r)=4\pi\int_{0}^{r}\rho(r)r^{2}dr,\\ W=-4\pi G\int_{0}^{\infty}\rho(r)M(r)rdr,\end{array} \tag{17}\] so the analogs of Eqs. (10) are reduced, using Eq. (11), to \[\begin{array}{l}R^{2}=\frac{4\pi}{M}\int_{0}^{\infty}\rho(r)r^{4}dr,\\ \Re^{-1}=\frac{8\pi}{M^{2}}\int_{0}^{\infty}\rho(r)M(r)rdr.\end{array} \tag{18}\] As Table 1 shows, for the first three, relatively homogeneous distributions, the values of \(q\) are close to 1. Apparently, these density distributions are adequate only at the initial stage of evolution. The values of \(q\) remain of the same order for fairly significant deviations from the spherical symmetry of the density distribution, for example, towards ellipsoidality. More important is the density distribution in the outer region of the system. The theoretical estimates by von Hoerner (1956), the thorough numerical modeling by van Albada (1982), reinforced by physical arguments of Trenti, Bertin & van Albada (2005), further simulations of cluster evolution by Yangurazova & Bisnovatyi-Kogan (1984), Levin, Pakter & Rizzato (2008), Levin et al. (2013), Joyce, Marcos & Sylos Labini (2010), Sylos Labini (2013), Halle, Colombi & Peirani (2019), and Sylos Labini & Capuzzo-Dolcetta (2020) indicate the formation of a power-law density distribution \(\rho(r)\propto r^{-\alpha}\) with exponent \(\alpha\sim 3.3-4\) in the halo. The last two examples of Table 1 are just that. Example No. 4 is the system of Schuster (1883) and Plummer (1911), which was repeatedly used in connection with studies of globular star clusters. With density distributions as flat as in Examples 4 and 5, the integrals for \(R\) diverge, so the \(q\)-factor is infinitely large. 
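The table entries can be verified directly from Eqs. (17)-(18). The following sketch does so by numerical quadrature, in units \(a=1\), \(\rho_{0}=1\) (an assumption that only fixes the scale and cancels in the dimensionless \(q\)):

```python
import numpy as np
from scipy.integrate import quad

def q_factor(rho, rmax=np.inf):
    """q = R / Re for a spherical profile rho(r), via Eqs. (17)-(18)."""
    M = 4 * np.pi * quad(lambda r: rho(r) * r**2, 0, rmax)[0]
    M_in = lambda r: 4 * np.pi * quad(lambda s: rho(s) * s**2, 0, r)[0]
    R2 = (4 * np.pi / M) * quad(lambda r: rho(r) * r**4, 0, rmax)[0]
    Re_inv = (8 * np.pi / M**2) * quad(lambda r: rho(r) * M_in(r) * r,
                                       0, rmax)[0]
    return np.sqrt(R2) * Re_inv

# Rows 1-3 of Table 1: uniform ball, exponential, and Gaussian profiles
print(q_factor(lambda r: 1.0, rmax=1.0))     # 2(3/5)^(3/2) ~ 0.9295
print(q_factor(lambda r: np.exp(-r)))        # 5*sqrt(3)/8  ~ 1.0825
print(q_factor(lambda r: np.exp(-r**2)))     # sqrt(3/pi)   ~ 0.9772
```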
The above models assume a more or less gradual change in density with distance from the center. An idea of the reverse behavior is given by a two-layer model with radii \(R_{1}\), \(R_{2}\) and densities \(\rho_{1}\), \(\rho_{2}\) in the central and outer zones, respectively. We do not present the corresponding formulas because of their cumbersomeness. The general conclusion is that at moderate values of the ratio \(\rho_{1}/\rho_{2}\), the \(q\)-factor is still close to 1, and only when \(\rho_{1}/\rho_{2}\gg 1\) and \(R_{1}/R_{2}<0.1\) can values of \(q\) above 10 be achieved. Thus, it seems likely that relatively homogeneous systems at the initial stage of evolution are characterized by \(q(\tau)\) of the order of 1, while the \(q\)-factor increases significantly in the course of further evolution as a dense core and extended halo of the system are formed. ## 3 Solutions for a given ratio of radii In addition to the formal reason for studying in Eq. (12) the dimensionless ratio of the two unknown radii \(R(t)\) and \(\Re(t)\), the function \(q(\tau)\) plays an important role by setting the systematic behavior of the RMS radius averaged over fast oscillations with a period of the order of dynamic time \(T_{d}\) (see Appendix). In order to verify the oscillatory nature of the solutions of the virial equation, we write down Eq. (14) as \[\frac{d}{d\tau}\left(y\,\frac{dy}{d\tau}\right)=\frac{q(\tau)}{y}-1, \tag{19}\] and assume first that the ratio of the radii does not change with time. Lynden-Bell (1967) additionally linearized the corresponding equation; Chandrasekhar & Elbert (1972) found a rather complicated analytical solution to a non-linear equation. Written in parametric form, the exact solution of Eq. (19) at \(q(\tau)\equiv q_{0}\) describes a classical cycloid \[\left\{\begin{array}{l}y=q_{0}-a\cos\theta-b\sin\theta,\\ \tau=q_{0}\theta-a\sin\theta-b(1-\cos\theta),\ \ 0\leq\theta<\infty\,,\end{array}\right. \tag{20}\] whereas the mean harmonic radius of the system \[x(\tau)=y(\tau)/q_{0}=1-(a/q_{0})\cos\theta-(b/q_{0})\sin\theta \tag{21}\] oscillates around the equilibrium value \(x_{*}=1\). The constants \(a\) and \(b\) are determined by the initial state: \[a=q_{0}-y_{0},\qquad b=-y_{0}\cdot y_{0}^{\prime}. \tag{22}\] It is convenient to proceed from three initial values, namely, the pair \((y_{0},y_{0}^{\prime})\) and the virial ratio \(V_{0}\); then, according to Eqs. (15) and (16), we have \(x_{0}=2-V_{0}\) and \(q_{0}=y_{0}/x_{0}\). Equation (6) shows that values of the kinetic energy less than or greater than \(|E|\) correspond to virial ratio values less than or greater than 1 (see Fig. 1); as is customary, we call the respective states of the system "cold" and "hot". The mean harmonic radius \(\mathfrak{R}\) of the former state exceeds the equilibrium value \(\mathfrak{R}_{*}\), while \(\mathfrak{R}<\mathfrak{R}_{*}\) for the latter state. An example solution for \(V_{0}=0.20\) ("cold" system) is shown in Fig. 3. Figure 3: Changing the RMS radius \(y(\tau)\) in a model with constant ratio \(y/x\equiv q_{0}\) at initial values \(y_{0}=1.60\), \(y_{0}^{\prime}=0.10\), \(V_{0}=0.20\). The dashed line corresponds to \(q_{0}=8/9\). The period of oscillations of the cycloid in real time is equal to \(q_{0}P_{*}\), where \[P_{*}=2\pi T_{*}=\frac{\pi G}{2}\left(\frac{M^{5}}{2|E|^{3}}\right)^{1/2}=\sqrt{6\pi/G\rho_{*}}, \tag{23}\] and \(\rho_{*}=3M/4\pi\Re_{*}^{3}\) is the characteristic density of the cluster. It is clear that the undamped, so-called homologous oscillations give only a preliminary description of the early evolution of a self-gravitating system. 
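A minimal numerical sketch evaluates the constant-\(q\) cycloid (20)-(22) with the Fig. 3 initial data \(y_{0}=1.60\), \(y_{0}^{\prime}=0.10\), \(V_{0}=0.20\), for which \(q_{0}=8/9\):

```python
import numpy as np

y0, dy0, V0 = 1.60, 0.10, 0.20
x0 = 2.0 - V0              # Eq. (15)
q0 = y0 / x0               # Eq. (16): q0 = 8/9 for these initial data
a = q0 - y0                # Eq. (22)
b = -y0 * dy0

theta = np.linspace(0.0, 6 * np.pi, 2000)
y = q0 - a * np.cos(theta) - b * np.sin(theta)                  # Eq. (20)
tau = q0 * theta - a * np.sin(theta) - b * (1.0 - np.cos(theta))
x = y / q0                                                      # Eq. (21)

# The mean harmonic radius oscillates around the equilibrium x* = 1
# without damping (homologous oscillations):
print(round(x.min(), 3), round(x.max(), 3))
```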
As noted at the end of the previous section, to estimate the characteristic time to reach virial equilibrium, it is necessary to take into account a progressive macroscopic inhomogeneity of the system. Accordingly, we must turn to a model with a time-varying ratio of radii \(R/\Re\). The Appendix to this paper shows that the approximate solution of Eq. (19) with an arbitrary function \(q(\tau)\), on which only the condition of its slow change on the time scale \(T_{*}\) is imposed, is a generalization of the classical cycloid, namely: \[\left\{\begin{array}{l}y=u(\theta)-a(\theta)\cos\theta-b(\theta)\sin\theta, \\ \tau=\theta u(\theta)-a(\theta)\sin\theta-b(\theta)(1-\cos\theta),\ \ 0\leq\theta<\infty.\end{array}\right. \tag{24}\] Here, the base function \(u(\theta)\) is given by the implicit equation \(u=q(\theta\cdot u)\), and the variable coefficients \(a(\theta)\) and \(b(\theta)\) depend on \(u(\theta)\), that is, they are also given by the function \(q(\tau)\). In the case of \(q(\tau)\equiv q_{0}\), we get \(u=q_{0}\), the coefficients \(a\), \(b\) become constant, and we return to the model considered above. The physical meaning of representation (24) is that the functions \(u(\theta),a(\theta)\) and \(b(\theta)\) change, following \(q(\tau)\), relatively slowly, while their trigonometric factors reflect precisely these rapid variations of the RMS radius around \(q(\tau)\), and the mean harmonic radius and virial ratio - around the equilibrium value equal to 1. Our numerical examples show that in the immediate vicinity of the VES, the oscillation period slightly increases. Not as informative as the analytical one, but more accurate, is the direct numerical solution of Eq. (19) for a given \(q(\tau)\). Figure 4 gives an idea of a typical evolutionary pattern, in this case a "hot" system. Figure 4: Solution of Eq. (19) with initial data \(V_{0}=1.40\), \(y_{0}=0.60\), \(y_{0}^{\prime}=0.10\). Left: RMS radius \(y(\tau)\) of the system (solid line) and \(q(\tau)\) function (dashed line). Right: Virial ratio \(V(\tau)\) (solid line) and unit level corresponding to VES (dashed line). The behavior of the mean harmonic radius \(x(\tau)\) is not shown because, in view of Eq. (15), it is a reflection of \(V(\tau)\) relative to the equilibrium level. We see several rapid initial fluctuations in both the RMS radius and the virial ratio, however, later \(y(\tau)\) follows the given function \(q(\tau)\), while \(V(\tau)\) and \(x(\tau)\) practically stabilize around the equilibrium level \(x_{*}=1\). _This means that, over a period of two to three dozen dynamic time intervals \(T_{*}\), an equilibrium core with radius \(\Re_{*}\) is formed in the system, while the surrounding halo, which determines the RMS radius \(R(t)\), continues to expand._ The pattern of virial oscillations seen in the right panel of Fig. 4 has the same character as that obtained by numerical simulation of the dynamics of a multiparticle system (see, for example, Fig. 6 in Trenti, Bertin & van Albada 2005). Fast oscillations are just a ringing against the backdrop of a slower reorganization of the system. Eventually, it is possible that after two dozen dynamic time periods \(T_{*}\) the harmonic radius of the system \(x(\tau)\) will become close to 1, and then, as the virial equation (14) shows, the further evolution of the root-mean-square radius of the system is described by the simple law \(R(t)/\Re_{*}\simeq q(\tau)\simeq(c_{1}\tau+c_{2})^{1/2}\), where \(\tau=t/T_{*}\) and \(c_{1},c_{2}\) are some dimensionless constants. 
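This behavior can be reproduced by direct integration of Eq. (19). The sketch below uses the Fig. 4 initial data \(V_{0}=1.40\), \(y_{0}=0.60\), \(y_{0}^{\prime}=0.10\); the concrete form chosen for \(q(\tau)\) is an assumption made for illustration, patterned after the late-time law \(q\simeq(c_{1}\tau+c_{2})^{1/2}\) quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def q_of_tau(tau):
    # assumed slowly varying ratio of radii; q(0) = y0/x0 = 1 here
    return np.sqrt(0.15 * tau + 1.0)

def rhs(tau, state):
    # Eq. (19) via u = y^2/2:  u'' = q(tau)/y - 1,  with y = sqrt(2u)
    u, du = state
    y = np.sqrt(max(2.0 * u, 1e-12))
    return [du, q_of_tau(tau) / y - 1.0]

y0, dy0, V0 = 0.60, 0.10, 1.40          # "hot" system: x0 = 2 - V0 = 0.6
sol = solve_ivp(rhs, (0.0, 60.0), [0.5 * y0**2, y0 * dy0],
                dense_output=True, rtol=1e-9)

tau = np.linspace(0.0, 60.0, 7)
y = np.sqrt(2.0 * sol.sol(tau)[0])
print(np.round(y, 2))                   # RMS radius tracks q(tau)
print(np.round(y / q_of_tau(tau), 2))   # x = y/q settles near x* = 1
```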
Figure 5: Solution of Eq. (19) with initial data \(V_{0}=1.0\), \(y_{0}=1.0\), \(y_{0}^{\prime}=0\) corresponding to virial equilibrium state. Explanations are the same as in Fig. 4. As a check, one should also follow the evolution of a system that was initially in the virial equilibrium state. As can be seen in Fig. 5, the rapid oscillations of the radii and the virial ratio have disappeared, the RMS radius \(y(\tau)\) still follows \(q(\tau)\), while the quasi-equilibrium nucleus experiences only long-term weak oscillations. This was to be expected. The difference in the behavior of the system core and halo seems quite plausible, but we should not forget that in the context considered here it is partly determined by the specification of the \(q(\tau)\) function. This prompted us to consider models with different types of \(q(\tau)\); all cases, except for extremely "hot" systems, show the same behavior. ## 4 Concluding remarks The above analysis supports two features of the evolution of a stellar system towards virial equilibrium. First, its integral characteristics fluctuate for two to three dozen dynamic times \(T_{*}\) until the virial ratio stabilizes near the equilibrium value \(V_{*}=1\). Secondly, the root-mean-square and mean harmonic radii vary in time in different ways, so the assumption of approximate equality of these radii, previously accepted by a number of researchers, is far from reality. As already noted, the first conclusion agrees with the results of numerical simulation of self-gravitating systems at \(N\gg 1\). The second conclusion means a fundamentally different behavior of the moment of inertia of the system relative to the center of gravity and its potential energy. Further details of the process of approaching the virial equilibrium state remain hidden when only the virial equation is analyzed. Additional data are desirable, at least in the form of approximate relationships between the integral characteristics of the system. On the other hand, the difference between the evolutionary paths of the RMS and the mean harmonic radii can be easily elucidated on the basis of both already performed and future numerical simulations. ## Acknowledgements The author is grateful to A.S. Rastorguev for useful comments. ## Data availability No new data were generated or analysed in support of this research. ## Appendix. Approximate analytical solution of the virial equation A nonlinear differential equation of the second order \[y\left[\frac{d}{d\tau}\left(y\,\frac{dy}{d\tau}\right)+1\right]=q(\tau)\] is considered in the domain \(\tau\geq 0\) for a given non-negative function \(q(\tau)\). The approach presented below, which goes back to the method of Van der Pol (1927), is widely used in the theory of oscillations. We will look for a solution in a parametric form: \[\left\{\begin{array}{l}y=u(\theta)-a(\theta)\cos\theta-b(\theta)\sin\theta,\\ \tau=\theta u(\theta)-a(\theta)\sin\theta-b(\theta)(1-\cos\theta),\ \ \theta\geq 0,\end{array}\right.\] where \(u(\theta)\), \(a(\theta)\) and \(b(\theta)\) are some unknown functions slowly varying over an interval of length \(2\pi\). In particular, they can be constant. Specifically, we assume that \(|u^{\prime}(\theta)/u(\theta)|\ll 1\), and similar inequalities hold for the coefficients \(a\) and \(b\). 
Under this condition, the solution averaged over an interval of length \(2\pi\) is \[\langle y(\theta)\rangle=\frac{1}{2\pi}\int_{\theta}^{\theta+2\pi}y(t)dt\simeq u(\theta),\] which determines the physical meaning of the function \(u(\theta)\). We have from Eqs. (A2): \[\left\{\begin{array}{l}dy/d\theta=u^{\prime}-a^{\prime}\cos\theta+a\sin\theta-b^{\prime}\sin\theta-b\cos\theta,\\ d\tau/d\theta=u+\theta u^{\prime}-a^{\prime}\sin\theta-a\cos\theta-b^{\prime}(1-\cos\theta)-b\sin\theta.\end{array}\right.\] According to the above condition, we can neglect here terms with derivatives, i.e., put \[\left\{\begin{array}{l}u^{\prime}-a^{\prime}\cos\theta-b^{\prime}\sin\theta=0,\\ \theta u^{\prime}-a^{\prime}\sin\theta-b^{\prime}(1-\cos\theta)=0,\end{array}\right.\] so that Eqs. (A4) take the form: \[\left\{\begin{array}{l}dy/d\theta=a\sin\theta-b\cos\theta,\\ d\tau/d\theta=u-a\cos\theta-b\sin\theta=y.\end{array}\right.\] Dividing the top of Eqs. (A6) by the bottom gives: \[y\,\frac{dy}{d\tau}=a\sin\theta-b\cos\theta.\] Moreover, Eqs. (A6) show that the derivatives with respect to \(\tau\) and with respect to \(\theta\) are connected by the relation \[\frac{d}{d\tau}=\frac{1}{d\tau/d\theta}\cdot\frac{d}{d\theta}=\frac{1}{y}\,\frac{d}{d\theta}\,.\] Applying this operator to Eq. (A7), taking into account the first of Eqs. (A2) and the condition of smallness of derivatives, we find: \[y\,\frac{d}{d\tau}\left(y\,\frac{dy}{d\tau}\right)=\frac{d}{d\theta}(a\sin\theta-b\cos\theta)\simeq a\cos\theta+b\sin\theta=u(\theta)-y.\] Thus, \[y\left[\frac{d}{d\tau}\left(y\,\frac{dy}{d\tau}\right)+1\right]=u(\theta),\] which coincides with Eq. (A1) provided that \[u(\theta)=q(\tau).\] Here it suffices to restrict ourselves to the first term in the representation of \(\tau\) from Eqs. (A2), so that the function \(u(\theta)\) is found from the implicit equation \[u=q(\theta\cdot u).\] Finally, given the known function \(u(\theta)\), one can find coefficients \(a(\theta)\) and \(b(\theta)\) by solving the system of linear Eqs. (A5) with respect to \(a^{\prime}\) and \(b^{\prime}\), and then integrating the results. We will not go into further technical details.
2310.17522
Proposal on Model Based Current Overshoot Suppression of Receiver Side Coil in Drone Wireless Power Transfer System
This paper proposes a model-based control method in the wireless power transfer (WPT) system by operating a semi-bridgeless active rectifier (SBAR) to suppress the secondary coil current overshoot. By damping the current overshoot, it is possible to reduce the rectifier's rated current and decrease the rectifier's size, which is beneficial for lightweight-oriented systems such as drones. In the control method, an inverse of the plant model is used to calculate the reference input to the system. The current overshoot is reduced by operating the SBAR under the duty ratio calculated from the model. To confirm the performance of the proposed method, a simulation and an experiment using a WPT prototype are conducted. The experimental results show that the proposed method can suppress the secondary coil current overshoot. The results suggest it is possible to realize a lighter secondary system by applying the proposed method.
Kota Fujimoto, Takumi Hamada, Hiroshi Fujimoto
2023-10-26T16:15:55Z
http://arxiv.org/abs/2310.17522v1
Proposal on Model Based Current Overshoot Suppression of Receiver Side Coil in Drone Wireless Power Transfer System ###### Abstract This paper proposes a model-based control method in the wireless power transfer (WPT) system by operating a semi-bridgeless active rectifier (SBAR) to suppress the secondary coil current overshoot. By damping the current overshoot, it is possible to reduce the rectifier's rated current and decrease the rectifier's size, which is beneficial for lightweight-oriented systems such as drones. In the control method, an inverse of the plant model is used to calculate the reference input to the system. The current overshoot is reduced by operating the SBAR under the duty ratio calculated from the model. To confirm the performance of the proposed method, a simulation and an experiment using a WPT prototype are conducted. The experimental results show that the proposed method can suppress the secondary coil current overshoot. The results suggest it is possible to realize a lighter secondary system by applying the proposed method. wireless power transfer, semi-bridgeless active rectifier, overshoot suppression, current control ## I Introduction There are some studies that aim to realize the wireless power transfer (WPT) system for a flying drone. The feasibility of applying the WPT system to a flying drone is verified in [1] to extend the flight duration, in which the microwave beam is implemented as an energy medium. The main bottleneck for the microwave WPT system for drones is low efficiency, and recently there have been several studies that have tried to improve the system efficiency. In [2], the port-to-port efficiency is \(31.4\,\%\), which is not practically sufficient for drones. On the other hand, some studies show that a magnetic resonant circuit helps realize the WPT system for flying drones [3, 4, 5, 6]. In [3], the average link efficiency, which plays the same role as the port-to-port efficiency, is approximately \(90\,\%\) despite the dynamic conditions of flying drones. Although they both have their advantages, it is generally said that the magnetic resonant WPT system is more efficient than the microwave WPT system. Fig. 1 shows a diagram of the drone wireless in-flight charging system with the magnetic resonant WPT system. To realize this system, the secondary system needs to be as light as possible. A magnetic resonant WPT system has a rectifier on the secondary side. In the drone wireless in-flight charging system, excessive current may flow into the battery due to a sudden change in the mutual inductance. Therefore, it is necessary to choose a light, power-controllable rectifying system to realize the in-flight charging system. There are several types of rectifiers, such as a full-bridge diode rectifier [7], a full-bridge active rectifier with semiconductor switches [8, 9], and a semi-bridgeless active rectifier (SBAR) [10, 11, 12], which has two diodes and two semiconductor switches as shown in Fig. 2. With a full-bridge diode rectifier, a DC-DC converter is unavoidable for controlling the power delivered to the battery, which increases the weight of the drone. Although it is possible to control the load power by using a full-bridge active rectifier without a DC-DC converter, the system becomes more complex and expensive than an SBAR. On the other hand, an SBAR does not need a DC-DC converter, and it is composed of only two semiconductor switches. 
Therefore, an SBAR is the most suitable for a drone WPT system in terms of weight and simplicity. Fig. 1: Diagram of the drone in-flight charging system. The SBAR often operates with an unsynchronized ON-OFF switching method [13]. This method can be easily implemented as it does not need an alternating-current sensor. The operation circuit diagram of the SBAR is shown in Fig. 2. The transferred secondary AC current is rectified at the SBAR with rectification mode (RM) to charge the battery as shown in Fig. 2(a). When the battery is fully charged, the SBAR operation mode is switched to short mode (SM) as shown in Fig. 2(b), so that the current flow to the battery is cut off. By changing the operation mode, it is possible to prevent the battery from overcharging. However, when the mode switches from the RM to the SM, the secondary coil current overshoot occurs. There have been no studies addressing this problem. To avoid raising the rated current and increasing the size of the rectifier, we focus on the overshoot suppression in the SBAR system with an unsynchronized ON-OFF switching method. This paper focuses on the secondary coil current control to resolve the above problem. By switching between the RM and the SM alternately and gradually extending the SM period, it is possible to suppress the coil current overshoot. The model of the WPT system derived in this paper determines the ratio of the SM period to the operation period. The experimental results show that the overshoot is suppressed by applying the control method to the SBAR system. By solving (4), \(I_{1}(s)\) and \(I_{2}(s)\) are expressed as \[I_{1}(s) =\frac{\left(\alpha_{1}s+\beta_{1}\right)V_{1}(s)}{s^{2}+2\zeta{\omega_{\rm n}}s+{\omega_{\rm n}}^{2}}-\frac{\gamma V_{2}(s)}{s^{2}+2\zeta{\omega_{\rm n}}s+{\omega_{\rm n}}^{2}}, \tag{5a}\] \[I_{2}(s) =\frac{\gamma V_{1}(s)}{s^{2}+2\zeta{\omega_{\rm n}}s+{\omega_{\rm n}}^{2}}-\frac{\left(\alpha_{2}s+\beta_{2}\right)V_{2}(s)}{s^{2}+2\zeta{\omega_{\rm n}}s+{\omega_{\rm n}}^{2}}, \tag{5b}\] where \(\zeta\), \(\omega_{\rm n}\), \(\alpha_{1}\), \(\beta_{1}\), \(\alpha_{2}\), \(\beta_{2}\), and \(\gamma\) are shown as \[\zeta =\frac{L_{1}R_{2}+L_{2}R_{1}}{\sqrt{4L_{1}L_{2}}}\cdot\frac{1}{\sqrt{R_{1}R_{2}+\left(\omega L_{\rm m}\right)^{2}}}, \tag{6}\] \[\omega_{\rm n} =\frac{1}{2}\sqrt{\frac{R_{1}R_{2}+\left(\omega L_{\rm m}\right)^{2}}{L_{1}L_{2}}},\] \[\alpha_{1} =\frac{1}{2L_{1}},\hskip 14.226378pt\beta_{1}=\frac{R_{2}}{4L_{1}L_{2}},\] \[\alpha_{2} =\frac{1}{2L_{2}},\hskip 14.226378pt\beta_{2}=\frac{R_{1}}{4L_{1}L_{2}},\hskip 14.226378pt\gamma=\frac{\omega L_{\rm m}}{4L_{1}L_{2}}.\] ## III Control strategy for current overshoot suppression This section describes how the SBAR is operated. Fig. 5 shows the ideal operating waveforms of the SBAR. It shows the gate signal sent to the lower arm of the rectifier, the secondary voltage \(v_{2}\), and the secondary current \(i_{2}\), respectively. \(T\) is the rectifier's operating period. In this paper, \(T\) is defined as half of the switching period of the primary inverter. \(T_{\rm short}\) is the short period in \(T\) in which the lower arm of the rectifier is on. During this period, the power is not transferred to the load side. The duty ratio of the rectifier \(d_{\rm short}\) is defined as \[d_{\rm short}=\frac{T_{\rm short}}{T}. \tag{7}\] By deciding \(d_{\rm short}\), the operation of the rectifier is determined. 
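For orientation, Eq. (6) can be evaluated numerically. The circuit values below are illustrative assumptions (the prototype values of Table I are not reproduced in this excerpt); for representative magnetic resonant WPT parameters the damping ratio comes out far below 1, which is exactly why a step in \(V_{2}\) produces the ringing discussed next.

```python
import numpy as np

# Illustrative circuit values (assumed, not the Table I prototype)
L1, L2 = 200e-6, 200e-6    # primary / secondary self-inductances [H]
R1, R2 = 0.5, 0.5          # coil resistances [ohm]
Lm = 20e-6                 # mutual inductance [H]
w = 2 * np.pi * 85e3       # operating angular frequency [rad/s]

# Eq. (6)
zeta = (L1 * R2 + L2 * R1) / np.sqrt(4 * L1 * L2) \
       / np.sqrt(R1 * R2 + (w * Lm) ** 2)
wn = 0.5 * np.sqrt((R1 * R2 + (w * Lm) ** 2) / (L1 * L2))

print(f"zeta = {zeta:.3f}")            # ~0.05 here: heavily underdamped
print(f"fn   = {wn / (2 * np.pi):.0f} Hz")
```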
According to (5b), the transfer function from \(V_{2}\) to \(I_{2}\) is expressed as follows: \[G_{22}=\frac{I_{2}(s)}{V_{2}(s)}=-\frac{\alpha_{2}s+\beta_{2}}{s^{2}+2\zeta{\omega_{\rm n}}s+{\omega_{\rm n}}^{2}}. \tag{8}\] When a step signal of \(V_{2}\) is input to the system by switching the SBAR from the RM to the SM, the coil current overshoot occurs because \(\zeta\) is too small in a typical magnetic resonant WPT system. In order to suppress the overshoot in the step response of \(I_{2}\), this paper proposes an SBAR control method. By controlling \(V_{2}\) appropriately, the overshoot can be suppressed. Fig. 6 is the block diagram of the control system, which shows how the SBAR is operated. \(I_{\rm 2ref}\), \(I_{\rm 2}\)\({}^{*}\), \(V_{\rm 2}\)\({}^{*}\), and \(d_{\rm short}\)\({}^{*}\) are the secondary current's reference value, the secondary current's calculated value, the secondary voltage's calculated value, and the calculated value of the duty ratio of the rectifier, respectively. \(f\left(d_{\rm short}\right)\) is the function showing the relation between \(V_{2}\) and \(d_{\rm short}\), which is defined as \[V_{2}=f\left(d_{\rm short}\right)=\frac{4}{\pi}V_{L}\left\{1-\sin\left(\frac{\pi}{2}d_{\rm short}\right)\right\}. \tag{9}\] Equation (9) is derived as the difference between the fundamental-wave amplitude of the square wave and that of the square wave obtained when operating the rectifier with phase-shift control. \(f^{-1}\left(V_{2}\right)\) is the inversion of \(f\left(d_{\rm short}\right)\). In this paper, \(V_{\rm 2}\)\({}^{*}\) is calculated using the inversion system of \(G_{22}\). \(G_{\rm 22}\)\({}^{-1}\) is a non-proper system, so the low-pass filter shown in Fig. 6 is placed before \(G_{\rm 22}\)\({}^{-1}\). Applying this scheme, the proposed method, which suppresses the secondary coil current overshoot at the transition from the RM to the SM, is implemented. Fig. 6: Block diagram of the control system. 
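The scheme of Fig. 6 can be sketched in a few lines: a second-order low-pass filter smooths the current reference so that the non-proper \(G_{\rm 22}\)\({}^{-1}\) becomes realizable, the inverse plant maps the filtered current onto \(V_{\rm 2}\)\({}^{*}\), and the inverse of Eq. (9) yields \(d_{\rm short}\)\({}^{*}\). All numerical values below (plant coefficients, battery-side voltage \(V_{L}\), filter bandwidth) are illustrative assumptions rather than the prototype parameters of Table I.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed envelope-plant and filter parameters (not the Table I values)
zeta, wn = 0.05, 1.0e4          # damping ratio and natural frequency
alpha2, beta2 = 2.5e3, 3.1e6    # numerator coefficients of -G22, Eq. (6)
VL = 20.0                       # battery-side voltage [V]
wf = 2.0e3                      # low-pass filter bandwidth [rad/s]

V_RM = 4 * VL / np.pi                  # V2 in pure rectification mode
dI2_final = (beta2 / wn**2) * V_RM     # current rise when V2 -> 0 (DC gain)

def rhs(t, s):
    i2, di2, v2 = s                    # filtered current ref and its slope, dV2
    # critically damped 2nd-order filter tracking the current step
    d2i2 = wf**2 * (dI2_final - i2) - 2.0 * wf * di2
    # inverse plant: alpha2*v2' + beta2*v2 = -(i2'' + 2 zeta wn i2' + wn^2 i2)
    dv2 = (-(d2i2 + 2.0 * zeta * wn * di2 + wn**2 * i2) - beta2 * v2) / alpha2
    return [di2, d2i2, dv2]

sol = solve_ivp(rhs, (0.0, 8e-3), [0.0, 0.0, 0.0], max_step=2e-6)
V2 = np.clip(V_RM + sol.y[2], 0.0, V_RM)            # absolute V2*(t)
d_short = (2.0 / np.pi) * np.arcsin(1.0 - np.pi * V2 / (4.0 * VL))
print(round(float(d_short[0]), 3), round(float(d_short[-1]), 3))  # 0 -> 1
```

The gradual growth of \(d_{\rm short}\)\({}^{*}\) from 0 to 1 reproduces the alternating RM/SM operation with a progressively extended SM period described above.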
## IV Simulation and experiment In order to validate the effectiveness of the proposed method, a simulation is carried out first. The parameters are listed in Table I. Figs. 7(a), 7(b), and 7(c) show the reference values of \(V_{\rm 2}\)\({}^{*}\) considered in the simulation. Each figure shows a step-type reference value, a ramp-type reference value, and a proposed reference value calculated using the model as shown in Fig. 6, respectively. The slope of the ramp-type reference value is decided such that the time constant is the same as that of the proposed method. Figs. 7(d), 7(e), and 7(f) show the simulation results. All the waveforms converge by \(4\,\mathrm{ms}\). The secondary coil current amplitude converges to \(10.5\,\mathrm{A}\) at \(4\,\mathrm{ms}\). The waveforms after \(4\,\mathrm{ms}\) are shown because the proposed reference value converges around \(14\,\mathrm{ms}\). Fig. 7(d) shows that the maximum current amplitude is extremely large if there is no controller in the secondary system. Fig. 7(e) shows that the ramp-type reference trajectory cannot fully suppress the overshoot. On the other hand, Fig. 7(f) shows that the proposed method can completely suppress the overshoot. This is because the inverse of the plant model is implemented in the control system, which is able to suppress the overshoot ideally. Table II shows all the maximum values of the secondary coil current between \(0\,\mathrm{ms}\) and \(4\,\mathrm{ms}\) in the simulation. According to Table II, the proposed method is numerically superior to the other methods. These simulation results verify that the proposed method damps the overshoot of the secondary coil current. Following the simulation results, experiments are performed to verify the feasibility of the proposed method. Fig. 8 shows an experimental prototype setup based on the parameters listed in Table I. The calculation shown in Fig. 6 is implemented with a digital signal processing (DSP) controller. The reference values of \({V_{2}}^{*}\) are the same as those shown in Fig. 7. The calculated value \({d_{\mathrm{short}}}^{*}\) is input to the lower arm of the rectifier as the gate signal. Figs. 9-11 show the experimental results. Figs. 9(a), 10(a), and 11(a) show the waveforms of the secondary current. Their tendencies are similar to the simulation results shown in Fig. 7, which demonstrates the feasibility of the proposed method. Figs. 9(b), 10(b), and 11(b) show the waveforms of the load current. It is observed that the load current flows into the battery in Figs. 10(b) and 11(b) after the switching mode of the rectifier is changed from the RM to the SM. In terms of the transferred power control, the step method is simpler than the other methods; however, the proposed method also makes it possible to control the energy, including the power transferred after changing the mode of the rectifier. In addition, there is no surge current in Fig. 11(b), which means the proposed method has no adverse effect on battery safety. These results suggest that the proposed method can suppress the coil current overshoot and control the power transferred to the battery with the SBAR system, which decreases the rated current of the rectifier and enables a lighter drone system. On the other hand, it is necessary to consider the case in which the phase of the gate signal shifts from the state shown in Fig. 5, because the gate signal is not synchronized with the secondary current. According to Fig. 12, the coil current overshoot can be suppressed even if the phases of the gate signal and the secondary current are shifted by \(\frac{\pi}{2}\) from the state shown in Fig. 5. This result suggests that (9) is robust to deviations from the model assumption that the phases of the gate signal and the secondary current match as in Fig. 5. Fig. 8: Experimental prototype of the system. Fig. 7: Reference values of \({V_{2}}^{*}\) and simulation results of the secondary current. (a) Step-type reference (b) Ramp-type reference (c) Proposed reference (d) Result with the step-type reference (e) Result with the ramp-type reference (f) Result with the proposed reference ## V Conclusion In this paper, we proposed a novel model-based current overshoot suppression method with a two-mode operation SBAR. The control strategy is validated with the simulation and the experiment, in which the maximum values of the secondary coil current overshoot are evaluated. The proposed method accomplishes suppression of the secondary coil current overshoot, which leads to decreasing the rated current of the rectifier. By applying this method to the drone wireless in-flight charging system, a lighter secondary system is realized. Meanwhile, it should be noted that the proposed method assumes the perfect resonance condition. The robustness to parameter fluctuation will be improved in future studies. 
## VI Acknowledgment This work was partly supported by JST-Mirai Program Grant Number JPMJMI21E2, JSPS KAKENHI Grant Number JP18H03768, and the New Energy and Industrial Technology Development Organization (NEDO) Project Number JPNP21005, Japan.
2308.12610
Emotion-Aligned Contrastive Learning Between Images and Music
Traditional music search engines rely on retrieval methods that match natural language queries with music metadata. There have been increasing efforts to expand retrieval methods to consider the audio characteristics of music itself, using queries of various modalities including text, video, and speech. While most approaches aim to match general music semantics to the input queries, only a few focus on affective qualities. In this work, we address the task of retrieving emotionally-relevant music from image queries by learning an affective alignment between images and music audio. Our approach focuses on learning an emotion-aligned joint embedding space between images and music. This embedding space is learned via emotion-supervised contrastive learning, using an adapted cross-modal version of the SupCon loss. We evaluate the joint embeddings through cross-modal retrieval tasks (image-to-music and music-to-image) based on emotion labels. Furthermore, we investigate the generalizability of the learned music embeddings via automatic music tagging. Our experiments show that the proposed approach successfully aligns images and music, and that the learned embedding space is effective for cross-modal retrieval applications.
Shanti Stewart, Kleanthis Avramidis, Tiantian Feng, Shrikanth Narayanan
2023-08-24T07:20:47Z
http://arxiv.org/abs/2308.12610v2
# Emotion-Aligned Contrastive Learning Between Images and Music ###### Abstract Traditional music search engines rely on retrieval methods that match natural language queries with music metadata. There have been increasing efforts to expand retrieval methods to consider the audio characteristics of music itself, using queries of various modalities including text, video, and speech. While most approaches aim to match general music semantics to the input queries, only a few focus on affective qualities. In this work, we address the task of retrieving emotionally-relevant music from image queries by learning an affective alignment between images and music audio. Our approach focuses on learning an emotion-aligned joint embedding space between images and music. This embedding space is learned via emotion-supervised contrastive learning, using an adapted cross-modal version of the SupCon loss. We evaluate the joint embeddings through cross-modal retrieval tasks (image-to-music and music-to-image) based on emotion labels. Furthermore, we investigate the generalizability of the learned music embeddings via automatic music tagging. Our experiments show that the proposed approach successfully aligns images and music, and that the learned embedding space is effective for cross-modal retrieval applications. Shanti Stewart\({}^{1}\) Kleanthis Avramidis\({}^{1,\star}\) Tiantian Feng\({}^{1,\star}\) Shrikanth Narayanan\({}^{1}\)\({}^{1}\) Signal Analysis and Interpretation Lab, University of Southern California, USA Multimodal Learning, Contrastive Learning, Cross-Modal Retrieval, Music Information Retrieval Footnote †: These authors contributed equally to this work. ## 1 Introduction Modern large-scale music search engines primarily retrieve music by matching natural language queries with music metadata--such as the artist's name, album title, or song title. While some of these retrieval systems allow querying by genre or mood, they often fall short in supporting high-granularity queries. Users specify their queries in a pre-defined set of descriptors, such as "jazz" (genre) and "happy" (mood), instead of detailed musical descriptions (e.g., "a happy upbeat Latin jazz song with saxophone and bass"). In addition, existing music retrieval systems typically focus on metadata and do not consider the auditory characteristics of the music. There have been increasing efforts to address this problem. Won et al. [1] presents a method to retrieve music audio from single-word (tag) queries. Manco et al. [2] instead proposes a framework for cross-modal text-to-music retrieval from free-form sentence queries. Doh et al. [3] combines both tag-based and sentence-based music retrieval methods into a unified framework. In addition, there have been a number of works on video-to-music retrieval [4, 5, 6]. These newer cross-modal music retrieval frameworks operate on general audio semantics and typically use paired multimodal datasets [2, 4, 5] or some form of weak language supervision [1, 3]. While the paired datasets can sometimes be organically created when two modalities co-occur naturally (e.g., video and music in music videos), the pairings are often generated by human annotators. Such semantic pairings can be subjective, and manual annotation is also costly in time and effort. An alternative to retrieving music based on general semantics is through cross-modal class supervision. 
Finding semantic classes that are compatible across multiple modalities is challenging; classes used in one modality (e.g., image object classes) may not have equivalent meanings in other modalities. Emotions, however, have equivalent meanings across multiple modalities: images, language, speech, and music. On this idea, two different works present methods for emotion-supervised cross-modal music retrieval. Won et al. [7] proposes a framework for text-to-music retrieval based on emotions, and Doh et al. [8] extends this method for speech-to-music retrieval. Building upon this body of work, we address the task of emotion-supervised music retrieval from image queries. To the best of our knowledge, this problem has not been previously addressed in the literature. Retrieving emotionally-relevant music from images introduces several benefits. Using non-language queries can be more intuitive at times: i.e., emotions can be conveyed more expressively through images or music than through language. In addition, automatically matching emotionally-similar images and music can encourage the creation of more compelling multimedia content. To this end, we propose _Emo-CLIM_: a framework for Emotion-Aligned Contrastive Learning Between Images and Music. Our approach learns an emotion-aligned joint embedding space between images and music, in which embeddings of emotionally-similar images and music are close together. We then directly leverage these joint embeddings for emotion-supervised cross-modal retrieval. In contrast to prior work [7, 8] which use triplet loss functions, we use a supervised contrastive loss--which has the benefit of comparing across all items in a training batch. Furthermore, our loss is modality-symmetric, unlike [7, 8], allowing the embedding space to be used for both image-to-music and music-to-image retrieval. Our key contributions can be summarized as follows: * To the best of our knowledge, Emo-CLIM is the first framework that learns an affective alignment between images and music audio. This framework is distinct from existing literature that aligns music with other modalities. * Unlike prior work that uses triplet losses, Emo-CLIM uses an emotion-supervised contrastive loss, demonstrating promising results in cross-modal retrieval as well as automatic music tagging. ## 2 Related Work Many works have successfully applied contrastive learning to multimodal problems. CLIP [9] used contrastive learning between images and text to learn effective image representations, and AudioCLIP [10] and Wav2CLIP [11] extended CLIP to handle audio. Several other studies have explored contrastive learning to align language and audio [12, 13]. There have also been a number of works using multimodal contrastive learning in the music domain. MusCALL [2] and MuLan [14] proposed contrastive learning approaches between language and music audio, and several other works [5, 15] explored similar approaches for videos and music. A common application for multimodal embedding spaces is cross-modal retrieval. Several works [2, 14, 3] learn joint embedding spaces between language and music audio, which are used for text-to-music retrieval. Methods for music retrieval from video queries have also been proposed [4, 5, 6]. Although there are numerous papers on cross-modal music retrieval, music retrieval based on emotions--the focus of our work--is under-explored. Among studies that address this topic, Won et al. [7] implement text-to-music retrieval, and Doh et al. [8] implement speech-to-music retrieval. 
## 3 Emo-CLIM Framework As shown in Figure 1, the Emo-CLIM framework consists of three main components: feature extraction, modality alignment, and emotion-supervised contrastive learning. Given an image \(x^{(Im)}\) and an audio (music) clip \(x^{(Au)}\), Emo-CLIM computes an image embedding \(z^{(Im)}\) and an audio embedding \(z^{(Au)}\) as follows: \[z^{(Im)}=h_{Im}\left(f_{Im}\left(x^{(Im)}\right)\right);z^{(Au)}=h_{Au}\left(f_{Au}\left(x^{(Au)}\right)\right) \tag{1}\] where \(f_{Im}(\cdot)\) and \(f_{Au}(\cdot)\) are image and audio encoders, and \(h_{Im}(\cdot)\) and \(h_{Au}(\cdot)\) are projection networks for the image and audio modalities, respectively. The encoder networks extract modality-specific features, and the projection networks map these features to a joint embedding space. We use supervised contrastive learning to align emotionally-paired images and audio clips in this embedding space. ### Feature Extraction For the image encoder \(f_{Im}(\cdot)\), we use the vision transformer component of the CLIP model [9]. We obtain the pre-trained model from OpenAI's official GitHub repository.1 During training, we keep the CLIP model frozen, since CLIP embeddings have been shown to be effective without fine-tuning [9], and our datasets are too small to fine-tune a model of this size. Footnote 1: [https://github.com/openai/CLIP](https://github.com/openai/CLIP) For the audio encoder \(f_{Au}(\cdot)\), we use music-specific models as well as general audio representation models. For the music-specific models, we use two different architectures that are commonly used in the music information retrieval domain: Short-Chunk CNN [16] and Harmonic CNN [17]. Both are CNN-based architectures and take in mel-spectrogram inputs. Short-Chunk CNN operates on approximately 3.7-second input audio clips, while Harmonic CNN operates on 5.0-second audio clips [16]. We utilize pre-trained models--trained on automatic music tagging using the Million Song Dataset [18]--and obtain model weights from an open-source repository.2 Footnote 2: [https://github.com/minzwon/sota-music-tagging-models](https://github.com/minzwon/sota-music-tagging-models) In addition to these music-specific models, we use the audio transformer component of the CLAP model [13]. CLAP is a transformer-based model that operates on an audio input of 10.0 seconds. We downloaded the pre-trained model weights from an open-source GitHub repository.3 We selected the CLAP checkpoint that was trained without AudioSet data to ensure a fair evaluation. Footnote 3: [https://github.com/LAION-AI/CLAP](https://github.com/LAION-AI/CLAP) ### Modality Alignment To map the image and audio features to the joint embedding space, we use two separate projection networks--one for each modality. Each network is a small multi-layer perceptron (MLP), consisting of a linear layer, batch normalization layer, ReLU activation, dropout layer, and a second linear layer that yields 128-dimensional embeddings (which are \(L_{2}\)-normalized). ### Emotion-Supervised Contrastive Learning To learn the emotion-aligned multimodal embedding space, we use supervised contrastive learning on the joint embeddings, supervised by emotion labels. To this end, we adapt the SupCon loss [21] to our multimodal setting, as follows. 
Given a batch of \(N\) images with their emotion labels \(\{(x_{i}^{(Im)},\ y_{i}^{(Im)})\}_{i=1}^{N}\) and \(N\) music audio clips with their emotion labels \(\{(x_{j}^{(Au)},\ y_{j}^{(Au)})\}_{j=1}^{N}\), we compute 4 different supervised contrastive losses, as detailed in the following subsections. For the remainder of this paper, we adopt the following notations: \(y_{i}^{(M)}\) = emotion label of sample \(i\) of modality \(M\), \(z_{i}^{(M)}\) = embedding of sample \(i\) of modality \(M\), \(I=\{1,...,N\}\) = all indices in a batch, and \(\tau\) = the temperature hyperparameter. **Cross-Modal Contrastive Losses**: To align the image and audio modalities, we use a cross-modal version of the SupCon loss. Given \(N\) samples \(\{(x_{i}^{(M_{1})},\ y_{i}^{(M_{1})})\}_{i=1}^{N}\) from modality \(M_{1}\) and \(N\) samples \(\{(x_{p}^{(M_{2})},\ y_{p}^{(M_{2})})\}_{p=1}^{N}\) from modality \(M_{2}\), our cross-modal \(M_{1}\to M_{2}\) SupCon loss is: \[\begin{split} L_{M_{1}\to M_{2}}=-\frac{1}{N}\sum_{i=1}^{N}\frac{1}{|P(M_{1}\to M_{2})(i)|}\\ \sum_{p\in P(M_{1}\to M_{2})(i)}\log\ \frac{\exp(z_{i}^{(M_{1})}\cdot z_{p}^{(M_{2})}/\tau)}{\sum_{k\in I}\exp(z_{i}^{(M_{1})}\cdot z_{k}^{(M_{2})}/\tau)}\end{split} \tag{2}\] \(P(M_{1}\to M_{2})(i)\) is the set of indices of positive samples \(x_{p}^{(M_{2})}\) for anchor sample \(x_{i}^{(M_{1})}\), and is defined as: \[P(M_{1}\to M_{2})(i)=\{p\in I\mid y_{i}^{(M_{1})}=y_{p}^{(M_{2})}\} \tag{3}\] These cross-modal SupCon losses "pull together" cross-modal embeddings with the same emotion label and "push apart" cross-modal embeddings with different emotion labels. Figure 1: Overview of the Emo-CLIM framework. A dual-branch architecture separately encodes images and music, then projects the encoded features to an emotion-aligned joint embedding space. Two cross-modal (image-to-audio and audio-to-image) and two intra-modal (image-to-image and audio-to-audio) contrastive losses operate on the joint embeddings. **Intra-Modal Contrastive Losses**: To learn a more robust joint embedding space as well as regularize the cross-modal objectives, we include intra-modal SupCon loss terms in our full objective. The intra-modal SupCon losses are defined as in Equation 2 with \(M_{1}=M_{2}\). These intra-modal SupCon losses "pull together" same-modality embeddings with the same emotion label and "push apart" same-modality embeddings with different emotion labels. **Total Contrastive Loss**: The total combined loss is a weighted average of 2 cross-modal and 2 intra-modal losses: \[L_{\text{total}}=\lambda_{1}L_{Im\to Au}+\lambda_{2}L_{Au\to Im}+\lambda_{3}L_{Im\to Im}+\lambda_{4}L_{Au\to Au} \tag{4}\] \(L_{\text{total}}\) is modality-symmetric, which ensures the joint embedding space does not favor one modality over the other. The weights of each loss component are determined empirically. Thus, the adapted supervised contrastive objective enables us to learn a joint emotion space between images and music audio, aligned both in an intra-modal and cross-modal manner. 
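A compact PyTorch sketch of these objectives is given below. It follows Eqs. (2)-(4) literally (so, for the intra-modal case \(M_{1}=M_{2}\), the anchor itself remains in its own positive set); the tensor names and the guard for anchors without positives are our own additions.

```python
import torch

def supcon(z_a, z_b, y_a, y_b, tau=0.07):
    """SupCon loss L_{M1->M2} of Eq. (2); z_a, z_b are L2-normalized
    (N, D) embeddings, y_a, y_b are (N,) integer emotion labels."""
    logits = z_a @ z_b.t() / tau                        # cosine sims / tau
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (y_a.unsqueeze(1) == y_b.unsqueeze(0)).float()      # Eq. (3)
    # mean log-probability over each anchor's positive set; anchors with
    # an empty positive set (possible cross-modally) contribute zero
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return loss.mean()

def total_loss(z_im, z_au, y_im, y_au, lams=(1.0, 1.0, 1.0, 1.0)):
    """Total objective of Eq. (4); the paper uses equal lambda weights."""
    return (lams[0] * supcon(z_im, z_au, y_im, y_au)      # image -> audio
            + lams[1] * supcon(z_au, z_im, y_au, y_im)    # audio -> image
            + lams[2] * supcon(z_im, z_im, y_im, y_im)    # intra-image
            + lams[3] * supcon(z_au, z_au, y_au, y_au))   # intra-audio
```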
## 4 Experiments and Results ### Datasets We use the DeepEmotion image dataset [19], which consists of 21,829 annotated images collected from Flickr and Instagram. Each image is assigned a single emotion label among 8 labels: _amusement_, _awe_, _contentment_, _excitement_, _anger_, _disgust_, _fear_, and _sadness_. To create training/validation/test subsets, we use a random 80-10-10% split, stratified with respect to the labels. For the music dataset, we use the AudioSet music mood subset [20], which consists of 13,713 10.0-second music (audio) clips gathered from YouTube. Each music clip is assigned a single emotion label among 7 labels: _exciting_, _funny_, _happy_, _tender_, _angry_, _sad_, and _scary_. To create training/validation/test subsets, we likewise use a random 80-10-10% split, stratified with respect to the labels. The emotion label taxonomies of the image and music datasets are different. To address this issue, we define a manual mapping between these labels, many of which differ only in wording (e.g., _excitement_ and _exciting_). However, _awe_ and _disgust_ images and _tender_ music do not have clear equivalents. Hence, we completely remove all images/audio clips with these three emotion labels in order to avoid ambiguous or illogical mappings. ### Implementation Details Since we use CLIP to encode images, we apply the corresponding image pre-processing transforms4, which include resizing and cropping to a size of \(224\times 224\) and normalization. We use random cropping during training and center cropping during evaluation. Footnote 4: Details can be found at [https://github.com/openai/CLIP](https://github.com/openai/CLIP). We use raw audio at a sample rate of 16 kHz.5 Since AudioSet contains 10.0-second audio clips and the music-specific audio encoder models operate on shorter inputs, we randomly crop audio segments during training. During evaluation, we use a sliding window with an overlap ratio of 75% to divide each 10.0-second audio clip into multiple chunks, then pass each chunk through the model and take the average over chunks to obtain a single embedding. When using CLAP, we do not use these cropping and sliding window methods, since CLAP's input length (10.0 seconds) matches AudioSet. We set the dimension of the joint embedding space to 128. For the contrastive losses, we use a temperature of 0.07 and equal \(\lambda\) values. For all experiments, we use the AdamW [22] optimizer with a batch size of 64 and a learning rate of 0.0001. We train all models for 15 epochs and keep the checkpoint with the lowest validation loss. ### Cross-Modal Retrieval Following other works [2, 14, 7, 8], we evaluate the learned joint embedding space via cross-modal retrieval. Given a query item of one modality, we retrieve the most similar item of the other modality using a simple nearest-neighbor search. We use the held-out test subsets of the image and music datasets for all retrieval evaluations. **Experimental Setup**: Cross-modal retrieval evaluation is implemented as a ranking problem. Given a query item of one modality, we rank all items (in the test set) of the other modality by the cosine similarity between the query and candidate item embeddings. We report Precision@5 (P@5) and Mean Reciprocal Rank (MRR) scores. A retrieved item is considered correct if it has the same emotion label as the query. In line with [7], we macro-average retrieval metrics across emotion classes in order to avoid potential bias caused by the imbalanced emotion class distribution. 
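As a sketch, the ranking evaluation just described can be implemented as follows; the function name is ours, and it assumes every query's emotion class is represented among the candidates.

```python
import numpy as np

def macro_retrieval_metrics(q_emb, c_emb, q_labels, c_labels, k=5):
    """Macro-averaged Precision@k and MRR for emotion-supervised
    retrieval: a candidate counts as correct when its emotion label
    matches the query's. Embeddings are assumed L2-normalized."""
    sims = q_emb @ c_emb.T                       # cosine similarity
    order = np.argsort(-sims, axis=1)            # most similar first
    hits = c_labels[order] == q_labels[:, None]  # correctness per rank
    p_at_k = hits[:, :k].mean(axis=1)            # per-query P@k
    mrr = 1.0 / (hits.argmax(axis=1) + 1.0)      # rank of first hit
    classes = np.unique(q_labels)
    macro = lambda v: float(np.mean([v[q_labels == c].mean()
                                     for c in classes]))
    return macro(p_at_k), macro(mrr)
```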
For image-to-music retrieval, the frozen Harmonic CNN and CLAP models perform the best. For music-to-image retrieval, the unfrozen Harmonic CNN and CLAP models perform the best. Interestingly, the unfrozen music-specific models generally perform better than their frozen counterparts for music-to-image retrieval, but this pattern does not hold for image-to-music retrieval. For music-to-music retrieval, CLAP consistently performs the best, demonstrating its ability to handle complex audio tasks while frozen. Since Harmonic CNN and CLAP perform the best in cross-modal retrieval, \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Audio Encoder Model} & \multicolumn{2}{c|}{Image \(\rightarrow\) Music} & \multicolumn{2}{c|}{Music \(\rightarrow\) Image} & \multicolumn{2}{c|}{Image \(\rightarrow\) Image} & \multicolumn{2}{c}{Music \(\rightarrow\) Music} \\ & P@5 & MRR & P@5 & MRR & P@5 & MRR & P@5 & MRR \\ \hline Short-Chunk CNN (Frozen) & 64.23\% & 71.88\% & 61.43\% & 70.78\% & **72.54\%** & **81.07\%** & 55.34\% & 68.90\% \\ Short-Chunk CNN (Unfrozen) & 63.95\% & 75.50\% & 63.46\% & 69.87\% & 69.25\% & 78.72\% & 55.15\% & 67.43\% \\ Harmonic CNN (Frozen) & 65.94\% & **78.59\%** & 64.34\% & 72.23\% & 70.48\% & 79.54\% & 55.16\% & 68.46\% \\ Harmonic CNN (Unfrozen) & 63.18\% & 74.08\% & **67.58\%** & **74.0\%** & 68.64\% & 78.20\% & 57.83\% & 68.46\% \\ CLAP (Frozen) & **68.15\%** & 76.65\% & 67.32\% & 73.95\% & 70.71\% & 79.27\% & **60.80\%** & **72.19\%** \\ \hline \hline \end{tabular} \end{table} Table 1: Cross-modal and intra-modal retrieval performance on the DeepEmotion [19] image dataset and the AudioSet music mood subset [20], for five different audio encoder variants. We report Precision@5 (P@5) and Mean Reciprocal Rank (MRR) evaluation metrics. A retrieved item is considered correct if it has the same emotion label as the query. we use these audio encoder models for the remainder of our analysis. We attribute the superior performance of Harmonic CNN and CLAP to their longer audio input lengths (5.0 seconds and 10.0 seconds) compared to Short-Chunk CNN's 3.7 seconds. **Comparison With Other Works**: Since emotion-matched image-to-music retrieval has not been addressed before in the literature, there is no direct baseline for comparison. Hence, we compare with two other works on emotion-matched music retrieval from other modalities: text-to-music retrieval [7] and speech-to-music retrieval [8]. We compare our image-to-music retrieval results--using our two best-performing frozen audio encoder models (where _HCNN_ = Harmonic CNN)--with these two works in Table 2. For the text-to-music retrieval paper [7], we include results for their manual emotion label mapping6, since we also use a manual mapping. We report results for two of their best-performing methods along with the text datasets used. For speech-to-music retrieval [8], we show results for the IEMOCAP [23] and RAVDESS [24] speech datasets. We do not include results for the Hi,KIA dataset [25] because it is challenging to provide statistically meaningful results with the limited number of samples in this dataset (only 488 utterances in total). We report results for their best-performing non-fusion-based models to provide a fair comparison to the rest of the models (which are all non-fusion-based). Footnote 6: The authors report results for 3 different emotion label taxonomy mappings: valence-arousal-based, Word2Vec-embedding-based, and manual.
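The ranking protocol described in the experimental setup is fully specified by the following sketch (NumPy only, illustrative names), which ranks candidates by cosine similarity and macro-averages Precision@5 and MRR over emotion classes:

```python
import numpy as np

def retrieval_metrics(q_emb, c_emb, q_labels, c_labels, k=5):
    """Macro-averaged Precision@k and MRR for cross-modal retrieval.

    q_emb: (Nq, D) query embeddings, c_emb: (Nc, D) candidate embeddings,
    both assumed L2-normalized so the dot product is cosine similarity.
    """
    sims = q_emb @ c_emb.T                              # (Nq, Nc)
    order = np.argsort(-sims, axis=1)                   # rank candidates by similarity
    ranked_ok = c_labels[order] == q_labels[:, None]    # correct = same emotion label
    p_at_k = ranked_ok[:, :k].mean(axis=1)
    first_hit = ranked_ok.argmax(axis=1)                # highest-ranked correct item
    mrr = np.where(ranked_ok.any(axis=1), 1.0 / (first_hit + 1.0), 0.0)
    # macro-average over emotion classes to counter class imbalance
    classes = np.unique(q_labels)
    macro_p = np.mean([p_at_k[q_labels == c].mean() for c in classes])
    macro_mrr = np.mean([mrr[q_labels == c].mean() for c in classes])
    return macro_p, macro_mrr
```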
Table 2 suggests that our image-to-music retrieval framework substantially outperforms the text-to-music retrieval work presented in [7]. Our results are comparable to those of the speech-to-music retrieval work [8]. We argue that aligning speech and music is an easier task than aligning images and music, since speech and music belong to the same modality (audio). We recognize that these comparisons are not direct--due to the differences in modalities, datasets, and emotion label taxonomies--but they provide useful insights into the effectiveness of our approach. ### Automatic Music Tagging **Experimental Setup**: Following previous studies [26, 15], we use automatic music tagging as a downstream task to evaluate our music representations. Music tagging is a multi-label classification task that aims to predict a number of semantic binary tags for a music track. These typically describe musical attributes such as genre, instrument, and mood. We use the popular MagnaTagATune dataset [27], which consists of 25,000 music clips (around 30 seconds each) generated from 6,662 unique songs. In line with the literature [16, 26, 15], we select the top 50 most frequent tags for our evaluation. To implement music tagging, we first generate music audio embeddings using the frozen pre-trained audio component of our Emo-CLIM framework. We then train a small classification head on these music embeddings using a binary cross-entropy loss. The classification head consists of a linear layer, a BatchNorm layer, a ReLU activation, and a second linear layer. We evaluate performance using ROC-AUC and PR-AUC, averaged over all tags. **Results**: In Table 3 we present music tagging results on the MagnaTagATune dataset. For reference, we include two fully-supervised baselines: Short-Chunk CNN [16] and Harmonic CNN [17]. In addition, we show the performance of three different models that are pre-trained on a self-supervised learning (SSL) task: CLMR [26], VCMR [15], and CLAP [13]. We report results for CLMR and VCMR from their respective papers, while we implement this evaluation ourselves for CLAP, since it has not been reported before. We include these SSL baselines since their training procedures are similar to the one used in our approach. The three bottom rows show the results for Emo-CLIM, using three different audio encoder variants. Emo-CLIM performs on par with the SSL baselines, demonstrating that the emotion-aligned music embeddings are effective in capturing general music semantics in addition to affective information. ## 5 Conclusion In this work, we presented Emo-CLIM, a framework for learning an affective alignment between images and music. By using our proposed emotion-supervised contrastive loss, Emo-CLIM successfully learns an emotion-aligned image-music embedding space. We demonstrated that this joint embedding space is effective for cross-modal and intra-modal retrieval tasks, where the goal is to retrieve emotionally-relevant images or music clips. Furthermore, the learned music embeddings effectively capture general music semantics, as shown in the automatic music tagging evaluation. In the future, we will incorporate emotion class similarities into our contrastive loss in order to improve the aligned representations. In addition, we will explore the effect of adding data augmentations to our training pipeline. We also plan to investigate the impact of different emotion label mappings.
Our approach showed promising results for cross-modal affective alignment, and we hope that our work can help motivate further research in this exciting area. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline Method & Input & Dataset & P@5 & MRR \\ \hline [7] V-A Regression & \multirow{4}{*}{Text} & Alm’s & 61.00\% & 73.98\% \\ & & ISEAR & 62.18\% & 70.75\% \\ & & Alm’s & 51.56\% & 58.80\% \\ & & ISEAR & 60.19\% & 66.75\% \\ \hline [8] Triplet + EmoSim & \multirow{2}{*}{Speech} & IEMOCAP & 68\(\pm\)3\% & 76\(\pm\)3\% \\ & & RAVDESS & 67\(\pm\)2\% & 75\(\pm\)3\% \\ \hline Emo-CLIM (HCNN) & \multirow{2}{*}{Image} & \multirow{2}{*}{DeepEmotion} & 65.94\% & **78.59\%** \\ Emo-CLIM (CLAP) & & & **68.15\%** & 76.65\% \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison to other emotion-matched cross-modal music retrieval works. We show results for our two best-performing image-to-music retrieval models. We report Precision@5 (P@5) and Mean Reciprocal Rank (MRR) metrics. All works use the AudioSet music mood subset [20] for music retrieval. \begin{table} \begin{tabular}{l l|c c} \hline \hline Method & Pretraining Task & ROC-AUC & PR-AUC \\ \hline Short-Chunk CNN [16] & MT & 91.29\% & 46.14\% \\ Harmonic CNN [17] & MT & 91.27\% & 46.11\% \\ \hline CLMR [26] & SSL & 89.3\% & 36.0\% \\ VCMR [15] & SSL & 89.08\% & 35.27\% \\ CLAP [13] & SSL & 91.04\% & 39.40\% \\ \hline Emo-CLIM (HCNN) & MT \(\rightarrow\) SupCon & 89.70\% & 36.00\% \\ Emo-CLIM (HCNN\({}^{\dagger}\)) & MT \(\rightarrow\) SupCon & 88.55\% & 33.80\% \\ Emo-CLIM (CLAP) & SSL \(\rightarrow\) SupCon & **89.94\%** & **37.12\%** \\ \hline \hline \end{tabular} \end{table} Table 3: Music tagging (MT) results on MagnaTagATune [27]. First two rows show fully-supervised baselines, middle three rows show self-supervised (SSL) baselines, and the last three rows show the Emo-CLIM framework with three different audio encoder variants (\({}^{\dagger}\) denotes unfrozen model). We report ROC-AUC and PR-AUC evaluation metrics, which are the standard in music tagging.
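For completeness, a minimal PyTorch sketch of the music tagging probe described in Section 4.4. The layer sequence (linear, BatchNorm, ReLU, linear) and the binary cross-entropy objective follow the text; the hidden width is an assumption for illustration only.

```python
import torch.nn as nn

class TaggingHead(nn.Module):
    """Probe head from Section 4.4: linear -> BatchNorm -> ReLU -> linear."""
    def __init__(self, emb_dim=128, hidden_dim=512, n_tags=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_tags),
        )

    def forward(self, z):      # z: frozen music embeddings, (B, emb_dim)
        return self.net(z)     # raw logits, one per tag

head = TaggingHead()
criterion = nn.BCEWithLogitsLoss()   # multi-label: independent binary CE per tag
# training step sketch, with z_batch from the frozen Emo-CLIM audio branch:
# loss = criterion(head(z_batch), tag_targets.float())
```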
2307.02286
Subleading Effects in Soft-Gluon Emission at One-Loop in Massless QCD
We elucidate the structure of the next-to-leading-power soft-gluon expansion of arbitrary one-loop massless-QCD amplitudes. The expansion is given in terms of universal colour-, spin- and flavour-dependent operators acting on process-dependent gauge-invariant amplitudes. The result is proven using the method of expansion-by-regions and tested numerically on non-trivial processes with up to six partons. In principle, collinear-region contributions are expressed in terms of convolutions of universal jet operators and process-dependent amplitudes with two collinear partons. However, we evaluate these convolutions exactly for arbitrary processes. This is achieved by deriving an expression for the next-to-leading power expansion of tree-level amplitudes in the double-collinear limit, which is a novel result as well. Compared to previous studies, our analysis, besides being more general, yields simpler formulae that avoid derivatives of process-dependent amplitudes in the collinear limit.
Michał Czakon, Felix Eschment, Tom Schellenberger
2023-07-05T13:40:25Z
http://arxiv.org/abs/2307.02286v2
# Subleading Effects in Soft-Gluon Emission at One-Loop in Massless QCD ###### Abstract We elucidate the structure of the next-to-leading-power soft-gluon expansion of arbitrary one-loop massless-QCD amplitudes. The expansion is given in terms of universal colour-, spin- and flavour-dependent operators acting on process-dependent gauge-invariant amplitudes. The result is proven using the method of expansion-by-regions and tested numerically on non-trivial processes with up to six partons. In principle, collinear-region contributions are expressed in terms of convolutions of universal jet operators and process-dependent amplitudes with two collinear partons. However, we evaluate these convolutions exactly for arbitrary processes. This is achieved by deriving an expression for the next-to-leading power expansion of tree-level amplitudes in the double-collinear limit, which is a novel result as well. Compared to previous studies, our analysis, besides being more general, yields simpler formulae that avoid derivatives of process-dependent amplitudes in the collinear limit. QCD, Scattering Amplitudes, Higher-Order Perturbative Calculations ## 1 Introduction Soft radiation is an important topic in the context of gauge theories. In the abelian case of QED, soft photons are physical and complicate the definition of the scattering operator. In the non-abelian case, in particular in QCD, gauge bosons are not physical in the confining phase, and the presence of a mass gap protects against soft singularities. However, in the context of factorisation, which allows one to obtain cross sections as a convolution of a non-perturbative contribution and a contribution that involves massless partons, the problem appears again. In either case, abelian and non-abelian, it is necessary to have a complete description of the leading singular soft asymptotics in order to obtain meaningful theoretical predictions for scattering and decay processes. This problem has been studied since the early days of Quantum Field Theory and is nowadays textbook material. While the subleading behaviour of scattering amplitudes in the soft limit is not necessary to obtain finite cross sections, it is nevertheless of interest due to the ever-increasing precision of measurements at lepton and hadron colliders. First attempts at a general description in QED date back to the seminal works of Low [1], Burnett and Kroll [2]. Later, it was understood by Del Duca [3] that the description cannot be complete beyond tree-level without taking into account collinear virtual states. Recently, there has been a surge of interest in next-to-leading power (subleading) soft phenomena within resummation formalisms based on Soft-Collinear Effective Theory [4; 5; 6] and diagrammatic approaches to QCD [7; 8]. The main goal of the studies was the inclusion of subleading effects in the description of simple processes with a minimal number of partons, for example the Drell-Yan process. Even in this case, there were surprises and some assumptions on the structure of the soft expansion turned out to be wrong. For instance, the analysis of Ref. [3], which introduced collinear radiation into the picture, was shown to be incomplete. A different motivation for studying subleading soft effects in QED with massive fermions guided Refs. [9; 10]. Here, the idea was to use soft approximations of squared matrix elements to obtain numerically stable predictions for lepton scattering, accounting for soft photons and light leptons.
Our goal in the present publication is to understand the structure of the next-to-leading-power soft expansion at the one-loop level in QCD. On the one hand, the general expression that we derive allows one to put resummation formalisms for multi-parton processes on a firm footing. On the other hand, this expression can be used to improve the numerical stability of matrix elements in software implementations. In our analysis, we stress not only the importance of the Ward identity for the soft gluon - as did the pioneers - but also of gauge-invariance of the occurring amplitudes. This leads to astonishingly simple expressions for the building blocks of the expansion: soft and jet operators. The cancellations that we observe remove contributions that are expected to be present based on pure power-counting arguments, for example transverse-momentum derivatives of amplitudes in the collinear limit; see Refs. [11; 12]. Furthermore, we put special emphasis on a deep understanding of the collinear asymptotics. As a side effect, we obtain a novel formula for the next-to-leading power expansion of tree-level amplitudes in the collinear limit. The publication is organised as follows. In the next section we define the main concepts and recall the colour/spin-space formalism that proves to be very useful in the present context. We define spin-space operators that encapsulate all spin effects at next-to-leading power. We also take great care to define the kinematics of the soft limit to the level of detail required in a numerical application. In Section 3, we reproduce the Low-Burnett-Kroll result for QCD, and summarise its features that have been understood in previous studies. We use, nevertheless, our original notation that will prove its power at the one-loop level. In Section 4, we state our main result, present a complete proof, and describe numerical tests. Finally, in Section 5, we state our result for the next-to-leading-power collinear asymptotics. An outlook section closes the text and discusses some obvious further directions of research. ## 2 Definitions ### Processes and amplitudes Consider the process: \[0\to a_{1}(p_{1}+\delta_{1},\sigma_{1},c_{1})+\cdots+a_{n}(p_{n}+\delta_{n},\sigma_{n},c_{n})+g(q,\sigma_{n+1},c_{n+1})\;,\qquad a_{i}\in\{q,\bar{q},g\}\;. \tag{1}\] The momenta \(p_{i}+\delta_{i}\) of the _hard partons_ are defined as outgoing, and may thus have negative energy components if the respective parton is actually incoming in the physical process under consideration. The _soft gluon_ with momentum \(q\) is outgoing, \(q^{0}>0\). The momenta are assumed on-shell: \[p_{i}^{2}=(p_{i}+\delta_{i})^{2}=m_{i}^{2}\;,\qquad q^{2}=0\;, \tag{2}\] where \(m_{i}\) is the mass of parton \(i\). They are required to satisfy the momentum conservation constraints: \[\sum_{i}p_{i}=0\;,\qquad\sum_{i}\delta_{i}+q=0\;. \tag{3}\] Notice that Eqs. (2) and (3) are more restrictive than necessary for a physical process. The additional constraints are used to define the soft limit. Contrary to the hard momenta, \(p_{i}\), every component of the _momentum shifts_, \(\delta_{i}\), and every component of the soft-gluon momentum is assumed to be of the order of the _soft-expansion parameter_\(\lambda\): \[p_{i}^{\mu}=\mathcal{O}(1)=\mathcal{O}\big{(}\lambda^{0}\big{)}\gg\lambda\;,\qquad\delta_{i}^{\mu}=\mathcal{O}(\lambda)\;,\qquad q^{\mu}=\mathcal{O}(\lambda)\;. \tag{4}\] Finally, \(p_{i}\) and \(q\) are assumed well separated in angular distance. It follows from Eqs.
(2) and (4) that \(p_{i}\) is orthogonal to \(\delta_{i}\) in first approximation: \[p_{i}\cdot\delta_{i}=\mathcal{O}\big{(}\lambda^{2}\big{)}\;. \tag{5}\] The polarisation and colour state of each parton is denoted by \(\sigma_{i}\) and \(c_{i}\) respectively. The polarisation of massive partons may be defined as rest-frame spin, whereas that of massless partons corresponds to helicity. The results of this publication are equally _valid in the case of quarks of different flavours as well as in the presence of colour-neutral particles_, as long as flavour and colour summations have been appropriately adapted. A scattering amplitude, \(M_{fi}\), is defined through the decomposition of the scattering matrix \(S_{fi}\): \[S_{fi}=\delta_{fi}-i\,(2\pi)^{4}\delta^{(4)}(p_{f}-p_{i})M_{fi}\;, \tag{6}\] where \(i\) and \(f\) stand for initial and final state, and \(p_{i}\) and \(p_{f}\) for their respective momenta. Eq. (6) unambiguously defines the sign of \(M_{fi}\), which is necessary in the context of our study. For instance, Eqs. (4.33) and (4.46) contain products of amplitudes. The scattering amplitude, \(M_{g}(\{p_{i}+\delta_{i}\},q,\{\sigma_{i}\},\{c_{i}\},g_{s}^{B})\), for the process (1) is given by an expansion in the bare strong coupling constant \(g_{s}^{B}\): \[M_{g}=\big{(}g_{s}^{B}\big{)}^{n-1}\bigg{[}M_{g}^{(0)}+\frac{\mu^{-2\epsilon} \alpha_{s}^{B}}{(4\pi)^{1-\epsilon}}\,M_{g}^{(1)}+\mathcal{O}\Big{(}\big{(} \alpha_{s}^{B}\big{)}^{2}\Big{)}\bigg{]}\;,\qquad\alpha_{s}^{B}=\frac{(g_{s}^ {B})^{2}}{4\pi}\;, \tag{7}\] where \(\epsilon\) is the parameter of dimensional regularisation with space-time dimension \(d\equiv 4-2\epsilon\). Although we work with bare quantities, we have introduced the parameter \(\mu\) with unit mass dimension in order to retain the four-dimensional mass dimension of the amplitudes. In what follows, we allow for massive quarks at tree level. Hence, \(M_{g}^{(0)}\) may depend on \(m_{i}\neq 0\). On the other hand, the soft expansion of the one-loop amplitude \(M_{g}^{(1)}\) is only provided in the massless case. The definition of \(M_{g}\) is completed once we assume that the external states are four-dimensional, which corresponds to the 't Hooft-Veltman scheme within the family of dimensional-regularisation schemes. Finally, the expansion of the _reduced scattering amplitude_, \(M(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},g_{s}^{B})\), for the process obtained from (1) by removing the soft gluon and setting the momentum shifts to zero, is given by: \[M\equiv\big{(}g_{s}^{B}\big{)}^{n-2}\bigg{[}M^{(0)}+\frac{\mu^{-2\epsilon} \alpha_{s}^{B}}{(4\pi)^{1-\epsilon}}\,M^{(1)}+\mathcal{O}\Big{(}\big{(}\alpha _{s}^{B}\big{)}^{2}\Big{)}\bigg{]}\;. \tag{8}\] ### Colour/spin-space formalism The soft expansion of Sections 3 and 4 requires manipulation of the colour state of the hard partons already at \(\mathcal{O}(1/\lambda)\). Furthermore, subleading effects at order \(\mathcal{O}\big{(}\lambda^{0}\big{)}\) require the manipulation of the polarisation state of the hard partons. The formulae are simplified by the use of the colour/spin-space formalism introduced in Ref. [13]. This formalism relies on abstract basis vectors: \[|c_{1},\ldots,c_{m};\sigma_{1},\ldots,\sigma_{m}\rangle\equiv|c_{1},\ldots,c_{ m}\rangle\,\otimes\,|\sigma_{1},\ldots,\sigma_{m}\rangle\;, \tag{9}\] with either \(m=n+1\) or \(m=n\) in the present case. Accordingly, we define1: Footnote 1: The \(\mu\) dependence for \(l>0\) is implicit. 
\[\Big{|}M_{g}^{(l)}(\{p_{i}+\delta_{i}\},q)\Big{\rangle}=\sum_{\{\sigma_{i}\}}\sum_{\{c_{i}\}}M_{g}^{(l)}(\{p_{i}+\delta_{i}\},q,\{\sigma_{i}\},\{c_{i}\})\;|c_{1},\ldots,c_{n+1};\sigma_{1},\ldots,\sigma_{n+1}\rangle\;, \tag{10}\] and similarly for the reduced scattering amplitude: \[\left|M^{(l)}(\{p_{i}\})\right>\equiv\sum_{\{\sigma_{i}\}}\sum_{\{c_{i}\}}M^{(l)}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\})\ |c_{1},\ldots,c_{n};\sigma_{1},\ldots,\sigma_{n}\rangle\;. \tag{11}\] The soft expansion at one-loop order, Eq. (4.1), involves _flavour off-diagonal_ contributions that are identified by a replacement of a pair of partons, \(i\) and \(j\), in the reduced scattering amplitude w.r.t. the original process (1). The replacement does not affect the momenta of the partons. Since we do not introduce a flavour/colour/spin-space in the present publication, the respective reduced amplitude will be denoted by: \[\left|M^{(l)}(\{p_{i}\})\left|\begin{subarray}{c}a_{i}\to\bar{a}_{i}\\ a_{j}\to\bar{a}_{j}\end{subarray}\right>\;. \tag{12}\] In order to select amplitudes with a definite polarisation and colour of parton \(i\), we define the following surjection operator: \[\mathbf{P}_{i}(\sigma,c)\left|\ldots,c_{i-1},c_{i},c_{i+1},\ldots;\ldots,\sigma_{i-1},\sigma_{i},\sigma_{i+1},\ldots\right>\equiv\delta_{\sigma\sigma_{i}}\delta_{cc_{i}}\left|\ldots,c_{i-1},c_{i+1},\ldots;\ldots,\sigma_{i-1},\sigma_{i+1},\ldots\right>\;, \tag{13}\] and its specialisation: \[\mathbf{P}_{g}(\sigma,c)\equiv\mathbf{P}_{n+1}(\sigma,c)\;. \tag{14}\] Furthermore, we define an operator that exchanges the quantum numbers of \(i\) and \(j\): \[\mathbf{E}_{i,j}\left|\ldots,c_{i},\ldots,c_{j},\ldots;\ldots,\sigma_{i},\ldots,\sigma_{j},\ldots\right>\equiv\left|\ldots,c_{j},\ldots,c_{i},\ldots;\ldots,\sigma_{j},\ldots,\sigma_{i},\ldots\right>\;. \tag{15}\] ### Colour operators The leading term of the soft expansion is expressed in terms of colour-space operators \(\mathbf{T}_{i}^{c}\): \[\mathbf{T}_{i}^{c}\left|\ldots,c_{i}^{\prime},\ldots\right>\equiv\sum_{c_{i}}T_{a_{i},c_{i}c_{i}^{\prime}}^{c}\left|\ldots,c_{i},\ldots\right>\;, \tag{16}\] \[T_{g,ab}^{c}=if^{acb}\;,\qquad T_{q,ab}^{c}=T_{ab}^{c}\;,\qquad T_{\bar{q},ab}^{c}=-T_{ba}^{c}\;. \tag{17}\] The structure constants \(f^{abc}\) are defined by \(\left[\mathbf{T}_{i}^{a},\mathbf{T}_{j}^{b}\right]=if^{abc}\,\mathbf{T}_{i}^{c}\,\delta_{ij}\), while the fundamental-representation generators, \(T_{ab}^{c}\), are normalised with \(\mathrm{Tr}\big{(}T^{a}T^{b}\big{)}=T_{F}\delta^{ab}\). ### Spin operators The subleading term of the soft expansion is expressed in terms of spin-space operators \(\mathbf{K}_{i}^{\mu\nu}\): \[\mathbf{K}_{i}^{\mu\nu}\left|\ldots,\sigma_{i}^{\prime},\ldots\right>\equiv\sum_{\sigma_{i}}K_{a_{i},\sigma_{i}\sigma_{i}^{\prime}}^{\mu\nu}(p_{i})\left|\ldots,\sigma_{i},\ldots\right>\;, \tag{18}\] with matrices \(K_{a,\,\sigma\sigma^{\prime}}^{\mu\nu}\) that are anti-symmetric in \(\mu,\nu\) and hermitian in \(\sigma,\sigma^{\prime}\): \[K_{a,\,\sigma\sigma^{\prime}}^{\mu\nu}=-K_{a,\,\sigma\sigma^{\prime}}^{\nu\mu}\;,\qquad K_{a,\,\sigma\sigma^{\prime}}^{\mu\nu\,*}=K_{a,\,\sigma^{\prime}\sigma}^{\mu\nu}\;. \tag{19}\] For \(p^{0}>0\), i.e. for outgoing quarks, anti-quarks and gluons, these matrices are uniquely defined by2: Footnote 2: These relations are a consequence of the Lorentz transformation properties of free fields, see for example Section 5.1 of Ref. [14].
\[\sum_{\sigma^{\prime}}K_{q,\sigma\sigma^{\prime}}^{\mu\nu}(p)\, \bar{u}(p,\sigma^{\prime})\equiv J^{\mu\nu}(p)\,\bar{u}(p,\sigma)-\frac{1}{2} \bar{u}(p,\sigma)\,\sigma^{\mu\nu}\;,\qquad\sigma^{\mu\nu}\equiv\frac{i}{2} \big{[}\gamma^{\mu},\gamma^{\nu}\big{]}\;,\] \[\sum_{\sigma^{\prime}}K_{\bar{q},\sigma\sigma^{\prime}}^{\mu\nu}(p )\,v(p,\sigma^{\prime})\equiv\left(J^{\mu\nu}(p)+\frac{1}{2}\sigma^{\mu\nu} \right)v(p,\sigma)\;, \tag{20}\] \[\sum_{\sigma^{\prime}}K_{g,\sigma\sigma^{\prime}}^{\mu\nu}(p)\, \epsilon_{\alpha}^{*}(p,\sigma^{\prime})\equiv\left(J^{\mu\nu}(p)\,g_{\alpha \beta}+i\big{(}\delta_{\alpha}^{\mu}\delta_{\beta}^{\nu}-\delta_{\alpha}^{ \nu}\delta_{\beta}^{\mu}\big{)}\right)\epsilon^{\beta\,*}(p,\sigma)+\text{ terms proportional to }p_{\alpha}\;,\] where \(J^{\mu\nu}(p)\) is the generator of Lorentz transformations for scalar functions of \(p\): \[J^{\mu\nu}(p)=i\left(p^{\mu}\partial_{p}^{\nu}-p^{\nu}\partial_{p}^{\mu}\right) \,,\qquad\partial_{p}^{\mu}\equiv\frac{\partial}{\partial p_{\mu}}\;. \tag{21}\] Later, we will mostly use the shorthand notations: \[J^{\mu\nu}_{i}\equiv J^{\mu\nu}(p_{i})\;,\qquad\partial_{i}^{\mu}\equiv \partial_{p_{i}}^{\mu}\;. \tag{22}\] Definitions (20) may be rewritten in terms of bi-spinors and polarisation vectors of incoming partons: \[\sum_{\sigma^{\prime}}K^{\mu\nu\,*}_{\tilde{q},\sigma\sigma^{ \prime}}(p)\,u(p,\sigma^{\prime})=-\Big{(}J^{\mu\nu}(p)+\frac{1}{2}\sigma^{ \mu\nu}\Big{)}\,u(p,\sigma)\;,\] \[\sum_{\sigma^{\prime}}K^{\mu\nu\,*}_{q,\sigma\sigma^{\prime}}(p) \,\bar{v}(p,\sigma^{\prime})=-\Big{(}J^{\mu\nu}(p)\,\bar{v}(p,\sigma)-\frac{1 }{2}\bar{v}(p,\sigma)\,\sigma^{\mu\nu}\Big{)}\;, \tag{23}\] \[\sum_{\sigma^{\prime}}K^{\mu\nu\,*}_{g,\sigma\sigma^{\prime}}(p) \,\epsilon_{\alpha}(p,\sigma^{\prime})=-\Big{(}J^{\mu\nu}(p)\,g_{\alpha\beta} +i\left(\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta}-\delta^{\nu}_{\alpha}\delta ^{\mu}_{\beta}\right)\Big{)}\,\epsilon^{\beta}(p,\sigma)+\text{terms proportional to $p_{\alpha}$}\;.\] Due to our process definition (1), negative-energy momenta imply incoming partons. Hence, we define: \[K^{\mu\nu}_{a,\sigma\sigma^{\prime}}(p)\equiv-K^{\mu\nu\,*}_{\tilde{a},\sigma \sigma^{\prime}}(-p)=-K^{\mu\nu}_{\tilde{a},\sigma^{\prime}\sigma}(-p)\qquad \text{for}\qquad p^{0}<0\;. \tag{24}\] Since \(v(p,\sigma)=C\bar{u}^{T}(p,\sigma)\), with \(C\) the charge conjugation matrix, there is: \[K^{\mu\nu}_{\tilde{q},\sigma\sigma^{\prime}}(p)=K^{\mu\nu}_{q,\sigma\sigma^{ \prime}}(p)\;. \tag{25}\] This relation is consistent with the fact that spin and helicity have the same definition for particles and anti-particles. For a massive-quark bi-spinor, with spin defined in the rest-frame along the third axis, transformed with a pure boost to reach momentum \(p\) from \(p_{0}^{\mu}=(m,\mathbf{0})\), there is: \[K^{\mu\nu}_{q,\sigma\sigma^{\prime}}=\frac{\epsilon^{\mu\nu\alpha i}\left(p+p _{0}\right)_{\alpha}}{\left(p+p_{0}\right)^{0}}\,\frac{\tau^{i}_{\sigma\sigma ^{\prime}}}{2}\;, \tag{26}\] where \(\tau^{i}_{\sigma\sigma^{\prime}}\), \(i=1,2,3\) are the three Pauli matrices. For massless partons, helicity conservation implies that \(K^{\mu\nu}_{a,\sigma\sigma^{\prime}}\) is proportional to \(\delta_{\sigma\sigma^{\prime}}\). Assuming that bi-spinors and polarisation vectors for the two helicities are related by a momentum-independent anti-linear transformation, there is: \[K^{\mu\nu}_{a,\sigma\sigma^{\prime}}=\sigma\,\delta_{\sigma\sigma^{\prime}}\,K ^{\mu\nu}\;. 
\tag{27}\] Furthermore, it follows from the definitions (20) that \(p_{\mu}K^{\mu\nu}_{a,\sigma\sigma^{\prime}}=0\) for \(p^{2}=0\). Hence: \[K^{\mu\nu}=\epsilon^{\mu\nu\alpha\beta}p_{\alpha}r_{\beta}\;,\qquad\epsilon_{ 0123}\equiv+1\;, \tag{28}\] for some \(r\) that we assume to be lightlike3. In particular, if massless bi-spinors are defined along the third axis and then rotated in the direction of \(\boldsymbol{p}\equiv E\big{(}\sin(\theta)\cos(\varphi),\sin(\theta)\sin( \varphi),\cos(\theta)\big{)}=ER_{z}(\varphi)R_{y}(\theta)\boldsymbol{\hat{z}}\) with the composition of rotations \(R_{z}(\varphi)R_{y}(\theta)R_{z}(-\varphi)\), then: Footnote 3: If \(r^{2}\neq 0\), then the replacement \(r\to r^{\prime}\equiv r-r^{2}p/2r\cdot p\) does not change Eq. (28), while \(r^{\prime 2}=0\). \[K^{\mu\nu}(p)=\frac{\epsilon^{\mu\nu\alpha\beta}p_{\alpha}\bar{p}_{0\beta}}{p \cdot\bar{p}_{0}}\;,\qquad\bar{p}_{0}^{\mu}\equiv(E,0,0,-E)\;. \tag{29}\] This result is also valid for polarisation vectors defined in the spinor-helicity formalism using the same bi-spinors: \[\epsilon^{*}_{\mu}(p,\pm 1)\equiv\pm\frac{\langle p\pm|\gamma_{\mu}|k\pm \rangle}{\sqrt{2}\,\langle k\mp|p\pm\rangle}\equiv\pm\frac{\bar{u}(p,\pm\frac{1 }{2})\,\gamma_{\mu}\,u(k,\pm\frac{1}{2})}{\sqrt{2}\,\bar{u}(k,\mp\frac{1}{2})\, u(p,\pm\frac{1}{2})}\;, \tag{30}\] with an arbitrary lightlike reference vector \(k\). If either the massless bi-spinors or the polarisation vectors include an additional phase factor, e.g. \(\epsilon^{\prime*}(p,+1)\equiv\exp(i\phi(p))\,\epsilon^{*}(p,+1)\), then \(K^{\mu\nu}\) is modified as follows: \[K^{\prime\,\mu\nu}=K^{\mu\nu}+iJ^{\mu\nu}\phi(p)\;. \tag{31}\] With the spinor-helicity-formalism polarisation vectors, there is: \[\epsilon_{\mu}(p,+1)\,\epsilon^{*}_{\nu}(p,+1)\,iK^{\mu\nu}=1\;. \tag{32}\] However, because of (31), this result is valid in general. Contractions with \(K^{\mu\nu}\) can be efficiently evaluated with the help of: \[iK_{\mu\nu}=\sum_{\sigma}\text{sgn}(\sigma)\,\epsilon^{*}_{\mu}(p,\sigma)\, \epsilon_{\nu}(p,\sigma)\;, \tag{33}\] with the polarisation vectors (30) assuming \(k=r\) and \(r\) as in Eq. (28). Besides the spin operator \(\mathbf{K}^{\mu\nu}_{i}\), our results involve a simpler spin-dependent operator that gives the sign of the product of the helicities of parton \(i\) and gluon \(n+1\): \[\boldsymbol{\Sigma}_{g,i}\,|\dots,\sigma_{i},\dots,\sigma\rangle\equiv\text {sgn}(\sigma\sigma_{i})\,|\dots,\sigma_{i},\dots,\sigma\rangle\;. \tag{34}\] ### Splitting operators The soft expansion at one-loop order requires the collinear expansion of tree-level amplitudes. The latter expansion is expressed in terms of splitting operators that act non-trivially in both colour and spin space. 
The splitting operators are defined as follows: \[\langle c_{1},c_{2};\sigma_{1},\sigma_{2}|\mathbf{Split}^{(0)}_{qg\gets\,q}(k_{1},k_{2},k)|c;\sigma\rangle=-\frac{1}{2\,k_{1}\cdot k_{2}}\,T^{c_{2}}_{c_{1}c}\,\bar{u}(k_{1},\sigma_{1})\,\not{\epsilon}^{*}(k_{2},\sigma_{2})\,u(k,\sigma)\;, \tag{35}\] \[\langle c_{1},c_{2};\sigma_{1},\sigma_{2}|\mathbf{Split}^{(0)}_{qg\gets\,\bar{q}}(k_{1},k_{2},k)|c;\sigma\rangle=+\frac{1}{2\,k_{1}\cdot k_{2}}\,T^{c_{2}}_{cc_{1}}\,\bar{v}(k,\sigma)\,\not{\epsilon}^{*}(k_{2},\sigma_{2})\,v(k_{1},\sigma_{1})\;,\] (36) \[\langle c_{1},c_{2};\sigma_{1},\sigma_{2}|\mathbf{Split}^{(0)}_{q\bar{q}\gets\,g}(k_{1},k_{2},k)|c;\sigma\rangle=-\frac{1}{2\,k_{1}\cdot k_{2}}\,T^{c}_{c_{1}c_{2}}\,\bar{u}(k_{1},\sigma_{1})\,\not{\epsilon}(k,\sigma)\,v(k_{2},\sigma_{2})\;,\] (37) \[\langle c_{1},c_{2};\sigma_{1},\sigma_{2}|\mathbf{Split}^{(0)}_{gg\gets\,g}(k_{1},k_{2},k)|c;\sigma\rangle=-\frac{1}{2\,k_{1}\cdot k_{2}}\,if^{c_{1}c_{2}c}\] \[\times\big{(}+(k_{1}+k)\cdot\epsilon^{*}(k_{2},\sigma_{2})\,\epsilon^{*}(k_{1},\sigma_{1})\cdot\epsilon(k,\sigma)\] (38) \[\phantom{\times\big{(}}-(k_{2}+k)\cdot\epsilon^{*}(k_{1},\sigma_{1})\,\epsilon^{*}(k_{2},\sigma_{2})\cdot\epsilon(k,\sigma)\] \[\phantom{\times\big{(}}+(k_{2}-k_{1})\cdot\epsilon(k,\sigma)\,\epsilon^{*}(k_{1},\sigma_{1})\cdot\epsilon^{*}(k_{2},\sigma_{2})\big{)}\;.\] In order to simplify the notation, for example in (4.15) and (4.19), we also define the following operator: \[\mathbf{Split}^{(0)}_{i,n+1\,\leftarrow\,i}(p_{i},p_{n+1},p^{\prime}_{i})\,|\dots,c^{\prime}_{i},\dots,\dots,\sigma^{\prime}_{i},\dots\rangle=\\ \sum_{\sigma_{i}c_{i}}\sum_{\sigma_{n+1}c_{n+1}}\big{\langle}c_{i},c_{n+1};\sigma_{i},\sigma_{n+1}\big{|}\mathbf{Split}^{(0)}_{a_{i}a_{n+1}\leftarrow\,a^{\prime}_{i}}(p_{i},p_{n+1},p^{\prime}_{i})\big{|}c^{\prime}_{i};\sigma^{\prime}_{i}\big{\rangle}\\ \times|\dots,c_{i},\dots,c_{n+1};\dots,\sigma_{i},\dots,\sigma_{n+1}\rangle\;, \tag{39}\] where \(a^{\prime}_{i}\) is parton \(i\) corresponding to the ket on the left-hand side, and \(a_{i}\), \(a_{n+1}\) are partons \(i\), \(n+1\) corresponding to the ket on the right-hand side of (39). In general \(a^{\prime}_{i}\neq a_{i}\), as for example in (4.19). ## 3 Low-Burnett-Kroll theorem for tree-level QCD The leading and subleading term of the soft expansion, i.e. expansion in \(\lambda\), of the tree-level amplitude \(\Big{|}M_{g}^{(0)}(\{p_{i}+\delta_{i}\},q,\sigma,c)\Big{>}\) are given by the QCD generalisation [15; 16; 17] of the Low-Burnett-Kroll (LBK) theorem [1; 2] originally proven for QED4: Footnote 4: The sign in Eq. (3.2) is a consequence of our convention: we assume that the quark-gluon interaction term in the Lagrangian is \(+g_{s}^{B}\,\bar{q}\not{A}^{a}T^{a}q\).
\[\Big{|}M_{g}^{(0)}(\{p_{i}+\delta_{i}\},q)\Big{>}=\mathbf{S}^{(0)}(\{p_{i}\},\{\delta_{i}\},q)\;\Big{|}M^{(0)}(\{p_{i}\})\Big{>}+\mathcal{O}(\lambda)\;, \tag{3.1}\] \[\mathbf{P}_{g}(\sigma,c)\,\mathbf{S}^{(0)}(\{p_{i}\},\{\delta_{i}\},q)=-\sum_{i}\mathbf{T}_{i}^{c}\otimes\mathbf{S}_{i}^{(0)}(p_{i},\delta_{i},q,\sigma)\;,\] (3.2) \[\mathbf{S}_{i}^{(0)}=\frac{p_{i}\cdot\epsilon^{*}}{p_{i}\cdot q}+\frac{1}{p_{i}\cdot q}\bigg{[}\Big{(}\epsilon^{*}-\frac{p_{i}\cdot\epsilon^{*}}{p_{i}\cdot q}\,q\Big{)}\cdot\delta_{i}+p_{i}\cdot\epsilon^{*}\sum_{j}\delta_{j}\cdot\partial_{j}+\frac{1}{2}F_{\mu\nu}\Big{(}J_{i}^{\mu\nu}-\mathbf{K}_{i}^{\mu\nu}\Big{)}\bigg{]}\;, \tag{3.3}\] with: \[\langle q\sigma c|A_{\mu}^{a}(0)|0\rangle=\delta^{ca}\epsilon_{\mu}^{*}(q,\sigma)\;, \tag{3.4}\] where \(A_{\mu}^{a}(x)\) and \(F_{\mu\nu}^{a}(x)\) are the gluon field and the respective field-strength tensor, while \(|q\sigma c\rangle\) is a single-gluon state with momentum \(q\), polarisation \(\sigma\) and colour \(c\). ### Derivation and constraints Most of the terms in Eq. (3.3) are obtained by extending the eikonal approximation to one order higher in \(\lambda\). Indeed, consider the diagram of Fig. 1. The leading term as well as the first term in the square bracket of Eq. (3.3) are due to the expansion of the eikonal approximation taken with the original momentum, \(p_{i}+\delta_{i}\), of the hard parton, i.e. the outgoing quark in Fig. 1: \[\frac{(p_{i}+\delta_{i})\cdot\epsilon^{*}}{(p_{i}+\delta_{i})\cdot q}=\frac{p_{i}\cdot\epsilon^{*}}{p_{i}\cdot q}+\frac{1}{p_{i}\cdot q}\Big{(}\epsilon^{*}-\frac{p_{i}\cdot\epsilon^{*}}{p_{i}\cdot q}\,q\Big{)}\cdot\delta_{i}+\mathcal{O}(\lambda)\;. \tag{3.5}\] The second term in the square bracket in Eq. (3.3) is due to the expansion of the reduced scattering amplitude represented by the shaded circle in Fig. 1 in \(\delta_{j}\), \(j=1,\ldots,n\). The additional expansion of this amplitude in \(q\) is taken into account by the first term on the right-hand side of: \[\frac{1}{2\,p_{i}\cdot q}F_{\mu\nu}J_{i}^{\mu\nu}=\frac{p_{i}\cdot\epsilon^{*}}{p_{i}\cdot q}q\cdot\partial_{i}-\epsilon^{*}\cdot\partial_{i}\;. \tag{3.6}\] The classic LBK argument that generates the second term on the right-hand side of the above equation from the first term on the right-hand side consists in requiring the soft expansion to fulfil the (QED) Ward identity, i.e. transversality of the amplitude with respect to the soft-gluon momentum. This accounts for emissions from the internal off-shell lines, i.e. diagrams that do not have the structure of Fig. 1. Figure 1: External-emission diagram that yields a contribution to the eikonal approximation in the case of an outgoing quark. While spin effects can be obtained by explicit calculation of the expression for Fig. 1 and similarly for anti-quarks and gluons, there is a simpler argument that allows one to understand the result. From Fig. 1, we conclude that the external wave function, i.e. bi-spinor for quarks and anti-quarks or polarisation vector for gluons, does not depend on \(q\). Hence, the differential operator \(q\cdot\partial_{i}\) in Eq. (3.6) should not act on it. We thus have to subtract the action of \(J^{\mu\nu}\) on the external wave function. The result should, however, still contain a gauge-invariant amplitude with \(n\) hard partons.
Thus, the subtracted term can be at most a linear combination of amplitudes with different polarisations of the hard parton \(a_{i}\), which leads to the replacement of \(J^{\mu\nu}_{i}\) by \(J^{\mu\nu}_{i}-\mathbf{K}^{\mu\nu}_{i}\) in Eq. (3.3). The latter difference does not contain any derivatives when acting on external wave functions according to Eqs. (20). This argument has the virtue of applying at higher orders as well. In consequence, the one-loop expression for the soft operator in Eq. (4.2) also only contains the combination \(J^{\mu\nu}_{i}-\mathbf{K}^{\mu\nu}_{i}\). The soft expansion (3.1) is strongly constrained by Lorentz covariance and gauge invariance (Ward identity) as has been discussed in great detail previously in Ref. [17], albeit only in the case of pure gluon amplitudes. Here, we would like to stress once more that the process-dependent input on the r.h.s. of Eq. (3.1), i.e. the amplitude \(\big{|}M^{(0)}(\{p_{i}\})\big{>}\), is gauge invariant on its own. This is not a trivial fact, since it does not naively apply, for example, in high-energy factorisation; see Ref. [18] and references therein. In the present case, the issue of gauge invariance is entangled with the issue of defining momentum derivatives in Eq. (3.3). Indeed, the amplitude \(\big{|}M^{(0)}(\{p_{i}\})\big{>}\) must be on-shell, and it thus only depends on the spatial components of the momentum vectors. The momentum derivatives in Eq. (3.3), on the other hand, also involve the energy component. Fortunately, Eq. (3.3), hence also Eq. (3.1), is consistent with on-shellness since: \[\big{(}\sum_{j}\delta_{j}\cdot\partial_{j}\big{)}\,p_{i}^{2}=2\,\delta_{i}\cdot p_{i}=0\;,\qquad J^{\mu\nu}_{i}\,p_{i}^{2}=0\;, \tag{12}\] where we have used Eq. (5) and neglected terms of higher order in \(\lambda\). An additional difficulty arises from the fact that Eq. (3.3) involves derivatives in all of the momenta \(p_{i}\), whereas the amplitude \(\big{|}M^{(0)}(\{p_{i}\})\big{>}\) is only a function of \(n-1\) of them due to momentum conservation. Since extension of \(\big{|}M^{(0)}(\{p_{i}\})\big{>}\) away from momentum conservation is not unique, Eq. (3.1) must be consistent with momentum conservation. This is indeed the case, albeit colour conservation is required for the proof: \[\bigg{[}\mathbf{P}_{g}(\sigma,c)\mathbf{S}^{(0)}(\{p_{i}\},\{\delta_{i}\},q)\bigg{]}_{\begin{subarray}{c}\text{momentum}\\ \text{derivatives}\end{subarray}}\,\big{|}f(P)\big{>}=\bigg{(}\epsilon^{*}\cdot\frac{\partial}{\partial P}\bigg{)}\sum_{i}\mathbf{T}^{c}_{i}\;\big{|}f(P)\big{>}=0\;,\qquad P\equiv\sum_{i}p_{i}\;, \tag{13}\] where \(|f(P)\big{>}\) is invariant with respect to global gauge transformations and depends on the sum of the momenta only. The importance of this result lies in the fact that the result for the soft expansion in Eq. (3.1) remains the same even if we eliminate one of the \(p_{i}\) momenta in \(\big{|}M^{(0)}(\{p_{i}\})\big{>}\) by momentum conservation. In fact, one can eliminate different \(p_{i}\)'s in different diagrams that contribute to \(\big{|}M^{(0)}(\{p_{i}\})\big{>}\) without affecting the final result. ### Squared amplitudes While the focus of this publication lies on amplitudes, we would like to point out the simplifications that occur in the case of squared amplitudes summed over spin and colour. The first simplification is the lack of spin effects already noted in Ref. [2]. Indeed, squaring Eq.
(3.1) and keeping only terms up to \(\mathcal{O}(1/\lambda)\), leaves the following contribution containing spin operators: \[-i\sum_{ij}\frac{p_{i}^{\mu}q^{\nu}}{p_{i}\cdot q}\,\Big{<}M^{(0)}\Big{|}\mathbf{T}_{i}\cdot\mathbf{T}_{j}\otimes\big{(}\mathbf{K}_{i,\mu\nu}-\mathbf{K}^{\dagger}_{i,\mu\nu}\big{)}\Big{|}M^{(0)}\Big{>}=0\;. \tag{14}\] This contribution vanishes because of the hermiticity, (19), of the spin operators. The second simplification is the possibility [19] of including subleading soft effects through momentum shifts as follows: \[\Big{\langle}M_{g}^{(0)}(\{k_{l}\},q)\Big{|}M_{g}^{(0)}(\{k_{l}\},q)\Big{\rangle}=-\sum_{i\neq j}\Bigg{(}\frac{k_{i}\cdot k_{j}}{(k_{i}\cdot q)(k_{j}\cdot q)}-\frac{m_{i}^{2}}{2\big{(}k_{i}\cdot q\big{)}^{2}}-\frac{m_{j}^{2}}{2\big{(}k_{j}\cdot q\big{)}^{2}}\Bigg{)}\\ \times\Big{\langle}M^{(0)}(\{k_{l}+\delta_{il}\Delta_{i}+\delta_{jl}\Delta_{j}\})\Big{|}\mathbf{T}_{i}\cdot\mathbf{T}_{j}\Big{|}M^{(0)}(\{k_{l}+\delta_{il}\Delta_{i}+\delta_{jl}\Delta_{j}\})\Big{\rangle}+\mathcal{O}\big{(}\lambda^{0}\big{)}\;, \tag{20}\] with: \[\begin{split} k_{i}&\equiv p_{i}+\delta_{i}\;,\\ \Delta_{i}&\equiv\frac{1}{N_{ij}}\left[\Bigg{(}1-\frac{m_{i}^{2}\big{(}p_{j}\cdot q\big{)}}{\big{(}p_{j}\cdot p_{i}\big{)}\big{(}p_{i}\cdot q\big{)}}\Bigg{)}\,q-\frac{p_{j}\cdot q}{p_{i}\cdot p_{j}}\,p_{i}+\frac{p_{i}\cdot q}{p_{i}\cdot p_{j}}\,p_{j}\right]\;,\\ N_{ij}&\equiv 2-\frac{m_{i}^{2}\big{(}p_{j}\cdot q\big{)}}{\big{(}p_{j}\cdot p_{i}\big{)}\big{(}p_{i}\cdot q\big{)}}-\frac{m_{j}^{2}\big{(}p_{i}\cdot q\big{)}}{\big{(}p_{i}\cdot p_{j}\big{)}\big{(}p_{j}\cdot q\big{)}}\;.\end{split} \tag{21}\] Notice that the momenta in the reduced scattering amplitude in Eq. (20) satisfy momentum conservation and are on-shell up to \(\mathcal{O}(\lambda)\): \[\sum_{l}k_{l}+\delta_{il}\Delta_{i}+\delta_{jl}\Delta_{j}=0\;,\qquad\big{(}k_{l}+\delta_{il}\Delta_{i}+\delta_{jl}\Delta_{j}\big{)}^{2}=m_{l}^{2}+\mathcal{O}\big{(}\lambda^{2}\big{)}\;. \tag{22}\] In fact, it is possible to add corrections of \(\mathcal{O}\big{(}\lambda^{2}\big{)}\) to these momenta to make them exactly on-shell. ## 4 Soft expansion of massless one-loop QCD amplitudes ### Theorem The main result of this publication is the following next-to-leading-power-accurate soft expansion of a one-loop massless-QCD amplitude: \[\Big{|}M_{g}^{(1)}(\{p_{i}+\delta_{i}\},q)\Big{\rangle}=\mathbf{S}^{(0)}(\{p_{i}\},\{\delta_{i}\},q)\;\Big{|}M^{(1)}(\{p_{i}\})\Big{\rangle}\\ +\mathbf{S}^{(1)}(\{p_{i}\},\{\delta_{i}\},q)\;\Big{|}M^{(0)}(\{p_{i}\})\Big{\rangle}+\int_{0}^{1}\mathrm{d}x\sum_{i}\mathbf{J}_{i}^{(1)}(x,p_{i},q)\,\Big{|}H_{g,i}^{(0)}(x,\{p_{i}\},q)\Big{\rangle}\\ +\sum_{i\neq j}\sum_{\begin{subarray}{c}\tilde{a}_{i}\neq a_{i}\\ \tilde{a}_{j}\neq a_{j}\end{subarray}}\tilde{\mathbf{S}}_{a_{i}a_{j}\,\leftarrow\,\tilde{a}_{i}\tilde{a}_{j},\;ij}^{(1)}(p_{i},p_{j},q)\;\Big{|}M^{(0)}(\{p_{i}\})\Big{|}\genfrac{}{}{0.0pt}{}{a_{i}\,\rightarrow\,\tilde{a}_{i}}{a_{j}\,\rightarrow\,\tilde{a}_{j}}\Big{\rangle}+\int_{0}^{1}\mathrm{d}x\,\sum_{\begin{subarray}{c}i\\ a_{i}=g\end{subarray}}\tilde{\mathbf{J}}_{i}^{(1)}(x,p_{i},q)\,\Big{|}H_{g,i}^{(0)}(x,\{p_{i}\},q)\Big{\rangle}\\ +\mathcal{O}(\lambda)\;. \tag{4.1}\] The _soft operator_\(\mathbf{S}^{(1)}(\{p_{i}\},\{\delta_{i}\},q)\) is an extension of the one-loop soft current, and is given by the expansion through \(\mathcal{O}\big{(}\lambda^{0}\big{)}\) of the r.h.s.
of: \[\mathbf{P}_{g}(\sigma,c)\,\mathbf{S}^{(1)}(\{p_{i}\},\{\delta_{i}\},q)+\mathcal{O}(\lambda)=\frac{2\,r_{\mathrm{Soft}}}{\epsilon^{2}}\,\sum_{i\neq j}if^{abc}\mathbf{T}_{i}^{a}\mathbf{T}_{j}^{b}\otimes\Bigg{(}-\frac{\mu^{2}s_{ij}^{(\delta)}}{s_{iq}^{(\delta)}s_{jq}^{(\delta)}}\Bigg{)}^{\epsilon}\Bigg{[}\mathbf{S}_{i}^{(0)}(p_{i},\delta_{i},q,\sigma)\] \[+\frac{\epsilon}{1-2\epsilon}\frac{1}{p_{i}\cdot p_{j}}\Bigg{(}\frac{p_{i}^{\mu}p_{j}^{\nu}-p_{j}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q}+\frac{p_{j}^{\mu}p_{j}^{\nu}}{p_{j}\cdot q}\Bigg{)}F_{\mu\rho}(q,\sigma)\left(J_{i}-{\bf K}_{i}\right)^{\nu\rho}\Bigg{]}\;, \tag{4.2}\] with: \[s_{ij}^{(\delta)}\equiv 2\left(p_{i}+\delta_{i}\right)\cdot(p_{j}+\delta_{j})+i0^{+}\;,\qquad s_{iq}^{(\delta)}\equiv 2\left(p_{i}+\delta_{i}\right)\cdot q+i0^{+}\;,\qquad s_{jq}^{(\delta)}\equiv 2\left(p_{j}+\delta_{j}\right)\cdot q+i0^{+}\;, \tag{4.3}\] \[r_{\rm Soft}\equiv\frac{\Gamma^{3}(1-\epsilon)\Gamma^{2}(1+\epsilon)}{\Gamma(1-2\epsilon)}=1+{\cal O}(\epsilon)\;. \tag{4.4}\] For convenience, we have not expanded the factor containing \(s_{ij}^{(\delta)}\), \(s_{iq}^{(\delta)}\) and \(s_{jq}^{(\delta)}\). A strict expansion depends on: \[s_{ij}\equiv 2\,p_{i}\cdot p_{j}+i0^{+}\;,\qquad s_{iq}\equiv 2\,p_{i}\cdot q+i0^{+}\;,\qquad s_{jq}\equiv 2\,p_{j}\cdot q+i0^{+}\;, \tag{4.5}\] and on the scalar products of \(\delta_{i}\) and \(\delta_{j}\) with \(p_{i}\), \(p_{j}\) and \(q\). Finally, we notice that contractions of \({\bf K}_{i}^{\mu\nu}\) with other vectors can be conveniently evaluated with the help of Eq. (2.33). The _flavour-off-diagonal soft operator_ is given by: \[\tilde{\bf S}_{a_{i}a_{j}\leftarrow\tilde{a}_{i}\tilde{a}_{j},ij}^{(1)}(p_{i},p_{j},q)\;\big{|}\ldots,c_{i}^{\prime},\ldots,c_{j}^{\prime},\ldots;\ldots,\sigma_{i}^{\prime},\ldots,\sigma_{j}^{\prime},\ldots\big{>}\] \[=-\frac{r_{\rm Soft}}{\epsilon(1-2\epsilon)}\Bigg{(}-\frac{\mu^{2}s_{ij}}{s_{iq}s_{jq}}\Bigg{)}^{\epsilon}\sum_{\sigma c}\sum_{\sigma_{i}c_{i}}\sum_{\sigma_{j}c_{j}}\sum_{\sigma_{i}^{\prime\prime}c_{i}^{\prime\prime}}\sum_{\sigma_{j}^{\prime\prime}c_{j}^{\prime\prime}}\] \[\Bigg{\{}T_{c_{j}^{\prime\prime}c_{i}^{\prime\prime}}^{c}\,\bar{v}(p_{j},\sigma_{j}^{\prime\prime})\,\not{\epsilon}^{*}(q,p_{i},\sigma)\,u(p_{i},\sigma_{i}^{\prime\prime})\quad{\rm for}\;a_{i}=q\;{\rm or}\;\tilde{a}_{i}=\bar{q} \tag{4.6}\] \[\times\big{<}c_{i},c_{j}^{\prime\prime};\sigma_{i},\sigma_{j}^{\prime\prime}\big{|}{\bf Split}_{a_{i}\tilde{a}_{j}\leftarrow\tilde{a}_{i}}^{(0)}(p_{i},p_{j},p_{i})\big{|}c_{i}^{\prime};\sigma_{i}^{\prime}\big{>}\;\big{<}c_{j},c_{i}^{\prime\prime};\sigma_{j},\sigma_{i}^{\prime\prime}\big{|}{\bf Split}_{a_{j}\tilde{a}_{i}\leftarrow\tilde{a}_{j}}^{(0)}(p_{j},p_{i},p_{j})\big{|}c_{j}^{\prime};\sigma_{j}^{\prime}\big{>}\] \[\times|\ldots,c_{i},\ldots,c_{j},\ldots,c;\ldots,\sigma_{i},\ldots,\sigma_{j},\ldots,\sigma\big{>}\;,\] where: \[\epsilon_{\mu}^{*}(q,p_{i},\sigma)\equiv\epsilon_{\mu}^{*}(q,\sigma)-\frac{p_{i}\cdot\epsilon^{*}(q,\sigma)}{p_{i}\cdot q}\;q_{\mu}=iF_{\mu\nu}(q,\sigma)\frac{p_{i}^{\nu}}{p_{i}\cdot q}\;,\qquad\epsilon^{*}(q,p_{i},\sigma)\cdot q=\epsilon^{*}(q,p_{i},\sigma)\cdot p_{i}=0\;. \tag{4.7}\] The partons \(\tilde{\tilde{a}}_{i}\) and \(\tilde{\tilde{a}}_{j}\) are uniquely determined by flavour conservation in the splitting processes \(a_{i}\tilde{\tilde{a}}_{j}\leftarrow\tilde{a}_{i}\) and \(a_{j}\tilde{\tilde{a}}_{i}\leftarrow\tilde{a}_{j}\).
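Both (4.2) and (4.6) carry prefactors of the type \(r_{\mathrm{Soft}}\big{(}-\mu^{2}s_{ij}/(s_{iq}s_{jq})\big{)}^{\epsilon}\), whose \(\epsilon\)-expansion is easily generated symbolically. The sketch below (sympy, illustrative names) assumes all three invariants positive, in which case the \(i0^{+}\) prescription places the argument just below the negative real axis:

```python
import sympy as sp

ep = sp.symbols("epsilon")

# r_Soft of Eq. (4.4): indeed 1 + O(epsilon)
r_soft = sp.gamma(1 - ep)**3 * sp.gamma(1 + ep)**2 / sp.gamma(1 - 2 * ep)
print(sp.series(r_soft, ep, 0, 3))

# for s_ij, s_iq, s_jq > 0:  (-X - i0)^ep = X^ep * exp(-i*pi*ep) with X > 0
L = sp.symbols("L")   # L = log(mu^2 s_ij / (s_iq s_jq))
factor = sp.exp(ep * (L - sp.I * sp.pi))
# double pole plus the imaginary parts generated by the branch cut:
print(sp.series(2 * r_soft * factor / ep**2, ep, 0, 1))
```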
The contribution corresponds to the emission of a soft quark-anti-quark pair, which then produces the soft gluon as depicted in Fig. 2. Finally, we notice that due to chirality and angular-momentum conservation, there is: \[{\rm sgn}(\sigma_{i})={\rm sgn}(\sigma_{i}^{\prime})={\rm sgn}(\sigma_{i}^{\prime\prime})=-{\rm sgn}(\sigma_{j}^{\prime\prime})=-{\rm sgn}(\sigma_{j}^{\prime})=-{\rm sgn}(\sigma_{j})\;. \tag{4.8}\] The _jet operator_\({\bf J}_{i}^{(1)}(x,p_{i},q)\) is given by: \[{\bf P}_{g}(\sigma,c)\,{\bf J}_{i}^{(1)}(x,p_{i},q) =\frac{\Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}\big{(}x(1-x)\big{)}^{-\epsilon}\sum_{\sigma^{\prime}c^{\prime}}\epsilon_{\mu}^{*}(q,p_{i},\sigma)\epsilon_{\nu}(p_{i},\sigma^{\prime})\,{\bf P}_{g}(\sigma^{\prime},c^{\prime})\] \[\qquad\times\Bigg{[}\bigg{(}{\bf T}_{i}^{c}{\bf T}_{i}^{c^{\prime}}+\frac{1}{x}if^{cdc^{\prime}}{\bf T}_{i}^{d}\bigg{)}\otimes\big{(}(x-2)\,g^{\mu\nu}+\big{(}1+2\dim(a_{i})\big{)}\,x\,i{\bf K}_{i}^{\mu\nu}\big{)}\Bigg{]} \tag{4.9}\] \[=\frac{\Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}\big{(}x(1-x)\big{)}^{-\epsilon}\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon(p_{i},-\sigma)\] \[\qquad\times\sum_{c^{\prime}}{\bf P}_{g}(-\sigma,c^{\prime})\Bigg{[}\bigg{(}{\bf T}_{i}^{c}{\bf T}_{i}^{c^{\prime}}+\frac{1}{x}if^{cdc^{\prime}}{\bf T}_{i}^{d}\bigg{)}\otimes\big{(}-2+x\big{(}1+{\bf\Sigma}_{g,i}\big{)}\big{)}\Bigg{]}\;,\] where \(\dim(a_{i})\) is the mass dimension of the wave function of parton \(i\), \(\dim(q)=\dim(\bar{q})=\nicefrac{{1}}{{2}}\) and \(\dim(g)=0\). The second equality follows from Eqs. (2.27) and (2.32): \[\epsilon^{*}_{\mu}(q,p_{i},\sigma)\epsilon_{\nu}(p_{i},\sigma^{\prime})\,iK^{\mu\nu}_{a_{i},\sigma_{i}\sigma_{i}^{\prime}}(p_{i})=-\sigma\delta_{-\sigma\sigma^{\prime}}\,\sigma_{i}\delta_{\sigma_{i}\sigma_{i}^{\prime}}\,\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon(p_{i},-\sigma)\;, \tag{4.10}\] because \(\epsilon^{*}(q,p_{i},\sigma)\) has helicity \(\sigma\) as a polarisation vector for \(q\) and helicity \(-\sigma\) as a polarisation vector for \(p_{i}\). This can be proven in the rest-frame of \(q+p_{i}\), where a clockwise rotation around \(q\) is equivalent to an anti-clockwise rotation around \(p_{i}\). The jet operator of a gluon, \(a_{i}=g\), is not symmetric w.r.t. the gluons \(i\) and \(n+1\). On the other hand, it is given by the same expression as that of the (anti-)quark up to the factor depending on \(\dim(a_{i})\). In fact, because of Eq. (2.27), the spin-dependent parts of the (anti-)quark and gluon jet operators are numerically identical. This is not a coincidence, but rather a consequence of a hidden supersymmetry. Indeed, if the quark field transformed with the adjoint representation of the gauge group, then it could belong to the same superfield as the gluon, and the diagrams that enter the calculation of the jet operator for a quark and for a gluon would be related by supersymmetry. The missing symmetry of the gluon jet operator, on the other hand, is restored in the convolution with the symmetric collinear-gluon amplitude (4.14).
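The helicity algebra behind Eq. (4.10) is easy to check numerically. The following NumPy sketch builds \(K^{\mu\nu}\) from Eq. (2.29) for a generic massless momentum and verifies the normalisation (2.32); the relative sign chosen between the two transverse components of the polarisation vector is the one required by consistency with \(\epsilon_{0123}=+1\), and with the opposite (perhaps more familiar) choice the contraction evaluates to \(-1\), so a small check of this kind is a useful guard before implementing Eq. (2.33) in code:

```python
import itertools
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])            # mostly-minus metric

# Levi-Civita with epsilon_{0123} = +1; raising all four indices flips the sign
eps_lower = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    p_, sign = list(perm), 1
    for a in range(4):                          # parity via inversion count
        for b in range(a + 1, 4):
            if p_[a] > p_[b]:
                sign = -sign
    eps_lower[perm] = sign
eps_upper = np.einsum("abcd,ae,bf,cg,dh->efgh", eps_lower, g, g, g, g)

E, theta, phi = 2.0, 0.7, 1.3
n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
p = np.concatenate(([E], E * n))                # massless momentum
pbar = np.concatenate(([E], -E * n))            # \bar p_0 of Eq. (2.29), rotated with p

# K^{mu nu} of Eq. (2.29)
K = np.einsum("mnab,a,b->mn", eps_upper, g @ p, g @ pbar) / (p @ g @ pbar)

e1 = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
e2 = np.array([-np.sin(phi), np.cos(phi), 0.0])  # (e1, e2, n) right-handed
eps_plus = np.concatenate(([0.0], (e1 - 1j * e2) / np.sqrt(2.0)))

# Eq. (2.32): eps_mu(p,+1) eps*_nu(p,+1) iK^{mu nu} = 1
print(np.einsum("m,n,mn->", g @ eps_plus, g @ eps_plus.conj(), 1j * K))
```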
The _flavour-off-diagonal jet operator_\(\mathbf{\tilde{J}}_{i}^{(1)}(x,p_{i},q)\) is given by: \[\mathbf{\tilde{J}}_{i}^{(1)}(x,p_{i},q)\;\big{|}\ldots,c_{i}^{\prime},\ldots,c^{\prime};\ldots,\sigma_{i}^{\prime},\ldots,\sigma^{\prime}\big{>}=\] \[\qquad\times\big{(}(1-2x)\,g^{\mu\nu}\,\mathds{1}+2\,iK^{\mu\nu}_{q}(p_{i})\big{)}_{-\sigma^{\prime}\sigma_{i}^{\prime}}\big{|}\ldots,c_{i},\ldots,c_{\bar{i}},\ldots,\sigma_{i},\ldots,\sigma\big{>}\] \[=\frac{\Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}\big{(}x(1-x)\big{)}^{-\epsilon}\sum_{cc_{i}}\bigg{(}T^{c}_{q}T^{c_{i}}_{q}+xif^{cdc_{i}}T^{d}_{q}\bigg{)}_{\sigma_{i}^{\prime}c_{i}^{\prime}}\sum_{\sigma\sigma_{i}}\delta_{\sigma\sigma_{i}}\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon^{*}(p_{i},\sigma_{i})\] \[\qquad\times\big{(}-2x+1+\text{sgn}(\sigma_{i}\sigma^{\prime})\big{)}\,|\ldots,c_{i},\ldots,c;\ldots,\sigma_{i},\ldots,\sigma\big{>}\;. \tag{4.11}\] The operator transforms a state with \(a_{i}=q\), \(a_{n+1}=\bar{q}\) into a state with \(a_{i}=a_{n+1}=g\). The sign of the r.h.s. of Eq. (4.11) is a consequence of our convention: \[v(p,\sigma)=-u(p,-\sigma)\;, \tag{4.12}\] see Eq. (4.69). We point out that there is a crossing-like relation between \(\mathbf{J}_{i}\) and \(\mathbf{\tilde{J}}_{i}\) which becomes apparent by comparing the r.h.s. of (4.11) with \(x\mathbf{J}_{i}(1/x,p_{i},q)\) at vanishing \(\epsilon\). Figure 2: Flavour-off-diagonal contributions described by the operator (4.6). The _collinear-gluon amplitude_\(|H^{(0)}_{g,i}(x,\{p_{i}\},q)\rangle\) is defined as follows for \(a_{i}\in\{q,\bar{q}\}\): \[\mathbf{P}_{g}(\sigma,c)\left|H^{(0)}_{g,i}(x,\{p_{i}\},q)\right>\equiv\] \[(1-x)^{-\dim(a_{i})}\mathbf{P}_{g}(\sigma,c)\left|\Delta M^{(0)}_{g}(x,\{p_{i}\},q)\right>-\frac{1}{x}\frac{q\cdot\epsilon^{*}(p_{i},\sigma)}{q\cdot p_{i}}\mathbf{T}_{i}^{c}\left|M^{(0)}(\{p_{i}\})\right>\;, \tag{4.13}\] and as follows for \(a_{i}=g\): \[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{n+1}(\sigma_{n+1},c_{n+1}) \left|H^{(0)}_{g,i}(x,\{p_{i}\},q)\right>\equiv \tag{4.14}\] \[(1-x)^{-\dim(a_{i})}\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{n+1}(\sigma_{n+1},c_{n+1})\left|\Delta M^{(0)}_{g}(x,\{p_{i}\},q)\right>\] \[-\frac{1}{x}\frac{q\cdot\epsilon^{*}(p_{i},\sigma_{n+1})}{q\cdot p_{i}}\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{T}_{i}^{c_{n+1}}\left|M^{(0)}(\{p_{i}\})\right>\] \[-\frac{1}{1-x}\frac{q\cdot\epsilon^{*}(p_{i},\sigma_{i})}{q\cdot p_{i}}\mathbf{P}_{i}(\sigma_{n+1},c_{n+1})\mathbf{T}_{i}^{c_{i}}\left|M^{(0)}(\{p_{i}\})\right>\;,\] where: \[\left|\Delta M^{(0)}_{g,i}(x,\{p_{i}\},q)\right>\equiv\lim_{l_{\perp}\to 0}\left[\left|M^{(0)}_{g}(\{k_{i}\}_{i=1}^{n},k_{g})\right>-\mathbf{Split}^{(0)}_{i,n+1\,\leftarrow\,i}(k_{i},k_{g},p_{i})\left|M^{(0)}(\{p_{i}\})\right>\,\right], \tag{4.15}\] is the subleading term of the expansion of the tree-level soft-gluon emission amplitude in the limit of the soft gluon collinear to parton \(i\) as specified by the following configuration: \[k_{g} \equiv xp_{i}+l_{\perp}-\frac{l_{\perp}^{2}}{2x}\frac{q}{p_{i}\cdot q}\;, \text{with}\qquad l_{\perp}\cdot p_{i}=l_{\perp}\cdot q=0\;, \tag{4.16}\] \[k_{i} \equiv(1-x)p_{i}-l_{\perp}-\frac{l_{\perp}^{2}}{2(1-x)}\frac{q}{p_{i}\cdot q}\;, \text{and}\qquad k_{j}\equiv p_{j}+\mathcal{O}\big{(}l_{\perp}^{2}\big{)}\;, \qquad j\neq i\;.
\tag{4.17}\] For \(a_{i}=g\), we further require that the gluon polarisation vector in the amplitude for the subtraction term and hence also in the splitting operator in (4.15) be defined with _reference vector_\(q\) yielding the helicity sum: \[\sum_{\sigma}\epsilon_{\mu}(p_{i},\sigma)\epsilon_{\nu}^{*}(p_{i},\sigma)=-g_{\mu\nu}+\frac{p_{i\,\mu}q_{\nu}+p_{i\,\nu}q_{\mu}}{p_{i}\cdot q}\;. \tag{4.18}\] Without this requirement, the collinear-gluon amplitude depends on the additional reference vector. Notice that the subtraction in (4.15) removes not only the leading collinear-singular asymptotics, but also part of the regular \(\mathcal{O}\big{(}l_{\perp}^{0}\big{)}\) term. The additional term in Eq. (4.14) w.r.t. (4.13) is necessary in order to retain symmetry w.r.t. the exchange of the gluons \(i\) and \(n+1\). The _collinear-quark amplitude_\(\left.|H^{(0)}_{\bar{q},i}(x,\{p_{i}\},q)\right>\) is given by: \[\left|H^{(0)}_{\bar{q},i}(x,\{p_{i}\},q)\right> \equiv\] \[\left(x(1-x)\right)^{-\nicefrac{{1}}{{2}}}\lim_{l_{\perp}\to 0}\left[\left|M^{(0)}_{\bar{q}}(\{k_{i}\}_{i=1}^{n},k_{g})\left|\vphantom{\frac{1}{2}}\right|_{a_{i}\to q}\right>-\mathbf{Split}^{(0)}_{i,n+1\,\leftarrow\,i}(k_{i},k_{g},p_{i})\left|M^{(0)}(\{p_{i}\})\right>\,\right]\,, \tag{4.19}\] where \(\left<c_{1},\ldots,c;\sigma_{1},\ldots,\sigma|M^{(0)}_{\bar{q}}(\{k_{i}\}_{i=1}^{n},k_{g})\left|\vphantom{\frac{1}{2}}\right.|_{a_{i}\to q}\right>\) is the amplitude for the process: \[0\to a_{1}(k_{1},\sigma_{1},c_{1})+\cdots+q(k_{i},\sigma_{i},c_{i})+\cdots+a_{n}(k_{n},\sigma_{n},c_{n})+\bar{q}(k_{g},\sigma_{n+1},c_{n+1})\;. \tag{4.20}\] If there is more than one massless quark flavour, then the last term in Eq. (4.1) includes summation over flavours. The _collinear convolutions_, i.e. integrals over \(x\), in Eq. (4.1) are evaluated explicitly in Section 4.3. ### Collinear amplitudes Although Eq. (4.1) involves convolutions of jet operators with collinear amplitudes, the \(x\)-integrals can be performed analytically, which yields an expression in terms of tree-level amplitudes independent of \(x\). In order to derive the relevant formulae, we first list the properties of the collinear amplitudes. #### Gauge invariance and Ward identity By construction, \(\left|\Delta M^{(0)}_{g,i}(x,\{p_{i}\},q)\right\rangle\) defined in Eq. (4.15) is gauge invariant, since it only involves gauge invariant amplitudes. However, it does not satisfy the naive Ward identity w.r.t. the gluon with momentum \(xp_{i}\). If we denote by \(s\) the scalar polarisation, i.e. \(\epsilon^{*}(p,\sigma=s)=p\), then: \[\lim_{l_{\perp}\to 0}\mathbf{P}_{g}(\sigma=s,c)\bigg{[}\left|M^{(0)}_{g}(\{k_{i}\}_{i=1}^{n},k_{g})\right\rangle-\mathbf{Split}^{(0)}_{i,n+1\,\leftarrow\,i}(k_{i},k_{g},p_{i})\left|M^{(0)}(\{p_{i}\})\right\rangle\bigg{]}=\\ (1-x)^{\dim(a_{i})}\mathbf{T}_{i}^{c}\left|M^{(0)}(\{p_{i}\})\right\rangle\,. \tag{4.21}\] The result is entirely due to the second term in the square bracket. It follows that the collinear-gluon amplitudes defined in Eqs. (4.13) and (4.14) satisfy the Ward identity: \[\mathbf{P}_{g}(\sigma=s,c)\left|H^{(0)}_{g,i}(x,\{p_{i}\},q)\right\rangle=0\;.
\tag{4.22}\] #### Evaluation for arbitrary \(x\) The limit in the definition (4.15) can be obtained directly from Feynman diagrams as follows: \[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{g}(\sigma,c)\left|\Delta M^{(0)}_{g,i}(x,\{p_{i}\},q)\right\rangle=\\ \left[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{g}(\sigma,c)\left|M^{(0)}_{g}(\{p_{1},\ldots,(1-x)p_{i},\ldots,p_{n}\},xp_{i})\right\rangle\right]_{\text{\scriptsize{non-singular}}}\\ -\delta_{\sigma_{i},-s_{i}\sigma}\sum_{c^{\prime}_{i}}T^{c}_{a_{i},c_{i}c^{\prime}_{i}}\begin{cases}\dfrac{\bar{u}\big{(}(1-x)p_{i},\sigma_{i}\big{)}\not{\epsilon}^{*}(p_{i},\sigma)\not{q}}{2\,p_{i}\cdot q}\,\dfrac{\partial}{\partial\bar{u}_{i}}&\text{if $a_{i}=q$}\\ \dfrac{\not{q}\not{\epsilon}^{*}(p_{i},\sigma)\,v\big{(}(1-x)p_{i},\sigma_{i}\big{)}}{2\,p_{i}\cdot q}\,\dfrac{\partial}{\partial v_{i}}&\text{if $a_{i}=\bar{q}$}\\ \dfrac{(2x-1)\,q}{p_{i}\cdot q}\cdot\dfrac{\partial}{\partial\epsilon_{i}^{*}}&\text{if $a_{i}=g$}\end{cases}\;\mathbf{P}_{i}(\sigma_{i},c^{\prime}_{i})\left|M^{(0)}(\{p_{i}\})\right\rangle\,, \tag{4.23}\] where \(s_{i}=\nicefrac{{1}}{{2}}\) if either \(a_{i}=q\) or \(a_{i}=\bar{q}\), and \(s_{i}=1\) if \(a_{i}=g\). The derivatives \(\left.\partial/\partial\psi_{i}\right.\), \(\psi_{i}\in\{\bar{u}_{i},v_{i},\epsilon_{i}^{*}\}\), remove the wave function \(\psi_{i}\) of parton \(i\) in the amplitude. The collinear-quark amplitude is obtained similarly: \[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{g}(\sigma,c)\left|H^{(0)}_{\bar{q},i}(x,\{p_{i}\},q)\right\rangle=\\ \left(x(1-x)\right)^{-\nicefrac{{1}}{{2}}}\left[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{g}(\sigma,c)\left|M^{(0)}_{\bar{q}}(\{p_{1},\ldots,(1-x)p_{i},\ldots,p_{n}\},xp_{i})\right\rangle\left|{}_{a_{i}\to q}\right]_{\text{\scriptsize{non-singular}}}\\ -\delta_{\sigma_{i},-\sigma}\sum_{c^{\prime}_{i}}T^{c^{\prime}_{i}}_{c_{i}c}\frac{2q}{p_{i}\cdot q}\cdot\frac{\partial}{\partial\epsilon_{i}^{*}}\,\mathbf{P}_{i}(\sigma_{i},c^{\prime}_{i})\left|M^{(0)}(\{p_{i}\})\right\rangle\,. \tag{4.24}\] #### Small-\(x\) expansion
#### Dependence on \(x\) It follows from the definitions Eqs. (4.13), (4.14) together with Eq. (4.23) evaluated in Feynman gauge that the collinear-gluon amplitudes are not only rational in \(x\) but can be reduced by partial fractioning to the form: \[\begin{split}\Big{|}H^{(0)}_{g,i}(x,\{p_{i}\},q)\Big{\rangle}&=\left(\frac{1}{x}+\dim(a_{i})\right)\Big{|}S^{(0)}_{g,i}(\{p_{i}\},q)\Big{\rangle}+\Big{|}C^{(0)}_{g,i}(\{p_{i}\},q)\Big{\rangle}+\frac{x}{1-x}\,\Big{|}\bar{S}^{(0)}_{g,i}(\{p_{i}\},q)\Big{\rangle}\\ &\quad+\sum_{I}\left(\frac{1}{x_{I}-x}-\frac{1}{x_{I}}\right)\Big{|}R^{(0)}_{g,i,I}(\{p_{i}\})\Big{\rangle}+x\,\Big{|}L^{(0)}_{g,i}(\{p_{i}\},q)\Big{\rangle}\;,\end{split} \tag{4.28}\] where the sum in the second line is taken over subsets: \[I\subset\{1,\ldots,n\}\backslash\{i\}\;,\qquad 2\leqslant|I|<n-2\;, \tag{4.29}\] with: \[x_{I}\equiv-\frac{P_{I}^{2}+i0^{+}}{2\,p_{i}\cdot P_{I}}\;,\qquad P_{I}\equiv\sum_{j\in I}p_{j}\;. \tag{4.30}\] The _soft-pole_ and _constant_ contributions, \(\,|S^{(0)}_{g,i}(\{p_{i}\},q)\rangle\) and \(\,|C^{(0)}_{g,i}(\{p_{i}\},q)\rangle\), follow from Eq. (4.25): \[\mathbf{P}_{g}(\sigma,c)\,\left|S^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle=-\sum_{j\neq i}\mathbf{T}_{j}^{c}\left(\frac{p_{j}}{p_{j}\cdot p_{i}}-\frac{q}{q\cdot p_{i}}\right)\cdot\epsilon^{*}(p_{i},\sigma)\,\Big{|}M^{(0)}(\{p_{i}\})\Big{\rangle}\;, \tag{4.31}\] \[\mathbf{P}_{g}(\sigma,c)\,\left|C^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle=\\ -\sum_{j\neq i}\mathbf{T}_{j}^{c}\otimes\left(\frac{p_{i\mu}\epsilon^{*}_{\nu}(p_{i},\sigma)}{p_{j}\cdot p_{i}}\big{(}p_{j}^{\mu}\partial_{i}^{\nu}-p_{j}^{\nu}\partial_{i}^{\mu}+iJ_{j}^{\mu\nu}-i\mathbf{K}_{j}^{\mu\nu}\big{)}+\frac{q_{\mu}\epsilon^{*}_{\nu}(p_{i},\sigma)}{q\cdot p_{i}}\,i\mathbf{K}_{i}^{\mu\nu}\right)\Big{|}M^{(0)}(\{p_{i}\})\Big{\rangle}\;. \tag{4.32}\] The _residue_ contributions, \(\,|R^{(0)}_{g,i,I}(\{p_{i}\})\rangle\), correspond to poles (see Footnote 5) of internal propagators that carry momentum \(P_{I}+x\,p_{i}\) in the first term on the r.h.s. of Eq. (4.23) as illustrated in Fig. 3: \[\Big{\langle}c_{1},\ldots,c_{n+1};\sigma_{1},\ldots,\sigma_{n+1}\Big{|}R^{(0)}_{g,i,I}(\{p_{i}\})\Big{\rangle}=\\ \big{(}1-x_{I}\big{)}^{-\dim(a_{i})}\,\frac{1}{2p_{i}\cdot P_{I}}\sum_{\sigma c}M^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\,\overline{M}^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\;. \tag{4.33}\] Footnote 5: If massive colour-neutral particles, e.g. electroweak gauge bosons, were included in the theory then the value of \(x_{I}\) would have to be modified to include the mass of the intermediate particle. \(M^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\) and \(\overline{M}^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\) are the tree-level amplitudes for the respective processes: \[0\to\sum_{j\in I}a_{j}(p_{j},\sigma_{j},c_{j})+g(x_{I}\,p_{i},\sigma_{n+1},c_{n+1})+b(-P_{I}-x_{I}\,p_{i},\sigma,c)\qquad\text{and} \tag{4.34}\] \[0\to\sum_{\begin{subarray}{c}j\not\in I\\ j\neq i\end{subarray}}a_{j}(p_{j},\sigma_{j},c_{j})+a_{i}((1-x_{I})\,p_{i},\sigma_{i},c_{i})+\bar{b}(P_{I}+x_{I}\,p_{i},-\sigma,c)\;, \tag{4.35}\] where parton \(b\) is determined by flavour conservation, while \(\bar{b}\) is its anti-particle. If the flavour constraint cannot be met, then the contribution for the given \(I\) vanishes by definition. Figure 3: Class of diagrams that yields a residue contribution to the collinear-gluon amplitude. Detailed description in text following Eq. (4.33).
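At its core, Eq. (4.28) is a statement about partial fractioning of a function that is rational in \(x\), with simple poles at \(x=0\), \(x=1\) and \(x=x_{I}\) plus a linear remainder. The following symbolic sketch is our own toy illustration: the numerical coefficients and the value of \(x_{I}\) are arbitrary stand-ins, and `sympy` is assumed to be available.

```python
import sympy as sp

x = sp.symbols('x')
xI = sp.Rational(-3, 7)            # stand-in value of x_I, Eq. (4.30), for one subset I
S, C, Sb, R, L = 2, 3, 5, 7, 11    # stand-ins for the coefficient amplitudes
dim_ai = 1                         # dim(a_i) for a gluon

# x-dependence asserted by Eq. (4.28), written for a single residue contribution
H = ((sp.Rational(1)/x + dim_ai)*S + C + x/(1 - x)*Sb
     + (1/(xI - x) - 1/xI)*R + x*L)

# Collapsing to a single rational function and partial fractioning again
# recovers exactly the pole basis of Eq. (4.28).
recovered = sp.apart(sp.together(H), x)
print(recovered)
print(sp.simplify(recovered - H) == 0)   # True
```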
The _anti-soft-pole_ contribution, \(\left|\bar{S}^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\), is given by: \[\left|\bar{S}^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle=\mathbf{E}_{i,n+1}\begin{cases}\displaystyle\sum_{j\neq i}\mathbf{Split}^{(0)}_{j,n+1\,\leftarrow\,j}(p_{j},p_{i},p_{j})\,\left|M^{(0)}(\{p_{i}\})\right\rangle\Big{|}^{\,a_{i}\to g}_{\,a_{j}\to\tilde{a}_{j}}&\text{for }a_{i}\in\{q,\bar{q}\}\;,\\[2ex] \Big{|}S^{(0)}_{g,i}(\{p_{i}\},q)\Big{\rangle}&\text{for }a_{i}=g\;,\end{cases} \tag{4.36}\] where the splitting operator corresponds to the transition \(a_{j}a_{i}\leftarrow\tilde{a}_{j}\). The result for \(a_{i}\in\{q,\bar{q}\}\) is given by (4.33) in the special case \(|I|=n-2\) where: \[\lim_{x_{I}\to 1}\Big{\langle}\ldots,c^{\prime}_{i},c^{\prime}_{j};\ldots,\sigma^{\prime}_{i},\sigma^{\prime}_{j}\Big{|}M^{(0)}_{I_{j}}(\{p_{i}\})\Big{\rangle}=\Big{\langle}\ldots,c^{\prime}_{i},\ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots,\sigma^{\prime}_{j},\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{\rangle}\Big{|}_{a_{j}\to\tilde{a}_{j}}\;, \tag{4.37}\] \[\lim_{x_{I}\to 1}\frac{\big{(}1-x_{I}\big{)}^{1-\dim(a_{i})}}{\big{(}-P_{I_{j}}-x_{I}p_{i}\big{)}^{2}}\,\Big{\langle}c_{j},c_{i},c^{\prime}_{j};\sigma_{j},\sigma_{i},\sigma^{\prime}_{j}\Big{|}\overline{M}^{(0)}_{I_{j}}(\{p_{i}\})\Big{\rangle}=\\ \lim_{x_{I}\to 1}\big{(}1-x_{I}\big{)}^{1-\dim(a_{i})}\,\big{\langle}c_{j},c_{i};\sigma_{j},\sigma_{i}|\mathbf{Split}^{(0)}_{a_{j}a_{i}\,\leftarrow\,\tilde{a}_{j}}(p_{j},(1-x_{I})\,p_{i},p_{j})\big{|}c^{\prime}_{j},\sigma^{\prime}_{j}\big{\rangle}=\\ \big{\langle}c_{j},c_{i};\sigma_{j},\sigma_{i}|\mathbf{Split}^{(0)}_{a_{j}a_{i}\,\leftarrow\,\tilde{a}_{j}}(p_{j},p_{i},p_{j})\big{|}c^{\prime}_{j},\sigma^{\prime}_{j}\big{\rangle}\;, \tag{4.38}\] with: \[I_{j}\equiv\{1,\ldots,n\}\backslash\{i,j\}\;,\qquad P_{I_{j}}=-p_{i}-p_{j}\;,\qquad\tilde{a}_{j}\equiv b\;. \tag{4.39}\] In principle, the result for \(a_{i}=g\) can be obtained with the above method as well. However, the second equality in (4.38) does not apply for \(a_{j}=\tilde{a}_{j}=g\): \[\lim_{x_{I}\to 1}\big{(}1-x_{I}\big{)}\,\big{\langle}c_{j},c_{i};\sigma_{j},\sigma_{i}\big{|}\mathbf{Split}^{(0)}_{gg\leftarrow\,g}(p_{j},(1-x_{I})\,p_{i},p_{j})\big{|}c^{\prime}_{j},\sigma^{\prime}_{j}\big{\rangle}\neq\\ \big{\langle}c_{j},c_{i};\sigma_{j},\sigma_{i}\big{|}\mathbf{Split}^{(0)}_{gg\leftarrow\,g}(p_{j},p_{i},p_{j})\big{|}c^{\prime}_{j},\sigma^{\prime}_{j}\big{\rangle}\;. \tag{4.40}\] Instead, the three splitting operators (2.35), (2.36) and (2.38) yield eikonal factors. Moreover, in order to obtain the complete anti-soft pole contribution, it is still necessary to include the contribution of the last term in Eq. (4.14). These difficulties may be overcome by using the symmetry of the collinear-gluon amplitude w.r.t. the exchange of the gluons \(i\) and \(n+1\), which straightforwardly yields (4.36). Finally, the _linear_ contribution, \(\left|L^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\), vanishes for \(a_{i}\in\{q,\bar{q}\}\), while for \(a_{i}=g\) it is again determined by the symmetry of the collinear-gluon amplitude w.r.t.
the exchange of the gluons \(i\) and \(n+1\): \[\begin{split}\left|L^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle&=\left|\bar{S}^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle-\left|S^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle+\left|\bar{C}^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle-\left|C^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\\ &+\frac{1}{2}\sum_{I}\left(\frac{1}{x_{I}}+\frac{1}{1-x_{I}}\right)\Big{(}\left|R^{(0)}_{g,i,I}(\{p_{i}\})\right\rangle-\left|\bar{R}^{(0)}_{g,i,I}(\{p_{i}\})\right\rangle\Big{)}\;,\end{split} \tag{4.41}\] where: \[\left|\bar{C}^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle=\mathbf{E}_{i,n+1}\,\left|C^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\;,\qquad\left|\bar{R}^{(0)}_{g,i,I}(\{p_{i}\})\right\rangle=\mathbf{E}_{i,n+1}\,\left|R^{(0)}_{g,i,I}(\{p_{i}\})\right\rangle\;. \tag{4.42}\] The \(x\)-dependence of the collinear-quark amplitude is given by: \[\begin{split}\left|H^{(0)}_{\bar{q},i}(x,\{p_{i}\},q)\right\rangle&=\frac{1}{x}\left|S^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle+\left|C^{(0)}_{\bar{q},i}(\{p_{i}\},q)\right\rangle+\frac{x}{1-x}\left|\bar{S}^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle\\ &\qquad\qquad+\sum_{I}\left(\frac{1}{x_{I}-x}-\frac{1}{x_{I}}\right)\left|R^{(0)}_{\bar{q},i,I}(\{p_{i}\})\right\rangle\;.\end{split} \tag{4.43}\] The soft-pole and anti-soft pole contributions are given by a similar expression to (4.36) for the case \(a_{i}\in\{q,\bar{q}\}\): \[\left|S^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle=\qquad\quad\sum_{j\neq i}\mathbf{Split}^{(0)}_{j,n+1\leftarrow\,j}(p_{j},p_{i},p_{j})\,\left|M^{(0)}(\{p_{i}\})\right\rangle\Big{|}^{\,a_{i}\to q}_{\,a_{j}\to\tilde{a}_{j}}\;, \tag{4.44}\] \[\left|\bar{S}^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle=\mathbf{E}_{i,n+1}\sum_{j\neq i}\mathbf{Split}^{(0)}_{j,n+1\leftarrow\,j}(p_{j},p_{i},p_{j})\,\left|M^{(0)}(\{p_{i}\})\right\rangle\Big{|}^{\,a_{i}\to\bar{q}}_{\,a_{j}\to\tilde{a}_{j}}\;. \tag{4.45}\] The splitting operator in Eq. (4.44) corresponds to the transition \(a_{j}\bar{q}\leftarrow\tilde{a}_{j}\), while that in Eq. (4.45) to \(a_{j}q\leftarrow\tilde{a}_{j}\). The constant contribution, \(\left|C^{(0)}_{\bar{q},i}(\{p_{i}\},q)\right\rangle\), corresponds to the subleading term of the soft-anti-quark expansion of the collinear-quark amplitude. An expression for this term analogous to the LBK theorem is not yet known. Hence, it has to be evaluated by using the direct expression Eq. (4.24) at a single convenient point. The residue contributions are obtained in analogy to Eq.
(4.33): \[\begin{split}\Big{\langle}c_{1},\ldots,c_{n+1};\sigma_{1},\ldots,\sigma_{n+1}\Big{|}R^{(0)}_{\bar{q},i,I}(\{p_{i}\})\Big{\rangle}=\\ \big{(}x_{I}(1-x_{I})\big{)}^{-\nicefrac{{1}}{{2}}}\,\frac{1}{2p_{i}\cdot P_{I}}\sum_{\sigma c}\,M^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\,\bar{M}^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\;.\end{split} \tag{4.46}\] \(M^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\) and \(\bar{M}^{(0)}_{I}(\{p_{i}\},\{\sigma_{i}\},\{c_{i}\},\sigma,c)\) are now the tree-level amplitudes for the respective processes: \[\begin{split}0&\rightarrow\sum_{j\in I}a_{j}(p_{j},\sigma_{j},c_{j})+\bar{q}(x_{I}\,p_{i},\sigma_{n+1},c_{n+1})+b(-P_{I}-x_{I}\,p_{i},\sigma,c)\qquad\text{and}\\ 0&\rightarrow\sum_{\begin{subarray}{c}j\not\in I\\ j\neq i\end{subarray}}a_{j}(p_{j},\sigma_{j},c_{j})+q((1-x_{I})\,p_{i},\sigma_{i},c_{i})+\bar{b}(P_{I}+x_{I}\,p_{i},-\sigma,c)\;.\end{split} \tag{4.47}\] ### Collinear convolutions The convolution of the jet operator with the collinear-gluon amplitude can be evaluated explicitly using Eqs. (4.9), (4.10) and (4.28): \[\begin{split}&\mathbf{P}_{g}(\sigma,c)\int_{0}^{1}\mathrm{d}x\,\mathbf{J}_{i}^{(1)}(x,p_{i},q)\left|H_{g,i}^{(0)}(x,\{p_{i}\},q)\right\rangle\\ &=\frac{r_{\Gamma}}{\epsilon(1-\epsilon)(1-2\epsilon)}\left(-\frac{\mu^{2}}{s_{iq}}\right)^{\epsilon}\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon(p_{i},-\sigma)\sum_{c^{\prime}}\mathbf{P}_{g}(-\sigma,c^{\prime})\\ &\quad\left\{\mathbf{T}_{i}^{c^{\prime}}\mathbf{T}_{i}^{c}\bigg{[}-\frac{1-2\epsilon}{1+\epsilon}\left(1-3\epsilon+(1+\epsilon)\mathbf{\Sigma}_{g,i}\right)\left|S_{g,i}^{(0)}\right\rangle+(1-3\epsilon-(1-\epsilon)\mathbf{\Sigma}_{g,i})\left|\bar{S}_{g,i}^{(0)}\right\rangle\right.\\ &\qquad+(2-3\epsilon+\epsilon\mathbf{\Sigma}_{g,i})\left(\left|C_{g,i}^{(0)}\right\rangle+\dim(a_{i})\left|S_{g,i}^{(0)}\right\rangle\right)-\frac{\epsilon}{2}\left(3-\mathbf{\Sigma}_{g,i}\right)\left|L_{g,i}^{(0)}\right\rangle\\ &\qquad+\sum_{I}\frac{\epsilon}{2x_{I}^{2}(1-x_{I})}\big{(}2x_{I}-2x_{I}\,\mathbf{\Sigma}_{g,i}-(2-x_{I}-x_{I}\,\mathbf{\Sigma}_{g,i})\,_{2}F_{1}(1,1-\epsilon,3-2\epsilon,1/x_{I})\big{)}\left|R_{g,i,I}^{(0)}\right\rangle\bigg{]}\\ &\quad+\mathbf{T}_{i}^{c}\mathbf{T}_{i}^{c^{\prime}}\bigg{[}\cdots+\sum_{I}\frac{\epsilon}{2x_{I}^{2}}\big{(}x_{I}+x_{I}\,\mathbf{\Sigma}_{g,i}+(2-x_{I}-x_{I}\,\mathbf{\Sigma}_{g,i})\,_{2}F_{1}(1,1-\epsilon,3-2\epsilon,1/x_{I})\big{)}\left|R_{g,i,I}^{(0)}\right\rangle\bigg{]}\right\}\;,\end{split} \tag{4.48}\] where: \[r_{\Gamma}=\frac{\Gamma^{2}(1-\epsilon)\Gamma(1+\epsilon)}{\Gamma(1-2\epsilon)}\;. \tag{4.49}\] The coefficient of the \(\epsilon\)-pole in Eq. (4.48) for (anti-)quarks and gluons is provided in Eqs. (4.78), (4.80) and (4.81) in Section 4.4. In order to approximate a finite remainder of a one-loop amplitude in the 't Hooft-Veltman scheme with Eq. (4.1), it is sufficient to know the \(\mathcal{O}\big{(}\epsilon^{0}\big{)}\) term of the Laurent expansion of Eq.
(4.48): \[\left[\mathbf{P}_{g}(\sigma,c)e^{\epsilon\gamma_{E}}\int_{0}^{1} \mathrm{d}x\,\mathbf{J}_{i}^{(1)}(x,p_{i},q)\left|H_{g,i}^{(0)}(x,\{p_{i}\},q) \right\rangle\right]_{\mathcal{O}(\epsilon^{0})}\] \[=\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon(p_{i},-\sigma)\sum_{ c^{\prime}}\mathbf{P}_{g}(-\sigma,c^{\prime})\bigg{\{}\mathbf{T}_{i}^{c^{ \prime}}\mathbf{T}_{i}^{c}\bigg{[}\left(3-\mathbf{\Sigma}_{g,i}-(1+\mathbf{ \Sigma}_{g,i})\ln\!\left(-\frac{\mu^{2}}{s_{iq}}\right)\!\right)\left|S_{g,i}^{ (0)}\right\rangle\] \[\quad+\left(3+\mathbf{\Sigma}_{g,i}+2\ln\!\left(-\frac{\mu^{2}}{ s_{iq}}\right)\right)\left(\left|C_{g,i}^{(0)}\right\rangle+\dim(a_{i})\left|S_{g,i}^{ (0)}\right\rangle\right)-\frac{1}{2}(3-\mathbf{\Sigma}_{g,i})\left|L_{g,i}^{( 0)}\right\rangle\] \[\quad-\sum_{I}\frac{1}{x_{I}}\left(1+\mathbf{\Sigma}_{g,i}-(2-x_{ I}-x_{I}\,\mathbf{\Sigma}_{g,i})\ln\!\left(1-\frac{1}{x_{I}}\right)\right)\left|R_{g,i,I}^{(0)} \right\rangle\bigg{]} \tag{4.50}\] \[+\mathbf{T}_{i}^{c}\mathbf{T}_{i}^{c^{\prime}}\bigg{[}\left(2\, \mathbf{\Sigma}_{g,i}+(3+\mathbf{\Sigma}_{g,i})\ln\!\left(-\frac{\mu^{2}}{s_{iq }}\right)\right)\left|S_{g,i}^{(0)}\right\rangle+\frac{1}{2}(3-\mathbf{\Sigma}_ {g,i})\left|\bar{S}_{g,i}^{(0)}\right\rangle\] \[\quad-\frac{1}{2}\left(9+\mathbf{\Sigma}_{g,i}+4\ln\!\left(-\frac{ \mu^{2}}{s_{iq}}\right)\right)\left(\left|C_{g,i}^{(0)}\right\rangle+\dim(a_{ i})\left|S_{g,i}^{(0)}\right\rangle\right)+\frac{1}{6}(5-\mathbf{\Sigma}_{g,i}) \left|L_{g,i}^{(0)}\right\rangle\] \[\quad+\sum_{I}\frac{1}{2x_{I}}\left(5-2x_{I}+(1-2x_{I})\mathbf{ \Sigma}_{g,i}-2(1-x_{I})(2-x_{I}-x_{I}\,\mathbf{\Sigma}_{g,i})\ln\!\left(1- \frac{1}{x_{I}}\right)\right)\left|R_{g,i,I}^{(0)}\right\rangle\bigg{]}\bigg{\}}\;,\] where we have removed the Euler-Mascheroni constant \(\gamma_{E}\) as would be done in the \(\overline{\mathrm{MS}}\) scheme. The convolution of the flavour-off-diagonal jet operator with the collinear-quark amplitude can be evaluated explicitly using Eqs. 
(4.11) and (4.43): \[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{g}(\sigma,c)\!\int_{0}^{1}\!\mathrm{d}x\,\mathbf{\tilde{J}}_{i}^{(1)}(x,p_{i},q)\left|H_{\bar{q},i}^{(0)}(x,\{p_{i}\},q)\right\rangle\] \[=\frac{r_{\Gamma}}{(1-\epsilon)(1-2\epsilon)}\left(-\frac{\mu^{2}}{s_{iq}}\right)^{\epsilon}\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon^{*}(p_{i},\sigma_{i})\sum_{\sigma^{\prime}c^{\prime}}\sum_{c^{\prime}_{i}}\mathbf{P}_{i}(-\sigma^{\prime},c^{\prime}_{i})\mathbf{P}_{n+1}(\sigma^{\prime},c^{\prime})\] \[\left\{(T_{q}^{c_{i}}T_{q}^{c})_{c^{\prime}_{i}c^{\prime}}\!\left[2\sigma_{i}\sigma^{\prime}\left|S_{\bar{q},i}^{(0)}\right\rangle+\left(\frac{1-(2-\epsilon)\sigma_{i}\sigma^{\prime}}{\epsilon}+\frac{1}{2(3-2\epsilon)}\right)\left|\bar{S}_{\bar{q},i}^{(0)}\right\rangle+\left(\sigma_{i}\sigma^{\prime}-\frac{1}{2(3-2\epsilon)}\right)\left|C_{\bar{q},i}^{(0)}\right\rangle\right.\right.\] \[\left.\left.+\sum_{I}\frac{1}{x_{I}}\left(2x_{I}^{2}-(1+2x_{I})\sigma_{i}\sigma^{\prime}+\frac{1}{2(3-2\epsilon)}+x_{I}(1-2x_{I}+2\sigma_{i}\sigma^{\prime})\,_{2}F_{1}(1,1-\epsilon,2-2\epsilon,1/x_{I})\right)\left|R_{\bar{q},i,I}^{(0)}\right\rangle\right.\right]\] \[+(T_{q}^{c}T_{q}^{c_{i}})_{c^{\prime}_{i}c^{\prime}}\!\left[\left(2\sigma_{i}\sigma^{\prime}-\frac{1+2\sigma_{i}\sigma^{\prime}}{\epsilon}\right)\left|S_{\bar{q},i}^{(0)}\right\rangle+\left(\sigma_{i}\sigma^{\prime}-\frac{1}{2(3-2\epsilon)}\right)\left|\bar{S}_{\bar{q},i}^{(0)}\right\rangle+\left(\sigma_{i}\sigma^{\prime}+\frac{1}{2(3-2\epsilon)}\right)\left|C_{\bar{q},i}^{(0)}\right\rangle\right.\] \[\left.+\sum_{I}\frac{1}{x_{I}}\Big{(}2x_{I}-2x_{I}^{2}-(1-2x_{I})\sigma_{i}\sigma^{\prime}-\frac{1}{2(3-2\epsilon)}\right.\] \[\left.\left.+\left(1-x_{I}\right)(1-2x_{I}+2\sigma_{i}\sigma^{\prime})\,_{2}F_{1}(1,1-\epsilon,2-2\epsilon,1/x_{I})\Big{)}\left|R_{\bar{q},i,I}^{(0)}\right\rangle\right]\right\}\;.
\tag{4.51}\] The \(\mathcal{O}(\epsilon^{0})\) term of the Laurent expansion is given by: \[\left[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{g}(\sigma,c)e^{\epsilon\gamma_{E}}\int_{0}^{1}\mathrm{d}x\,\mathbf{\tilde{J}}_{i}^{(1)}(x,p_{i},q)\left|H_{\bar{q},i}^{(0)}(x,\{p_{i}\},q)\right\rangle\right]_{\mathcal{O}(\epsilon^{0})}\] \[=\epsilon^{*}(q,p_{i},\sigma)\cdot\epsilon^{*}(p_{i},\sigma_{i})\sum_{\sigma^{\prime}c^{\prime}}\sum_{c^{\prime}_{i}}\mathbf{P}_{i}(-\sigma^{\prime},c^{\prime}_{i})\mathbf{P}_{n+1}(\sigma^{\prime},c^{\prime})\] \[\left\{(T_{q}^{c_{i}}T_{q}^{c})_{c^{\prime}_{i}c^{\prime}}\!\left[2\sigma_{i}\sigma^{\prime}\left|S_{\bar{q},i}^{(0)}\right\rangle+\left(\frac{19}{6}-5\sigma_{i}\sigma^{\prime}+(1-2\sigma_{i}\sigma^{\prime})\ln\!\left(-\frac{\mu^{2}}{s_{iq}}\right)\right)\left|\bar{S}_{\bar{q},i}^{(0)}\right\rangle-\left(\frac{1}{6}-\sigma_{i}\sigma^{\prime}\right)\left|C_{\bar{q},i}^{(0)}\right\rangle\right.\right.\] \[\left.\left.+\sum_{I}\!\left(\frac{1}{6x_{I}}\left(1+12x_{I}^{2}-6(1+2x_{I})\sigma_{i}\sigma^{\prime}\right)-x_{I}(1-2x_{I}+2\sigma_{i}\sigma^{\prime})\ln\!\left(1-\frac{1}{x_{I}}\right)\right)\left|R_{\bar{q},i,I}^{(0)}\right\rangle\right.\right]\] \[+(T_{q}^{c}T_{q}^{c_{i}})_{c^{\prime}_{i}c^{\prime}}\!\left[-\!\left(3+4\sigma_{i}\sigma^{\prime}+(1+2\sigma_{i}\sigma^{\prime})\ln\!\left(-\frac{\mu^{2}}{s_{iq}}\right)\right)\left|S_{\bar{q},i}^{(0)}\right\rangle-\left(\frac{1}{6}-\sigma_{i}\sigma^{\prime}\right)\left|\bar{S}_{\bar{q},i}^{(0)}\right\rangle\right.\] \[\left.+\sum_{I}\Big{(}\cdots+\left(1+x_{I}\right)(1-2x_{I}+2\sigma_{i}\sigma^{\prime})\ln\!\left(1-\frac{1}{x_{I}}\right)\Big{)}\left|R_{\bar{q},i,I}^{(0)}\right\rangle\right]\right\}\;. \tag{4.52}\] ### Proof based on the expansion-by-regions method Theorem 4.1 has been obtained by applying the expansion-by-regions method [20] (see also Refs. [21; 22]). The method is anchored in dimensional regularisation, and can be used to expand Feynman diagrams in any parameter. There are three difficulties: 1) identification of contributing regions, 2) appearance of unregulated integrals, 3) application to a large number of diagrams. Problem 1) has been solved for several standard expansions. The soft expansion has been analysed most recently in Refs. [9; 10; 11; 12] albeit for soft-photon emissions. The most important observation is the appearance of a collinear region besides the expected hard and soft regions. Although the collinear region has been anticipated already in Ref. [3], the latter analysis has been shown to be incomplete. Irrespective of the listed publications, the identification of contributing regions can nowadays be performed automatically with dedicated tools [23; 24; 25]. As far as problem 2) is concerned, it turns out that no unregulated integrals appear in the soft expansion considered here. Finally, problem 3) is alleviated by organising the contributions according to physical intuition. The three contributing regions, hard, soft and collinear, are rather classes of regions defined by a scaling of the loop momentum w.r.t. the expansion parameter. In each class, an actual region is defined by a loop-momentum routing. Actually, momentum routing is relevant in all but the hard region. The latter is defined by assuming that each component of the loop momentum is large compared to the expansion parameter. This region is the easiest to analyse. In fact, the respective Feynman integrands are obtained by Taylor expansion in the momentum shifts \(\delta_{i}\) and the soft-gluon momentum \(q\).
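The mechanics of the method can be illustrated on a standard one-dimensional toy integral; this is our own example, not taken from the text. Each region is expanded and then integrated over the full domain, the analytic regulator plays the role of dimensional regularisation, the spurious poles of the individual regions cancel in the sum, and the region sum reproduces the expansion of the exact result in the small ratio.

```python
import mpmath as mp

# Toy model: F = int_0^inf dx x^(-eps) / ((x + m2)(x + s)),  m2 << s.
# 'Soft' region (x ~ m2): expand 1/(x+s) -> 1/s.
# 'Hard' region (x ~ s):  expand 1/(x+m2) -> 1/x.
# Each region alone carries a pole ~ pi/sin(pi*eps) ~ 1/eps as eps -> 0,
# which cancels between the two regions.
mp.mp.dps = 30
eps, m2, s = mp.mpf('0.3'), mp.mpf('1e-6'), mp.mpf('1')

exact = mp.quad(lambda x: x**(-eps) / ((x + m2) * (x + s)),
                [0, m2, s, mp.inf])

soft = mp.pi / mp.sin(mp.pi * eps) * m2**(-eps) / s
hard = -mp.pi / mp.sin(mp.pi * eps) * s**(-eps) / s

print(exact)        # exact integral
print(soft + hard)  # leading-power region sum; agrees up to O(m2/s)
```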
It follows immediately that the hard-region contribution is given by the first term in Eq. (4.1). This corresponds to Eq. (3.1) upon replacement of tree-level amplitudes by their one-loop counterparts. The soft and collinear regions present more subtleties and are analysed below. One important property should already be stressed at this point. Each region has a different \(d\)-dimensional scaling w.r.t. the expansion parameter. Hence, each region is gauge-invariant on its own. We will exploit this property to make the calculations as simple as possible. The only subtle point is that some gauges, e.g. the lightcone gauge, may generate additional singularities and hence additional regions. These unphysical regions must cancel entirely upon summation of the contributions in a given class due to the gauge invariance of the original amplitude. With the choices made below, no unphysical regions appear in the first place. #### Soft regions In any soft region, the loop momentum, \(l\), is assumed to be of the order of the soft-gluon momentum, \(l^{\mu}=\mathcal{O}(\lambda)\). A particular soft region is defined by selecting a pair of external partons \(i,j\). We differentiate between flavour-diagonal, Fig. 5, and flavour-off-diagonal contributions, Fig. 6. In principle, the soft gluon may attach anywhere else on the visible lines in Figs. 5 and 6. However, a scaling argument demonstrates that the shown topologies are the only ones that yield non-vanishing integrals after expansion, since alternative topologies result in scaleless integrals. The momentum routing in the \((i,j)\)-soft region is specified in Fig. 4. The calculation is conveniently performed in the Feynman gauge. The matrix element represented by the shaded circle is expanded in \(\delta_{l}\), \(l\) and \(q\) just as in Section 3.1. In the case of flavour-off-diagonal diagrams, the expansion is trivial and amounts to setting these parameters to zero. Tensor integrals are reduced to scalar integrals with Passarino-Veltman reduction [26]. The diagrams are expressed in terms of a single non-vanishing integral: \[\begin{split} I^{\text{soft}}&=\mu^{2\epsilon}\int\frac{d^{d}l}{i\pi^{d/2}}\frac{(p_{i}+\delta_{i})\cdot(p_{j}+\delta_{j})}{[l^{2}+i0^{+}][(l+q)^{2}+i0^{+}][(p_{i}+\delta_{i})\cdot(l+q)+i0^{+}][-(p_{j}+\delta_{j})\cdot l+i0^{+}]}\\ &=\frac{r_{\text{Soft}}}{\epsilon^{2}}\frac{4s_{ij}^{(\delta)}}{s_{iq}^{(\delta)}s_{jq}^{(\delta)}}\Bigg{(}-\frac{\mu^{2}s_{ij}^{(\delta)}}{s_{iq}^{(\delta)}s_{jq}^{(\delta)}}\Bigg{)}^{\!\epsilon}\;,\end{split} \tag{4.53}\] where we have not yet expanded in \(\delta_{i}\), \(\delta_{j}\). \(r_{\text{Soft}}\) has been defined in (4.4) while the invariants \(s_{\dots}^{(\delta)}\) in (4.3). The results are summarized in Eqs. (4.2) and (4.6). They have all the desired properties: they satisfy the Ward identity w.r.t. the soft-gluon momentum, they are expressed through gauge-invariant reduced scattering amplitudes, the occurring differential operators are consistent with on-shellness and momentum conservation. As expected, each of these properties applies in a single \((i,j)\)-soft region. Notice, however, that momentum conservation requires symmetrisation w.r.t. \(i\) and \(j\) due to the fact that Eq. (4.2) is written in a non-symmetric form. #### Collinear regions A particular collinear region is defined by selecting a parton \(i\) whose momentum specifies the _collinear direction_ \(n\) with \(n\varpropto p_{i}\).
An _anti-collinear direction_ \(\bar{n}\), \(\bar{n}^{2}=0\), \(\bar{n}\nparallel n\) must also be specified. In principle, the only natural choice is \(\bar{n}\varpropto q\). In the following, we will nevertheless keep \(\bar{n}\) generic albeit normalised to conveniently satisfy \(n\cdot\bar{n}=\nicefrac{{1}}{{2}}\). An arbitrary vector \(k\) can now be decomposed as follows: \[k=k_{+}n+k_{-}\bar{n}+k_{\perp}\;,\qquad k_{\pm}\in\mathbb{R}\;,\qquad k_{\perp}\cdot n=k_{\perp}\cdot\bar{n}=0\;,\qquad k_{\perp}^{2}\leqslant 0\;,\qquad k^{2}=k_{+}k_{-}+k_{\perp}^{2}\;. \tag{4.54}\] The expanded amplitude will be calculated in the lightcone gauge with gauge vector \(\bar{n}\). The use of a physical gauge simplifies the analysis of the singularity structure of diagrams and is particularly important in the study of collinear radiation. In particular, our gauge choice yields results that do not necessitate derivatives of process-dependent scattering amplitudes. This is at variance with Ref. [12], where tests of a factorisation formula for soft-photon radiation were performed in the Feynman gauge, which led to the appearance of different jet operators than ours. Finally, the disappearance of \(\bar{n}\) from the final expressions will serve as a test of independence from the particular physical gauge chosen. The routing of the loop momentum \(l\) is specified in Fig. 7 for the three topologies characteristic of the \(i\)-collinear region. Figure 7: Routing of the loop-momentum \(l\) in the three topologies occurring in the \(i\)-collinear region. The integration measure is given by: \[d^{d}l=\frac{1}{2}\,\mathrm{d}l_{+}\,\mathrm{d}l_{-}\,\mathrm{d}^{d-2}l_{\perp}\;. \tag{4.55}\] Expansion in \(\lambda\) is performed according to: \[l_{+}=\mathcal{O}(1)\;,\qquad l_{\perp}=\mathcal{O}\Big{(}\lambda^{\nicefrac{{1}}{{2}}}\Big{)}\;,\qquad l_{-}=\mathcal{O}(\lambda)\;. \tag{4.56}\] Propagator denominators are, therefore, approximated as follows: \[\begin{split}(l+q)^{2}+i0^{+}\approx l_{+}(l_{-}+q_{-})+l_{\perp}^{2}+i0^{+}\;,\qquad(l-p_{i})^{2}+i0^{+}\approx(l_{+}-p_{i+})l_{-}+l_{\perp}^{2}+i0^{+}\;,\\ (l-p_{i}+q)^{2}+i0^{+}\approx(l_{+}-p_{i+})(l_{-}+q_{-})+l_{\perp}^{2}+i0^{+}\;.\end{split} \tag{4.57}\] Expansion of the actual propagators generates, of course, further terms polynomial in \(q_{-}\), \(l_{-}\) and \(l_{\perp}\) accompanied by higher powers of the propagator denominators. The part of the integrand represented by the shaded circle in Fig. 7 must also be expanded according to (4.56). Hence, this part depends non-trivially on \(l_{+}\), while any dependence on \(l_{-}\) and \(l_{\perp}\) is introduced through differential operators \((l_{-}\,\partial/\partial l_{-})^{k_{1}}(l_{\perp}\cdot\partial/\partial l_{\perp})^{k_{2}}\) with the derivatives evaluated at vanishing \(l_{-}\) and \(l_{\perp}\). One can factor out \(p_{i+}\) and \(q_{-}\) from the integrand term-by-term. This is achieved by the change of variable: \[l_{+}\equiv x\,p_{i+}\;, \tag{4.58}\] and the rescalings \(l_{-}\to q_{-}\,l_{-}\), \(l_{\perp}^{2}\to p_{i+}\,q_{-}\,l_{\perp}^{2}\). In consequence, expanded integrals are proportional to \(\left(p_{i+}q_{-}\right)^{-\epsilon}=\left(p_{i}\cdot q\right)^{-\epsilon}\). Furthermore, both \(p_{i+}\) and \(q_{-}\) must be present in the propagator denominators without possibility to remove them by loop-momentum shifts, or otherwise a given integral is scaleless.
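A short numerical sketch of the light-cone decomposition (4.54); the explicit choice of \(n\) and \(\bar{n}\) below is our own convenient illustration.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
mdot = lambda a, b: a @ g @ b

# Light-like collinear and anti-collinear directions with n.nbar = 1/2.
n    = np.array([0.5, 0.0, 0.0,  0.5])
nbar = np.array([0.5, 0.0, 0.0, -0.5])
assert abs(mdot(n, n)) < 1e-12 and abs(mdot(nbar, nbar)) < 1e-12
assert abs(mdot(n, nbar) - 0.5) < 1e-12

def lightcone(k):
    """Decompose k = k_+ n + k_- nbar + k_perp as in Eq. (4.54)."""
    kp = mdot(k, nbar) / mdot(n, nbar)   # k_+
    km = mdot(k, n) / mdot(n, nbar)      # k_-
    kperp = k - kp * n - km * nbar
    return kp, km, kperp

k = np.array([1.3, 0.2, -0.7, 0.4])      # an arbitrary test vector
kp, km, kperp = lightcone(k)

# Properties listed in Eq. (4.54):
assert abs(mdot(kperp, n)) < 1e-12 and abs(mdot(kperp, nbar)) < 1e-12
assert mdot(kperp, kperp) <= 0.0
assert abs(mdot(k, k) - (kp * km + mdot(kperp, kperp))) < 1e-12
print(kp, km, kperp)
```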
After expansion, integration over \(l_{-}\) can be performed by closing the integration contour in the upper complex half-plane, and taking residues at: \[\frac{l_{\perp}^{2}+i0^{+}}{-l_{+}}\;,\qquad-q_{-}+\frac{l_{\perp}^{2}+i0^{+}}{-l_{+}}\;,\qquad\frac{l_{\perp}^{2}+i0^{+}}{p_{+}-l_{+}}\;,\qquad-q_{-}+\frac{l_{\perp}^{2}+i0^{+}}{p_{+}-l_{+}}\;. \tag{4.59}\] The first two of the residues contribute only for \(l_{+}<0\), while the second two only for \(l_{+}<p_{+}\). The final integration over \(l_{\perp}\) effectively only involves \((d-2)\)-dimensional massive vacuum integrals. For this reason, any contribution odd in \(l_{\perp}\) vanishes. In the case of collinear-region contributions depicted in Figs. 8 and 9 the loop-momentum integration can be performed explicitly. In particular, denoting by \(\sigma\), \(c\) and \(\sigma_{i}\), \(c_{i}\) the helicity and colour of the soft gluon and parton \(i\) respectively, there is: [Eq. (4.60), the explicit result for the quark diagram of Fig. 8, could not be recovered from the source text.] \[\text{Fig.~9}=r_{\Gamma}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{T}_{i}^{c}\,\epsilon_{\mu}^{*}(q,p_{i},\sigma)\epsilon_{\beta}^{*}(p_{i},\sigma_{i})\\ \times\bigg{\{}-C_{A}\bigg{[}\frac{1}{1-2\epsilon}\bigg{(}\frac{1}{3-2\epsilon}\frac{g^{\mu\beta}q^{\alpha}}{p_{i}\cdot q}+\frac{1}{(1-\epsilon)\epsilon}\frac{g^{\mu\beta}\bar{n}^{\alpha}}{p_{i}\cdot\bar{n}}\bigg{)}+\frac{2}{(1-\epsilon)(1+\epsilon)\epsilon}\frac{\bar{n}^{\mu}g^{\beta\alpha}}{p_{i}\cdot\bar{n}}\bigg{]}\\ +T_{F}n_{l}\frac{2}{(1-\epsilon)(1-2\epsilon)(3-2\epsilon)}\frac{g^{\mu\beta}q^{\alpha}}{p_{i}\cdot q}\bigg{\}}\frac{\partial}{\partial\epsilon_{\alpha}^{*}(p_{i},\sigma_{i})}\,\Big{|}M^{(0)}\Big{\rangle}\;, \tag{4.61}\] with \(r_{\Gamma}\) defined in (4.49). The remaining collinear-region contributions require the knowledge of the \(x\)-dependence of the part of the integrand represented by the shaded circle in Fig. 7. It turns out that no derivatives in \(l_{-}\), \(l_{\perp}\) are needed at \(\mathcal{O}\big{(}\lambda^{0}\big{)}\), since contributions containing differential operators \((l_{-}\,\partial/\partial l_{-})^{k_{1}}(l_{\perp}\cdot\partial/\partial l_{\perp})^{k_{2}}\), \(2k_{1}+k_{2}\leqslant 2\) cancel. Hence, integration over \(l_{-}\), \(l_{\perp}\) only involves the subdiagrams depicted in Figs. 10, 11 and 12.
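The contour integration over \(l_{-}\) described above can be mimicked on a toy pair of propagator-like denominators; closing the contour in the upper half-plane, a non-vanishing result survives only for \(0<l_{+}<p_{+}\), i.e. \(x=l_{+}/p_{+}\in[0,1]\) as in Eq. (4.65) below. All numerical values in this sketch are our own arbitrary choices, with a finite \(\delta\) standing in for \(i0^{+}\).

```python
import mpmath as mp

mp.mp.dps = 20
pp, qm, T, delta = mp.mpf(1), mp.mpf('0.7'), mp.mpf('-0.3'), mp.mpf('0.05')

def integral(lp):
    # product of two denominators modelled on Eq. (4.57)
    f = lambda lm: 1 / ((lp * lm + T + 1j * delta)
                        * ((lp - pp) * (lm + qm) + T + 1j * delta))
    return mp.quad(f, [-mp.inf, -2, 0, 2, mp.inf])

def residue(lp):
    # residue of the pole l_- = -q_- - (T + i delta)/(l_+ - p_+),
    # the only one in the upper half-plane when 0 < l_+ < p_+
    lm2 = -qm - (T + 1j * delta) / (lp - pp)
    return 2j * mp.pi / ((lp * lm2 + T + 1j * delta) * (lp - pp))

print(integral(mp.mpf('0.4')), residue(mp.mpf('0.4')))  # equal, non-zero
print(integral(mp.mpf('1.5')))                          # zero: both poles below the axis
```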
The results are as follows: \[\text{Fig.~10}\equiv\mathbf{J}_{q}^{\nu,c^{\prime}}=\frac{\Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}(x(1-x))^{-\epsilon}\,\mathbf{P}_{i}(\sigma_{i},c_{i})\left(\mathbf{T}_{i}^{c}\mathbf{T}_{i}^{c^{\prime}}+\frac{1}{x}if^{cdc^{\prime}}\mathbf{T}_{i}^{d}\right)\\ \times\epsilon_{\mu}^{*}(q,p_{i},\sigma)\,\bar{u}(p_{i},\sigma_{i})\bigg{[}2g^{\mu\nu}-\frac{2\bar{n}^{\mu}p_{i}^{\nu}}{\bar{n}\cdot p_{i}}+x\left(-\gamma^{\mu}\gamma^{\nu}+\frac{\gamma^{\mu}\not{\bar{n}}\,p_{i}^{\nu}}{\bar{n}\cdot p_{i}}\right)\bigg{]}\;, \tag{4.62}\] \[\text{Fig.~11}\equiv\mathbf{J}_{g}^{\alpha\nu,c^{\prime}}=\frac{\Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}(x(1-x))^{-\epsilon}\,\mathbf{P}_{i}(\sigma_{i},c_{i})\left(\mathbf{T}_{i}^{c}\mathbf{T}_{i}^{c^{\prime}}+\frac{1}{x}if^{cdc^{\prime}}\mathbf{T}_{i}^{d}\right)\\ \times\epsilon_{\mu}^{*}(q,p_{i},\sigma)\,\epsilon_{\beta}^{*}(p_{i},\sigma_{i})\bigg{[}\left(\delta_{\rho}^{\mu}-\frac{p_{i\rho}\bar{n}^{\mu}}{p_{i}\cdot\bar{n}}\right)\left(\delta_{\sigma}^{\beta}-\frac{p_{i\sigma}\bar{n}^{\beta}}{p_{i}\cdot\bar{n}}\right)\left(g^{\rho\alpha}g^{\sigma\nu}-g^{\rho\nu}g^{\sigma\alpha}\right)\\ +x\,g^{\mu\beta}\left(g^{\nu\alpha}-\frac{p_{i}^{\nu}\bar{n}^{\alpha}+p_{i}^{\alpha}\bar{n}^{\nu}}{p_{i}\cdot\bar{n}}\right)-\frac{1}{1-x}\left(g^{\mu\alpha}-\frac{p_{i}^{\alpha}\bar{n}^{\mu}}{p_{i}\cdot\bar{n}}\right)\left(g^{\nu\beta}-\frac{p_{i}^{\nu}\bar{n}^{\beta}}{p_{i}\cdot\bar{n}}\right)\bigg{]}\;, \tag{4.63}\] \[\text{Fig.~12}\equiv\tilde{J}_{c^{\prime}c^{\prime}_{i}}=-\frac{\Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{\epsilon}(x(1-x))^{-\epsilon}\left(T^{c}T^{c_{i}}+i\,x\,f^{cdc_{i}}T^{d}\right)_{c^{\prime}c^{\prime}_{i}}\\ \times\epsilon_{\mu}^{*}(q,p_{i},\sigma)\,\epsilon_{\beta}^{*}(p_{i},\sigma_{i})\,\not{p}_{i}\left(\gamma^{\mu}\gamma^{\beta}-2x\,g^{\mu\beta}\right)\;. \tag{4.64}\] Figure 10: Subdiagrams contributing to the jet operator for an outgoing quark. Lines on the left-hand sides of the diagrams are not amputated and are represented by propagators in the integrand. Integration over \(l_{-}\), \(l_{\perp}\) is included in the expressions for the diagrams. Figure 11: Subdiagrams contributing to the jet operator for a gluon. Description as in Fig. 10. A factor of \(\nicefrac{{1}}{{2}}\) must be included in the calculation of the diagrams in order to compensate for the symmetry of the amplitude represented by the shaded circle in Fig. 7. The contributions of the residues in \(l_{-}\) at the points listed in (4.59) conspire to cancel unless: \[x\in[0,1]\;. \tag{4.65}\] Since Figs. 8 and 9 have the structure of Fig. 7, one might expect that the results presented in Eqs. (4.60) and (4.61) can be obtained by integrating \(\mathbf{J}_{q}^{\nu,c^{\prime}}\), \(\mathbf{J}_{g}^{\alpha\nu,c^{\prime}}\) and \(\tilde{J}_{c^{\prime}c^{\prime}_{i}}\) with appropriate functions of \(x\).
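Once the \((x(1-x))^{-\epsilon}\) factor of the jet subdiagrams is combined with the pole structure of Eq. (4.28), these \(x\)-integrals are elementary Beta-function-type integrals, and the explicit \(1/\epsilon\) poles of Section 4.3 arise from the \(x\to 0\) and \(x\to 1\) endpoints. A minimal numerical check of the underlying identity, evaluated at a regular value \(\epsilon<0\) where the integral converges (our own illustration; `mpmath` assumed):

```python
import mpmath as mp

# Soft-pole convolution:  int_0^1 dx (x(1-x))^(-eps) / x
#                        = Gamma(-eps) Gamma(1-eps) / Gamma(1-2eps),
# which has a simple pole, -1/eps + ..., at eps = 0.
mp.mp.dps = 25
eps = mp.mpf('-0.35')

lhs = mp.quad(lambda x: x**(-1 - eps) * (1 - x)**(-eps), [0, 1])
rhs = mp.gamma(-eps) * mp.gamma(1 - eps) / mp.gamma(1 - 2 * eps)
print(lhs, rhs)   # agree to the working precision
```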
This is indeed the case: \[\text{Fig.~8}=\int_{0}^{1}\mathrm{d}x\,\mathbf{J}_{q}^{\nu,c^{\prime}}\;\mathbf{T}_{i}^{c^{\prime}}\frac{1}{p_{i}\cdot q}\left(-\frac{1}{2}\gamma_{\nu}\not{q}-\frac{1}{x}q_{\nu}\right)\frac{\partial}{\partial\bar{u}(p_{i},\sigma_{i})}\left|M^{(0)}\right\rangle\;, \tag{4.66}\] \[\text{Fig.~9}=\int_{0}^{1}\mathrm{d}x\,\mathbf{J}_{g}^{\alpha\nu,c^{\prime}}\;\mathbf{T}_{i}^{c^{\prime}}\frac{1}{p_{i}\cdot q}\left(-(1-2x)g_{\alpha\nu}q_{\beta}-\frac{q_{\nu}g_{\alpha\beta}}{x}+\frac{q_{\alpha}g_{\nu\beta}}{1-x}\right)\frac{\partial}{\partial\epsilon_{\beta}^{*}(p_{i},\sigma_{i})}\left|M^{(0)}\right\rangle\\ +n_{l}\int_{0}^{1}\mathrm{d}x\,\mathrm{Tr}\bigg{[}\tilde{J}_{c^{\prime}c^{\prime}_{i}}\frac{\not{q}}{p_{i}\cdot q}\bigg{]}T_{c^{\prime}_{i}c^{\prime}}^{c^{\prime\prime}_{i}}\frac{q}{p_{i}\cdot q}\cdot\frac{\partial}{\partial\epsilon^{*}(p_{i},\sigma_{i}^{\prime\prime})}\mathbf{P}_{i}(\sigma_{i}^{\prime\prime},c^{\prime\prime}_{i})\left|M^{(0)}\right\rangle\;. \tag{4.67}\] The choice of the helicity \(\sigma_{i}^{\prime\prime}\) in the contribution proportional to \(n_{l}\) in Eq. (4.67) does not affect the result. The relevance of Eqs. (4.66) and (4.67) becomes apparent after consultation of the expressions for the collinear-gluon and collinear-quark amplitudes, (4.13), (4.14) and (4.19). Clearly, soft-gluon emissions from external lines are correctly accounted for by the convolutions of either \(\mathbf{J}_{q}^{\nu,c^{\prime}}\) with \(\left|H_{g,i}^{(0)}\right\rangle\), or of \(\mathbf{J}_{g}^{\alpha\nu,c^{\prime}}\) with \(\left|H_{g,i}^{(0)}\right\rangle\) and \(\tilde{J}_{c^{\prime}c^{\prime}_{i}}\) with \(\left|H_{\bar{q},i}^{(0)}\right\rangle\). In both cases, it is still necessary to remove the external wave functions of partons \(i\) and \(n+1\) from the collinear amplitudes. The convolutions thus provide the entirety of the contribution of the \(i\)-collinear region. At this point we recall what has been proven in Section 4.2, namely that \(\left|H_{g,i}^{(0)}\right\rangle\) satisfies the Ward identity w.r.t. any gluon. Hence, terms proportional to \(p_{i}^{\nu}\) in Eqs. (4.62) and (4.63), and additionally to \(p_{i}^{\alpha}\) in Eq. (4.63), vanish after contraction with the collinear-gluon amplitude. Equivalently, removing \(\bar{n}\)-dependent terms from \(\mathbf{J}_{q}^{\nu,c^{\prime}}\) and \(\mathbf{J}_{g}^{\alpha\nu,c^{\prime}}\) does not affect the \(i\)-collinear-region contribution. In consequence, our results do not depend on the anti-collinear direction and the particular physical gauge used to derive them. The result for the jet operator (4.9) for \(a_{i}=q\) now directly follows from Eqs. (4.62) and (2.20). In order to obtain (4.9) for \(a_{i}=g\), it is necessary to first transform Eq. (4.63) by exploiting the symmetry of the collinear-gluon amplitude w.r.t. gluons \(i\) and \(n+1\) together with the Jacobi identity in the form: \[\left(T_{g}^{c}T_{g}^{c^{\prime}}+\frac{1}{x}if^{cdc^{\prime}}T_{g}^{d}\right)_{c_{i}c_{i}^{\prime}}=\left(\frac{1-x}{x}\,T_{g}^{c}T_{g}^{c^{\prime}_{i}}+\frac{1}{x}if^{cdc_{i}^{\prime}}T_{g}^{d}\right)_{c_{i}c^{\prime}}\;. \tag{4.68}\] Finally, Eq. (4.11) is obtained from Eq. (4.64) with the help of the replacement: \[\not{p}_{i}=-\sum_{\sigma_{i}}v(p_{i},-\sigma_{i})\bar{u}(p_{i},\sigma_{i})\;. \tag{4.69}\] #### Spurious-pole cancellation Eq. (4.1) has been obtained with the expansion-by-regions method. Each region, i.e.
hard, \((i,j)\)-soft and \(i\)-collinear, contributes _spurious poles_ in \(\epsilon\) due to the unrestricted loop-momentum integration domain. The proof of Eq. (4.1) is therefore complete when it is shown that all spurious poles cancel. To this end, it is necessary to independently derive an expression for the singularities of the soft-gluon-emission amplitude, expand this result in the soft-gluon momentum and verify agreement with the first two terms of the Laurent expansion of Eq. (4.1). The coefficients of the singular \(\epsilon\)-expansion terms of an \(n\)-parton one-loop amplitude \(\,|M_{n}^{(1)}(\{k_{i}\})\rangle\) are contained in the \(\mathbf{I}_{n}^{(1)}\)-operator [27; 28; 29; 30; 13]: \[\left|M_{n}^{(1)}(\{k_{i}\})\right\rangle=\mathbf{I}_{n}^{(1)}(\{k_{i}\}) \left|M_{n}^{(0)}(\{k_{i}\})\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\;. \tag{4.70}\] In the purely massless case, there is: \[\mathbf{I}_{n}^{(1)}(\{k_{i}\})=-\frac{1}{\epsilon^{2}}\sum_{i}C_{i}+\frac{1} {\epsilon}\sum_{i\neq j}\mathbf{T}_{i}\cdot\mathbf{T}_{j}\ln\!\left(-\frac{ \mu^{2}}{2\,k_{i}\cdot k_{j}+i0^{+}}\right)+\frac{1}{2\epsilon}\sum_{i}\gamma _{0}^{i}+\frac{n-2}{2}\frac{\beta_{0}}{\epsilon}\;. \tag{4.71}\] The last term proportional to the \(\beta\)-function coefficient \(\beta_{0}\) is of ultraviolet origin, while the remaining terms are due to soft and collinear singularities. \(C_{i}\) is either the quadratic Casimir operator of the fundamental representation, \(C_{F}=T_{F}(N_{c}^{2}-1)/N_{c}\), \(N_{c}=3\), if \(i\) is a (anti)-quark, or of the adjoint representation, \(C_{A}=2T_{F}N_{c}\), if \(i\) is a gluon. The anomalous dimensions are given by: \[\gamma_{0}^{q}=-3C_{F}\;,\qquad\gamma_{0}^{g}=-\beta_{0}=-\frac{11}{3}C_{A}+ \frac{4}{3}T_{F}n_{l}\;. \tag{4.72}\] For the setup relevant to the present publication, there is: \[\begin{split}\left|M_{g}^{(1)}(\{p_{i}+\delta_{i}\},q)\right\rangle &=\mathbf{I}_{n+1}^{(1)}(\{p_{i}+\delta_{i}\},q)\left|M_{g}^{(0) }(\{p_{i}+\delta_{i}\},q)\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)} \\ &=\mathbf{I}_{n+1}^{(1)}(\{p_{i}+\delta_{i}\},q)\left(\mathbf{S} ^{(0)}(\{p_{i}\},\{\delta_{i}\},q)\left|M^{(0)}(\{p_{i}\})\right\rangle+ \mathcal{O}(\lambda)\right)+\mathcal{O}\big{(}\epsilon^{0}\big{)}\;,\end{split} \tag{4.73}\] with: \[\mathbf{P}_{g}(\sigma,c)\,\mathbf{I}_{n+1}^{(1)}(\{p_{i}+\delta_{ i}\},q)\,\mathbf{S}^{(0)}\left|M^{(0)}\right\rangle=\mathbf{P}_{g}(\sigma,c)\, \mathbf{S}^{(0)}\,\mathbf{I}_{n}^{(1)}(\{p_{i}\})\left|M^{(0)}\right\rangle\\ +\sum_{j}\left(\mathbf{T}_{j}^{c}\otimes\mathbf{S}_{j}^{(0)} \,\mathbf{I}_{n}^{(1)}(\{p_{i}\})-\mathbf{I}_{n}^{(1)}(\{p_{i}+\delta_{i}\}) \,\mathbf{T}_{j}^{c}\otimes\mathbf{S}_{j}^{(0)}\right.\\ \left.+\left(\frac{1}{\epsilon^{2}}C_{A}\delta^{cb}-\frac{2}{ \epsilon}\sum_{i}if^{abc}\mathbf{T}_{i}^{a}\ln\!\left(-\frac{\mu^{2}}{s_{iq}^ {(\delta)}}\right)\right)\mathbf{T}_{j}^{b}\otimes\mathbf{S}_{j}^{(0)}\right) \left|M^{(0)}\right\rangle\;. \tag{4.74}\] The r.h.s. has already been arranged to exhibit the singularities of the first term in Eq. (4.1): \[\mathbf{S}^{(0)}\,\left|M^{(1)}\right\rangle=\mathbf{S}^{(0)}\,\mathbf{I}_{n }^{(1)}\left|M^{(0)}\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\;. \tag{4.75}\] Moreover, we have only made explicit those arguments of the occurring operators that require careful consideration. Further manipulation yields: (4.76) Contrary to Eq. (4.1), Eq. (4.76) does not contain flavour-off-diagonal contributions. Hence, their poles are entirely spurious. 
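For orientation, the colour constants and anomalous dimensions entering Eqs. (4.71) and (4.72) are easily tabulated. The snippet below is our own helper, with the common normalisation \(T_{F}=\nicefrac{{1}}{{2}}\) assumed (the text fixes only the relations between the constants); it also illustrates the relation \(\beta_{0}=-\gamma_{0}^{g}\).

```python
from fractions import Fraction

NC = 3
TF = Fraction(1, 2)                  # assumed normalisation
CF = TF * (NC**2 - 1) / NC           # quadratic Casimir, fundamental: 4/3
CA = 2 * TF * NC                     # quadratic Casimir, adjoint: 3

def gamma0(parton, nl):
    """Leading-order anomalous dimensions, Eq. (4.72)."""
    if parton in ('q', 'qbar'):
        return -3 * CF
    if parton == 'g':
        return -Fraction(11, 3) * CA + Fraction(4, 3) * TF * nl
    raise ValueError(parton)

def double_pole_coefficient(partons):
    """Coefficient of -1/eps^2 in Eq. (4.71): the sum of the Casimirs C_i."""
    return sum(CA if a == 'g' else CF for a in partons)

nl = 5
beta0 = -gamma0('g', nl)             # beta_0 = -gamma_0^g, Eq. (4.72)
print(double_pole_coefficient(['q', 'qbar', 'g', 'g']), beta0)  # 26/3 23/3
```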
We begin the verification of spurious-pole cancellation with the flavour-diagonal contributions. Expansion of the soft operator (4.2) acting on the hard matrix element yields: \[\mathbf{P}_{g}(\sigma,c)\,\mathbf{S}^{(1)}\,\left|M^{(0)}\right>= \frac{2}{\epsilon^{2}}\sum_{i\neq j}if^{abc}\mathbf{T}_{i}^{a}\mathbf{T}_{j}^{b }\otimes\left(1+\epsilon\ln\!\left(-\frac{\mu^{2}s_{ij}^{(\delta)}}{s_{iq}^{( \delta)}s_{jq}^{(\delta)}}\right)\right)\!\mathbf{S}_{i}^{(0)}\left|M^{(0)}\right> \\ +\frac{2}{\epsilon}\,\sum_{i\neq j}if^{abc}\mathbf{T}_{i}^{a} \mathbf{T}_{j}^{b}\otimes\frac{1}{p_{i}\cdot p_{j}}\Bigg{(}\frac{p_{i}^{\mu} p_{j}^{\nu}-p_{j}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q}+\frac{p_{j}^{\mu}p_{j}^{\nu}}{p_{j} \cdot q}\Bigg{)}F_{\mu\rho}\left(J_{i}-\mathbf{K}_{i}\right)^{\nu\rho}\, \left|M^{(0)}\right>+\mathcal{O}\!\left(\epsilon^{0}\right)\,. \tag{4.77}\] Part of the flavour-diagonal pole contributions generated by the convolution of the jet operator (4.9) with the collinear-gluon amplitude (4.28) is obtained using Eqs. (4.31) and (4.32): \[\mathbf{P}_{g}(\sigma,c)\] \[-\frac{2}{\epsilon}\,\sum_{i\neq j}if^{abc}\mathbf{T}_{i}^{a} \mathbf{T}_{j}^{b}\otimes\frac{1}{p_{i}\cdot p_{j}}\Bigg{(}\frac{p_{i}^{\mu}p_ {j}^{\nu}-p_{j}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q}+\frac{p_{j}^{\mu}p_{j}^{\nu}}{ p_{j}\cdot q}\Bigg{)}F_{\mu\rho}\left(J_{i}-\mathbf{K}_{i}\right)^{\nu\rho}\, \left|M^{(0)}\right>\] \[+\frac{1}{\epsilon}\sum_{i\neq j}\left(1-2\dim(a_{i})\right)if^ {abc}\mathbf{T}_{i}^{a}\mathbf{T}_{j}^{b}\otimes\frac{p_{i}^{\rho}\,iF_{\rho \mu}}{p_{i}\cdot q}\bigg{(}\frac{p_{j}^{\mu}}{p_{j}\cdot p_{i}}+\bigg{(}\frac {p_{j}}{p_{j}\cdot p_{i}}-\frac{q}{q\cdot p_{i}}\Big{)}_{\sigma}i\mathbf{K}_{ i}^{\sigma\mu}\bigg{)}\,\left|M^{(0)}\right>\] \[+\mathcal{O}\!\left(\epsilon^{0}\right)\,. \tag{4.78}\] If parton \(i\) is a gluon, then the soft singularity of the collinear-gluon amplitude at \(x=1\) yields the remaining flavour-diagonal pole contributions. The result is conveniently obtained by rewriting Eq. (4.9) in an equivalent form using the Jacobi identity to transform the colour operators and the last of Eqs. (2.20) to transform the spin operator: \[\mathbf{P}_{g}(\sigma,c)\,\mathbf{J}_{i}^{(1)}(x,p_{i},q)=\frac{ \Gamma(1+\epsilon)}{1-\epsilon}\bigg{(}-\frac{\mu^{2}}{s_{iq}}\bigg{)}^{ \epsilon}\big{(}x(1-x)\big{)}^{-\epsilon}\sum_{\sigma^{\prime}c^{\prime}} \epsilon_{\mu}^{a}(q,p_{i},\sigma)\epsilon_{\nu}(p_{i},\sigma^{\prime})\\ \times\left[if^{cdc^{\prime}}\mathbf{T}_{i}^{d}\otimes\big{(}-g^{ \mu\nu}+i\mathbf{K}_{i}^{\mu\nu}\big{)}\right]\mathbf{P}_{g}(\sigma^{\prime},c ^{\prime})\,\mathbf{E}_{i,n+1}+\text{terms proportional to }(1-x)\;. \tag{4.79}\] Convolution using Eq. (4.36) yields: \[\mathbf{P}_{g}(\sigma,c)\,\int_{0}^{1}\mathrm{d}x\sum_{i}\left(1-2 \dim(a_{i})\right)\mathbf{J}_{i}^{(1)}\,\frac{x}{1-x}\,\Big{|}\bar{S}_{g,i}^{( 0)}\Big{\rangle}=-\frac{1}{\epsilon}\sum_{i\neq j}\left(1-2\dim(a_{i})\right) \\ \times if^{abc}\mathbf{T}_{i}^{a}\mathbf{T}_{j}^{b}\otimes \frac{p_{i}^{\rho}\,iF_{\rho\mu}}{p_{i}\cdot q}\bigg{(}\frac{p_{j}^{\mu}}{p_{j }\cdot p_{i}}+\bigg{(}\frac{p_{j}}{p_{j}\cdot p_{i}}-\frac{q}{q\cdot p_{i}} \bigg{)}_{\sigma}i\mathbf{K}_{i}^{\sigma\mu}\bigg{)}\,\Big{|}M^{(0)}\Big{\rangle} +\mathcal{O}\!\left(\epsilon^{0}\right)\,. \tag{4.80}\] Clearly, the sum of the r.h.s. of Eqs. (4.75), (4.77), (4.78) and (4.80) is equal to the r.h.s. of Eq. (4.76) up to terms of \(\mathcal{O}\!\left(\lambda\right)\) and \(\mathcal{O}\!\left(\epsilon^{0}\right)\). This completes the proof of Eq. 
(4.1) for the flavour-diagonal contributions. Let us turn to the poles of flavour-off-diagonal contributions in Eq. (4.1), and prove that the poles generated by the flavour-off-diagonal soft operator (4.6) are cancelled by the poles generated by the convolution of the jet operator (4.9) with the anti-soft-pole contribution (4.36) for \(a_{i}\in\{q,\bar{q}\}\) and by the convolutions of the flavour-off-diagonal jet operator (4.11) with the soft-pole and anti-soft-pole contributions (4.44) and (4.45). These three convolutions are given by: \[\int_{0}^{1}\frac{x\,{\rm d}x}{1-x}\,\left\langle\ldots,c_{i},\ldots,c;\ldots,\sigma_{i},\ldots,\sigma_{i},\ldots,\sigma\Big{|}\mathbf{J}_{i}^{(1) }\Big{|}\tilde{S}_{g,i}^{(0)}\right\rangle=\frac{1}{\epsilon}\,\epsilon_{\mu}^ {*}(q,p_{i},\sigma)\sum_{j\neq i}\sum_{\sigma^{\prime}_{i}\in\mathcal{C}_{i}^{ \prime}}\sum_{\sigma^{\prime}_{j}\in\mathcal{C}^{\prime}}\] \[\qquad\qquad\qquad\qquad\qquad\times\left\langle\ldots,c^{\prime }_{i},\ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots,\sigma^{ \prime}_{j},\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{|}^{a_{i}\ldots q}_{a_{j} \rightarrow\bar{a}_{j}}\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\] \[=\frac{1}{\epsilon}\sum_{j\neq i}\sum_{\sigma^{\prime}_{i}\in \mathcal{C}_{i}^{\prime}}\sum_{\sigma^{\prime}_{j}\in\mathcal{C}_{j}^{\prime}} J_{a_{i}a_{j}\leftarrow\bar{g}\bar{a}_{j}}^{(1,-1)}\left\langle\ldots,c^{ \prime}_{i},\ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots \sigma^{\prime}_{j},\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{|}^{a_{i}\ldots q}_{a_ {j}\rightarrow\bar{a}_{j}}\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\;, \tag{4.81}\] \[\int_{0}^{1}{\rm d}x\,\frac{1}{x}\,\left\langle\ldots,c_{i},\ldots,c;\ldots,\sigma_{i},\ldots,\sigma\Big{|}\mathbf{\tilde{J}}_{i}^{(1)}\Big{|} S_{q,i}^{(0)}\right\rangle=\frac{1}{\epsilon}\,\epsilon_{\mu}^{*}(q,p_{i}, \sigma)\epsilon_{\nu}^{*}(p_{i},\sigma_{i})\sum_{j\neq i}\sum_{\sigma^{\prime }_{i}\in\mathcal{C}_{i}^{\prime}}\sum_{\sigma^{\prime}_{j}\in\mathcal{C}^{ \prime}}\] \[\qquad\qquad\qquad\qquad\times\left\langle\ldots,c^{\prime}_{i}, \ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots,\sigma^{\prime }_{j},\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{|}^{a_{i}\ldots q}_{a_{j} \rightarrow\bar{a}_{j}}\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\;, \tag{4.82}\] \[=\frac{1}{\epsilon}\sum_{j\neq i}\sum_{\sigma^{\prime}_{i}\in \mathcal{C}_{i}^{\prime}}\sum_{\sigma^{\prime}_{j}\in\mathcal{C}_{j}^{\prime}} \tilde{J}_{a_{i}a_{j}\leftarrow\bar{a}_{j}}^{(1,-1)}\left\langle\ldots,c^{ \prime}_{i},\ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots \right\rangle,\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{|}^{a_{i}\ldots q}_{a_{j} \rightarrow\bar{a}_{j}}\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\;,\] \[\int_{0}^{1}{\rm d}x\,\frac{x}{1-x}\,\left\langle\ldots,c_{i}, \ldots,c_{i},\ldots,c_{i},\ldots,\sigma_{i},\ldots,\sigma_{i}^{\prime},\ldots \right|\mathbf{\tilde{J}}_{i}^{(1)}\Big{|}\tilde{S}_{\bar{q},i}^{(0)}\Big{\rangle} =\frac{1}{\epsilon}\,\epsilon_{\mu}^{*}(q,p_{i},\sigma)\epsilon_{\nu}^{*}(p_{ i},\sigma_{i})\sum_{j\neq i}\sum_{\sigma^{\prime}_{i}\in\mathcal{C}_{i}^{ \prime}}\sum_{\sigma^{\prime}_{j}\in\mathcal{C}_{j}^{\prime}}\] \[\qquad\qquad\qquad\qquad\times\left\langle\ldots,c^{\prime}_{i}, \ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots,\sigma^{\prime }_{j},\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{|}^{a_{i}\rightarrow\bar{q}}_{a_{j} \rightarrow\bar{a}_{j}}\right\rangle+\mathcal{O}\big{(}\epsilon^{0}\big{)}\] 
\[\equiv\frac{1}{\epsilon}\sum_{j\neq i}\sum_{\sigma^{\prime}_{i} \in\mathcal{C}_{i}^{\prime}}\sum_{\sigma^{\prime}_{j}\in\mathcal{C}_{j}^{ \prime}}\tilde{J}_{a_{i}a_{j}\leftarrow\bar{a}_{j}}^{(1,-1)}\left\langle\ldots, c^{\prime}_{i},\ldots,c^{\prime}_{j},\ldots;\ldots,\sigma^{\prime}_{i},\ldots,\sigma^{\prime}_{j},\ldots\Big{|}M^{(0)}(\{p_{i}\})\Big{|}^{a_{i}\rightarrow \bar{q}}_{a_{j}\rightarrow\bar{a}_{j}}\right\rangle+\mathcal{O}\big{(} \epsilon^{0}\big{)}\;. \tag{4.83}\] Substitution of the splitting operators listed in Section 2.5 and application of the definitions (2.20) of the spin operators yields: \[J_{q\bar{q}\leftarrow\bar{g}g}^{(1,-1)}=-\frac{1}{2\,p_{i}\cdot p _{j}}\big{(}T^{c^{\prime}}T^{c^{\prime}}\big{)}_{c_{i}c_{j}}\,\bar{u}(p_{i}, \sigma_{i})\not{\epsilon}(p_{i},\sigma_{i}^{\prime})\not{\epsilon}^{*}(q,p_{i}, \sigma)\not{\epsilon}(p_{j},\sigma_{j}^{\prime})v(p_{j},\sigma_{j})\;,\] \[J_{q\bar{g}\leftarrow\bar{g}q}^{(1,-1)}=-\frac{1}{2\,p_{i}\cdot p _{j}}\big{(}T^{c^{\prime}}T^{c^{\prime}}\big{)}_{c_{i}c^{\prime}_{j}}\,\bar{u}(p_{ i},\sigma_{i})\not{\epsilon}(p_{i},\sigma_{i}^{\prime})\not{\epsilon}^{*}(q,p_{i}, \sigma_{j})u(p_{j},\sigma_{j}^{\prime})\;,\] \[J_{\bar{q}g\leftarrow\bar{g}q}^{(1,-1)}=+\frac{1}{2\,p_{i}\cdot p _{j}}\big{(}T^{c^{\prime}}T^{c^{\prime}}\big{)}_{c^{\prime}_{j}\in\mathcal{C}_{j}^{ \prime}}\,\bar{v}(p_{j},\sigma_{j}^{\prime})\not{\epsilon}^{*}(p_{j},\sigma_{j} )\not{\epsilon}^{*}(q,p_{i},\sigma)\not{\epsilon}(p_{i},\sigma_{i}^{\prime})v(p_{ i},\sigma_{i})\;,\] \[\tilde{J}_{gq\leftarrow\bar{q}g}^{(1,-1)}=+\frac{1}{2\,p_{i}\cdot p _{j}}\big{(}T^{c^{\prime}}T^{c^{\prime}}\big{)}_{c^{\prime}_{j}\in\mathcal{C}_{j}^{ \prime}}\,\bar{u}(p_{j},\sigma_{j})\not{\epsilon}(p_{j},\sigma_{j}^{\prime}) \not{\epsilon}^{*}(q,p_{i},\sigma)\not{\epsilon}^{*}(p_{i},\sigma_{i})v(p_{ i},-\sigma_{i}^{\prime})\;, \tag{4.84}\] \[\tilde{J}_{gg\leftarrow\bar{q}\bar{q}}^{(1,-1)}=-\frac{1}{2\,p_{i} \cdot p_{j}}\big{(}T^{c_{j}}T^{c^{\prime}}\big{)}_{c^{\prime}_{j}\in \mathcal{C}_{j}^{\prime}}\,\bar{v}(p_{j},\sigma_{j}^{\prime})\not{\epsilon}^{*}(p_{ j},\sigma_{j})\not{\epsilon}^{*}(q,p_{i},\sigma)\not{\epsilon}^{*}(p_{i},\sigma_{i})v(p_{i},- \sigma_{i}^{\prime})\;,\] \[\tilde{J}^{(1,-1)}_{gg\,\,\tilde{\leftarrow}\,\tilde{q}g} =-\frac{1}{2\,p_{i}\cdot p_{j}}\big{(}T^{c_{i}}T^{c_{j}}\big{)}_{c^ {\prime}_{j}c^{\prime}_{j}}\,\bar{u}(p_{i},-\sigma^{\prime}_{i})\not{\epsilon}^ {*}(p_{i},\sigma_{i})\not{\epsilon}^{*}(q,p_{i},\sigma)\not{\epsilon}^{*}(p_{j },\sigma_{j})u(p_{j},\sigma^{\prime}_{j})\;,\] \[\tilde{J}^{(1,-1)}_{g\tilde{q}\,\,\tilde{\leftarrow}\,\tilde{q}g} =-\frac{1}{2\,p_{i}\cdot p_{j}}\big{(}T^{c_{i}}T^{c^{\prime}_{j}} \big{)}_{c^{\prime}_{j}c_{j}}\,\bar{u}(p_{i},-\sigma^{\prime}_{i})\not{ \epsilon}^{*}(p_{i},\sigma_{i})\not{\epsilon}^{*}(q,p_{i},\sigma^{\prime}) \not{\epsilon}(p_{j},\sigma^{\prime}_{j})v(p_{j},\sigma_{j})\;.\] Bi-spinors depending on \(-\sigma^{\prime}_{i}\) are subsequently replaced by bi-spinors depending on \(+\sigma^{\prime}_{i}\) according to Eq. (4.12). The resulting expressions can be further simplified using: \[\begin{split}\ldots\not{\epsilon}^{*}(q,p_{i},\sigma)\cdots& =-\frac{1}{2p_{i}\cdot p_{j}}\ldots\not{p}_{j}\not{\epsilon}^{*}(q,p _{i},\sigma)\not{p}_{i}\ldots\qquad\text{or}\\ \ldots\not{\epsilon}^{*}(q,p_{i},\sigma)\cdots&=- \frac{1}{2p_{i}\cdot p_{j}}\ldots\not{p}_{i}\not{\epsilon}^{*}(q,p_{i},\sigma) \not{p}_{j}\ldots\;,\end{split} \tag{4.85}\] where the dots stand for the factors occurring in Eqs. 
(4.84), and the first equality applies if the left factor depends on \(p_{i}\), while the second equality applies if the left factor depends on \(p_{j}\). It can now be easily verified using: \[\begin{split}\sum_{\sigma^{\prime\prime}_{i}}v(p_{i},\sigma^{\prime\prime}_{i})\bar{v}(p_{i},\sigma^{\prime\prime}_{i})=\not{p}_{i}\;,&\qquad\sum_{\sigma^{\prime\prime}_{i}}u(p_{i},\sigma^{\prime\prime}_{i})\bar{u}(p_{i},\sigma^{\prime\prime}_{i})=\not{p}_{i}\;,\\ \sum_{\sigma^{\prime\prime}_{j}}v(p_{j},\sigma^{\prime\prime}_{j})\bar{v}(p_{j},\sigma^{\prime\prime}_{j})=\not{p}_{j}\;,&\qquad\sum_{\sigma^{\prime\prime}_{j}}u(p_{j},\sigma^{\prime\prime}_{j})\bar{u}(p_{j},\sigma^{\prime\prime}_{j})=\not{p}_{j}\;,\end{split} \tag{4.86}\] that each pole coefficient listed in (4.84) cancels a respective pole coefficient in Eq. (4.6). This completes the proof of Eq. (4.1) for the flavour-off-diagonal contributions. ### Numerical tests Although theorem (4.1) has been strictly proven in Section 4.4, it is still a useful and instructive exercise to verify the formulae of Sections 4.1, 4.2 and 4.3 on actual amplitudes. In this section, we numerically evaluate the \(\mathcal{O}\big{(}\epsilon^{0}\big{)}\) coefficient of the Laurent expansion of \(\Big{|}M_{g}^{(1)}\Big{\rangle}\) for several processes and compare it to the result of the soft expansion. For a stringent test, we consider processes that involve up to six hard partons, both incoming and outgoing, multiple quark flavours and colour-neutral particles. The list can be read off Figs. 13 and 14. Let us define the difference between the exact and the approximate amplitude: \[\Delta_{\text{LP/NLP}}\equiv\frac{1}{N}\sum_{\begin{subarray}{c}\text{singular}\\ \text{colour flows }\{c\}\\ \text{helicities }\{\sigma\}\end{subarray}}\left|\frac{\left[\Big{\langle}\{c,\sigma\}\Big{|}M_{g}^{(1)}\Big{\rangle}-\Big{\langle}\{c,\sigma\}\Big{|}M_{g}^{(1)}\Big{\rangle}_{\text{LP/NLP}}\right]_{\mathcal{O}(\epsilon^{0})}}{\Big{[}\Big{\langle}\{c,\sigma\}\Big{|}M_{g}^{(1)}\Big{\rangle}\Big{]}_{\mathcal{O}(\epsilon^{0})}}\right|\;, \tag{4.87}\] where LP (leading power) stands for soft expansion up to \(\mathcal{O}(1/\lambda)\), while NLP (next-to-leading power) up to \(\mathcal{O}\big{(}\lambda^{0}\big{)}\). The sum runs over all colour-flow and helicity configurations for which the amplitude has a soft singularity. The number of such configurations is denoted by \(N\). The one-loop \(n\)-particle amplitudes \(\big{|}M^{(1)}\big{\rangle}\) as well as their derivatives are calculated with Recola [31, 32] linked to Collier [33, 34, 35, 36] for the evaluation of tensor and scalar one-loop integrals. For the evaluation of the one-loop \((n+1)\)-particle amplitudes, \(\Big{|}M_{g}^{(1)}\Big{\rangle}\), we instead link Recola to CutTools [37] for tensor reduction and OneLOop [38, 39] for the evaluation of scalar integrals at quadruple precision. Finally, for the evaluation of the collinear amplitudes, we use Eqs. (4.23) and (4.24) implemented by calling AVH [40] with replaced spinors and polarisation vectors of the external particles as appropriate. The \(x\)-dependence of the collinear amplitudes is obtained at first by rational-function fitting. Subsequently, we verify that the results agree with those obtained by direct evaluation with the formulae from the last paragraph of Section 4.2. A subtlety arises from the fact that amplitudes for different processes are involved in the computation of Eq. (4.1).
Indeed, the global sign of the amplitudes depends on the external fermion ordering and the algorithm used. Therefore, for the flavour-off-diagonal contributions, we have to compensate for the differences between the software tools by including appropriate signs to obtain the correct result. \(\Delta_{\rm LP/NLP}\) is expected to have the following behaviour: \[\Delta_{\rm LP}=\left(c_{0}+c_{1}\log\lambda+c_{2}\log^{2}\lambda\right)\lambda+\mathcal{O}(\lambda^{2})\;, \tag{4.88}\] \[\Delta_{\rm NLP}=\left(d_{0}+d_{1}\log\lambda+d_{2}\log^{2}\lambda\right)\lambda^{2}+\mathcal{O}(\lambda^{3})\;. \tag{4.89}\] This behaviour is reproduced for the three example processes in Fig. 13 as much as numerical precision permits. Fig. 14 shows, split by helicity configuration, the results for the process: \[q(\sigma_{1})+\bar{q}(\sigma_{2})\to g(\sigma_{3})+g(\sigma_{4})+g(q,\sigma_{5})\;, \tag{4.90}\] where \(q\) is the soft momentum, and hard-momentum and colour dependence are suppressed for brevity.

Figure 13: Relative error \(\Delta_{\rm LP/NLP}\) of the one-loop soft approximation to leading power (LP) and subleading power (NLP). The energy, \(q_{0}\), of the soft gluon is normalised to the centre-of-mass energy, \(\sqrt{s}\), of the process. The apparent breakdown of the approximation at low soft-gluon energies is due to the limited numerical precision of the one-loop integrals in OneLOop, which impacts the result for the \((n+1)\)-particle amplitudes.

Figure 14: Plots analogous to Fig. 13, except that the helicity sum is restricted to a specific setup in the left plot, and the right plot contains all other helicity configurations.

For most configurations, the test results show a strong improvement between LP and NLP in line with Fig. 13. However, in the case \(\sigma_{3}=\sigma_{4}\neq\sigma_{5}\), the improvement is less pronounced while still remaining consistent with (4.89). This spin configuration is distinguished by the fact that it does not contain any logarithms of the soft momentum through next-to-leading power. For example, the flavour-diagonal soft-region contribution is proportional to the tree-level amplitude of the process: \[q(\sigma_{1})+\bar{q}(\sigma_{2})\to g(\sigma_{3})+g(\sigma_{4})\;, \tag{4.91}\] which vanishes if \(\sigma_{3}=\sigma_{4}\) due to helicity conservation. It is not hard to convince oneself that all flavour-off-diagonal soft-region contributions vanish in full analogy. The flavour-diagonal collinear region does not contribute because the collinear hard function is derived from the subleading collinear behaviour of the process: \[q(\sigma_{1})+\bar{q}(\sigma_{2})\to g(\sigma_{3})+g(\sigma_{4})+g(-\sigma_{5})\;, \tag{4.92}\] which follows from the full process definition (4.90) and the properties of the jet operator. In particular, the occurrence of \(-\sigma_{5}\) can be conveniently read off Eq. (4.48). Again, this process vanishes at tree level for \(\sigma_{3}=\sigma_{4}\neq\sigma_{5}\) due to helicity conservation. Finally, the flavour-off-diagonal jet operator is only non-zero if \(\sigma_{i}=\sigma_{5}\) for \(a_{i}=g\), i.e. \(i\in\{3,4\}\), which is not fulfilled for the considered helicity configuration. Altogether, only the hard-region contribution to Eq. (4.1), \(\mathbf{S}^{(0)}\left|M^{(1)}\right>\), is non-zero for the considered spin configuration. While next-to-next-to-leading-power contributions to the soft expansion are not discussed in the present publication, the behaviour observed in Fig.
14 shows that one can expect soft logarithms starting to appear there, implying a less-constrained helicity structure. The poorer numerical behaviour is not expected to pose a problem in practical applications because for squared amplitudes summed over colour and helicity, the helicity configurations which contain soft logarithms already at leading power dominate numerically in the soft momentum region. ## 5 Next-to-leading-power double-collinear asymptotics at tree-level The collinear-gluon and collinear-quark amplitude constructed in Section 4.2 may be used to derive a result for the double-collinear asymptotics of massless tree-level QCD amplitudes that correctly accounts for subleading effects. We consider the collinear limit for partons \(i\) and \(n+1\): \[k_{n+1} =xp_{i}+l_{\perp}-\frac{l_{\perp}^{2}}{2x}\frac{q}{p_{i}\cdot q}\;, \qquad\qquad\qquad\text{with}\qquad l_{\perp}\cdot p_{i}=l_{\perp}\cdot q=0\;, \tag{5.1}\] \[k_{i} \equiv(1-x)p_{i}-l_{\perp}-\frac{l_{\perp}^{2}}{2(1-x)}\frac{q}{ p_{i}\cdot q}\;, \qquad\quad\text{and}\qquad k_{j} \equiv p_{j}+\mathcal{O}\big{(}l_{\perp}^{2}\big{)}\;,\qquad j\neq i\;. \tag{5.2}\] For \(a_{i}=a_{n+1}=g\) there is: \[\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{P}_{n+1}(\sigma_{n+1},c_ {n+1})\left|M^{(0)}(\{k_{i}\}_{i=1}^{n+1})\right>=\mathbf{P}_{i}(\sigma_{i},c _{i})\mathbf{P}_{n+1}(\sigma_{n+1},c_{n+1})\bigg{[}\] \[\quad\quad+\left(\frac{1-x^{2}}{x}+\frac{1-(1-x)^{2}}{1-x}\mathbf{ E}_{i,n+1}\right)\,\left|S^{(0)}_{g,i}(\{p_{i}\},q)\right>+\left((1-x)+x \mathbf{E}_{i,n+1}\right)\left|C^{(0)}_{g,i}(\{p_{i}\},q)\right>\] \[\quad\quad+\frac{1}{2}\sum_{I}\frac{x(1-x)}{x_{I}(1-x_{I})}\bigg{(} \frac{1}{x_{I}-x}+\frac{1}{x_{I}-(1-x)}\mathbf{E}_{i,n+1}\bigg{)}\left|R^{(0) }_{g,i}(\{p_{i}\})\right>\bigg{]} \tag{5.3}\] \[\quad\quad+\left[\frac{1}{x}\frac{q\cdot\epsilon^{*}(p_{i}, \sigma_{n+1})}{q\cdot p_{i}}\mathbf{P}_{i}(\sigma_{i},c_{i})\mathbf{T}_{i}^{c _{n+1}}+\frac{1}{1-x}\frac{q\cdot\epsilon^{*}(p_{i},\sigma_{i})}{q\cdot p_{i} }\mathbf{P}_{i}(\sigma_{n+1},c_{n+1})\mathbf{T}_{i}^{c_{i}}\right]\left|M^{(0) }(\{p_{i}\})\right>\] \[+\mathcal{O}(l_{\perp})\;,\] with \(\left|S^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\), \(\left|C^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\) and \(\left|R^{(0)}_{g,i,I}(\{p_{i}\})\right\rangle\) defined in Eqs. (4.31), (4.32) and (4.33) respectively. The splitting function acting on \(\left.\left|M^{(0)}(\{p_{i}\})\right\rangle\right\rangle\) introduces a helicity sum for the intermediate gluon. This sum must be consistent with Eq. (4.18). We note that the subleading collinear asymptotics requires the subleading soft asymptotics contained in \(\left.|C^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle\). 
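Before turning to the quark splittings, the parametrisation (5.1)-(5.2) can be checked numerically. The following is a minimal NumPy sketch, assuming the Minkowski signature \((+,-,-,-)\) and purely illustrative momentum values; it verifies that both daughter momenta are exactly lightlike and that momentum is conserved up to \(\mathcal{O}\big{(}l_{\perp}^{2}\big{)}\):

```python
import numpy as np

def mdot(a, b):
    # Minkowski product, signature (+, -, -, -)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# Illustrative lightlike hard momentum p_i and lightlike auxiliary vector q
p = np.array([1.0, 0.0, 0.0,  1.0])
q = np.array([1.0, 0.0, 0.0, -1.0])
# Transverse vector: purely (x, y) components, so l.p = l.q = 0 holds
l = np.array([0.0, 0.01, 0.02, 0.0])
x = 0.3
l2 = mdot(l, l)  # l_perp^2, negative in this signature

k_np1 = x*p + l - l2/(2*x) * q/mdot(p, q)              # Eq. (5.1)
k_i   = (1 - x)*p - l - l2/(2*(1 - x)) * q/mdot(p, q)  # Eq. (5.2)

print(mdot(k_np1, k_np1))  # ~0 up to rounding: k_{n+1} is exactly lightlike
print(mdot(k_i, k_i))      # ~0 up to rounding: k_i is exactly lightlike
print(k_np1 + k_i - p)     # components of order l_perp^2, as required
```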
For \(a_{i}\in\{q,\bar{q}\}\), \(a_{n+1}=g\), there is: \[\mathbf{P}_{n+1}(\sigma_{n+1},c_{n+1})\left|M^{(0)}(\{k_{i}\}_{i =1}^{n+1})\right\rangle=\mathbf{P}_{n+1}(\sigma_{n+1},c_{n+1})\bigg{[}\] \[\mathbf{Split}^{(0)}_{i,n+1\,\gets\,i}(k_{i},k_{n+1},p_{i}) \left|M^{(0)}(\{p_{i}\})\right\rangle\] \[+\sqrt{1-x}\bigg{(}\bigg{(}\frac{1}{x}+\frac{1}{2}\bigg{)}\, \left|S^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle+\left|C^{(0)}_{g,i}(\{p_{i}\},q )\right\rangle+\frac{x}{1-x}\left|\bar{S}^{(0)}_{g,i}(\{p_{i}\},q)\right\rangle \tag{5.4}\] \[+\sum_{I}\left(\frac{1}{x_{I}-x}-\frac{1}{x_{I}}\right)\left|R^ {(0)}_{g,i,I}(\{p_{i}\})\right\rangle\bigg{)}\bigg{]}+\frac{\sqrt{1-x}}{x} \frac{q\cdot\epsilon^{*}(p_{i},\sigma_{n+1})}{q\cdot p_{i}}\mathbf{T}^{c_{n+1 }}_{i}\left|M^{(0)}(\{p_{i}\})\right\rangle\] \[+\mathcal{O}(l_{\perp})\;.\] Finally, for \(a_{i}=q\), \(a_{n+1}=\bar{q}\), there is: \[\left|M^{(0)}(\{k_{i}\}_{i=1}^{n+1})\right\rangle=\mathbf{Split} ^{(0)}_{i,n+1\,\leftarrow\,i}(k_{i},k_{n+1},p_{i})\left|M^{(0)}(\{p_{i}\}) \right\rangle\\ +\sqrt{x(1-x)}\bigg{(}\frac{1}{x}\left|S^{(0)}_{\bar{q},i}(\{p_{ i}\})\right\rangle+\left|C^{(0)}_{\bar{q},i}(\{p_{i}\},q)\right\rangle+\frac{x}{1-x} \left|\bar{S}^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle\\ +\sum_{I}\left(\frac{1}{x_{I}-x}-\frac{1}{x_{I}}\right)\left|R^{ (0)}_{\bar{q},i,I}(\{p_{i}\})\right\rangle\bigg{)}+\mathcal{O}(l_{\perp})\;. \tag{5.5}\] Since the splitting proceeds via an intermediate gluon, the occurring helicity sum must be consistent with Eq. (4.18). The contributions \(\left|S^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle\), \(\left|\bar{S}^{(0)}_{\bar{q},i}(\{p_{i}\})\right\rangle\) and \(\left|R^{(0)}_{\bar{q},i,I}(\{p_{i}\})\right\rangle\) are defined in Eqs. (4.44), (4.45) and (4.46) respectively. As remarked at the end of Section 4.2, the contribution \(\left|C^{(0)}_{\bar{q},i}(\{p_{i}\},q)\right\rangle\) corresponds to the subleading term of the soft-anti-quark expansion of the collinear-quark amplitude. As we do not provide an explicit expression in terms of \(\left|M^{(0)}(\{p_{i}\})\right\rangle\) for this contribution, it must be evaluated by using Eq. (4.24) at a convenient point in \(x\). ## 6 Summary and outlook This publication contains two novel results. The first one is the general formula for the approximation of a one-loop soft-gluon emission amplitude at next-to-leading power presented in Section 4. The second are the general formulae for the approximation of tree-level amplitudes in the collinear limit at next-to-leading power presented in Section 5. Both results are limited to massless partons, but allow for the inclusion of arbitrary colour-neutral particles. They are expressed through universal factors and process-dependent gauge-invariant amplitudes. As such, they cannot be further simplified. It is interesting to note that the tree-level collinear approximations require the knowledge of the tree-level soft approximations, while the one-loop soft approximation requires the knowledge of both the tree-level collinear and soft approximation. We expect this pattern to extend to higher orders, i.e. higher order soft approximations should depend on lower order collinear approximations. In any case, extension of the results to higher orders is one natural direction for future research. 
We must point out once more that the provided next-to-leading-power approximation for a collinear quark-anti-quark pair requires the subleading soft term of the soft-anti-quark expansion of the collinear amplitude, for which no general formula is known at present. In practice, one can obtain the necessary result by a single evaluation of a suitably prepared amplitude at fixed kinematics. Nevertheless, it would be much more elegant to have an expression similar to the LBK theorem. We leave this problem to future work. As a next step, our results should be extended to massive partons. On the one hand, this extension should be simpler, since massive partons give rise to neither collinear regions nor flavour-off-diagonal contributions. On the other hand, the difference between the leading soft asymptotics for massless [41] and massive partons [42; 43] suggests that the expression for the soft operator will be much more complex in the massive case.

###### Acknowledgements.

We would like to thank Daniel Stremmer for help with linking Recola to CutTools and OneLOop. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant 396021762 - TRR 257: Particle Physics Phenomenology after the Higgs Discovery, and grant 400140256 - GRK 2497: The Physics of the Heaviest Particles at the LHC. Diagrams were drawn using JaxoDraw [44; 45; 46].
2303.07609
Training Robust Spiking Neural Networks with ViewPoint Transform and SpatioTemporal Stretching
Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4\% on the DVS-CIFAR10 dataset.
Haibo Shen, Juyu Xiao, Yihao Luo, Xiang Cao, Liangqi Zhang, Tianjiang Wang
2023-03-14T03:09:56Z
http://arxiv.org/abs/2303.07609v1
# Training Robust Spiking Neural Networks with Viewpoint Transform and Spatiotemporal Stretching

###### Abstract

Neuromorphic vision sensors (event cameras) simulate biological visual perception systems and have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range. Since both events and spikes are modeled from neural signals, event cameras are inherently suitable for spiking neural networks (SNNs), which are considered promising models for artificial intelligence (AI) and theoretical neuroscience. However, the unconventional visual signals of these cameras pose a great challenge to the robustness of spiking neural networks. In this paper, we propose a novel data augmentation method, ViewPoint Transform and SpatioTemporal Stretching (VPT-STS). It improves the robustness of SNNs by transforming the rotation centers and angles in the spatiotemporal domain to generate samples from different viewpoints. Furthermore, we introduce the spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments on prevailing neuromorphic datasets demonstrate that VPT-STS is broadly effective on multi-event representations and significantly outperforms pure spatial geometric transformations. Notably, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4% on the DVS-CIFAR10 dataset.

Haibo Shen\({}^{1}\), Juyu Xiao\({}^{1}\), Yihao Luo\({}^{2,1}\), Xiang Cao\({}^{3,1}\), Liangqi Zhang\({}^{1}\), Tianjiang Wang\({}^{1}\)

\({}^{1}\)School of Huazhong University of Science and Technology, \({}^{2}\)Yichang Testing Technique Research Institute, \({}^{3}\)Changsha University

Keywords: Spiking Neural Networks, Neuromorphic Data, Data Augmentation, ViewPoint Transform and SpatioTemporal Stretching

Footnote †: This work was supported in part by the National Natural Science Foundation of China under Grant 61572214 and the Seed Foundation of Huazhong University of Science and Technology (2020kfyXGYJ114). (Corresponding author: Tianjiang Wang.)

## 1 Introduction

Inspired by the primate visual system, neuromorphic vision cameras generate events by sampling the brightness of objects. For example, the Dynamic Vision Sensor (DVS) [1] camera and the Vidar [2] camera are inspired by the outer three-layer structure of the retina and the foveal three-layer structure, respectively. Both of them have the advantages of high temporal resolution, less data redundancy, low power consumption, and large dynamic range [3]. In addition, spiking neural networks (SNNs) are similarly inspired by the learning mechanisms of the mammalian brain and are considered a promising model for artificial intelligence (AI) and theoretical neuroscience [4]. In theory, as the third generation of neural networks, SNNs are computationally more powerful than traditional convolutional neural networks (CNNs) [4]. Therefore, event cameras are inherently suitable for SNNs. However, the unconventional visual signals of these cameras also pose a great challenge to the robustness of SNNs. Most existing data augmentations are fundamentally designed for RGB data and lack exploration of neuromorphic events. For example, Cutout [5] artificially occludes a rectangular block in the image to simulate the impact of occlusion on the image. Random erasing [6] further optimizes the erased pixel values by adding noise. Mixup [7] uses the weighted sum of two images as training samples to smooth the transition between classes.
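As a point of reference for the frame-based setting, Mixup amounts to a convex combination of two examples. A minimal sketch follows; the Beta-distributed mixing weight and the value of `alpha` reflect common practice rather than anything specified here:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two training examples; y1 and y2 are one-hot label vectors."""
    lam = np.random.beta(alpha, alpha)  # mixing weight in [0, 1]
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```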
Since neuromorphic data have an additional temporal dimension and differ widely in imaging principles, novel data augmentations are required to process the spatiotemporal visual signals of these cameras. In this paper, we propose a novel data augmentation method suitable for events, ViewPoint Transformation and SpatioTemporal Stretching (VPT-STS). Viewpoint transformation solves the spatiotemporal scale mismatch of samples by introducing a balance coefficient, and generates samples from different viewpoints by transforming the rotation centers and angles in the spatiotemporal domain. Furthermore, we introduce spatiotemporal stretching to avoid potential information loss in viewpoint transformation. Extensive experiments are performed on prevailing neuromorphic datasets. It turns out that VPT-STS is broadly effective on multiple event representations and significantly outperforms pure spatial geometric transformations. Insightful analysis shows that VPT-STS improves the robustness of SNNs against different spatial locations. In particular, the SNNs model with VPT-STS achieves a state-of-the-art accuracy of 84.4% on the DVS-CIFAR10 dataset. Furthermore, while this work is related to EventDrop [8] and NDA [9], there are some notable differences. For example, NDA is a pure global geometric transformation, while VPT-STS changes the viewpoint of samples in the spatiotemporal domain. EventDrop has only been experimented with on CNNs; it introduces noise by dropping events, but may cause problems with dead neurons on SNNs. VPT-STS is applicable to both CNNs and SNNs, maintaining the continuity of samples. In addition, EventDrop transforms both temporal and spatial domains, but as two independent strategies, it does not combine the spatiotemporal information of the samples. To our knowledge, VPT-STS is the first event data augmentation that simultaneously incorporates spatiotemporal transformations.

## 2 Method

### Event Generation Model

The event generation model [3, 4] is abstracted from dynamic vision sensors [1]. Each pixel of the event camera responds to changes in its logarithmic photocurrent \(L=\log(I)\). Specifically, in a noise-free scenario, an event \(e_{k}=(x_{k},y_{k},t_{k},p_{k})\) is triggered at pixel \(X_{k}=(y_{k},x_{k})\) and at time \(t_{k}\) as soon as the brightness variation \(|\Delta L|\) reaches a temporal contrast threshold \(C\) since the last event at the pixel. The event generation model can be expressed by the following formula: \[\Delta L(X_{k},t_{k})=L(X_{k},t_{k})-L(X_{k},t_{k}-\Delta t_{k})=p_{k}C \tag{1}\] where \(C>0\), \(\Delta t_{k}\) is the time elapsed since the last event at the same pixel, and the polarity \(p_{k}\in\{+1,-1\}\) is the sign of the brightness change. During a period, the event camera triggers an event stream \(\mathcal{E}\): \[\mathcal{E}=\{e_{k}\}_{k=1}^{N}=\{(X_{k},t_{k},p_{k})\}_{k=1}^{N} \tag{2}\] where \(N\) represents the number of events in the set \(\mathcal{E}\). As shown in Figure 1, an event is generated each time the brightness variation reaches the threshold, and then \(|\Delta L|\) is cleared. The event stream can be represented as a matrix: \[M_{\varepsilon}=\begin{pmatrix}y_{1}&x_{1}&t_{1}&1\\ \vdots&\vdots&\vdots&\vdots\\ y_{N}&x_{N}&t_{N}&1\end{pmatrix}_{N\times 4} \tag{3}\] For convenience, we omit the untransformed polarity \(p\).

### Motivation

This work stems from the observation that it is difficult to maintain an absolutely frontal view between the sample and the camera, which easily leads to a slight shift of the viewpoint.
Considering this small offset distance, we use viewpoint rotation to approximate the deformation of samples in space and time. In addition, since events record the brightness change of samples, especially changes of the edges, variations of the illumination angle will also cause the effect of viewpoint transformation, which suggests that we can enhance the robustness of SNNs by generating viewpoint-transformed samples.

Figure 1: Event generation model.

### The Proposed Method

To generate viewpoint-transformed samples, we draw on the idea of spatiotemporal rotation. For the viewpoint transformation (**VPT**), we introduce translation matrices \(T_{b}\), \(T_{a}\), which represent the translation to the rotation center \((x_{c},y_{c},t_{c})\) and the translation back to the original position, respectively. \[T_{b}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ -y_{c}&-x_{c}&-t_{c}&1\end{pmatrix},\quad T_{a}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ y_{c}&x_{c}&t_{c}&1\end{pmatrix} \tag{4}\] Suppose that we rotate along the \(y\) and \(t\) planes with \(x\) as the axis; we can easily derive the rotation matrix \(R_{r}^{YT}\): \[R_{r}^{YT}=\begin{pmatrix}\cos\theta&0&\sin\theta&0\\ 0&1&0&0\\ -\sin\theta&0&\cos\theta&0\\ 0&0&0&1\end{pmatrix} \tag{5}\] where \(\theta\) is the rotation angle. In practice, Eq. 5 is an unbalanced matrix due to the mismatch between the time and space dimensions in the \(M_{\varepsilon}\) matrix. Therefore, we introduce a balance coefficient \(\tau\) to scale the space and time dimensions, which results in better visual effects. The balanced matrix \(R_{br}^{YT}\) can be formulated as: \[R_{br}^{YT}=\begin{pmatrix}\cos\theta&0&\tau\sin\theta&0\\ 0&1&0&0\\ -\frac{1}{\tau}\sin\theta&0&\cos\theta&0\\ 0&0&0&1\end{pmatrix} \tag{6}\] Setting \(x_{c}=0\), the viewpoint transformation matrix \(M_{br}^{YT}\) can be formulated by calculating \(T_{b}R_{br}^{YT}T_{a}\): \[\begin{pmatrix}\cos\theta&0&\tau\sin\theta&0\\ 0&1&0&0\\ -\frac{1}{\tau}\sin\theta&0&\cos\theta&0\\ -y_{c}\cos\theta+\frac{1}{\tau}t_{c}\sin\theta+y_{c}&0&-\tau y_{c}\sin\theta-t_{c}\cos\theta+t_{c}&1\end{pmatrix} \tag{7}\] Similarly, the viewpoint transformation matrix \(M_{br}^{XT}\) in the \(x\) and \(t\) dimensions can be formulated as: \[\begin{pmatrix}1&0&0&0\\ 0&\cos\theta&\tau\sin\theta&0\\ 0&-\frac{1}{\tau}\sin\theta&\cos\theta&0\\ 0&-x_{c}\cos\theta+\frac{1}{\tau}t_{c}\sin\theta+x_{c}&-\tau x_{c}\sin\theta-t_{c}\cos\theta+t_{c}&1\end{pmatrix} \tag{8}\] Therefore, the viewpoint-transformed matrices \(M_{VPT}^{YT}\) and \(M_{VPT}^{XT}\) can be formulated as: \[\begin{split}M_{VPT}^{YT}&=M_{\varepsilon}M_{br}^{YT}\\ M_{VPT}^{XT}&=M_{\varepsilon}M_{br}^{XT}\end{split} \tag{9}\] Furthermore, since events beyond the resolution will be discarded during the viewpoint transformation, we introduce spatiotemporal stretching (**STS**) to avoid potential information loss. STS stretches the temporal mapping in the VPT by a coefficient \(\frac{1}{\cos\theta}\) while maintaining the spatial coordinates unchanged. Therefore, by setting \(t_{c}=0\), we get the transformed timestamps \((t_{k})_{STS}^{YT}\) and \((t_{k})_{STS}^{XT}\) from Eq. 7 and Eq. 8: \[\begin{split}(t_{k})_{STS}^{YT}&=t_{k}-\tau\tan\theta\cdot(y_{k}-y_{c})\\ (t_{k})_{STS}^{XT}&=t_{k}-\tau\tan\theta\cdot(x_{k}-x_{c})\end{split} \tag{10}\] The time of STS is advanced or delayed according to the distance from the center \(|x-x_{c}|\) (\(|y-y_{c}|\)), causing the event stream to be stretched along the time axis according to the spatial coordinates.
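For concreteness, the transformations above fit in a few lines of NumPy. The sketch below follows Eqs. (4)-(10) for the \(YT\) case, with events stored as rows \((y,x,t,1)\) as in Eq. (3); the parameter values in the usage example are illustrative, and this is a simplification rather than the authors' implementation:

```python
import numpy as np

def vpt_yt(events, theta, tau, y_c, t_c):
    """ViewPoint Transform in the (y, t) plane, Eqs. (4)-(9) (sketch).

    events: (N, 4) array with rows (y, x, t, 1); polarity is kept separately.
    tau balances the spatial and temporal scales; (y_c, t_c) is the rotation centre.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c,      0.0, tau * s, 0.0],   # balanced rotation, Eq. (6)
                  [0.0,    1.0, 0.0,     0.0],
                  [-s/tau, 0.0, c,       0.0],
                  [0.0,    0.0, 0.0,     1.0]])
    T_b, T_a = np.eye(4), np.eye(4)              # translations of Eq. (4), with x_c = 0
    T_b[3, 0], T_b[3, 2] = -y_c, -t_c
    T_a[3, 0], T_a[3, 2] = y_c, t_c
    return events @ T_b @ R @ T_a                # Eq. (9): M_eps times M_br

def sts_yt(events, theta, tau, y_c):
    """SpatioTemporal Stretching, Eq. (10): shift timestamps, keep coordinates."""
    out = events.copy()
    out[:, 2] -= tau * np.tan(theta) * (out[:, 0] - y_c)
    return out

# Illustrative use on three events (y, x, t, 1):
ev = np.array([[10.0, 5.0, 0.20, 1.0],
               [12.0, 6.0, 0.40, 1.0],
               [11.0, 7.0, 0.60, 1.0]])
vpt_events = vpt_yt(ev, theta=np.deg2rad(5), tau=50.0, y_c=11.0, t_c=0.4)
sts_events = sts_yt(ev, theta=np.deg2rad(5), tau=50.0, y_c=11.0)
```

The \(XT\) variants follow by exchanging the roles of the \(y\) and \(x\) columns, as in Eq. (8).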
## 3 Experiments

### Implementation

Extensive experiments are performed to demonstrate the superiority of the VPT-STS method on prevailing neuromorphic datasets, including the CIFAR10-DVS (CIF-DVS) [10], N-Caltech101 (N-Cal) [11], and N-CARS [12] datasets. N-Caltech101 and CIFAR10-DVS are generated by neuromorphic vision sensors on the basis of traditional datasets, while N-CARS is collected in the real world. For the convenience of comparison, the model without VPT-STS, with the same parameters, is used as the baseline. The STBP [13] method is used to train an SNN-VGG9 network; other parameters mainly follow NDA [14]. For example, the Adam optimizer is used with an initial learning rate of \(1e-3\). The neuron threshold and leakage coefficient are \(1\) and \(0.5\), respectively. In addition, we also evaluate the performance of VPT-STS on various event representations with a ResNet9 network, including the EST [15], VoxelGrid [16], EventFrame [17] and EventCount [18] representations.

### Performance on various representations

Extensive experiments are conducted to evaluate the performance of VPT-STS on different event representations, covering SNNs and CNNs. As shown in Tab. 1, SNNs with the VPT-STS method achieve significant improvements on three prevailing datasets, and VPT-STS also performs well on the four representations commonly used by CNNs. It is worth noting that EST retains the most spatiotemporal information from neuromorphic data and thus performs best overall. Furthermore, since the samples of N-CARS are collected in the real world, its initial viewpoint diversity is already enriched compared to the other two datasets. Considering the high baseline on N-CARS, VPT-STS still further improves the robustness of SNNs.

\begin{table} \begin{tabular}{c c c c c c c} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Method} & \multicolumn{5}{c}{Accuracy (\%)} \\ \cline{3-7} & & SNNs & EventFrame & EventCount & VoxelGrid & EST \\ \hline \multirow{2}{*}{CIFAR10-DVS} & Baseline & 83.20 & 78.71 & 78.85 & 77.47 & 78.81 \\ & VPT-STS & 84.40 & 79.58 & 79.12 & 79.62 & 79.37 \\ \hline \multirow{2}{*}{N-Caltech101} & Baseline & 78.98 & 73.08 & 73.66 & 77.08 & 78.41 \\ & VPT-STS & 81.05 & 76.96 & 76.38 & 79.13 & 78.88 \\ \hline \multirow{2}{*}{N-CARS} & Baseline & 95.40 & 94.44 & 94.76 & 93.86 & 94.97 \\ & VPT-STS & 95.85 & 94.60 & 94.81 & 94.30 & 94.99 \\ \hline \end{tabular} \end{table} Table 1: Performance of VPT-STS on SNNs and CNNs with various representations.

\begin{table} \begin{tabular}{c c c c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{References} & \multicolumn{2}{c}{Accuracy (\%)} \\ \cline{3-4} & & CIF-DVS & N-CARS \\ \hline HATS [12] & CVPR 2018 & 52.40 & 81.0 \\ Dart [19] & TPAMI 2020 & 65.80 & - \\ Dspike [20] & NeurIPS 2021 & 75.40 & - \\ STBP [13] & AAAI 2021 & 67.80 & - \\ AutoSNN [21] & ICML 2022 & 72.50 & - \\ RecDis [22] & CVPR 2022 & 72.42 & - \\ DSR [23] & CVPR 2022 & 77.27 & - \\ NDA [9] & ECCV 2022 & 81.70 & 90.1 \\ \hline **VPT-STS** & - & **84.40** & **95.85** \\ \hline \end{tabular} \end{table} Table 2: Performance of VPT-STS and previous SOTAs on CIFAR10-DVS and N-CARS datasets.

### Compared with SOTAs

As shown in Tab. 2, we compare VPT-STS with recent state-of-the-art results on neuromorphic datasets. The results show that VPT-STS achieves substantial improvements over previous SOTAs. It is worth noting that VPT-STS significantly outperforms NDA, which is an ensemble of six geometric transformations.
The experimental results demonstrate the superiority of combining spatiotemporal information for data augmentation. Since VPT-STS is orthogonal to most training algorithms, it can provide a better baseline and improve the performance of existing models.

### Ablation Studies on VPT-STS

As shown in Fig. 3, the performance of VPT-STS with different rotation angles is evaluated on the N-Caltech101 dataset. It turns out that a suitable rotation angle is important for the performance of data augmentation, as it can increase data diversity without losing features.

### Analysis of VPT-STS

To gain further insight into the workings of VPT-STS, we add different strategies on top of the baseline to analyze the effective components of VPT-STS. As shown in Table 3, spatial rotation (Rotation) is performed as a comparative experiment for VPT-STS. It turns out that both VPT and STS, which include spatiotemporal transformations, are significantly better than pure spatial geometric transformations on all three datasets, which illustrates the importance of spatiotemporal transformations. While VPT and STS are implemented with operations similar to rotation, they actually improve the robustness of SNNs to different viewpoints. Furthermore, we evaluate the robustness of SNNs to viewpoint fluctuations by adding different degrees of spatiotemporal rotation to the test data. Figures 2(a) and 2(b) show the performance of the baseline model and the model trained with VPT-STS under different disturbances, respectively. The results show that the general trend of the accuracy is to decrease as the perturbation amplitude increases. In addition, Fig. 2(c) shows the difference in the accuracy reduction of VPT-STS compared to the baseline. As the perturbation amplitude increases, the difference in the accuracy reduction of the two models is less than zero, and its absolute value grows, which illustrates that the accuracy reduction of the baseline is larger than that of VPT-STS. These experimental results show that the model trained with VPT-STS generalizes better, and that VPT-STS improves the robustness of SNNs against spatial location variances.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Accuracy (\%)} \\ \cline{2-4} & CIF-DVS & N-Cal & N-CARS \\ \hline Baseline & 83.20 & 78.98 & 95.40 \\ Rotation & 83.90 & 80.19 & 95.46 \\ VPT & **84.40** & **81.05** & 95.56 \\ STS & 84.30 & 80.56 & **95.85** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of Different Strategies.

Figure 2: Performance of VPT-STS and Baseline under different perturbations.

## 4 Conclusion

We propose a novel data augmentation method suitable for events, viewpoint transform and spatiotemporal stretching (VPT-STS). Extensive experiments on prevailing neuromorphic datasets show that VPT-STS is broadly effective on multiple event representations and significantly outperforms pure spatial geometric transformations. It achieves substantial improvements over previous SOTAs by improving the robustness of SNNs to different viewpoints.
2310.07731
Multi-Robot Task Planning to Secure Human Group Progress
Recent years have seen an increasing number of deployment of fleets of autonomous vehicles. As the problem scales up, in terms of autonomous vehicles number and complexity of their objectives, there is a growing need for decision-support tooling to help the operators in controlling the fleet. In this paper, we present an automated planning system developed to assist the operators in the CoHoMa II challenge, where a fleet of robots, remotely controlled by a handful of operators, must explore and progress through a potential hostile area. In this context, we use planning to provide the operators with suggestions about the actions to consider and their allocation to the robots. This paper especially focus on the modelling of the problem as a hierarchical planning problem for which we use a state-of-the-art automated solver.
Roland Godet, Charles Lesire, Arthur Bit-Monnot
2023-10-03T18:31:03Z
http://arxiv.org/abs/2310.07731v1
# Multi-Robot Task Planning to Secure Human Group Progress

###### Abstract

Recent years have seen an increasing number of deployments of fleets of autonomous vehicles. As the problem scales up, in terms of the number of autonomous vehicles and the complexity of their objectives, there is a growing need for decision-support tooling to help the operators in controlling the fleet. In this paper, we present an automated planning system developed to assist the operators in the CoHoMa II challenge, where a fleet of robots, remotely controlled by a handful of operators, must explore and progress through a potentially hostile area. In this context, we use planning to provide the operators with suggestions about the actions to consider and their allocation to the robots. This paper especially focuses on the modelling of the problem as a hierarchical planning problem, for which we use a state-of-the-art automated solver.

## 1 Introduction

The "Battle-Lab Terre", a part of the French Army studying innovation, organized in 2022 the second version of the CoHoMa challenge [16] in order to study the collaboration between human operators and autonomous multi-robot systems. The task was to navigate through a dangerous terrain in an Armoured Vanguard Vehicle (AVV) (Figure 1(a)). The terrain included 1 m-wide red cubes (Figure 1(b)), each representing a trap said to be explosive and capable of damaging the AVV. Therefore, the human operators on board had to ensure that the AVV's environment was safe before moving it. To do this, they had to use various Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) to perform reconnaissance missions, seek out traps, and avoid or disable them. A general system architecture of these vehicles has been studied in [8]. When the number of unmanned vehicles is too large for the number of human operators (6 UGVs and 3 UAVs for 4 human operators in our case), a decision-making aid is welcome. This aid must decide which actions are to be performed, when, and by which vehicle. This problem of multi-robot task allocation has been extensively studied [11], especially when there are communication issues [2], which are ignored in this study. The model proposed in this paper is rooted in the CoHoMa challenge. At a high level, it abstracts emergency and rescue missions [7] such as flood control [15] or subterranean rescue [14], using mixed-initiative planning with automated vehicles [4]. The mission is for a group of humans to go through a hazardous zone with securable obstacles that they must avoid. Because the obstacles are unknown at the beginning of the mission, the operators have at their disposal UAVs and UGVs to explore the area, detect obstacles and secure them. The fleet of robots is typically heterogeneous: the robots have different capabilities, in order to be complementary and be able to secure the human group's movements. The obstacles will be discovered as the progression goes on, so there will be replanning steps for each event. To simplify the interactions with the robots, their locations are discretized. Indeed, the operator does not need a precise representation of the robots' locations for the planning process; the points of interest are sufficient. Therefore, a navigation graph as shown in Figure 2 is used. This graph regroups the locations of the vehicles, the locations of the obstacles, and the objectives of the mission. Moreover, the edges of the graph are configured to forbid access to some vehicles: for example, a UAV can cross a cliff where the other vehicles cannot.
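To make this concrete, such a per-vehicle-annotated navigation graph can be stored as a simple adjacency map. The following is a minimal Python sketch; the node names, durations and vehicle classes are illustrative, not taken from the challenge:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    duration: dict                               # traversal time per vehicle class
    allowed: set = field(default_factory=set)    # vehicle classes allowed on this edge

# Adjacency map: graph[u][v] describes the edge from node u to node v.
graph = {
    "L1": {"L2": Edge(duration={"UAV": 30.0, "UGV": 60.0, "H": 120.0},
                      allowed={"UAV", "UGV", "H"})},
    "L2": {"L3": Edge(duration={"UAV": 20.0},    # e.g. a cliff: only the UAV may cross
                      allowed={"UAV"})},
}

def can_traverse(vehicle_class: str, u: str, v: str) -> bool:
    edge = graph.get(u, {}).get(v)
    return edge is not None and vehicle_class in edge.allowed
```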
This way, a unique graph can be used to store all the possible displacements. Figure 3(a) shows the Human-Machine Interface (HMI) used by a human operator to visualize the environment, the real locations of the robots and the detected obstacles, and the navigation graph in more detail, on a satellite view of the terrain. The operator can interact with the map to specify events, such as an obstacle detection, and to change the mission's objectives. When the mission details have been updated, the operator can request a plan to achieve those objectives on the right side of the HMI. This plan is not sent to the robots directly. First, it is shown on the left side of the HMI (see Figure 3(b)) for potential modification, such as allocating an action to another robot, and for approval. Thus, the plan needs to be as simple as possible in order to be easily understandable by the operator. Once the plan is validated, it is sent to each robot, which is able to accomplish it. For example, consider a calculated plan consisting of a chain of elementary move actions, 'Move UAV from one waypoint to the next', repeated along a path from \(L_{1}\) to \(L_{12}\). The operator does not need to know which path the vehicle will take since it is autonomous, so the tasks can be regrouped into a single task 'Move UAV from \(L_{1}\) to \(L_{12}\)'. Since the UAV is at the beginning of the path (_i.e._ in location \(L_{1}\)), the action can be further transformed, and the plan can be simplified as 'Move UAV to \(L_{12}\)'. After validation, the task is sent to the concerned robot, the UAV, which knows how to go to the location \(L_{12}\).

Figure 1: Illustrations of the CoHoMa challenge

Although the robots are capable of detecting obstacles, they do not modify the mission on their own, because their detection cannot be perfectly accurate: they add several false obstacles next to the real one. To compensate for this, the detected obstacles are displayed in the HMI by grouping nearby obstacles together, with a customizable threshold, and operator approval is required to add the obstacle to the mission problem. This approach is generalized to all possible events. In this way, no uncertainty is taken into account in the planning process; it is handled upstream by the operator who has validated the event. Finally, as two events can occur at the same time for two different robots, replanning is not triggered automatically after each event but only when the operator requests it. This paper will begin by presenting the necessary background for chronicle modelling. Next, a model that is as simple as possible for a non-expert user, called the _natural_ model in the following, will be proposed to show the limitations of simple models. Finally, some optimizations of this first model will be introduced, and the time needed for the planner to find a solution will be compared.

Figure 3: HMI used by the operator to interact with the robot fleet

Figure 2: Example of a navigation graph where (i) the vehicles (UAV, UGV, H for Humans) are in L1 (ii) the objective is for H to go to L8 (iii) there are undetected obstacles in L2, L5, L6, L9 and L11

## 2 Background

To model the planning problem, we wish to exploit the hierarchical nature of the task, where high-level tasks to accomplish are specified by the operator and must then be refined into sets of primitive actions executable by the autonomous vehicles. A Hierarchical Task Network (HTN) [3] can represent this kind of decomposition and is easily defined with the HDDL language [13]. This language, however, lacks the ability to express temporal properties of the problem, such as the duration of actions or deadlines.
Instead, we rely on the formalism of chronicles [10], which supports the specification of rich temporal planning problems. In particular, we exploit their extension to hierarchical task networks, which can represent combined temporal and hierarchical problems [12]. However, this formalism does not come with an input language that can represent both aspects. A **type** is a set of values that can be either domain constants (_e.g._ the type \(\mathit{Vehicle}=\{\,V_{1},V_{2}\,\}\) defines two vehicle objects \(V_{1}\), \(V_{2}\)) or numeric values (_e.g._ **timepoints** are regularly spaced numerical values describing absolute times when events occur). The types can present a hierarchy, _e.g._ the type \(\mathit{Robot}\) is a subtype of \(\mathit{Vehicle}\), meaning that a \(\mathit{Robot}\) is a \(\mathit{Vehicle}\), but the reverse is not necessarily true. When there is a type hierarchy, an abstract root type named \(\mathit{Object}\) is defined in order to have a decomposition tree. A **state variable** describes the evolution of an environment characteristic over time. Generally, it is parametrized by one or multiple variables, and its value depends on the values of these variables: _e.g._ \(\mathit{loc}(v)\) denotes the evolution of the location of the vehicle \(v\), and its value will be \(\mathit{loc}(V_{1})\) or \(\mathit{loc}(V_{2})\) depending on the value taken by \(v\) of type \(\mathit{Vehicle}\). A **task** is a high-level operation to accomplish over time. Generally, it is parametrized by one or multiple variables. It is of the form \([s,e]\mathit{task}\,(\,x_{1},\ldots,x_{n}\,)\), where \(s\) and \(e\) are timepoints denoting the start and end instants when the task occurs and each \(x_{i}\) is a variable. For instance, \([2,4]\mathit{Move}(V_{1},L_{2})\) denotes the operation of moving the vehicle \(V_{1}\) to the location \(L_{2}\) during the temporal interval \([2,4]\). The set of available tasks of the planning problem is \(\mathcal{T}\). A **chronicle** defines the requirements of a process in the planning problem. A chronicle is a tuple \(\mathcal{C}=(\,V,T,X,C,E,S\,)\) where:

* \(V\) is the set of _variables_ of the chronicle. This set is split into a set of temporal variables \(V_{T}\), whose domains are timepoints, and a set of non-temporal variables \(V_{O}\).
* \(T\in\mathcal{T}\) is the parametrized _task_ achieved by the chronicle. The start and the end instants of the task correspond to the start and the end instants when the chronicle is active; this is its _active temporal interval_.
* \(X\) is a set of _constraints_ over the variables of \(V\). The chronicle cannot be _active_ (defined below) if at least one constraint is not respected over its active temporal interval.
* \(C\) is a set of _conditions_, with each condition of the form \([s,e]\mathit{var}\,(\,x_{1},\ldots,x_{n}\,)=v\) where \((\,s,e\,)\in V_{T}^{2}\) such that the temporal interval \([s,e]\) is contained in the active temporal interval of the chronicle, \(\mathit{var}\,(\,x_{1},\ldots,x_{n}\,)\) is a parametrized state variable with each \(x_{i}\in V_{O}\), and \(v\in V_{O}\). A condition is verified if the state variable \(\mathit{var}\,(\,x_{1},\ldots,x_{n}\,)\) has the value \(v\) over the temporal interval \([s,e]\). The chronicle cannot be active if at least one condition is not verified.
* \(E\) is a set of _effects_, with each effect of the form \([s,e]var\left(x_{1},\ldots,x_{n}\right)\gets v\) where \(\left(s,e\right)\in V_{T}^{2}\) such that the temporal interval \([s,e]\) is contained in the active temporal interval of the chronicle, \(var\left(x_{1},\ldots,x_{n}\right)\) is a parametrized state variable with each \(x_{i}\in V_{O}\), and \(v\in V_{O}\). An effect states that the state variable \(var\left(x_{1},\ldots,x_{n}\right)\) takes the value \(v\) at time \(e\). The temporal interval \(]s,e[\) is the moment when the state variable is transitioning from its previous value to its new value. During this transition, the value of the state variable is undetermined.
* \(S\) is a set of _subtasks_, where each subtask is a task in \(\mathcal{T}\) that must be achieved by another chronicle.

A chronicle can be **active** or not, defining whether the chronicle is present in the final solution. If the chronicle is not active, then the planner must find another chronicle achieving the same task to replace it. We make the distinction between three types of chronicles: the **action chronicle**, which has effects but no subtasks (_i.e._ \(S=\emptyset\)); the **method chronicle**, which has subtasks but no effects (_i.e._ \(E=\emptyset\)); and the **initial chronicle**, encoding the initial state as effects and the objectives of the problem as conditions and subtasks; it is the only one which does not have a task \(T\) to achieve (_i.e._ \(T=\emptyset\)). As an alternative to specifying chronicles manually, the AIPlan4EU project offers a Python API for modelling different kinds of planning problems, notably temporal and hierarchical ones. The corresponding problems map almost immediately to the chronicles defined above. The Python API for constructing planning problems is especially useful in our case, where new problems are defined online as the situation evolves during the mission. Footnote 1: [https://www.aiplan4eu-project.eu/](https://www.aiplan4eu-project.eu/) Footnote 2: [https://github.com/aiplan4eu/unified-planning](https://github.com/aiplan4eu/unified-planning)

## 3 Initial Model

According to the mission specification, the humans need to be able to move, while the autonomous vehicles need to move, explore to detect obstacles, and secure those obstacles. This way, a list of high-level tasks appears:

* \([s,e]goto(v,l)\): the vehicle \(v\) (humans, UAV or UGV) goes to the location \(l\)
* \([s,e]explore(r,f,t)\): the robot \(r\) (UAV or UGV) explores the path from the location \(f\) to \(t\)
* \([s,e]secure(r,o)\): the robot \(r\) secures the obstacle \(o\)

From this list, one can easily extract the type hierarchy shown in Figure 4. The _Obstacle_ type allows handling different kinds of obstacles for the _secure_ task: _e.g._ in a fire-rescue mission, we could imagine using different types of extinguishers (water, CO\({}_{2}\) or powder), each one for a different type of obstacle.

Figure 4: Type hierarchy

### Goto task

A vehicle needs to be able to go from one location to another. However, a human, a UAV and a UGV do not move the same way: a human will _walk_ while a UAV will _fly_ and a UGV will _roll_ on land.
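Before listing the chronicles symbolically, here is a minimal, library-agnostic Python sketch of the chronicle tuple \((V,T,X,C,E,S)\) from Section 2. It is not the unified-planning API, and the string encodings are deliberate simplifications; the \(fly\) chronicle listed just below is used as an instance:

```python
from dataclasses import dataclass, field

# Minimal sketch of the chronicle tuple (V, T, X, C, E, S) from Section 2.
# Strings stand in for real variables and state-variable expressions,
# which is a deliberate simplification; this is not the unified-planning API.
@dataclass
class Chronicle:
    variables: set                                   # V
    task: str | None                                 # T (None for the initial chronicle)
    constraints: list = field(default_factory=list)  # X
    conditions: list = field(default_factory=list)   # C: (start, end, state_var, value)
    effects: list = field(default_factory=list)      # E: (start, end, state_var, value)
    subtasks: list = field(default_factory=list)     # S

    def is_action(self) -> bool:    # action chronicle: effects, no subtasks
        return bool(self.effects) and not self.subtasks

    def is_method(self) -> bool:    # method chronicle: subtasks, no effects
        return bool(self.subtasks) and not self.effects

# The fly action chronicle listed below, encoded in this structure:
fly = Chronicle(
    variables={"s", "e", "a", "f", "t"},
    task="fly(a, f, t)",
    constraints=["f != t", "e - s == dur(a, f, t)"],
    conditions=[("s", "s", "loc(a)", "f"),
                ("s", "e", "path(f, t)", True)],
    effects=[("s", "e", "loc(a)", "t")],
)
assert fly.is_action() and not fly.is_method()
```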
Therefore, we obtain the three following action chronicles: \[\begin{array}{ll}[s,e]walk(h,f,t)&\\ \text{variables:}&\text{ Humans }h\\ &\text{Locations }f\text{ (from) and }t\text{ (to)}\\ \text{task:}&[s,e]walk(h,f,t)\\ \text{constraints:}f\neq t&\\ &e-s=dur(h,f,t)\\ \text{conditions:}&[s,s]loc(h)=f\\ &[s,e]path(f,t)=\top\\ &[s,e]explored\ air(f,t)=\top\\ &[s,e]explored\ ground(f,t)=\top\\ &[s,e]obstacle(f,t)=\bot\\ \text{effects:}&[s,e]loc(h)\gets t\\ &[s,e]fly(a,f,t)\\ &\text{variables:}&\text{ UAV }a\\ &\text{Locations }f\text{ (from) and }t\text{ (to)}\\ \text{task:}&[s,e]fly(a,f,t)\\ \text{constraints:}f\neq t&\\ &e-s=dur(a,f,t)\\ \text{conditions:}&[s,s]loc(a)=f\\ &[s,e]path(f,t)=\top\\ \text{effects:}&[s,e]loc(a)\gets t\\ \end{array}\] The chronicle \([s,e]roll(g,f,t)\) is similar to the chronicle \([s,e]fly(a,f,t)\) by replacing the UAV \(a\) by the UGV \(g\). However, the distinction is made because in a more detailed model it could be more conditions and effects making a difference between the air and ground movements. The Figure 5 shows a possible decomposition of the \([s,e]goto(v,t)\) task made by a user. There are four possibilities for the vehicle \(v\) to go to the location \(t\): * It is already at the location, _i.e._\(loc(v)=t\), then there is no operation (_Noop_) to do. The associated chronicle is detailed in the Figure 5(a). * It is a UAV, then it flies to another location and retry to go to \(t\) from this new location. The recursion will end when \(loc(v)=t\) with the _Noop_ method. The associated chronicle is detailed in the Figure 5(b). * In the same way as UAVs, the UGVs and humans will move and try again. The associated chronicle are similar to the one of _UAV_. ### Explore task The robots need to be able to explore an edge the navigation graph in order to detect the obstacles and secure the path for the humans. To explore the edge going from the location \(f\) to the location \(t\), the robot \(r\) needs to be either in location \(f\) or location \(t\). Therefore, there are two methods to explore an edge (shown in Figure 7): * going to the location \(f\) then explores from \(f\) to \(t\): _forward_ method * going to the location \(t\) then explore from \(t\) to \(f\): _backward_ method These two methods can be accomplished by a UAV with the _air_ method or by a UGV with the _ground_ method. The distinction between the two associated actions is that one effect of the _explore air_ action will be _explored air_\((f,t)\leftarrow\top\), and for the _explore ground_ action it will be _explored ground_\((f,t)\leftarrow\top\). These two state variables are used in conditions of the _walk_ action in order for the humans to move securely. As for the movement actions, the duration of an exploration is based on the state variable \(dur(v,f,t)\). Figure 5: Natural decomposition of the goto task where (i) rectangles are action chronicles (ii) diamonds are method chronicles and (iii) ovals are tasks to achieve Figure 6: Some method chronicles used to decompose the goto task Figure 7: Natural decomposition of the explore task ### Secure task Finally, the robots need to be able to secure detected obstacles so that they can be crossed by humans. Because there are several types of obstacles (see Figure 4), there will be several methods to secure them as shown in the Figure 8. We made the assumption that the robot \(r\) needs to be close to the obstacle \(o\) to secure it for every method. 
In the case where this is not needed, _e.g._ in a military context such as CoHoMa II where some obstacles representing enemy troops could be secured from a distance with artillery fire, the associated _goto_ task should be removed. For the following simulations, we consider only one way to secure an obstacle, with a duration of 15 minutes.

### Initial State and Objectives

Once the different high-level tasks have been defined, an initial chronicle \([s,e]initial\) needs to be specified to encode the initial state and the objectives. The initial chronicle starts at the timepoint \(0\) and ends at the timepoint \(e\); this timepoint can be used to specify objectives. Its effects encode the initial state, _e.g._ the starting locations of the vehicles, and the objective is encoded as the subtask \([s_{1},e_{1}]goto(H,L8)\). Next, the three subtasks \([s_{2},e_{2}]freedom(UAV)\), \([s_{3},e_{3}]freedom(UGV)\), and \([s_{4},e_{4}]freedom(H)\) can be added to the initial chronicle's subtasks. Note that the \(freedom(H)\) task only allows the humans to go wherever they want; they cannot do exploration even if it is present in the decomposition of the task. This is caused by the type hierarchy and the definition of the \(explore\) task, which only takes robots as parameters. In general, these freedom tasks allow the planner to insert some classes of actions in the plans regardless of the rest of the hierarchy. In this sense, it simulates in the HTN the notion of task insertion [9], where any action can be inserted along the hierarchy. It is in particular close to the _task-independent_ actions in FAPE [6], where only a subset of the actions are allowed to be inserted arbitrarily.

### First Planning Results

Considering the ground truth shown in Figure 2, the vehicles are in the location \(L_{1}\) and the humans need to go to the location \(L_{8}\), but there are undetected obstacles on the path. Because the terrain is not fully known at the beginning of the mission, a replanning step is needed when the operator adds some details to the mission, _e.g._ when an obstacle is detected by a robot. Initially, the knowledge of the terrain is empty. Therefore, the decision-making aid has the navigation graph shown in Figure 10(a) and will propose the associated plan. This plan is to take the shortest route to the goal, with the robots ahead of the humans to secure the path. The planning operation has been done with the Aries planner [5]; it took 333.59 s to find the optimal solution. During the execution of that plan, the robots detect an obstacle at the location \(L_{5}\) (see Figure 10(b)). A new plan is proposed based on the new knowledge of the terrain. The planner believes it is quicker to go back and explore a new route than to secure the current obstacle. This plan has been found in 328.25 s. Finally, a new obstacle is discovered at the location \(L_{2}\) (see Figure 10(c)). Again, a new plan is proposed based on the new knowledge of the terrain. This time, it is faster to secure the current obstacle and go to the target. The planner took 308.72 s to find this plan.

## 4 Optimizations

While reasonable, planning times of a handful of minutes are far from ideal in a mixed-initiative planning context, especially when task durations are shorter than a minute. To reduce this time, one could ask for the first solution found by the planner (instead of an optimal solution), at the risk of handing out bad-quality solutions. Instead, in this section, we introduce some modifications that can be brought to the planning model in order to speed up the planning process.
Figure 9: Natural decomposition of the freedom task

### Recursive Tasks

To find a solution, the planner needs to scan the search tree and prune the branches that lead to no solution. Therefore, the model should use as few recursive tasks as possible in order to reduce the size of the search tree. Consider the _goto_ task (see Figure 5) and \(n>0\) the decomposition depth, _i.e._ the number of times _goto_ leaves are replaced by the decomposition. Note that if a leaf is not decomposed, the associated method is removed from the tree since it will not be applicable. Then, the size of the tree, _i.e._ the number of nodes, is \(2+3\times 4^{n}=\mathcal{O}(4^{n})\), which is **exponential**. However, one can notice that all the methods have the same pattern: there is an action followed by the recursive call to the _goto_ task. Then, the actions can be grouped in a _goto once_ task, and the _goto_ task can be moved outside in order to be present only once, as shown in Figure 11. With this new decomposition and \(n>0\) the decomposition depth, the size of the tree is \(2+10\times n=\mathcal{O}(n)\), which is **linear**. This method can be applied to all the tasks. For \(goto\), \(explore\), and \(secure\), it is the call to \(goto\) which will be extracted. For the \(freedom\) task, it is the recursive call to \(freedom\).

Figure 10: Terrain knowledge with their associated plan to solve the problem

### Complete Navigation Graph

One of the mission's assumptions is that the calculated plan is not intended directly for the robots, but for a human operator to approve, so the plan must be as simple as possible, which translates in particular into the aggregation of movement actions. The edges of the navigation graph can be set to prohibit the passage of certain vehicles. Therefore, one could make the graph complete, _i.e._ each node is connected to all the others, and make an edge allowed for a given vehicle if:

* the vehicle is a robot, since humans need to know exactly where they are going
* there is a path in the initial graph corresponding to that new edge
* the vehicle is allowed to go through all the edges of this path
* the time taken by the vehicle to pass this new edge is the time taken to cover the associated path

This way, the action of going from \(L_{1}\) to \(L_{9}\) for a robot can be done with only one decomposition of the \(goto\) task, rather than four decompositions without this navigation graph manipulation. As a result, the search tree will be smaller, and the solution will be found more quickly.

### Objectives as Conditions

The initial chronicle defines the objective as a subtask. That means the given subtask needs to be accomplished. However, the subtasks also contain three \(freedom\) tasks in order for the vehicles to do whatever they want to complete the objective. Looking closer, one can see that this allows many opportunities to achieve the objective \(goto(H,L8)\), which are all the possible combinations of the two tasks \(goto(H,L8)\) and \(freedom(H)\), both allowing the humans to move.

Figure 11: Optimized decomposition of the goto task

To avoid that, the objective can be encoded in another way. The objective is not for the humans to go to the location \(L_{8}\), but to be at the location \(L_{8}\) at the end, _i.e._ \(loc(H)=L_{8}\). Therefore, the initial chronicle can be updated as shown below.
\[\begin{array}{ll}[s,e]initial&\\ \texttt{constraints:}&s=0\\ \texttt{effects:}&[s,s]loc(H)\gets L1\\ &[s,s]loc(UAV)\gets L1\\ &[s,s]loc(UGV)\gets L1\\ &\cdots\\ \texttt{conditions:}&[e,e]loc(H)=L8\\ \texttt{subtasks:}&[s_{1},e_{1}]freedom(H)\\ &[s_{2},e_{2}]freedom(UAV)\\ &[s_{3},e_{3}]freedom(UGV)\end{array}\]

With this new encoding, the only way for the humans to be at the location \(L_{8}\) is to use the \(goto(H,L8)\) task hidden in the \(freedom(H)\) subtask. As a result, the search tree will be reduced.

### Final Planning Results

Considering the same mission studied in the previous section, the planner found the same plans as shown in Figure 10. This demonstrates that the proposed optimizations do not change the problem represented by the model: both versions are equivalent. However, as shown in Table 1, the planner is 95% faster with these optimizations. As these optimizations are independent of the domain, the same results should be observed in other use cases.

## 5 Conclusion

In this paper, we presented a planning-based decision-making aid that exploits a hierarchical task planner for the control of a fleet of robots in an exploration scenario. A first natural model of the problem has been proposed. We then proposed some domain-agnostic optimizations of this initial model, resulting in the planner being at least 20 times faster to provide an optimal solution. Some assumptions have been made in the current model, notably that the robots' battery level is infinite. It could be interesting to be able to represent these kinds of resources in order to accomplish more complex missions. Moreover, the planner is optimizing the makespan of the plan, _i.e._ it tries to make the plan as short as possible in time. It could be useful to associate a cost with each action in order to optimize the global cost of the plan, _i.e._ the sum of the costs of the actions present in the plan. This way, it could be possible to minimize, for example, the total distance travelled by the human group.

\begin{table} \begin{tabular}{l l l l} \hline \hline & **Step 1** & **Step 2** & **Step 3** \\ \hline **Natural model** & 333.59s & 328.25s & 308.72s \\ **Optimized model** & 13.61s & 14.17s & 9.19s \\ \hline **Global reduction** & 95.9\% & 95.7\% & 97.0\% \\ \end{tabular} \end{table} Table 1: Planning time with and without the proposed optimizations
2302.08613
Drive Right: Promoting Autonomous Vehicle Education Through an Integrated Simulation Platform
Autonomous vehicles (AVs) are being rapidly introduced into our lives. However, public misunderstanding and mistrust have become prominent issues hindering the acceptance of these driverless technologies. The primary objective of this study is to evaluate the effectiveness of a driving simulator to help the public gain an understanding of AVs and build trust in them. To achieve this aim, we built an integrated simulation platform, designed various driving scenarios, and recruited 28 participants for the experiment. The study results indicate that a driving simulator effectively decreases the participants' perceived risk of AVs and increases perceived usefulness. The proposed methodologies and findings of this study can be further explored by auto manufacturers and policymakers to provide user-friendly AV design.
Zhijie Qiao, Helen Loeb, Venkata Gurrla, Matt Lebermann, Johannes Betz, Rahul Mangharam
2023-02-16T22:35:08Z
http://arxiv.org/abs/2302.08613v1
# Drive Right: Autonomous Vehicle Education Through an Integrated Simulation Platform ###### Abstract Autonomous vehicles are being rapidly introduced into our lives. However, public misunderstanding and mistrust have become prominent issues hindering the acceptance of these driverless technologies. The primary objective of this study is to evaluate the effectiveness of a driving simulator to help the public gain an understanding of autonomous vehicles and build trust in them. To achieve this aim, we built an integrated simulation platform, designed various driving scenarios, and recruited 28 participants for the experiment. The study results indicate that a driving simulator effectively decreases the participants' perceived risk of autonomous vehicles and increases perceived usefulness. The proposed methodologies and findings of this study can be further explored by auto manufacturers and policy makers to provide user-friendly autonomous vehicle design. Keywords: Autonomous Driving, Human Factor, Simulation, Education

## I Introduction

While billions of dollars have been invested in autonomous driving (AD) technology, little work has been done to prepare the public for this paradigm shift. As auto manufacturers press forward with the introduction of ever more advanced driver assistance features, the concept of AD still sounds unsettling to many people. According to a survey of 1,200 adult drivers conducted by the Partners for Automated Vehicle Education (PAVE), 48 % of Americans said they "would never get into a taxi or ride-share vehicle that was driven autonomously", while 20 % believed autonomous vehicles (AVs) would never be safe [1]. Another survey conducted by AAA of over 1,000 U.S. adults found that 54 % of all participants were afraid to ride in an autonomous vehicle, while 32 % were unsure about it [2]. AVs have the potential of saving millions of lives from needless traffic accidents and could drastically reduce the costs associated with the transportation industry. However, public mistrust and drivers' reluctance to relinquish control have become prominent issues hindering the acceptance of these driverless technologies [3]. Researchers from MIT AgeLab found that most people, including the older generation, were comfortable with the idea of technological innovation in the driving industry. However, improved training methods and preferred training strategies played an important role in the eventual adoption of the technology [4]. Further, drivers must receive information that helps them explain and predict the vehicle's behavior [5]. To properly demonstrate AVs, one would need to sit in an actual car, with an experienced human supervisor monitoring the safety and explaining the vehicle's operation. However, such demonstrations can be expensive and take a considerably long time. Moreover, many people may not feel comfortable stepping into an autonomous vehicle before they fully trust its performance [6]. In such circumstances, a driving simulator provides an alternative approach to demonstrate AVs in a safe and controllable environment. Driving simulators are also time-efficient, cost-effective, and can be used in a variety of places. In this study, we developed an autonomous vehicle simulation and demonstration system. Our system leveraged the latest simulation tools and autonomous driving platforms: SVL Simulator [7, 8], Baidu Apollo [9], and Autoware Auto [10, 11].
This integrated system was designed to improve the public's understanding of AD technology and help them build trust. The paper is structured as follows: Section II provides a literature review of the existing efforts made by the academic community to explore various user-AV interactions. Section III describes our simulator system design and the test scenario development. Section IV presents the procedures of our human study. Section V presents the results, and Section VI discusses the implications. Finally, Section VII summarizes the findings and provides insights for future development.

## II Literature Review

_Information Delivery:_ extensive research has been performed to evaluate the kind of AD information that should be provided to passengers and in what form it should be provided. Koo et al., in their human-machine interaction research, discovered that the "why" information explaining the vehicle's behavior is more important than the "how" information reaffirming the vehicle's action [12]. Morra et al. found that the AD system should display a complete picture of the vehicle's surrounding environment, including other vehicles, pedestrians, and traffic indicators. This informative approach, although more cognitively demanding, could contribute to a less stressful riding experience [13]. Similar results have been found in Haeuslschmid's AD trust research. In their study comparing three visualization methods, "A world in miniature" was preferred by most people and received the highest score of trust. According to the researchers, this method presented the "car's perception of the surroundings, its interpretation, and its actions in a clear and competent way" [14]. Besides providing reasoning to justify the operation of autonomous vehicles, researchers have conducted experiments to exploit the mental and psychological benefits of AV system design. For instance, Sun and Zhang proposed their synesthetic-based multimodal interaction (SMBI) model, which utilized voice and lighting to raise the driver's awareness. Under emergency conditions, the vehicle's speech prompt changes from low to high frequency, and the ambient light changes from blue to red. Sun and Zhang found the drivers were more likely to hold the steering wheel and pay attention to the road when experiencing this sudden ambient change. Drivers also reported higher scores towards this interactive system design in terms of trust, technical competence, situational management, and perceived ease of use [15]. _Anthropomorphism:_ another important aspect of AV-human interaction is the concept of anthropomorphism. In an anthropomorphic design, the vehicle is imbued with humanlike characteristics, motivations, intentions, or emotions [16]. Providing human-like features is a common approach to increase trust and acceptance in non-human agents [17]. Research has shown that adding a humanized conversational interface alongside the traditional graphics interface could increase system transparency and portray the vehicle as "smart" [18]. It is also suggested that representing the vehicle's symbolic indicators (e.g., right turn, go straight, accelerate) as animated facial movements would increase user liking and trust of the system [19]. All these experimental results indicate that anthropomorphic design could be an effective approach to help users build trust and confidence in AVs. _Simulated Riding Experience_: the most efficient way for users to gain an understanding of AVs is to ride in an actual vehicle and interact with it.
However, these demonstrations are expensive and time-consuming. Moreover, real-road tests pose potential physical and psychological risks to the participants. In response to this challenge, various driving simulators have been introduced to test and demonstrate AVs in a safe and controllable environment. For instance, Dosovitskiy et al. designed an open urban driving simulator called CARLA that aimed to support the "development, training, and validation" of AVs [20]. Best and his colleagues built an AD simulation platform, "AutonoVi-Sim", that focused on weather, sensing, and traffic control [21]. Manawadu and his team used a simulator to study the driving performance of humans versus AVs. They found that, on average, the autonomous vehicle decreased the task completion time, rate of collision, and driver's mental workload [22]. Building an AD system that follows conventional traffic norms and driving styles has also proven to be important. Drivers and passengers expect AVs to make intelligent decisions and act like human agents. For instance, a study showed that users reported higher perceived risk when an autonomous vehicle drove slowly on a clear day and lower perceived risk when the same vehicle drove slowly on a snowy night [23]. Furthermore, Sun et al. demonstrated that building a personalized vehicle that mimics the driver's behavior could reduce the perceived risk and increase perceived usefulness. In their study, the drivers' driving data was recorded to build a personalized AD system. In the subsequent experiments, the personalized AV received higher scores in terms of trust, comfort, and situational awareness compared to the standardized AV [24]. A clear limitation of the driving simulator is the users' awareness of the simulation environment and their potential bias during the engagement. To create a real-road autonomous driving experience, researchers at Stanford University introduced their RRADS platform: "A Real Road Autonomous Driving Simulator". In their setup, a driving wizard (a human driver hidden from passengers) mimicked the control of an autonomous vehicle, and an interaction wizard (a researcher who assisted the participants) explained the vehicle's operation. While this innovative design provided some real-road AD experience, the wizard's driving style clearly affected the passengers' attitude towards AVs [25].

## III Methods

_SVL Simulator_: the SVL Simulator is an open-source driving simulator designed to facilitate the development of AD research. Built with the game engine Unity, the simulator allows the construction of 3D digital twins of the real world using point cloud and image data. The simulator supports multiple real-time sensor inputs and outputs, including the Camera, LiDAR, Radar, GPS, and IMU. Environmental parameters, such as time of day, weather, vehicles, and pedestrians, can also be adjusted using a Python API. With internal bridge support such as the Robot Operating System (ROS), ROS2, and CyberRT, the simulator can be connected to popular AD platforms like Apollo and Autoware. _Logitech G920:_ the Logitech G920 is a Driving Force steering wheel and pedal set. Its full-throttle, full-control capability makes it suitable for various driving tasks. In our simulation, this steering wheel and pedal set is connected to the SVL Simulator to provide a high-fidelity driving experience. _Apollo 5.0:_ Apollo is an open-source AD platform developed by the leading technology company Baidu.
It is an industrially used Level 4 AD platform as defined by the Society of Automotive Engineers (SAE) [26]. Apollo's Minibus project entered mass production in 2018, and its Robotaxi project is the first attempt to use AD in the commercial transportation business. The latest Apollo 5.0 version (Apollo 6.0 is available, but still under modular testing) includes a set of integrated modules, such as a Map Engine, Localization, Perception, Prediction, Planning, and Control. These modules coordinate with each other to provide a safe and reliable driving experience. Apollo's CyberRT bridge allows for a connection to the SVL Simulator for AD testing and demonstration. Its web-based Dreamview interface displays information in real time for user-friendly visualization and debugging. _Autoware Auto_: Autoware is an open-source software stack introduced by the Autoware Foundation. Its latest version, Autoware Auto, aims to address the problem of valet parking and autonomous cargo delivery. Autoware uses LiDAR for vehicle localization and motion planning. Its hardware and software have been successfully integrated in a Lexus vehicle, which could then perform valet parking via mobile application control. In our implementation, Autoware is connected to the SVL Simulator using the ROS2 bridge and controlled using the rviz2 graphics interface. _Testing Scenarios_: for the research study, five testing scenarios (Figure 1) were developed in the SVL Simulator using different maps and environmental settings. The first four scenarios were successfully completed using Apollo, and the valet parking scenario was achieved using Autoware. The testing vehicle was a 2017 Lincoln MKZ. To run AD, two gaming laptops were used, both equipped with an Intel i7 Processor and an Nvidia GTX 1080 Graphics Card. The operating system was Ubuntu 20.04. One laptop was set to run the SVL Simulator, and the other the AD platform. The two machines were connected using an Ethernet cable for real-time communication. This hardware setup provided sufficient computing power to ensure that both the simulator and the AD platform ran smoothly.

Figure 1: Practice driving field and five testing scenarios of the SVL Simulator

**Vehicle Following**: the ego vehicle needs to follow a reckless vehicle on a straight, single-lane road. The reckless vehicle drives at a non-constant speed and occasionally makes sudden stops. The ego vehicle must respond quickly to the front vehicle's speed changes in order to avoid a rear-end collision. **Lane Block**: the ego vehicle drives on the right lane of a two-lane road, but is soon blocked by an illegally stopped vehicle. The ego vehicle must slow down first, yield to passing vehicles on the left lane, and then switch lanes to proceed driving. This scenario is simulated on a dark night, which makes it difficult to see the illegally stopped vehicle from afar. **Pedestrian Jaywalking**: the ego vehicle must brake quickly to avoid a jaywalking pedestrian emerging from the side of the road. Time-to-collision is set at 2.6 seconds if no action is taken and the ego vehicle maintains its speed. This scenario is set on a rainy and foggy day, which increases the difficulty of seeing the pedestrian. **City Traffic**: the ego vehicle needs to drive through an urban area with high-density city traffic. While driving, the ego vehicle must follow all traffic rules and indicators. The driving route includes a four-way intersection with traffic lights and an unprotected left turn.
The simulation map is a digital twin of a real street block in Sunnyvale, CA. **Valet Parking**: the ego vehicle starts at the drop-off location. It needs to drive through the parking lot, reach the designated parking spot, and perform reverse parking. While driving, the ego vehicle must avoid hitting other vehicles and pedestrians. The simulation map is a digital twin of a real parking lot in San Jose, CA. _Participants_: we recruited 28 participants via email and social media. The only requirement to enter the study was the possession of a valid driver's license. Our participant group consisted of 16 males and 12 females and had a mean age of 25.2 years. The participants were not compensated but informed that they could potentially enhance their understanding of AVs. This study was approved by the Institutional Review Board (IRB), and all participants gave their informed written consent. In the human study, participants were asked to drive through the above scenarios manually using the steering wheel and pedal set. The scenarios were presented in the third-person view, as shown in the above figures. The steering wheel and pedals were set up with proper force feedback, mimicking those of a real vehicle, and their control inputs were processed quickly by the simulator without any visible delay. In addition, the participants could use buttons on the steering wheel to activate common features such as the headlights, the taillights, the turn signals, or the gear shift. After the participants completed each scenario, we demonstrated how AVs approached the same situation using Apollo or Autoware. The primary objective of running this driving experiment is to help human drivers better evaluate the performance of AVs. By taking part in the experiment, participants were made aware of how challenging some scenarios can be for human drivers. By watching the autonomous vehicle smoothly handle these exact situations, the participants gain an understanding of the value of AD technology. The assumption is that this increased awareness will translate into a gain of trust and confidence. Notably, we are not trying to demonstrate that AVs' performance is superior to that of human drivers, a fact that could only be established with a robust analysis of billions of miles on the road. We only attempt to show that AVs are capable of handling complex traffic conditions that can be challenging for human drivers.

Figure 2: SVL Simulator (left) and Apollo Dreamview (right). Apollo Dreamview displays the ego vehicle's routing information, planning and control graph, sensing and prediction of moving objects, and responses to traffic indicators.

To the best of our knowledge, this is the first attempt to demonstrate industrially employed AD platforms for educational purposes. We used both Apollo and Autoware because their interfaces with the SVL Simulator were good at demonstrating different features of an autonomous system. While both platforms supported advanced sensors such as the Camera, LiDAR, GPS, and IMU, Apollo's Dreamview interface displays the detailed object information captured by the camera (Figure 2), while Autoware's rviz2 interface shows a comprehensive image of the vehicle's LiDAR datapoints (Figure 3). Another notable point is that both Apollo and Autoware contain integrated hardware and software support, which means the same system can be run either on a real vehicle or in the simulation.
This dual nature implies that the performance of the vehicle in the simulation has implications for the real world. This provides support for our study, as the participants could reflect on their simulator experience and gain insight into how AVs perform in real traffic. This connection has been further validated by the work of Fremont et al. In their research, various AV testing scenarios were set up at the GoMentum Station, and their digital twins were built and imported into the LGSVL Simulator (the previous version of the SVL Simulator). AD was implemented using Apollo 3.5, and the researchers found that "62.5 % of unsafe simulated test cases resulted in unsafe behavior on the track... 93.3 % of safe simulated test cases resulted in safe behavior on the track" [27]. While the simulation result did not exactly match real-world testing, this experiment clearly indicates that simulations can be mapped to the real world.

Figure 3: Autoware and SVL Simulator synchronization demonstration

## IV Experiment

_Survey_: we created a two-part survey to evaluate the effectiveness of our simulator. The first part was to be completed before the simulator experiment, and the second part afterwards. At the beginning of the first part, participants were asked to assess their understanding of AVs. For this question, we provided the following choices: "I hear about it from the news and social media", "I know the vehicle uses sensors and artificial intelligence but have no understanding of the technology", "I have some understanding of the different types of data collected by the sensors on an autonomous vehicle", "I have some understanding of the software and algorithms running on an autonomous vehicle". Then, participants were asked to answer twelve quantitative questions that measured perception of AVs from six perspectives: perceived risk, perceived usefulness, perceived ease-of-use, technical competence, situational management, and behavioral intention (Table 1). The questions were adapted from the AV acceptance model [5, 28] and used the seven-point Likert scale. The second part was to be completed after the simulator demonstration. This part repeated the twelve quantitative questions asked earlier. This second survey allowed us to assess the evolution of opinion after the simulator experiment. Further, participants were asked to evaluate the usefulness (using the seven-point scale) of the driving information that was being displayed to them during the demonstration. The information included "the AV's planned routing on the map", "the AV's planning and control graph", "the AV's sensing of vehicles and pedestrians", "the AV's prediction of vehicles and pedestrians", and "the AV's sensing of traffic indicators". The rest of the survey was mainly qualitative, focusing on the participants' opinions on the use of a driving simulator, their satisfaction with the AV's performance, and their remaining questions about autonomous vehicles. _Procedure:_ at the beginning of the study, participants were given a short introduction to the SVL Simulator. Then, they practiced driving through it using the steering wheel and pedal set. After they felt comfortable with the controls, participants were asked to drive through five pre-defined scenarios. For each scenario, we briefly introduced the map, environmental settings, and the route they needed to follow. Participants were not informed about the traffic emergency that would happen. Instead, they were told to pay attention to the road and drive safely.
Each testing scenario took about two minutes, and the participants' driving performance was recorded. After the participants completed each scenario task, we demonstrated how AVs handled the same situation. For the Apollo demonstration, we first mentioned that it was an industrially used AD platform and introduced the real-world Apollo projects. Then, we explained to the participants that our system is the same one running on a real vehicle, and the AV's performance in the simulator reflects what could happen in the real world. In the Apollo Dreamview interface, participants could watch the camera-detected vehicles, pedestrians, and their predicted movements. The main map showed the AV's routing information, and the panel on the right displayed the AV's planning and control graph (Figure 2). For each testing scenario, we also explained the AV's movements. For example: the AV stopped at the red light; the AV was yielding to pedestrians; the AV was switching lanes because the current lane was blocked; the AV received a new destination and planned its route. The valet parking scenario was demonstrated using Autoware. Similarly to Apollo, we first explained that this was another industrially used AD platform and showed the Autoware Lexus vehicle. During the simulator demonstration, participants watched the AV navigate from the drop-off point to the designated parking spot and perform reverse parking. Two important technological concepts were introduced: the LiDAR sensor and the High Definition (HD) Map. In the Autoware rviz2 interface, participants could observe the LiDAR image moving with the vehicle and reflecting the shape of the surrounding objects. We explained that this \(360^{\circ}\), high-precision laser sensor could detect obstacles and help avoid collisions. For the HD Map, we showed the parking lot with detailed road features and LiDAR-generated point clouds. The point clouds included houses, trees, and roadside shrubs. Participants were informed that this high-resolution map could provide valuable topographical information to the AVs and significantly boost their performance (Figure 3).

\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline Perceived Risk & I am worried about the safety of autonomous driving technology. \\ \cline{2-2} & I am concerned that failure or malfunction of the autonomous vehicle may cause accidents. \\ \hline Perceived Usefulness & Using autonomous vehicles will increase my productivity. \\ \cline{2-2} & Using autonomous vehicles will increase my driving performance. \\ \hline Perceived Ease-of-use & Learning to operate an autonomous vehicle would be easy for me. \\ \cline{2-2} & Interacting with autonomous vehicles would not require a lot of my mental effort. \\ \hline Technical Competence & I believe that autonomous vehicles act consistently, and their behavior can be forecast. \\ \cline{2-2} & I believe that I can form a mental model and predict future behavior of an autonomous vehicle. \\ \hline Situational Management & I believe that autonomous vehicles will perform consistently under a variety of circumstances. \\ \hline Behavioral Intention & I intend to ride in autonomous vehicles in the future. \\ \cline{2-2} & I expect to purchase an autonomous vehicle in the future. \\ \hline \end{tabular} \end{table} Table 1: Twelve quantitative questions
The explanations were given in plain, non-technical language and, when combined with the simulator demonstration, were interesting and easily understandable to individuals with no technical background.

## V Results

_Awareness of Autonomous Driving:_ of the 28 participants, 7.1 % answered that they had heard about AVs in the news and social media, 42.9 % stated that they knew the vehicle used sensors and artificial intelligence but had no understanding of the technology, 39.3 % stated that they knew about the different types of data collected by the sensors on an autonomous vehicle, and 10.7 % said they had some understanding of the software and algorithms running on an autonomous vehicle. This result was desirable, as our demonstration was designed for people with limited understanding of AD technology. _Driving Performance:_ in the vehicle following scenario, 42.9 % of the participants collided with the vehicle in front, and 32.1 % had at least one near-collision case. In comparison, the Apollo autonomous vehicle maintained a safe distance throughout and responded almost simultaneously to the front vehicle's speed changes. In the lane block scenario, 21.4 % of the participants collided with the illegally stopped vehicle, and 17.9 % collided with the passing vehicles on the left when attempting to switch lanes. In this test, Apollo detected the illegally stopped vehicle from afar and switched lanes after ensuring safety. In the pedestrian jaywalking scenario, 46.4 % of the participants failed to stop the vehicle and hit the pedestrian. In contrast, Apollo activated an emergency brake as soon as the pedestrian appeared and avoided the accident. In the city traffic scenario, all of the participants reached the destination safely. In this scenario, the Apollo vehicle stopped at the unprotected left turn, yielded to other agents, and then proceeded with caution. The valet parking scenario was not intense and was successfully completed by all participants. This task was also achieved using Autoware. _Quantitative Questions:_ participants' responses to the twelve quantitative questions before and after the simulator experiment were categorized into six measurements and analyzed using a paired-samples t-test. After the simulator demonstration, we observed a significant decrease in the participants' perceived risk of autonomous vehicles, \(t(26)=2.994\), \(p=0.011\). Meanwhile, there was a significant increase in the perceived usefulness of the AD technology, \(t(26)=-2.327\), \(p=0.040\). We also observed some increase in the AV's score of situational management, \(t(26)=-1.494\), \(p=0.161\), and technical competence, \(t(26)=-1.298\), \(p=0.219\), although these results were not statistically significant. Participants' views on the AV's perceived ease-of-use did not change much, \(t(26)=0.424\), \(p=0.679\), and their intention to use an autonomous vehicle remained almost the same, \(t(26)=-0.200\), \(p=0.844\) (Table 2). _Information Evaluation:_ participants also rated the usefulness of the AD information that was being displayed during the demonstration. Of the five choices, the AV's routing information (\(M=6.071\), \(SD=0.997\)), sensing of vehicles and pedestrians (\(M=6.143\), \(SD=1.292\)), sensing of traffic indicators (\(M=6.071\), \(SD=1.141\)), and prediction of vehicles and pedestrians (\(M=6.214\), \(SD=0.802\)) all received an average score above six. These were very high ratings, as the maximum score was seven.
The AV's planning and control information received a slightly lower score (\(M=5.214\), \(SD=2.007\)), which was expected, as this part involved technical terms that require professional knowledge to understand, such as the "Planning Theta", "V-T Graph", and "Kappa Derivative". _Qualitative Questions:_ in the qualitative questions, most participants stated they were very satisfied with the AV's performance. They believed the AV was equipped with advanced software and hardware that were capable of handling complicated situations. Some answered that this simulator experience provided them with great insight into how AVs work, while others mentioned this simulated experience encouraged them to ride in an actual vehicle when provided with the opportunity. There were several questions and concerns as well. Some participants complained that the AV drove too cautiously and proactively, and that it may not be the most efficient way of transportation. Some stated that because the AV put considerable emphasis on safety, human drivers may deliberately take advantage of it by cutting in front of it or not yielding. Despite all the disputes, all our participants seemed to agree that the AV had a high level of safety and provided an alternative way of transportation.

\begin{table} \begin{tabular}{l l l l l} \hline \hline Measure & \multicolumn{2}{l}{Before simulator} & \multicolumn{2}{l}{After simulator} \\ & M & SD & M & SD \\ \hline Perceived risk & 5.214 & 1.139 & 3.857 & 1.406 \\ Perceived usefulness & 5.143 & 1.307 & 5.714 & 1.437 \\ Perceived ease of use & 5.357 & 0.969 & 5.571 & 1.254 \\ Technical competence & 5.143 & 1.216 & 5.393 & 1.347 \\ Situational management & 3.714 & 1.032 & 4.393 & 1.521 \\ Behavioral intention & 5.571 & 1.222 & 5.536 & 1.365 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative measurements before and after the simulator experiment

## VI Discussion

_Driving Performance_: while our autonomous control produced significantly better driving results, several factors could have affected the performance of the human drivers. First, the driving experiment was done in third-person view, which was unconventional for a human driver. Also, our stationary workstation could not generate the movements and forces one would normally experience in a real car, and therefore may not be immersive enough. While the limitations exist, note that the primary purpose of this experiment was not to show that AVs drive better than humans, but to help human drivers build a correct perception of the AVs. After driving through the testing scenarios and watching how AVs handled the same situations, the participants could get the sense that the AVs can react quickly to an emergency, navigate safely along a pre-defined route, and perform valet parking in a learned space. _Survey Feedback_: we were not surprised to find that our simulator demonstration reduced the participants' perceived risk of autonomous vehicles. In all of our testing scenarios, the AV acted very cautiously and responded to emergencies without hesitation. Further, the vehicle's routing, planning, sensing, and control information was quite self-explanatory and clear enough to reassure the participants. This was demonstrated in the participants' high ratings of the AD information. While a detailed demonstration of the AD technology reduced the participants' perceived risk, many found that it required a lot of their mental effort to keep track of the system information and remain cognizant of the vehicle's state.
As a result, the participants' ratings on the AV's perceived ease of use did not increase. Some drivers initially assumed that interacting with an autonomous vehicle was going to be easy. However, they soon found there was still information that required their attention or even intervention. Despite the higher cognitive load, drivers still preferred the information to be displayed, as it increased the system transparency and helped them understand the vehicle's movements. We also observed different levels of increase in the vehicle's perceived usefulness, technical competence, and situational management. Since the AV handled all five testing scenarios smoothly when compared to human drivers, many participants believed that the AV would increase their driving performance and productivity. In addition, some participants felt that they could form a mental model to understand and predict the vehicle's behavior. With a more in-depth understanding of the technology, participants showed less concern towards system malfunction and believed in its consistency and reliability under a variety of circumstances. Despite a significant decrease in the perceived risk and a significant increase in the perceived usefulness, we did not observe an increase in the participants' intention to use an autonomous vehicle. This finding differed from our initial expectations, and we attribute it to two reasons. First, the participants' intention to use an autonomous vehicle was already high before our simulator demonstration. While our participants had limited understanding of AD technology, they believed that the technological shift is inevitable and expected to use an autonomous vehicle in the future. Second, there is a lack of commercially used AVs on the market (SAE Level 3 to 5), and an absence of laws and government regulations. While our participants embraced the idea of AVs, they did not feel there was an imminent need to use or purchase one. _AD Failure Cases_: a final point to mention is that some AD failures, although not life-threatening, were not covered in our demonstration. During our engineering test and scenario development, we found that the AV consistently failed to handle some challenging cases, such as traversing the center line to avoid a stopped vehicle, deliberately cutting in front of other vehicles to make a turn, or navigating from some private, unmarked area to the main road. While we were aware of the limitations of the AV, they were not presented to participants, as the primary purpose of this experiment was education and trust development. Meanwhile, we believe these technological imperfections should be recorded and reported to the industrial developers so they can fix the problems and improve the next generation of AVs. In the future, we will work on identifying the regular and edge cases and focus on human-AV interaction to help drivers further understand the merits and limitations of the technology.

## VII Conclusion

We presented in this study a driving simulator and testing scenarios to improve people's understanding of and trust in autonomous vehicles. To that effect, we leveraged the industrially used autonomous driving platforms Apollo and Autoware. Our study of 28 participants showed that this system successfully reduced the perceived risk and increased the perceived usefulness of autonomous vehicles. There were limitations as well. First, our five testing scenarios are not representative of all traffic conditions and require further development.
Also, our third-person view simulation could be changed to a first-person view and implemented in more immersive environments such as virtual reality or augmented reality. Furthermore, given the small number of participants, this work should be regarded as a pilot towards the development of a larger study, where perception can be correlated with age and gender. Overall, our simulation system acts as a low-cost and reliable platform for autonomous driving testing and demonstration.
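To illustrate how scenarios like the ones above are scripted, the following is a minimal sketch using the open-source `lgsvl` Python API that ships with the SVL Simulator. The map name, agent names, and weather values are illustrative, and exact identifiers may vary between simulator releases; this is not the study's actual scenario code.

```python
# Sketch: a pedestrian-jaywalking scenario in rain and fog, scripted
# through the (LG)SVL Python API. All concrete values are illustrative.
import lgsvl

sim = lgsvl.Simulator("127.0.0.1", 8181)   # connect to a running simulator
sim.load("SanFrancisco")                   # illustrative map name

# Weather and time of day: rain and fog reduce pedestrian visibility.
sim.weather = lgsvl.WeatherState(rain=0.6, fog=0.5, wetness=0.6)
sim.set_time_of_day(14.0)                  # 2 pm, overcast

# Spawn the ego vehicle at a predefined spawn point.
spawn = sim.get_spawn()[0]
state = lgsvl.AgentState()
state.transform = spawn
ego = sim.add_agent("Lincoln2017MKZ", lgsvl.AgentType.EGO, state)

# Place a pedestrian at the roadside, ahead of the ego vehicle.
forward = lgsvl.utils.transform_to_forward(spawn)
ped_state = lgsvl.AgentState()
ped_state.transform.position = spawn.position + 40.0 * forward
pedestrian = sim.add_agent("Bob", lgsvl.AgentType.PEDESTRIAN, ped_state)

sim.run(30.0)                              # run the scenario for 30 seconds
```

In the study's setup, a bridge to Apollo or Autoware would be attached to the ego agent so the AD stack, rather than the script, controls the vehicle.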
2310.11153
Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis
Unsupervised learning methods have become increasingly important in deep learning due to their demonstrated ability to utilize large datasets and achieve higher accuracy in computer vision and natural language processing tasks. There is a growing trend to extend unsupervised learning methods to other domains, which helps to utilize a large amount of unlabelled data. This paper proposes an unsupervised pre-training technique based on the masked autoencoder (MAE) for electrocardiogram (ECG) signals. In addition, we propose a task-specific fine-tuning to form a complete framework for ECG analysis. The framework is high-level, universal, and not individually adapted to specific model architectures or tasks. Experiments are conducted using various model architectures and large-scale datasets, resulting in an accuracy of 94.39% on the MITDB dataset for the ECG arrhythmia classification task. The result shows better performance in the classification of previously unseen data for the proposed approach compared to fully supervised methods.
Guoxin Wang, Qingyuan Wang, Ganesh Neelakanta Iyer, Avishek Nag, Deepu John
2023-10-17T11:19:51Z
http://arxiv.org/abs/2310.11153v1
# Unsupervised Pre-Training Using Masked Autoencoders for ECG Analysis ###### Abstract Unsupervised learning methods have become increasingly important in deep learning due to their demonstrated ability to utilize large datasets and achieve higher accuracy in computer vision and natural language processing tasks. There is a growing trend to extend unsupervised learning methods to other domains, which helps to utilize a large amount of unlabelled data. This paper proposes an unsupervised pre-training technique based on the masked autoencoder (MAE) for electrocardiogram (ECG) signals. In addition, we propose a task-specific fine-tuning to form a complete framework for ECG analysis. The framework is high-level, universal, and not individually adapted to specific model architectures or tasks. Experiments are conducted using various model architectures and large-scale datasets, resulting in an accuracy of 94.39% on the MITDB dataset for the ECG arrhythmia classification task. The result shows better performance in the classification of previously unseen data for the proposed approach compared to fully supervised methods. Masked Autoencoder, Unsupervised Learning, Big Data, Electrocardiogram

## I Introduction

Electrocardiogram (ECG) analysis is crucial in diagnosing heart disease and related biomedical applications [1]. Typically, characteristic features of ECG such as time intervals, amplitude, and statistical parameters are extracted, and traditional machine learning methods are used to analyze ECG based on these features [2]. More recently, deep learning methods have demonstrated improved performance and effectiveness in biomedical signal analysis [3, 4]. These methods apply supervised learning, and results are often improved by designing better model architectures. One of the benefits of deep learning is that it facilitates the extraction of high-dimensional features from the signal without the need for complex manual pre-processing. In [5], Takalo-Mattila et al. built an automatic ECG classification system using Convolutional-Neural-Network (CNN)-based feature extraction. A multilayer perceptron (MLP) is used to classify ECG beats. This framework achieves an accuracy of \(89.9\%\) when tested with \(49712\) samples. Li et al. [6] presented an arrhythmia classification method that extracted ECG features using a Residual Neural Network (ResNet) and enhanced it with an overlapping segmentation method. They reported an accuracy of \(88.9\%\) over \(7942\) subjects. However, traditional supervised learning methods heavily rely on annotated labels from a single dataset; model training tends to overfit because the dataset is too small and may contain label errors, limiting the resulting models' generalization capability. In addition, many methods claiming excellent results are evaluated on intra-patient tasks, a division that introduces the same record into both the training and testing sets, and therefore report high performance that cannot be obtained in real scenarios. Moreover, these methods often employ engineering techniques that may introduce data leakage or biases, raising concerns about the credibility of the reported results. In contrast, unsupervised learning methods offer a compelling alternative as they do not require labelled data, enabling the utilization of larger datasets while reducing errors associated with manual annotation.
Masked autoencoder (MAE) is an effective, simple, unsupervised representation learning strategy proven in computer vision tasks [7], which could be extended to other research areas [8, 9]. This paper introduces a novel unsupervised pre-training technique based on the MAE for ECG-related applications and leverages data augmentation techniques to improve performance. A task-specific fine-tuning is proposed for downstream applications. The complete framework is presented systematically, encompassing all essential aspects ranging from training to testing. To assess its efficacy, we choose cardiac arrhythmia classification as a case task, and the framework's performance is thoroughly evaluated through simulations, providing valuable information about its capabilities and potential clinical utility. The contributions of this research are as follows: * Unsupervised Pre-training and Task-specific Fine-tuning for ECG: This study presents a novel framework based on MAE for ECG signal analysis. The proposed framework achieves an accuracy of 94.39% on classifying cardiac arrhythmias in previously unseen data. Using unsupervised learning techniques, the framework overcomes the limitations of traditional supervised methods, which require extensive manual labelling of ECG records. * Using Larger-Scale Datasets: Unlike conventional approaches that rely on labour-intensive labelling of individual ECG records, the proposed pre-training reduces the need for independent annotations. This feature facilitates more accessible and more efficient model training, enabling the utilization of large-scale datasets without the requirement of extensive manual labelling. This significantly contributes to improved scalability and potential for real-world implementations. The remainder of this paper is organized as follows: Section II details the proposed unsupervised learning-based technique and fine-tuning. The experimental results are presented in Section III, and Section IV provides conclusions and directions for future work.

## II Method

The complete framework comprises two main parts: pre-training and fine-tuning. In the pre-training phase, we develop a training strategy for electrocardiogram signals based on the MAE. We then train the base model using a sizeable unlabelled dataset. In the fine-tuning phase, we freeze the base model and train the classifier using a small labelled dataset for the specific task. An overview of the framework is presented in _Figure 1_. ### _Pre-train_ The predominant approach to classifying cardiac arrhythmias involves supervised training using a limited dataset. However, this method has a potential drawback in that the trained model can become overfitted, resulting in satisfactory performance on the training dataset but poor generalizability to other datasets [10]. To alleviate this issue, we propose using an unsupervised learning method. To this end, MAE-based training has been devised, described in greater detail below. The MAE-based solution is conceptually simple in that it removes a portion of the data and learns to predict what was removed. Its effectiveness has also been proven in computer vision and natural language processing. Specifically, this encoder-decoder structure operates by dividing input data into patches, with the encoder only processing a visible subset of patches, as depicted in _Figure 2b_. Subsequently, the decoder reconstructs the input with incomplete information.
Although the reconstructed output may not be perfect, this approach helps the model better comprehend the input. Once trained, the decoder can be removed, and the encoder can serve as a practical feature extractor in other related tasks. An instance of the MAE applied to ECG signals is shown in _Figure 2_. MAE is beneficial for using large unlabelled datasets and can be advantageous for various downstream tasks, particularly for ECG classification with limited annotated data. In this paper, we utilize a one-dimensional version of the MAE, implemented with ConvNeXtV2 [11] as a fully convolutional MAE (FCMAE), which is state-of-the-art in image classification. This method employs a non-symmetric encoder-decoder design and sparse convolution to reduce the computational burden during the pre-training phase. The original ConvNeXtV2 model was designed for images, whereas our implementation, ConvNeXtV2-1D, includes the necessary modifications to accommodate the 1D ECG signal. The architecture is illustrated in _Figure 3_.

Fig. 1: Overview of the MAE-based unsupervised pre-training technique with downstream fine-tuning

### _Fine-tune_

#### II-B1 Data Augmentation

Data augmentation enhances the model's generalization performance in fine-tuning. Various augmentation methods are applied, including mixup as described in [12] and additive white noise. By combining these methods, we ensured that the fundamental characteristics of the original data, such as the relative positions of fiducial points and peak intervals, remained unchanged.

#### II-B2 Modelling

We extended the pre-trained model into our framework by adding an MLP head as a classifier for detecting arrhythmia. We used well-established forward and backward propagation techniques for supervised learning during this phase. However, it is essential to note that the pre-trained model remained frozen throughout this process, ensuring that the previously learned features were retained without alteration. Therefore, only the newly added classifier underwent training, allowing it to specialize in accurately identifying arrhythmia patterns.

## III Evaluation and Results

### _Datasets_

This research used multiple datasets, each serving specific purposes. The first dataset, used for pre-training the model, was the PhysioNet / Computing in Cardiology Challenge 2021 (CINC2021) introduced by Alday et al. [13]. This large-scale database assembled nine databases with \(131,149\) unlabelled 12-lead ECG records, totalling over 1000 hours. For the subsequent stages of fine-tuning and testing, we employed the MIT-BIH Arrhythmia Database (MITDB) curated by Moody and Mark [14]. This specific database consists of 48 records derived from 47 individual subjects. We also used the St Petersburg INCART 12-lead Arrhythmia Database (INCARTDB) [15] for fine-tuning. The INCARTDB database includes 75 records from 32 subjects. To partition the cardiac rhythm data extracted from the MIT-BIH Arrhythmia Database (MITDB), we adopt a methodology similar to the approach employed by [16]. The dataset denoted as DS1 is used for fine-tuning, while DS2 serves as the designated testing dataset. The heartbeats of DS1 and DS2 come from different individuals. Such a division protocol is called the inter-patient paradigm in the literature [17]. Furthermore, DS1 is divided in a 9:1 ratio, where 90% of the data are allocated for training purposes, while the remaining 10% are allocated for validation. For INCARTDB, we use the whole dataset for fine-tuning.
The MITDB and INCARTDB databases contain labelled annotations for five classes: N-type (normal beat), SVEB-type (atrial premature beat), VEB-type (premature ventricular contraction), F-type (fusion of ventricular and normal beat), and Q-type (unknown beat). Table I summarises the data distribution for these classes. In the pre-processing step of ECG signals, segmenting them into shorter pieces using fiducial points is standard practice. Initially, we re-sample the ECG signals at a frequency of 360 Hz. Then, we detect all the 'R' peaks in the ECG signals for the unlabelled dataset. For labelled datasets, we use annotations, including peak position and diagnosis information. Next, we extract a segment of 480 sample points for each 'R' peak. This segment encompasses 360 points to the left and 120 points to the right of the 'R' peak. To ensure consistency across instances, we normalize each segment to a range between 0 and 1 in the final step.

Fig. 2: An example of MAE, which attempts to reconstruct the original signal with limited information from the masked signal.

Fig. 3: ConvNeXtV2-1D for ECG applications

### _Architecture_

Our proposed framework operates at a high level, and we evaluate it using different model architectures. For simplicity of implementation, we utilize multiple variations of ConvNeXtV2-1D in both the MAE-based training and the supervised training baseline for comparison and benchmarking purposes. We follow the same stage, block (\(B\)), and channel (\(C\)) configurations as in [11]: * ConvNeXtV2-1D-Atto: \(C=40\), \(B=(2,2,6,2)\)
[20, 21] have reported an accuracy of 100%, but they actually conducted the intra-patient test, where heartbeats of the same records probably appear in training and the testing dataset. However, in a realistic scenario, a fully automatic method will find patients' heartbeats different from those they used to learn in the training phase; hence, the high accuracy they report is questionable. Compared with these frameworks, our proposed framework achieves higher accuracy within the same task, fully uses existing datasets, and meets practical needs. ## IV Conclusion Our study introduced a new MAE-based cardiac arrhythmia classification system. The system uses unsupervised learning to learn generic ECG information and classify arrhythmia after fine-tuning. Experiments show that the proposed approach improves performance compared to traditional methods. Future work includes using different unsupervised learning approaches, more architectures, larger datasets, model compression, embedded system deployment, and transfer learning exploration.
2304.13803
Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models
Pretrained Language Models (PLMs) learn rich cross-lingual knowledge and can be finetuned to perform well on diverse tasks such as translation and multilingual word sense disambiguation (WSD). However, they often struggle at disambiguating word sense in a zero-shot setting. To better understand this contrast, we present a new study investigating how well PLMs capture cross-lingual word sense with Contextual Word-Level Translation (C-WLT), an extension of word-level translation that prompts the model to translate a given word in context. We find that as the model size increases, PLMs encode more cross-lingual word sense knowledge and better use context to improve WLT performance. Building on C-WLT, we introduce a zero-shot approach for WSD, tested on 18 languages from the XL-WSD dataset. Our method outperforms fully supervised baselines on recall for many evaluation languages without additional training or finetuning. This study presents a first step towards understanding how to best leverage the cross-lingual knowledge inside PLMs for robust zero-shot reasoning in any language.
Haoqiang Kang, Terra Blevins, Luke Zettlemoyer
2023-04-26T19:55:52Z
http://arxiv.org/abs/2304.13803v1
# Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models ###### Abstract Pretrained Language Models (PLMs) learn rich cross-lingual knowledge and can be fine-tuned to perform well on diverse tasks such as translation and multilingual word sense disambiguation (WSD). However, they often struggle at disambiguating word sense in a zero-shot setting. To better understand this contrast, we present a new study investigating how well PLMs capture cross-lingual word sense with Contextual Word-Level Translation (C-WLT), an extension of word-level translation that prompts the model to translate a given word in context. We find that as the model size increases, PLMs encode more cross-lingual word sense knowledge and better use context to improve WLT performance. Building on C-WLT, we introduce a zero-shot approach for WSD, tested on 18 languages from the XL-WSD dataset. Our method outperforms fully supervised baselines on recall for many evaluation languages without additional training or finetuning. This study presents a first step towards understanding how to best leverage the cross-lingual knowledge inside PLMs for robust zero-shot reasoning in any language.

## 1 Introduction

Pretrained Language Models (PLMs) have been found to perform many cross-lingual tasks without explicit cross-lingual training signals, including word-level translation (WLT) across languages (Gonen et al., 2020). These models also demonstrate cross-lingual knowledge when finetuned for word sense disambiguation (WSD) (Raganato et al., 2020; Pasini et al., 2021). However, little is known about the extent to which word sense knowledge comes from pretraining rather than finetuning: many PLMs struggle to disambiguate word sense when it is formulated as a binary classification task, the most common word sense setup for prompting language models (Shi et al., 2022; Scao et al., 2022). To investigate this, we measure the ability of multilingual autoregressive language models to understand the cross-lingual meaning of words in a given context. Specifically, we extend the WLT task setup to include a specific context in the prompt, which we call Contextual Word-Level Translation (C-WLT). We show empirically that pretrained language models are able to take advantage of contextual information in the prompt to improve WLT performance, and as the model size increases, both English and multilingual PLMs demonstrate improved cross-lingual knowledge, resulting in better performance on contextual WLT. Translations of a word that change based on context are frequently due to differing word senses not shared by an analogous word in the target language (Resnik and Yarowsky, 1999). Inspired by this, we apply C-WLT to the task of WSD by translating the ambiguous word \(w\) in context with WLT and then assigning to \(w\) the senses in the overlap of the translated word's sense set with \(w\)'s senses (Figure 4, left). We test this zero-shot approach for WSD on 18 languages from the XL-WSD dataset (Pasini et al., 2021), and find that in our best setting, WSD via C-WLT outperforms prior works on recall for many evaluation languages with no additional training or finetuning of the model. We also observe that ensembling diverse target languages with this method narrows down the predicted set of senses, as demonstrated by the improvements in Jaccard similarity with the reference set.
Finally, we analyze our design choices and the types of errors made by this approach to better understand the behavior of WSD via C-WLT and how it relates to supervised WSD classification. The overall findings of this work are as follows:

* PLMs leverage contextual information to encode cross-lingual knowledge and better capture lexical information, such as word translations and meanings.
* We can leverage this contextual knowledge of lexical translation to effectively perform zero-shot WSD for many languages, including low-resource ones and languages on which the PLM was not pretrained.
* The efficacy of WSD via C-WLT depends on the interplay between pretraining languages, model size, and target language choice: smaller multilingual PLMs perform better on seen languages but are more sensitive to design choices and do not generalize as well as larger English PLMs.

In sum, we evaluate the lexical translation skills of PLMs in context, and we present a first step towards applying that skill to the downstream task of WSD. Given that most WSD training data outside of English are automatically created (e.g., Scarlini et al., 2019; Barba et al., 2021), zero-shot approaches such as our proposed WSD via C-WLT are crucial for improving WSD in lower-resource languages.

## 2 Contextual Word-Level Translation

A common method of evaluating the cross-lingual capabilities of PLMs is the task of word-level translation (WLT), where the model is prompted to translate a word \(w_{s}\) from a source language \(L_{s}\) into another target language \(L_{t}\) (Gonen et al., 2020). However, this setup does not consider variations in the translation of \(w_{s}\) into \(L_{t}\) that occur when the surface form of \(w_{s}\) represents multiple meanings, or senses, in different contexts. We propose an extension of the word-level translation task, Contextual Word-Level Translation (C-WLT), which requires translating words correctly based on how they are used in a given context (Figure 4, right panel). Specifically, we prompt the PLM to translate \(w_{s}\) from \(L_{s}\) into \(L_{t}\) when conditioned on a specific context \(c_{s}\) where \(w_{s}\in c_{s}\); we then measure whether it produced the correct translation(s) \(w_{t}\) in the context of \(w_{s}\). For example, if we want to translate "plant" into Chinese based on the context sentence "The plant sprouted a new leaf", we prompt the PLM with _In the sentence "The plant sprouted a new leaf", the word "plant" is translated into Chinese as ___._ This evaluation allows us to quantify a PLM's ability to align meaning across languages in a context-specific manner.

### 2.1 Experimental Setup

**Prompts and Languages.** After a preliminary analysis of potential prompt formats, our experiments use the following prompts:

* **Without Context**: The word "\(w_{s}\)" is translated into \(L_{t}\) as ___
* **With Context**: In the sentence "\(c_{s}\)", the word "\(w_{s}\)" is translated into \(L_{t}\) as ___

We perform experiments with English as the source language and translate into Chinese, French, and Spanish as the target languages. **Models.** We use the GPT-Neo series of LMs with model sizes from 125 million to 20 billion parameters (including the GPT-J model with 6B parameters) and the BLOOM series with model sizes from 560 million to 7.1 billion parameters; all LMs are autoregressive models trained for next-token prediction.
We note that BLOOM is explicitly pretrained on all three of our target languages, whereas GPT-NeoX (Black et al., 2022) is trained as an English LM; however, GPT-NeoX's pretraining corpus contains an estimated \(\sim 2.6\%\) of non-English text (Gao et al., 2020), and prior work has found even small percentages of non-English text can facilitate cross-lingual transfer in English PLMs (Blevins and Zettlemoyer, 2022).

**Dataset** We select candidate source words from the English inventory of the XL-WSD dataset (Pasini et al., 2021). We then filter these into language pair datasets with <_source word, source example context, translations in context_> tuples, where the sense-specific translations and example contexts are obtained from WordNet (Miller, 1995). We include in our dataset the source words where the most common sense (the first sense in WordNet) and at least one other sense meet the following criteria: (a) both senses have non-overlapping sets of translations in the target language and (b) both senses are annotated with example contexts in the source language. For each sense, we use the translations for the other sense and 50 randomly selected words in the target language as incorrect translations, which are used as negative samples. Due to limited cross-lingual coverage with WordNet, the EN-FR, EN-ES, and EN-ZH experiments include 2448, 2470, and 2084 evaluation examples respectively.

**Metrics** We present three different types of metrics to evaluate the performance of models on the WLT task, with and without context.

* **Accuracy**: We calculate two metrics to measure the accuracy of the models. (1) _Top-1 accuracy_ measures the percentage of test instances in which the translation with the highest log-likelihood is one of the correct translations for a given sense. (2) _All translations accuracy_ measures the percentage of test instances where all \(k\) correct translations for that sense are assigned the \(k\) highest likelihoods by the model.
* **Negative Log-Likelihoods (NLL)**: We compare the average _negative log-likelihood (NLL)_ of all (1) correct and (2) incorrect translations for each sense, as well as (3) the _ratio_ of the average NLL of the top-1 correct translations to the average NLL of all incorrect translations for each sense.
* **Error Reduction**: We evaluate the impact of adding context sentences on resolving two types of errors. The first is _disambiguation_ errors, where the model produces a valid translation without context that would be an incorrect sense in the additional context; the second is _translation_ errors, where the model correctly translates the word in question (based on the context sentence) but produces a mistranslation without context.

### Results

**Adding Context Improves Word-Level Translation Accuracy** Figure 1 presents the overall WLT results with and without context, averaged across the three target languages; word-level translation performance improves across all settings with the addition of context.1 We also observe that the performance of both uncontextualized and contextualized word-level translation improves as the model size increases, which corroborates prior findings that larger models better capture cross-lingual information from pretraining (e.g. Lin et al., 2021).

Footnote 1: The results for each specific target language can be found in the appendix (Figure 7 for Chinese; Figure 8 for French; Figure 9 for Spanish).
Our experiments also show that, on average, the multilingual models outperform comparably sized English models in both WLT settings: the multilingual models achieve an average _top-1 accuracy_ of 47.94% in the uncontextualized task and 57.51% in the contextual task, whereas the English models obtain 30.20% and 53.2% in these settings, respectively. However, the performance gap between English and multilingual models narrows when we add sentences that use the word in context. Specifically, the experiments show that the largest English model, GPT-NeoX, performs similarly to the (smaller) multilingual BLOOM models; this suggests that English language models become more effective in leveraging limited cross-lingual knowledge at larger scales.

Figure 1: Results of the zero-shot contextual WLT accuracies on GPT and BLOOM family models of different sizes. (a) The results of top-1 accuracies across models. (b) The results of all translations accuracies across models. N: GPT-Neo, B: BLOOM, J: GPT-J

Finally, in the setting of _all translations_, we observe that the improvement in performance due to the addition of context is more significant for multilingual models than for English models, leading to larger performance gaps between these two types of models.

**Negative Log-Likelihoods** We also consider the negative log-likelihoods produced by each model for the top correct translation compared to the incorrect translations (Figure 2). These results show that the negative log-likelihood (NLL) of the correct translations improves as the model size increases, suggesting that the models become more confident in their predictions in absolute terms. Furthermore, we find that the NLL ratio between correct and incorrect translation words generally increases as the model size improves; the multilingual models also demonstrate better differentiation ability between correct and incorrect translations than English models. Specifically, we observe an average ratio of 1.53 between incorrect and correct translations for multilingual models, compared to 1.28 for English models.

**Translation Error Reduction with Context** Finally, we analyze the extent to which adding context sentences fixes errors made by the PLMs in the standard WLT setting (Figure 3). Our results show that larger models benefit more than smaller ones from using contextual information to correct translation errors, with a larger percentage of prior errors resolved with the addition of context; this further highlights their ability to better leverage the additional context. In addition, multilingual models fix errors at a higher rate compared to English models with the addition of context.

Surprisingly, we also observe that context helps correct complete _translation_ errors at higher rates than it does to _disambiguate_ the appropriate translation given a context sentence. This generally holds true for both the English and multilingual models and across all model scales, with the smallest English models as an exception (where very few errors of either type are resolved by the addition of context).

## 3 Zero-shot Word Sense Disambiguation via C-WLT

Building on the intuition from the previous section that contextual word-level translation can differentiate between different meanings of a word in the source language, we apply C-WLT to the task of multilingual word sense disambiguation (Figure 4).
Specifically, we propose a two-step process wherein we (1) prompt the PLM for C-WLT to translate the word being disambiguated, \(w\), in the relevant context and (2) disambiguate \(w\) based on the senses of its translation. For instance, if we would like to disambiguate the sense of the word "plant" as it is used in the context "The plant sprouted a new leaf", we would first prompt the PLM to translate "plant" into the chosen target language (such as Chinese) using the C-WLT setup from the previous section. We then take the top translation from the PLM (in this case, "植物") and obtain its senses from a multilingual word sense ontology. The example is then disambiguated with the set of senses that overlap between the senses of "plant" and the senses of "植物".

Figure 3: The impact of adding context to WLT on translation (trans.) and disambiguation (disam.) errors across different model sizes.

Figure 2: The average NLL of all correct and incorrect words across models in the contextual WLT analysis (less negative is better). The numbers in the figure represent the ratios of the negative NLLs of incorrect to correct translations (larger is better).

### Method

The goal of word sense disambiguation (WSD) is to determine the meaning of a word \(w\) as it is used in a specific context \(c\) and label it with the sense label (or labels) that represents this meaning out of the candidate set of senses associated with that word, \(S\). In our proposed approach, **WSD via C-WLT**, \(w\) and \(c\) are in a language \(L_{s}\), and word senses are obtained from a multilingual ontology [1] and shared across languages.

First, we prompt a PLM with the C-WLT setting to translate \(w_{s}\) based on \(c_{s}\) into the target language \(L_{t}\). We then obtain the inventory of all possible translations of \(w_{s}\) into \(L_{t}\) from the multilingual word sense ontology and rank them with the PLM conditioned on the C-WLT prompt. We then label \(w_{s}\) with the set of senses in the intersection of its candidate senses, \(S(w_{s})\), and those of the top-scoring translation under the PLM, \(S(w_{top1})\). We note that this means the WSD via C-WLT method assigns a set of labels to \(w\) rather than a single sense label like most trained WSD classifiers.

**Ensembling Target Languages** The described method for WSD via C-WLT obtains potential senses from translating into a single target language. We extend the method to ensemble the senses from a set of target languages \(T\), as we hypothesize that senses shared by translations of \(w_{s}\) in multiple typologically diverse languages are more likely to be relevant to the specific context at hand. Specifically, we consider the multiset of senses for the top translation in every target language: \(S(T)=\{S(w_{top1}^{t}):t\in T\}\). Our target set \(S(T)^{\prime}\) is the subset of \(S(T)\) that contains all senses that share the highest multiplicity (i.e., occur most frequently) in \(S(T)\). This means that senses shared by translations of \(w_{s}\) into multiple languages are more likely to be included in \(S(T)^{\prime}\). Similar to the single target language setting, we obtain the final predicted sense set from the intersection of \(S(T)^{\prime}\) and \(S(w_{s})\) (a minimal computational sketch of this step, together with the evaluation metrics, is given after the experimental setup below).

### Experimental Setup

**Datasets** We evaluate performance with the XL-WSD dataset (Pasini et al., 2021), which is comprised of 18 languages: Basque, Bulgarian, Catalan, Chinese, Croatian, Danish, Dutch, English, Estonian, French, Galician, German, Hungarian, Italian, Japanese, Korean, Slovenian, and Spanish.
We use the BabelNet API 4.0.1 [20] as our multilingual word sense ontology to obtain translations and sense inventories of the data. We consider five target languages for our experiments: English, Chinese, Russian, Spanish, and Finnish; we aim to consider a wide range of typologically diverse languages as targets while maintaining high coverage of the source language examples in the multilingual ontology.2 In the case where a (non-English) evaluation example does not have at least one corresponding translation in the target language, we back off to the English translation setting as it provides full coverage over all non-English evaluation sets. When evaluating English, we instead back off to the most common sense (MCS) of the word when an example is not covered by the target language(s) in each evaluation setting.

Figure 4: Overview of the proposed method for multilingual WSD via C-WLT (left) and the prompting setup for C-WLT (right). We translate each ambiguous word \(w_{s}\) in context into a target language \(t\) with a PLM and label it with the intersection of its labels and the labels of the translation \(w_{top1}\).

**Models** Picking the three most powerful PLMs from the previous section, we use the BLOOM models (Scao et al., 2022) with 3 billion parameters and 7.1 billion parameters and the GPT-NeoX model with 20 billion parameters (Black et al., 2022). While GPT-NeoX is primarily trained in English, the BLOOM models are specifically pre-trained on 6 out of the 18 evaluation languages of the XL-WSD dataset.

**Baselines** We consider the Most Common Sense (MCS) method as a baseline, which predicts each word's most common sense according to BabelNet (Pasini et al., 2021). Additionally, we report the best results from the models introduced to benchmark the XL-WSD dataset in Pasini et al. (2021) as well as those in Zhang et al. (2022) and Berend (2022). Prior results are presented as a point of reference for the task scores. However, previous models for the XL-WSD dataset all require supervised training with annotated WSD data, unlike our approach, which is zero-shot and assumes no additional data or finetuning of the PLM.

**Evaluation Metrics for WSD via C-WLT** We consider two automatic metrics for evaluating the performance of the WSD via C-WLT approach. The first is _recall_, or how often the predicted label set contains at least one of the gold annotations for a given example. This metric is obtained from the dataset's evaluation script and is the standard for XL-WSD evaluation; it is often reported as F1 or accuracy in cases where the WSD approach produces a single prediction. However, recall overestimates performance in cases where a WSD approach predicts many unrelated sense labels in addition to a correct one. We therefore also calculate the _Jaccard index_ between the predicted set and the reference set of sense labels for each example: \(\frac{|L_{true}\cap L_{pred}|}{|L_{true}\cup L_{pred}|}\). While the Jaccard index is a better automatic measure of similarity in the setting of sense sets than recall, the metric can underestimate performance in cases where other, closely related senses are also appropriate in the given context but are not included in the reference sense set.3

Footnote 3: This type of annotation error is the most common found in an audit of English WSD corpora (Maru et al., 2022).
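The following minimal Python sketch spells out the ensembling step from Section 3.1 and the two evaluation metrics just described. The set-based data representation and all function names are our illustrative assumptions; the official XL-WSD evaluation script computes recall differently in its implementation details.

```python
from collections import Counter

def ensemble_senses(top1_senses_per_lang, candidate_senses):
    """Ensemble step: keep the senses of highest multiplicity across the top-1
    translations in each target language (S(T)'), intersected with S(w_s)."""
    counts = Counter(s for senses in top1_senses_per_lang for s in senses)
    if not counts:
        return set()
    top = max(counts.values())
    s_t_prime = {s for s, c in counts.items() if c == top}   # S(T)'
    return s_t_prime & candidate_senses

def recall(pred, gold):
    """1.0 if the predicted set contains at least one gold sense."""
    return 1.0 if pred & gold else 0.0

def jaccard(pred, gold):
    """|pred ∩ gold| / |pred ∪ gold|."""
    return len(pred & gold) / len(pred | gold) if (pred | gold) else 0.0

# Hypothetical example: senses of the top translation in three target languages.
per_lang = [{"s1", "s2"}, {"s1"}, {"s1", "s3"}]
pred = ensemble_senses(per_lang, candidate_senses={"s1", "s2", "s3", "s4"})
assert pred == {"s1"}                       # "s1" has the highest multiplicity (3)
assert recall(pred, gold={"s1", "s2"}) == 1.0
assert abs(jaccard(pred, {"s1", "s2"}) - 0.5) < 1e-9
```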
## 4 Multilingual WSD Results and Analysis

We first present the performance of our method for multilingual WSD on the two automatic metrics, recall and Jaccard index, and compare this approach to prior work on this task (Section 4.1). We then consider the effect of ablating different modeling choices on our method (such as the choice of target language for C-WLT and prompt language; Section 4.2), and we analyze the outputs and types of errors the approach produces more closely (Section 4.3).

### Results

The multilingual WSD results are summarized in Table 1. In our experiments, we found that the best setting for achieving a balance between recall and Jaccard index was to ensemble English, Chinese, and Russian as the target languages with English prompts (Table 2). The results show that our approach achieves higher recall compared to the prior works in 11 out of the 18 source languages, despite the fact that our method is performed zero-shot from a pretrained language model. If considered as an upper bound measure on performance, this result shows that translation-based approaches for WSD can identify the correct sense label(s) as well or better than supervised methods.

We also find that despite being primarily pre-trained on English, GPT-NeoX (20B) achieves higher recall and Jaccard index scores than BLOOM-7.1B on 10 source languages; most settings where the multilingual model performs better are on its pretraining languages, with little generalization to other languages. Finally, despite the Jaccard index scoring lower (by definition) than recall, we see similar performance trends across languages and models between recall and the Jaccard index in this ensemble setting.

### Modeling Ablations

**Different Target Languages** We consider five different target languages: English, Chinese, Russian, Finnish, and Spanish. In addition to the five individual target language settings, we experiment with all combinations of joint target language settings (Table 2).4 We also calculate the _delta_ increase in the sense prediction rates, normalized by the number of senses for each example. We compare the standard classification setting of predicting a single label per WSD example against the number of labels predicted by each target language setting: \(\frac{1}{n}\sum_{i=1}^{n}\frac{|\hat{S}_{i}|}{|S_{i}|}-\frac{1}{n}\sum_{i=1}^{n}\frac{1}{|S_{i}|}\), where \(S_{i}\) is the candidate sense set for the \(i\)th evaluation example and \(\hat{S}_{i}\) is the set of senses predicted by our approach (a minimal computational sketch is given below).

Our ablations indicate a tradeoff between the Jaccard index and recall metrics. For example, our approach achieves the highest recall performance using Spanish as the sole target language, but the resulting Jaccard index is worse than any other target setting we test. This behavior is likely because target languages more similar to the source (such as Spanish, which is closely related to many of the Western European source languages in the XL-WSD dataset) return a larger set of predicted senses, which in turn improves recall but at the expense of set similarity with the gold labels. This hypothesis is corroborated by the high delta increase of 20% in the predicted set size of the Spanish setting over the standard single-label prediction setting. However, this undesirable behavior is mitigated by using less similar target languages and by ensembling a diverse set of languages.
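A minimal Python sketch of the delta statistic just defined; the list-of-sets representation is an illustrative assumption on our part.

```python
def delta(candidate_sets, predicted_sets):
    """Average predicted-set size relative to the candidate-set size,
    minus the single-label baseline (one predicted sense per example)."""
    n = len(candidate_sets)
    avg_pred = sum(len(p) / len(s) for s, p in zip(candidate_sets, predicted_sets)) / n
    avg_single = sum(1 / len(s) for s in candidate_sets) / n
    return avg_pred - avg_single

# Two examples with 4 and 2 candidate senses; 2 and 1 predicted senses.
print(delta([{"a", "b", "c", "d"}, {"x", "y"}], [{"a", "b"}, {"x"}]))  # 0.125
```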
In our best setting of ensembling English, Chinese, and Russian, we find that the delta increase in the predicted set size is only 6.7%, while the Jaccard index increases by \(\sim\)6 points over Spanish. Furthermore, this ensembled setting still often outperforms prior approaches on recall.

**Prompts in Different Languages** We also consider the effect of prompt language on the WSD via C-WLT method by ablating English prompts, prompts in the source language, and prompts in the target language for C-WLT. The English, Chinese, French, and Spanish prompts were obtained from or verified by native speakers; prompts in other languages were obtained directly from Google Translate.

\begin{table}
\begin{tabular}{c|c c|c c c||c c c}
\hline \hline
\multirow{2}{*}{**Language**} & \multirow{2}{*}{**MCS**} & \multirow{2}{*}{**Prior Work\({}^{*}\)**} & \multicolumn{3}{c||}{**Recall**} & \multicolumn{3}{c}{**Jaccard Index**} \\
 & & & **NeoX** & **B-3B** & **B-7.1B** & **NeoX** & **B-3B** & **B-7.1B** \\
\hline Basque & 32.72 & 51.71 (b) & 47.85 & 52.53 & **54.31** & 37.20 & 41.04 & **42.95** \\
Bulgarian & 58.16 & 73.60 (d) & **75.51** & 71.56 & 72.05 & **66.28** & 63.32 & 63.78 \\
Catalan & 27.17 & **57.47** (b) & 55.73 & 55.83 & 56.40 & 39.44 & 40.41 & **40.85** \\
Chinese & 29.62 & 57.05 (b) & **61.03** & 60.64 & 58.87 & **46.86** & 46.78 & 46.26 \\
Croatian & 62.88 & 74.40 (b) & **77.01** & 74.85 & 74.82 & **70.00** & 68.53 & 68.46 \\
Danish & 64.33 & 81.80 (c) & **81.86** & 76.76 & 77.38 & **73.50** & 69.69 & 70.32 \\
Dutch & 44.61 & 61.95 (b) & **66.25** & 61.89 & 63.46 & **55.72** & 52.07 & 53.33 \\
English\({}^{\dagger}\) & 63.37 & **76.77** (a) & 72.61 & 72.15 & 73.20 & 60.56 & 60.13 & **61.39** \\
Estonian & 46.87 & 68.88 (b) & **70.24** & 65.58 & 65.88 & **61.72** & 58.94 & 58.80 \\
French & 59.31 & **83.88** (a) & 76.04 & 76.47 & **78.02** & 64.67 & 65.62 & **68.00** \\
Galician & 60.85 & 67.3 (b) & 74.15 & 74.63 & **74.82** & 60.47 & **61.06** & 60.84 \\
German & 75.99 & **84.69** (b) & 81.45 & 78.31 & 81.57 & **74.40** & 71.60 & 74.02 \\
Hungarian & 47.29 & **76.4** (c) & 75.52 & 71.56 & 72.04 & **66.28** & 63.32 & 63.77 \\
Italian & 52.77 & **77.8** (c) & 76.63 & 74.50 & 74.58 & **57.91** & 57.62 & 57.63 \\
Japanese & 48.71 & 67.47 (c) & **71.63** & 70.78 & 71.38 & 57.56 & 57.38 & 55.72 \\
Korean & 52.48 & **68.2** (c) & 66.39 & 67.52 & 67.73 & 60.95 & 61.01 & 61.46 \\
Slovenian & 36.71 & **68.36** (a) & 53.12 & 46.21 & 47.93 & **40.32** & 33.36 & 37.05 \\
Spanish & 55.65 & 76.93 (b) & 75.42 & 75.53 & **77.66** & 55.58 & **56.50** & **58.36** \\
\hline Avg. & 49.31 & – & 70.35 & 68.62 & 69.45 & 58.59 & 57.42 & 58.24 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Zero-shot Recall and Jaccard Index for multilingual WSD on the XL-WSD dataset in the best-ensembled setting. Results for languages on which BLOOM was pre-trained are underlined. \({}^{*}\)Prior work numbers are drawn from the best results reported in (a) Pasini et al. (2021), (b) Berend (2022), and (c) Zhang et al. (2022); note that prior approaches are _not_ zero-shot as they require finetuning on labeled WSD data. \({}^{\dagger}\)For the 1512 (out of 8062) English examples with coverage issues, we used MCS as predictions.
\begin{table}
\begin{tabular}{c|c|c||c}
**Target Lang.** & **Recall** & **Jaccard** & **Delta\({}^{*}\)** \\
\hline Spanish & 74.23 & 52.94 & 20.0 \\
English & 67.16 & 53.37 & 11.7 \\
Finnish & 66.35 & 54.28 & 12.9 \\
Russian & 67.42 & 55.08 & 10.2 \\
Chinese & 70.84 & 57.77 & 9.6 \\
\hline Best Setting & 70.35 & 58.59 & 8.7 \\
All 5 Joint\({}^{\dagger}\) & 66.60 & 57.50 & 6.7 \\
\hline
\end{tabular}
\end{table}
Table 2: The average Recall and Jaccard Index (%) for the different target language settings of the GPT-NeoX model, as well as the delta (*) increase in sense label prediction rates. \({}^{\dagger}\)“All 5 joint” refers to the setting of using all five target languages above, whereas the “best setting” ensembles English, Chinese, and Russian.

In this study, we use a subset of the evaluation languages, Spanish and Chinese, as our target languages and evaluate based on (a) the overall performance of the method in the prompt language (Figure 5) and (b) the language of the top-scoring prediction for each prompt language setting, out of the union of the candidate word sets from the prompt, source, and target languages (Figure 6).

We observe that prompts in English and target languages outperform prompts in the source languages, with English prompts generally performing the best (though the target language prompts are comparable to English in BLOOM). We also find that the non-English prompts are more likely to produce a top-1 prediction in the wrong (not target) language. This is particularly true in the case of source language prompts; along with the observed performance decrease, this suggests that prompting the model to generate a label in a different language than the prompt itself is difficult - unless the prompt language is English. Moreover, our results show that the multilingual LM (BLOOM-7.1B) is more prone to predicting words in the wrong languages than the English LM (GPT-NeoX).

### Error Analysis

**Effect of Sense Frequency on Performance** Supervised WSD classifiers often learn to predict more commonly seen senses in the training data, which leads to stronger performance on examples of the most common sense (MCS) of words than the less common senses (LCS) [10]. We test whether this behavior holds with the unsupervised WSD via C-WLT approach by evaluating performance on examples where the gold sense is the MCS of the word and those annotated with an LCS separately (Table 6). The results show that the gap between MCS and LCS performance is relatively large for both metrics: we observe an average difference of 28.7 and 36.3 between MCS and LCS examples for recall and Jaccard index, respectively. We also find that the size of this performance gap is consistent between the GPT-NeoX and BLOOM-7.1B models. We hypothesize that this performance gap stems from unbalanced latent sense supervision in the pretraining data that is due to the natural Zipfian distribution of senses in language [13]. This finding then highlights that even zero-shot methods extrapolating from the pretraining signals are still vulnerable to unbalanced data.

**Manual Precision Analysis** Based on our observation, the gold annotations in the test sets across all 18 languages mostly consist of one label (and occasionally two). This leads us to hypothesize that there may be other closely related senses that are suitable in the given context but not included in the reference sense set. To investigate this further, we have three native language speakers manually re-annotate 392 examples in the Chinese test set.
Interestingly, our analysis finds that 172 examples (or 44%) have additional, closely related senses that are not included in the original annotations.

Figure 5: The results of performance differences by using prompts in different languages. The blue and red bars represent the results of GPT-NeoX and BLOOM-7.1B, respectively.

Figure 6: The proportions of top-1 predictions in different languages by using prompts in different languages. A larger darkest area indicates better performance.

\begin{table}
\begin{tabular}{c|c c|c c}
**Label Set** & \multicolumn{2}{c|}{**Recall**} & \multicolumn{2}{c}{**Jaccard**} \\
 & **NeoX** & **B-7.1B** & **NeoX** & **B-7.1B** \\
\hline Orig. & 63.78 & 57.74 & 52.01 & 50.98 \\
Annot. & **74.01** & **74.54** & **54.29** & **52.73** \\
\end{tabular}
\end{table}
Table 3: The results for the subset of the Chinese evaluation set that was re-annotated in comparison to the original labels of the dataset.
["""""] [""""] [""""] [""""] ["""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] [""""] ["""] [""""] [""""] [""""] ["""] ["""] [""""] [""""] [""""] [""""] [""""] [""""] ["""] ["""] ["""] ["""] ["""] [""""] [""""] ["""] [""""] ["""] [""""] ["""] ["""] ["""] ["""] [""""] ["""] ["""] ["""] ["""] [""""] ["""] ["""] ["""] ["""] [""] ["""] ["""] ["""] ["""] ["""] ["""] ["""] ["""] ["""] ["""] [""""] [""""] ["""] ["""] ["""] ["""] ["""] ["""] [""""] ["""] ["""] ["""] ["""] [""""] ["""] ["""] ["""] [""""] ["""] ["""] [""""] ["""] [""""] ["""] ["""] [""""] ["""] ["""] ["""] [""""] ["""] ["""] ["""] ["""] ["""] [""""] ["""] ["""] [""""] ["""] [""""] ["""] ["""] ["""] ["""] ["""] [""""] ["""] [""""] ["""] [""""] ["""] ["""] ["""] ["""] [""""] ["""] ["""] ["""] ["""] ["""] ["""] [""""] [""""] ["""] ["""] ["""] ["""] ["""] [""""] ["""] [""""] [""""] ["""] [""""] [""""] [""""] ["""] ["""] [""""] ["""] [""""] ["""] ["""""] [" ### Limitations We recognize several limitations that influence C-WLT and our proposed approach for WSD. First, the WSD via C-WLT method is dependent on the composition of the multilingual word sense ontology we use to obtain cross-lingual word senses and translations. Lower coverage in the chosen target language will hinder the method's performance: we see this empirically in the case of English as an evaluation language, as no target language setting (including ensembling) fully covers English, which requires us to back off the MCS of each word. Similarly, the translation capability of PLMs, particularly for low-resource languages, may limit the effectiveness of both C-WLT and our WSD approach that relies on it. While we first present a study of the efficacy of C-WLT before incorporating it into our WSD method, due to data limitations (i.e., constructing a C-WLT data for each language pair that contains examples covering multiple senses of many different target words), we examine three high-resource language pairs. However, as better cross-lingual PLMs are developed, they can be directly integrated into our proposed approach to improve WSD for these languages. Finally, our approach is not well-suited for distinguishing between very fine-grained differences in word sense. While our small-scale manual precision analysis (Section 4.3) suggests that at least some WSD evaluation sets are not annotated with complete coverage of all relevant senses - leading to an underestimate of our approach's performance - the ability to differentiate between closely related senses precisely remains a hurdle for the WSD via C-WLT method, and addressing this issue in the future will further improve its applicability. ## Acknowledgements We thank the human annotators for the manual precision analysis of the Chinese XL-WSD evaluation set. We also thank Hila Gonen for her helpful comments and discussion on this work.
2308.04072
Bounded compact and dual compact approximation properties of Hardy spaces: new results and open problems
The aim of the paper is to highlight some open problems concerning approximation properties of Hardy spaces. We also present some results on the bounded compact and the dual compact approximation properties (shortly, BCAP and DCAP) of such spaces, to provide background for the open problems. Namely, we consider abstract Hardy spaces $H[X(w)]$ built upon translation-invariant Banach function spaces $X$ with weights $w$ such that $w\in X$ and $w^{-1}\in X'$, where $X'$ is the associate space of $X$. We prove that if $X$ is separable, then $H[X(w)]$ has the BCAP with the approximation constant $M(H[X(w)])\le 2$. Moreover, if $X$ is reflexive, then $H[X(w)]$ has the BCAP and the DCAP with the approximation constants $M(H[X(w)])\le 2$ and $M^*(H[X(w)])\le 2$, respectively. In the case of classical weighted Hardy space $H^p(w) = H[L^p(w)]$ with $1<p<\infty$, one has a sharper result: $M(H^p(w))\le 2^{|1-2/p|}$ and $M^*(H^p(w))\le 2^{|1-2/p|}$.
Oleksiy Karlovych, Eugene Shargorodsky
2023-08-08T06:11:49Z
http://arxiv.org/abs/2308.04072v1
# Bounded compact and dual compact approximation properties of Hardy spaces: new results and open problems

###### Abstract

The aim of the paper is to highlight some open problems concerning approximation properties of Hardy spaces. We also present some results on the bounded compact and the dual compact approximation properties (shortly, BCAP and DCAP) of such spaces, to provide background for the open problems. Namely, we consider abstract Hardy spaces \(H[X(w)]\) built upon translation-invariant Banach function spaces \(X\) with weights \(w\) such that \(w\in X\) and \(w^{-1}\in X^{\prime}\), where \(X^{\prime}\) is the associate space of \(X\). We prove that if \(X\) is separable, then \(H[X(w)]\) has the BCAP with the approximation constant \(M(H[X(w)])\leq 2\). Moreover, if \(X\) is reflexive, then \(H[X(w)]\) has the BCAP and the DCAP with the approximation constants \(M(H[X(w)])\leq 2\) and \(M^{*}(H[X(w)])\leq 2\), respectively. In the case of classical weighted Hardy space \(H^{p}(w)=H[L^{p}(w)]\) with \(1<p<\infty\), one has a sharper result: \(M(H^{p}(w))\leq 2^{\left\lvert 1-2/p\right\rvert}\) and \(M^{*}(H^{p}(w))\leq 2^{\left\lvert 1-2/p\right\rvert}\).

keywords: Bounded compact and dual compact approximation properties, translation-invariant Banach function space, weighted Hardy space. MSC: [2020] 41A44, 41A65, 46B50, 46E30.

## 1 Introduction

For a Banach space \(E\), let \(\mathcal{B}(E)\) and \(\mathcal{K}(E)\) denote the sets of bounded linear and compact linear operators on \(E\), respectively. The norm of an operator \(A\in\mathcal{B}(E)\) is denoted by \(\|A\|_{\mathcal{B}(E)}\). The essential norm of \(A\in\mathcal{B}(E)\) is defined as follows:

\[\|A\|_{\mathcal{B}(E),\mathrm{e}}:=\inf\{\|A-K\|_{\mathcal{B}(E)}\ :\ K\in\mathcal{K}(E)\}.\]

For a Banach space \(E\) and an operator \(A\in\mathcal{B}(E)\), consider the following measure of noncompactness:

\[\|A\|_{\mathcal{B}(E),m}:=\inf_{\substack{L\subseteq E\ \text{closed linear subspace}\\ \dim(E/L)<\infty}}\left\|A|_{L}\right\|_{\mathcal{B}(L)},\]

where \(A|_{L}\) denotes the restriction of \(A\) to \(L\). It follows from [19, formula (3.29)] that if \(A\in\mathcal{B}(E)\), then

\[\|A\|_{\mathcal{B}(E),m}\leq\|A\|_{\mathcal{B}(E),\mathrm{e}}. \tag{1.1}\]

Motivated by applications to the Fredholm theory of Toeplitz operators (see [24]), we are interested in the smallest constant \(C\) in the reverse estimate:

\[\|A\|_{\mathcal{B}(E),\mathrm{e}}\leq C\|A\|_{\mathcal{B}(E),m}\quad\mbox{for all}\quad A\in\mathcal{B}(E). \tag{1.2}\]

Note that such an estimate is not true without additional assumptions on \(E\) (see [2] and also [16]).

A Banach space \(E\) is said to have the bounded compact approximation property (BCAP) if there exists a constant \(M\in(0,\infty)\) such that given any \(\varepsilon>0\) and any finite set \(F\subset E\), there exists an operator \(T\in\mathcal{K}(E)\) such that

\[\|I-T\|_{\mathcal{B}(E)}\leq M,\quad\|y-Ty\|_{E}<\varepsilon\quad\mbox{for all}\quad y\in F. \tag{1.3}\]

Here \(I\) is the identity map from \(E\) to itself. The greatest lower bound of the constants \(M\) for which (1.3) holds will be denoted by \(M(E)\).
A Banach space \(E\) with the dual space \(E^{*}\) is said to have the dual compact approximation property (DCAP) if there is a constant \(M^{*}\in(0,\infty)\) such that given any \(\varepsilon>0\) and any finite set \(G\subset E^{*}\) there exists an operator \(T\in\mathcal{K}(E)\) such that

\[\|I-T\|_{\mathcal{B}(E)}\leq M^{*},\quad\|z-T^{*}z\|_{E^{*}}<\varepsilon\quad\mbox{for all}\quad z\in G. \tag{1.4}\]

The greatest lower bound of the constants \(M^{*}\), for which (1.4) holds, will be denoted by \(M^{*}(E)\). It is easy to see that if \(E\) is reflexive, then \(E\) has the DCAP if and only if its dual space \(E^{*}\) has the BCAP. In this case \(M^{*}(E)=M(E^{*})\).

**Theorem 1.1**.: _Let \(E\) be a Banach space._

* _If_ \(E\) _has the BCAP, then (_1.2_) holds with_ \(C=2M(E)\)_._
* _If_ \(E\) _has the DCAP, then (_1.2_) holds with_ \(C=M^{*}(E)\)_._

Part (a) follows from [19, Theorems 3.1 and 3.6] (note that there is a typo in [19, formula (3.7)], where the factor \(2\) is missing). Part (b) was proved in [24, Theorem 2.2]. It follows from (1.1) and Theorem 1.1 that if a Banach space \(E\) has the BCAP or the DCAP, then the essential norm \(\|\cdot\|_{\mathcal{B}(E),\mathrm{e}}\) and the \(m\)-measure of noncompactness \(\|\cdot\|_{\mathcal{B}(E),m}\) are equivalent.

The condition \(\|I-T\|_{\mathcal{B}(E)}\leq M\) is often substituted by \(\|T\|_{\mathcal{B}(E)}\leq M\) in the definition of the BCAP (see, e.g., [5, 6, 20, 21], and the references therein). Let \(m(E)\) be the greatest lower bound of the constants \(M\) for which the conditions in this alternative definition of the BCAP are satisfied. Clearly,

\[m(E)-1\leq M(E)\leq m(E)+1.\]

We are interested in \(M(E)\) rather than in \(m(E)\) because the former appears naturally in estimates for the essential norms of operators by their measures of noncompactness (see Theorem 1.1 and [2, 9, 19, 24]).

It is well known that \(m(L^{p}[0,1])=1\), \(1\leq p<\infty\) (see, e.g., [22, Lemma 19.3.5]). The value of \(M(L^{p}[0,1])\) was found in [25, Theorem 3.2]: if \(1\leq p<\infty\), then \(M(L^{p}[0,1])=C_{p}\), where \(C_{p}\) is the norm of the operator

\[L^{p}[0,1]\ni f\ \longmapsto\ f-\int_{0}^{1}f(t)\,dt\in L^{p}[0,1],\]

i.e. \(C_{1}=2\) and, for \(1<p<\infty\),

\[C_{p}:=\max_{0\leq\alpha\leq 1}\left(\alpha^{p-1}+(1-\alpha)^{p-1}\right)^{1/p}\left(\alpha^{1/(p-1)}+(1-\alpha)^{1/(p-1)}\right)^{1-1/p} \tag{1.5}\]

(see [11, formula (8)]).

For a function \(f\in L^{1}\) on the unit circle \(\mathbb{T}:=\{z\in\mathbb{C}:\ |z|=1\}\), let

\[\widehat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f\left(e^{i\theta}\right)e^{-in\theta}\,d\theta,\quad n\in\mathbb{Z},\]

be the Fourier coefficients of \(f\). Let \(X\) be a Banach space of measurable complex-valued functions on \(\mathbb{T}\) continuously embedded into \(L^{1}\). Let

\[H[X]:=\{g\in X\ :\ \widehat{g}(n)=0\quad\text{for all}\quad n<0\}\]

denote the abstract Hardy space built upon the space \(X\). In the case \(X=L^{p}\), where \(1\leq p\leq\infty\), we will use the standard notation \(H^{p}:=H[L^{p}]\). The classical Hardy spaces \(H^{p}\) with \(1<p<\infty\) have the BCAP and the DCAP with

\[M(H^{p})\leq 2^{|1-2/p|},\quad M^{*}(H^{p})\leq 2^{|1-2/p|} \tag{1.6}\]

(see [24, Theorem 3.1]).

A measurable function \(w:\mathbb{T}\to[0,\infty]\) is said to be a weight if \(0<w<\infty\) a.e. on \(\mathbb{T}\). Let \(1<p<\infty\) and \(w\) be a weight. The weighted Lebesgue space \(L^{p}(w)\) consists of all measurable functions \(f:\mathbb{T}\to\mathbb{C}\) such that \(fw\in L^{p}\).
The norm in \(L^{p}(w)\) is defined by

\[\|f\|_{L^{p}(w)}:=\|fw\|_{L^{p}}=\left(\int_{\mathbb{T}}|f(t)|^{p}w^{p}(t)\,dm(t)\right)^{1/p},\]

where \(m\) is the Lebesgue measure on \(\mathbb{T}\) normalized so that \(m(\mathbb{T})=1\). Estimates (1.6) remain true for the weighted Hardy spaces \(H^{p}(w):=H[L^{p}(w)]\).

**Theorem 1.2**.: _Let \(1<p<\infty\), \(1/p+1/p^{\prime}=1\), and let \(w\) be a weight such that \(w\in L^{p}\) and \(1/w\in L^{p^{\prime}}\). Then the weighted Hardy space \(H^{p}(w)\) has the BCAP and the DCAP with_

\[M(H^{p}(w))\leq 2^{|1-2/p|},\quad M^{*}(H^{p}(w))\leq 2^{|1-2/p|}.\]

Let \(X\) be a Banach function space on the unit circle \(\mathbb{T}\) equipped with the Lebesgue measure \(dm\) and let \(X^{\prime}\) be its associate space (see [4, Ch. 1]). We postpone the definitions of these notions until Section 2.1. Here we only mention that the class of Banach function spaces is very rich; it includes all Lebesgue spaces \(L^{p}\), \(1\leq p\leq\infty\), Orlicz spaces \(L^{\varphi}\) (see, e.g., [4, Ch. 4, Section 8]), and Lorentz spaces \(L^{p,q}\) (see, e.g., [4, Ch. 4, Section 4]). For a weight \(w\), the weighted space \(X(w)\) consists of all measurable functions \(f:\mathbb{T}\to\mathbb{C}\) such that \(fw\in X\). We equip it with the norm

\[\|f\|_{X(w)}=\|fw\|_{X}.\]

We will suppose that \(w\in X\) and \(1/w\in X^{\prime}\). Then \(X(w)\) is a Banach function space itself and \(L^{\infty}\hookrightarrow X(w)\hookrightarrow L^{1}\) (see [13, Lemma 2.3(b)]). For \(f\in X\) we will use the following notation:

\[(\tau_{\vartheta}f)(e^{it}):=f(e^{i(t-\vartheta)}),\quad t,\vartheta\in[-\pi,\pi].\]

A Banach function space is said to be translation-invariant if for every \(f\in X\) and every \(\vartheta\in[-\pi,\pi]\), one has \(\tau_{\vartheta}f\in X\) and \(\|\tau_{\vartheta}f\|_{X}=\|f\|_{X}\). Note that all rearrangement-invariant Banach function spaces (see [4, Ch. 2]) are translation-invariant. The following analogue of (1.6) holds for the spaces \(H[X(w)]\).

**Theorem 1.3**.: _Let \(X\) be a translation-invariant Banach function space with the associate space \(X^{\prime}\) and let \(w\) be a weight such that \(w\in X\) and \(1/w\in X^{\prime}\)._

* _If_ \(X\) _is separable, then the abstract Hardy space_ \(H[X(w)]\) _has the BCAP with_ \[M(H[X(w)])\leq 2.\]
* _If_ \(X\) _is reflexive, then the abstract Hardy space_ \(H[X(w)]\) _has the BCAP and the DCAP with_ \[M(H[X(w)])\leq 2,\quad M^{*}(H[X(w)])\leq 2.\]

The paper is organised as follows. In Section 2, we collect preliminaries on Banach function spaces. Further, we give some estimates for the adjoints to restrictions of operators. In Section 3, we show that if a translation-invariant Banach function space \(X\) is separable, then the Hardy space \(H[X]\) has the BCAP with \(M(H[X])\leq 2\). Moreover, if \(X\) is reflexive, then \(H[X]\) has the BCAP and the DCAP with \(M(H[X])\leq 2\) and \(M^{*}(H[X])\leq 2\), respectively. In Section 4, we observe that if \(X\) is a Banach function space and \(w\) is a weight such that \(w\in X\) and \(1/w\in X^{\prime}\), then the Hardy spaces \(H[X]\) and \(H[X(w)]\) are isometrically isomorphic. This result combined with (1.6) and the main result of Section 3 implies Theorems 1.2 and 1.3. The main part of the paper is Section 5, where some open problems concerning approximation properties of Hardy spaces are stated and discussed.
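Since the constant \(C_{p}\) from (1.5) reappears throughout the paper, the following small Python sketch (our illustration, not part of the paper) evaluates it numerically by a grid search over \(\alpha\); the grid resolution and sample values of \(p\) are arbitrary choices. It is consistent with \(C_{2}=1\) and with \(C_{p}\to 2\) as \(p\to 1\), which is noted in Section 5.

```python
import numpy as np

def C_p(p, grid=100_000):
    """Numerically evaluate the constant C_p of (1.5) by maximizing over alpha."""
    if p == 1.0:
        return 2.0                      # C_1 = 2 by definition
    a = np.linspace(0.0, 1.0, grid)
    f = (a ** (p - 1) + (1 - a) ** (p - 1)) ** (1 / p)
    g = (a ** (1 / (p - 1)) + (1 - a) ** (1 / (p - 1))) ** (1 - 1 / p)
    return float(np.max(f * g))

for p in (1.1, 2.0, 4.0):
    print(p, round(C_p(p), 4))
# C_2 = 1 (the product equals 1 for every alpha), and C_p approaches 2 as p -> 1.
```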
## 2 Preliminaries

### Banach function spaces

Let \(\mathcal{M}\) be the set of all measurable extended complex-valued functions on \(\mathbb{T}\) equipped with the normalized measure \(dm(t)=|dt|/(2\pi)\) and let \(\mathcal{M}^{+}\) be the subset of functions in \(\mathcal{M}\) whose values lie in \([0,\infty]\). Following [4, Ch. 1, Definition 1.1], a mapping \(\rho:\mathcal{M}^{+}\to[0,\infty]\) is called a Banach function norm if, for all functions \(f,g,f_{n}\in\mathcal{M}^{+}\) with \(n\in\mathbb{N}\), and for all constants \(a\geq 0\), the following properties hold:

\[\mathrm{(A1)}\quad\rho(f)=0\Leftrightarrow f=0\ \mathrm{a.e.},\ \rho(af)=a\rho(f),\ \rho(f+g)\leq\rho(f)+\rho(g),\]
\[\mathrm{(A2)}\quad 0\leq g\leq f\ \mathrm{a.e.}\ \Rightarrow\ \rho(g)\leq\rho(f)\quad\text{(the lattice property)},\]
\[\mathrm{(A3)}\quad 0\leq f_{n}\uparrow f\ \mathrm{a.e.}\ \Rightarrow\ \rho(f_{n})\uparrow\rho(f)\quad\text{(the Fatou property)},\]
\[\mathrm{(A4)}\quad\rho(\mathbb{1})<\infty,\]
\[\mathrm{(A5)}\quad\int_{\mathbb{T}}f(t)\,dm(t)\leq C\rho(f)\]

with a constant \(C\in(0,\infty)\) that may depend on \(\rho\), but is independent of \(f\). When functions differing only on a set of measure zero are identified, the set \(X\) of all functions \(f\in\mathcal{M}\) for which \(\rho(|f|)<\infty\) is called a Banach function space. For each \(f\in X\), the norm of \(f\) is defined by \(\|f\|_{X}:=\rho(|f|)\). The set \(X\) equipped with the natural linear space operations and with this norm becomes a Banach space (see [4, Ch. 1, Theorems 1.4 and 1.6]).

If \(\rho\) is a Banach function norm, its associate norm \(\rho^{\prime}\) is defined on \(\mathcal{M}^{+}\) by

\[\rho^{\prime}(g):=\sup\left\{\int_{\mathbb{T}}f(t)g(t)\,dm(t)\ :\ f\in\mathcal{M}^{+},\ \rho(f)\leq 1\right\},\quad g\in\mathcal{M}^{+}.\]

It is a Banach function norm itself ([4, Ch. 1, Theorem 2.2]). The Banach function space \(X^{\prime}\) defined by the Banach function norm \(\rho^{\prime}\) is called the associate space (Köthe dual) of \(X\). The associate space \(X^{\prime}\) can be viewed as a subspace of the Banach dual space \(X^{*}\) (see [4, Ch. 1, Theorem 2.9]). The following lemma can be proved as in the non-periodic case (see [15, Lemma 2.1]).

**Lemma 2.1**.: _Let \(X\) be a Banach function space and \(X^{\prime}\) be its associate space. Then \(X\) is translation-invariant if and only if \(X^{\prime}\) is translation-invariant._

### Adjoints to restrictions of operators

In this subsection, we present some simple results, for which we could not find a convenient reference. Let \(X\) and \(Y\) be Banach spaces, \(X_{0}\subseteq X\) and \(Y_{0}\subseteq Y\) be closed linear subspaces, and let \(A\in\mathcal{B}(X,Y)\) be such that \(A(X_{0})\subseteq Y_{0}\). Let \(A_{0}\in\mathcal{B}(X_{0},Y_{0})\) be the restriction of \(A\) to \(X_{0}\):

\[A_{0}x_{0}:=Ax_{0}\in Y_{0}\ \ \text{for all}\ \ x_{0}\in X_{0}.\]

Let

\[X_{0}^{\perp}:=\{x^{*}\in X^{*}:\ x^{*}(x_{0})=0\ \ \text{for all}\ \ x_{0}\in X_{0}\}\]

and let \(Y_{0}^{\perp}\) be defined similarly. Then \(X_{0}^{*}\) and \(Y_{0}^{*}\) are isometrically isomorphic to the quotient spaces \(X^{*}/X_{0}^{\perp}\) and \(Y^{*}/Y_{0}^{\perp}\), respectively (see, e.g., [7, Theorem 7.1]). We will identify these spaces and will denote by \([x^{*}]\) the element of \(X^{*}/X_{0}^{\perp}\) corresponding to \(x^{*}\in X^{*}\), and similarly for \([y^{*}]\).

It is easy to see that \(A^{*}(Y_{0}^{\perp})\subseteq X_{0}^{\perp}\). Indeed, take any \(y_{0}^{*}\in Y_{0}^{\perp}\) and \(x_{0}\in X_{0}\).
Since \(Ax_{0}\in Y_{0}\), one has

\[(A^{*}y_{0}^{*})(x_{0})=y_{0}^{*}(Ax_{0})=0.\]

So, \(A^{*}y_{0}^{*}\in X_{0}^{\perp}\). Hence the operator \([A^{*}]\),

\[[A^{*}][y^{*}]:=[A^{*}y^{*}]\in X^{*}/X_{0}^{\perp},\quad[y^{*}]\in Y^{*}/Y_{0}^{\perp},\]

is a well defined element of \(\mathcal{B}(Y^{*}/Y_{0}^{\perp},X^{*}/X_{0}^{\perp})=\mathcal{B}(Y_{0}^{*},X_{0}^{*})\), and it is easy to see that \(A_{0}^{*}=[A^{*}]\). Indeed, one has for every \([y^{*}]\in Y^{*}/Y_{0}^{\perp}\) and \(x_{0}\in X_{0}\),

\[(A_{0}^{*}[y^{*}])(x_{0})=[y^{*}](A_{0}x_{0})=[y^{*}](Ax_{0})=y^{*}(Ax_{0})=(A^{*}y^{*})(x_{0})=[A^{*}y^{*}](x_{0})=([A^{*}][y^{*}])(x_{0}).\]

**Lemma 2.2**.: _Let \(X\) and \(Y\) be Banach spaces, \(X_{0}\subseteq X\) and \(Y_{0}\subseteq Y\) be closed linear subspaces, and \(A\in\mathcal{B}(X,Y)\) be such that \(A(X_{0})\subseteq Y_{0}\). If \(A_{0}:=A|_{X_{0}}\), then for every \(y^{*}\in Y^{*}\), one has_

\[\|A_{0}^{*}[y^{*}]\|_{X_{0}^{*}}\leq\|A^{*}y^{*}\|_{X^{*}},\]

_where \([y^{*}]\) is the element of \(Y^{*}/Y_{0}^{\perp}\) corresponding to \(y^{*}\)._

Proof.: We have

\[\|A_{0}^{*}[y^{*}]\|_{X_{0}^{*}}=\|A_{0}^{*}[y^{*}]\|_{X^{*}/X_{0}^{\perp}}=\|[A^{*}][y^{*}]\|_{X^{*}/X_{0}^{\perp}}=\|[A^{*}y^{*}]\|_{X^{*}/X_{0}^{\perp}}=\inf_{x_{0}^{\perp}\in X_{0}^{\perp}}\|A^{*}y^{*}+x_{0}^{\perp}\|_{X^{*}}\leq\|A^{*}y^{*}\|_{X^{*}},\]

which completes the proof.

## 3 Bounded compact and dual compact approximation properties of abstract Hardy spaces built upon translation-invariant spaces

### Continuity of shifts in separable translation-invariant Banach function spaces

We start with the following simple lemma.

**Lemma 3.1**.: _Let \(X\) be a translation-invariant Banach function space. If \(X\) is separable, then for every \(f\in X\),_

\[\lim_{\vartheta\to 0}\|\tau_{\vartheta}f-f\|_{X}=0. \tag{3.1}\]

Proof.: By [14, Lemma 2.2.1], a Banach function space \(X\) is separable if and only if the set of continuous functions \(C\) is dense in \(X\). Let \(f\in X\) and \(\varepsilon>0\). Then there exists \(g\in C\) such that \(\|f-g\|_{X}<\varepsilon/3\). Taking into account that \(X\) is translation-invariant, we see that for all \(\vartheta\in[-\pi,\pi]\),

\[\|\tau_{\vartheta}f-f\|_{X}\leq\|\tau_{\vartheta}f-\tau_{\vartheta}g\|_{X}+\|\tau_{\vartheta}g-g\|_{X}+\|g-f\|_{X}=2\|f-g\|_{X}+\|\tau_{\vartheta}g-g\|_{X}<\frac{2}{3}\varepsilon+\|1\|_{X}\|\tau_{\vartheta}g-g\|_{C}.\]

Since

\[\lim_{\vartheta\to 0}\|\tau_{\vartheta}g-g\|_{C}=0,\]

the above inequality yields

\[\limsup_{\vartheta\to 0}\|\tau_{\vartheta}f-f\|_{X}\leq\frac{2}{3}\varepsilon<\varepsilon.\]

Letting \(\varepsilon\to 0\), we arrive at (3.1).

### Convolutions with integrable functions on translation-invariant Banach function spaces

Recall that the convolution of two functions \(f,g\in L^{1}\) is defined by

\[(f*g)(e^{i\varphi}):=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(e^{i(\varphi-\theta)})g(e^{i\theta})\,d\theta.\]

The following lemmas might be known to experts; however, we were not able to find an explicit reference.

**Lemma 3.2**.: _Suppose that \(X\) is a translation-invariant Banach function space. If \(K\in L^{1}\), then the convolution operator \(C_{K}\) defined by_

\[C_{K}g=K*g,\quad g\in X, \tag{3.2}\]

_is bounded on \(X\) and_

\[\|C_{K}\|_{\mathcal{B}(X)}\leq\|K\|_{L^{1}}. \tag{3.3}\]

_If, in addition, \(K\geq 0\), then_

\[\|C_{K}\|_{\mathcal{B}(X)}=\|K\|_{L^{1}}. \tag{3.4}\]
Proof.: For every \(h\in X^{\prime}\), in view of Tonelli's theorem (see, e.g., [3, Theorem 5.28]) and Hölder's inequality for Banach function spaces (see [4, Ch. 1, Theorem 2.4]), one has

\[\int_{-\pi}^{\pi}\left|(K*g)(e^{i\vartheta})h(e^{i\vartheta})\right|\,d\vartheta\leq\frac{1}{2\pi}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left|K(e^{i(\vartheta-\theta)})\right|\left|g(e^{i\theta})\right|\left|h(e^{i\vartheta})\right|\,d\theta\,d\vartheta=\frac{1}{2\pi}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left|K(e^{i\theta})\right|\left|g(e^{i(\vartheta-\theta)})\right|\left|h(e^{i\vartheta})\right|\,d\theta\,d\vartheta=\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|K(e^{i\theta})\right|\left(\int_{-\pi}^{\pi}\left|(\tau_{\theta}g)(e^{i\vartheta})\right|\left|h(e^{i\vartheta})\right|\,d\vartheta\right)d\theta\leq\int_{-\pi}^{\pi}\left|K(e^{i\theta})\right|\|\tau_{\theta}g\|_{X}\|h\|_{X^{\prime}}\,d\theta=2\pi\|K\|_{L^{1}}\|g\|_{X}\|h\|_{X^{\prime}}. \tag{3.5}\]

In view of the Lorentz-Luxemburg theorem (see [4, Ch. 1, Theorem 2.7]), the last inequality implies that

\[\|K*g\|_{X}=\|K*g\|_{X^{\prime\prime}}=\sup\left\{\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|(K*g)(e^{i\vartheta})h(e^{i\vartheta})\right|\,d\vartheta:\ h\in X^{\prime},\ \|h\|_{X^{\prime}}\leq 1\right\}\leq\|K\|_{L^{1}}\|g\|_{X},\]

which implies (3.3). If, in addition, we suppose that \(K\geq 0\), then for a.e. \(\varphi\in[-\pi,\pi]\),

\[(K*1)(e^{i\varphi})=\frac{1}{2\pi}\int_{-\pi}^{\pi}K(e^{i(\varphi-\theta)})\,d\theta=\frac{1}{2\pi}\int_{-\pi}^{\pi}K(e^{it})\,dt=\|K\|_{L^{1}}.\]

Hence,

\[\|C_{K}\|_{\mathcal{B}(X)}=\sup_{f\in X\setminus\{0\}}\frac{\|K*f\|_{X}}{\|f\|_{X}}\geq\frac{\|K*1\|_{X}}{\|1\|_{X}}=\frac{\|K\|_{L^{1}}\|1\|_{X}}{\|1\|_{X}}=\|K\|_{L^{1}}.\]

Combining this inequality with (3.3), we arrive at (3.4).

### BCAP and DCAP of abstract Hardy spaces built upon translation-invariant Banach function spaces

Now we are in a position to prove the main result of this section.

**Theorem 3.3**.: _Let \(X\) be a translation-invariant Banach function space._

* _If_ \(X\) _is separable, then the abstract Hardy space_ \(H[X]\) _has the BCAP with_ \[M(H[X])\leq 2.\]
* _If_ \(X\) _is reflexive, then the abstract Hardy space_ \(H[X]\) _has the BCAP and DCAP with_ \[M(H[X])\leq 2,\quad M^{*}(H[X])\leq 2.\]

Proof.: (a) For \(\theta\in[-\pi,\pi]\) and \(n=0,1,2,\dots\), let

\[K_{n}\left(e^{i\theta}\right):=\sum_{k=-n}^{n}\left(1-\frac{|k|}{n+1}\right)e^{ik\theta}=\frac{1}{n+1}\left(\frac{\sin\frac{(n+1)\theta}{2}}{\sin\frac{\theta}{2}}\right)^{2}\]

be the \(n\)-th Fejér kernel, and let

\[\mathbf{K}_{n}f:=K_{n}*f,\quad f\in X.\]

It is well known that \(K_{n}\geq 0\), \(\|K_{n}\|_{L^{1}}=1\), and

\[(\mathbf{K}_{n}f)\left(e^{i\theta}\right)=\sum_{k=-n}^{n}\widehat{f}(k)\left(1-\frac{|k|}{n+1}\right)e^{ik\theta}, \tag{3.6}\]

where \(\widehat{f}(k)\) is the \(k\)-th Fourier coefficient of \(f\) (see, e.g., [18, Ch. I, Section 2.5]). It follows from Lemma 3.2 that \(\|\mathbf{K}_{n}\|_{\mathcal{B}(X)}=1\). Hence

\[\|I-\mathbf{K}_{n}\|_{\mathcal{B}(X)}\leq 1+\|\mathbf{K}_{n}\|_{\mathcal{B}(X)}=2.\]

It follows from Lemma 3.1 that a separable translation-invariant Banach function space \(X\) is a homogeneous Banach space in the sense of [18, Ch. I, Definition 2.10]. Hence [18, Ch. I, Theorem 2.11] implies that \(\mathbf{K}_{n}\) converge strongly to the identity operator on \(X\) as \(n\to\infty\). Moreover, (3.6) implies that \(\mathbf{K}_{n}\) maps \(H[X]\) to \(H[X]\). Thus \(M(H[X])\leq 2\).
(b) If \(X\) is reflexive, then \(X^{*}=X^{\prime}\) is also separable (see [4, Ch. 1, Corollaries 4.3-4.4 and 5.6]) and translation-invariant (see Lemma 2.1). It follows from the above that the adjoint operators \(\mathbf{K}_{n}^{*}=\mathbf{K}_{n}:X^{\prime}\to X^{\prime}\) converge strongly to the identity operator as \(n\to\infty\). Applying Lemma 2.2 to \(A=I-\mathbf{K}_{n}\), \(X_{0}=Y_{0}=H[X]\), one concludes that the adjoint operators \(\mathbf{K}_{n}^{*}:(H[X])^{*}\to(H[X])^{*}\) also converge strongly to the identity operator as \(n\to\infty\). Hence \(M^{*}(H[X])\leq 2\).

## 4 Proofs of Theorems 1.2 and 1.3

### BCAP and DCAP of isometrically isomorphic Banach spaces

The next lemma follows immediately from the definitions of the BCAP and the DCAP.

**Lemma 4.1**.: _Let \(E\) and \(F\) be isometrically isomorphic Banach spaces._

1. _The space_ \(E\) _has the BCAP if and only if_ \(F\) _has the BCAP. In this case_ \[M(E)=M(F).\]
2. _The space_ \(E\) _has the DCAP if and only if_ \(F\) _has the DCAP. In this case_ \[M^{*}(E)=M^{*}(F).\]

### Isometric isomorphism of weighted and nonweighted abstract Hardy spaces

Having in mind the previous lemma, we show that \(H[X]\) and \(H[X(w)]\) are isometrically isomorphic under natural assumptions on weights \(w\).

**Lemma 4.2**.: _Let \(X\) be a Banach function space with the associate space \(X^{\prime}\) and let \(w\) be a weight such that \(w\in X\) and \(1/w\in X^{\prime}\). Then \(H[X(w)]\) is isometrically isomorphic to \(H[X]\)._

Proof.: Let \(\mathbb{D}\) be the unit disc: \(\mathbb{D}:=\{z\in\mathbb{C}:\ |z|<1\}\). A function \(F\) analytic in \(\mathbb{D}\) is said to belong to the Hardy space \(H^{p}(\mathbb{D})\), \(0<p\leq\infty\), if the integral mean

\[M_{p}(r,F)=\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}|F(re^{i\theta})|^{p}\,d\theta\right)^{1/p},\quad 0<p<\infty,\]
\[M_{\infty}(r,F)=\max_{-\pi\leq\theta\leq\pi}|F(re^{i\theta})|,\]

remains bounded as \(r\to 1\). If \(F\in H^{p}(\mathbb{D})\), \(0<p\leq\infty\), then the nontangential limit \(F(e^{i\theta})\) exists almost everywhere on \(\mathbb{T}\) and \(F\in L^{p}(\mathbb{T})\) (see, e.g., [7, Theorem 2.2]). If \(1\leq p\leq\infty\), then \(F\in H^{p}\) (see, e.g., [7, Theorem 3.4]).

It follows from \(w\in X\), \(1/w\in X^{\prime}\) and Axiom (A5) that \(w\in L^{1}\) and \(\frac{1}{w}\in L^{1}\). Then \(\log w\in L^{1}\). Consider the outer function

\[W(z):=\exp\left(\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{e^{it}+z}{e^{it}-z}\,\log w(e^{it})\,dt\right),\quad z\in\mathbb{D}\]

(see [12, Ch. 5]). It belongs to \(H^{1}(\mathbb{D})\) and \(|W|=w\) a.e. on \(\mathbb{T}\). It follows from the definition of \(X(w)\) that

\[\|Wf\|_{X}=\|wf\|_{X}=\|f\|_{X(w)}\quad\text{for all}\quad f\in H[X(w)]. \tag{4.1}\]

Since \(X(w)\) is a Banach function space, Axiom (A5) implies that \(X(w)\subseteq L^{1}\) and \(H[X(w)]\subseteq H^{1}\). Take any \(f\in H[X(w)]\). Let \(F\in H^{1}(\mathbb{D})\) be its analytic extension to the unit disk \(\mathbb{D}\) by means of the Poisson integral (see the proof of [7, Theorem 3.4]). Since \(W,F\in H^{1}(\mathbb{D})\), Hölder's inequality implies that \(WF\in H^{1/2}(\mathbb{D})\). It follows from (4.1) and Axiom (A5) that \(Wf\in X\subseteq L^{1}\). Hence \(WF\in H^{1}(\mathbb{D})\) (see [7, Theorem 2.11]). So, \(Wf\in H^{1}\cap X=H[X]\). This proves that the mapping \(f\mapsto Wf\) is an isometric isomorphism of \(H[X(w)]\) into \(H[X]\).
Repeating the above argument, one gets that the mapping \(g\mapsto\frac{1}{W}\,g\) is an isometric isomorphism of \(H[X]\) into \(H[X(w)]\). Hence \(H[X(w)]\) and \(H[X]\) are isometrically isomorphic.

### Proof of Theorem 1.2

By Lemma 4.2, the spaces \(H^{p}\) and \(H^{p}(w)\) are isometrically isomorphic. Therefore, in view of (1.6) and Lemma 4.1, the weighted Hardy space has the BCAP and the DCAP and

\[M(H^{p}(w))=M(H^{p})\leq 2^{|1-2/p|},\quad M^{*}(H^{p}(w))=M^{*}(H^{p})\leq 2^{|1-2/p|},\]

which completes the proof.

### Proof of Theorem 1.3

It follows from Lemma 4.2 that the spaces \(H[X]\) and \(H[X(w)]\) are isometrically isomorphic. Now part (a) (resp., part (b)) follows from part (a) (resp., part (b)) of Lemma 4.1 and part (a) (resp., part (b)) of Theorem 3.3.

## 5 Concluding remarks and open problems

### Exact values of the norms of the operators \(I-\mathbf{K}_{n}\) and \(I-\mathbf{P}_{r}\) on \(L^{p}\) and \(H^{p}\)

Upper estimates for the norms of the operators \(I-\mathbf{K}_{n}\) play a crucial role in the proof of estimates (1.6) (see [24]). Consider also the operators \(I-\mathbf{P}_{r}\), where

\[\mathbf{P}_{r}f:=P_{r}*f,\quad 0\leq r<1,\]

and \(P_{r}\) is the Poisson kernel

\[P_{r}(e^{i\theta}):=\sum_{k=-\infty}^{\infty}r^{|k|}e^{ik\theta}=\frac{1-r^{2}}{1+r^{2}-2r\cos\theta},\quad\theta\in[-\pi,\pi],\quad 0\leq r<1.\]

The following theorem provides a two-sided estimate for operators of this type.

**Theorem 5.1**.: _Let \(K\in L^{1}\), \(\|K\|_{L^{1}}=1\), \(K\geq 0\), and \(\widehat{K}(n)\geq 0\) for all \(n\in\mathbb{Z}\). Then the following estimate holds for the convolution operator \(C_{K}\) defined by (3.2):_

\[C_{p}\leq\|I-C_{K}\|_{\mathcal{B}(L^{p})}\leq 2^{|1-2/p|},\quad 1\leq p\leq\infty, \tag{5.1}\]

_where \(C_{1}=2=C_{\infty}\) and \(C_{p}\) is given by (1.5) for \(p\in(1,\infty)\)._

Proof.: It follows from Lemma 3.2 that \(\|C_{K}\|_{\mathcal{B}(L^{p})}=1\), and hence

\[\|I-C_{K}\|_{\mathcal{B}(L^{1})}\leq 2,\quad\|I-C_{K}\|_{\mathcal{B}(L^{\infty})}\leq 2\]

(cf. the proof of Theorem 3.3). Since \(\widehat{K}(n)\geq 0\) and \(\widehat{K}(n)\leq\|K\|_{L^{1}}=1\), \(n\in\mathbb{Z}\), the Parseval theorem gives \(\|I-C_{K}\|_{\mathcal{B}(L^{2})}\leq 1\). (In fact, one can easily see that \(\|I-C_{K}\|_{\mathcal{B}(L^{2})}=1\), since \(\widehat{K}(n)\to 0\) as \(n\to\infty\) due to the Riemann-Lebesgue lemma.) Then the Riesz-Thorin interpolation theorem implies that

\[\|I-C_{K}\|_{\mathcal{B}(L^{p})}\leq 2^{|1-2/p|},\quad 1<p<\infty, \tag{5.2}\]

which proves the upper estimate in (5.1).

Since trigonometric polynomials are dense in \(L^{1}\), it follows from Lemma 3.2 that \(C_{K}\) can be approximated in norm by finite rank operators. So, \(C_{K}:L^{p}\to L^{p}\) is a compact operator. The equality

\[\frac{1}{2\pi}\int_{-\pi}^{\pi}K(e^{i(\varphi-\theta)})\cdot 1\,d\theta=\|K\|_{L^{1}}=1\]

implies that \(C_{K}\) preserves constant functions. Then

\[\|I-C_{K}\|_{\mathcal{B}(L^{p})}\geq C_{p},\quad 1\leq p<\infty\]

(see [25, Theorem 3.4]). It remains to prove the lower estimate in (5.1) for \(p=\infty\). In the case \(p=\infty\) or \(p=1\), (5.1) turns into the equality \(\|I-C_{K}\|_{\mathcal{B}(L^{p})}=2\), which follows from Lemma 3.2 and the fact that \(L^{\infty}\) and \(L^{1}\) have the Daugavet property (see [1, Theorem 1 and the references therein] and [26, Corollary 6 and its proof]): \(\|I-T\|_{\mathcal{B}(L^{p})}=1+\|T\|_{\mathcal{B}(L^{p})}\) for every operator \(T\in\mathcal{K}(L^{p})\), \(p=\infty\) or \(p=1\).
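As noted just below, the Fejér and Poisson kernels satisfy the hypotheses of Theorem 5.1. The following small Python sketch (our illustration, not part of the paper) checks these hypotheses numerically on a uniform grid: nonnegativity and unit \(L^{1}\)-norm; the nonnegativity of the Fourier coefficients \(1-|k|/(n+1)\) and \(r^{|k|}\) is evident from the formulas above. The grid size and the sample parameters \(n=5\), \(r=0.7\) are arbitrary choices.

```python
import numpy as np

N = 2 ** 12                                   # quadrature points on [-pi, pi)
theta = 2 * np.pi * np.arange(N) / N - np.pi

def fejer(n):
    """Fejer kernel K_n(e^{i theta}) = sum_{|k|<=n} (1 - |k|/(n+1)) e^{ik theta}."""
    k = np.arange(-n, n + 1)
    w = 1 - np.abs(k) / (n + 1)
    return (w[None, :] * np.exp(1j * np.outer(theta, k))).sum(axis=1).real

def poisson(r):
    """Poisson kernel P_r(e^{i theta}) = (1 - r^2) / (1 + r^2 - 2 r cos theta)."""
    return (1 - r ** 2) / (1 + r ** 2 - 2 * r * np.cos(theta))

for K in (fejer(5), poisson(0.7)):
    assert K.min() >= -1e-10                  # K >= 0 (up to float rounding)
    l1 = K.mean()                             # approximates (1/2pi) * integral over T
    assert abs(l1 - 1.0) < 1e-6               # ||K||_{L^1} = 1
```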
It is easy to see that \(\mathbf{K}_{n}\) and \(\mathbf{P}_{r}\) satisfy the conditions of Theorem 5.1 and map \(H^{p}\) into itself. Clearly,
\[\|I-\mathbf{K}_{n}\|_{\mathcal{B}(H^{p})}\leq\|I-\mathbf{K}_{n}\|_{\mathcal{B}(L^{p})},\ \|I-\mathbf{P}_{r}\|_{\mathcal{B}(H^{p})}\leq\|I-\mathbf{P}_{r}\|_{\mathcal{B}(L^{p})},\ 1\leq p\leq\infty.\]
The above remarks lead to the following.

**Open problem 5.2**.: _Let \(n\in\mathbb{Z}_{+}\) and \(r\in[0,1)\). Find the exact values of \(\|I-\mathbf{K}_{n}\|_{\mathcal{B}(L^{p})}\) and \(\|I-\mathbf{P}_{r}\|_{\mathcal{B}(L^{p})}\) for \(1<p<\infty\), and of \(\|I-\mathbf{K}_{n}\|_{\mathcal{B}(H^{p})}\) and \(\|I-\mathbf{P}_{r}\|_{\mathcal{B}(H^{p})}\) for \(1\leq p\leq\infty\)._

It seems that the above problem is open even for \(n=1\). For \(n=0\), one has \((I-\mathbf{K}_{0})f=f-\widehat{f}(0)=(I-\mathbf{P}_{0})f\) and
\[\|I-\mathbf{K}_{0}\|_{\mathcal{B}(L^{p})}=C_{p}\]
(see [11, formula (8)]), but the value of \(\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}\) does not seem to be known for \(p\in[1,\infty)\setminus\{2\}\). What is known is that
\[\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{\infty})}=2 \tag{5.3}\]
(see [10, Theorem 2.5]) and
\[\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}<\|I-\mathbf{K}_{0}\|_{\mathcal{B}(L^{p})}\]
for sufficiently small \(p\geq 1\). Indeed, \(\|I-\mathbf{K}_{0}\|_{\mathcal{B}(L^{p})}=C_{p}\to 2\) as \(p\to 1\), while
\[\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}<1.7047\]
for sufficiently small \(p\geq 1\) (see the proof of [10, Theorem 2.4]). It follows from the lower estimate in (5.1) that
\[\|I-\mathbf{K}_{n}\|_{\mathcal{B}(L^{p})}\geq\|I-\mathbf{K}_{0}\|_{\mathcal{B}(L^{p})}. \tag{5.4}\]
An analogue of this estimate holds in the \(H^{p}\) setting.

**Lemma 5.3**.: _For every \(n\in\mathbb{Z}_{+}\),_
\[\|I-\mathbf{K}_{n}\|_{\mathcal{B}(H^{p})}\geq\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}.\]

Proof.: Take any \(f\in H^{p}\setminus\{0\}\) and set \(f_{m}(e^{i\theta}):=f(e^{im\theta})\), \(m\in\mathbb{N}\). Then \(f_{m}\in H^{p}\) and \(\|f_{m}\|_{H^{p}}=\|f\|_{H^{p}}\) (see [8, Theorem 5.5]). Let \(m>n\). It follows from (3.6) that \(\mathbf{K}_{n}f_{m}=\widehat{f}(0)=\mathbf{K}_{0}f_{m}=\mathbf{K}_{0}f\). Hence
\[\|I-\mathbf{K}_{n}\|_{\mathcal{B}(H^{p})}=\sup_{g\in H^{p}\setminus\{0\}}\frac{\|(I-\mathbf{K}_{n})g\|_{H^{p}}}{\|g\|_{H^{p}}}\geq\sup_{f\in H^{p}\setminus\{0\}}\frac{\|(I-\mathbf{K}_{n})f_{m}\|_{H^{p}}}{\|f_{m}\|_{H^{p}}}=\sup_{f\in H^{p}\setminus\{0\}}\frac{\|(I-\mathbf{K}_{0})f_{m}\|_{H^{p}}}{\|f\|_{H^{p}}}=\sup_{f\in H^{p}\setminus\{0\}}\frac{\|(I-\mathbf{K}_{0})f\|_{H^{p}}}{\|f\|_{H^{p}}}=\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})},\]
which completes the proof.

The same argument as in the proof of Lemma 5.3 applies in the \(L^{p}\) setting and provides a simpler proof of (5.4).
### Exact value of the norm of the backward shift operator on \(H^{p}\)

We think that the question about the exact value of \(\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}\) is particularly interesting, and although it is a special case of Problem 5.2, we state it again below in terms of the backward shift operator
\[(\mathbf{B}f)(e^{i\theta}):=e^{-i\theta}\left(f(e^{i\theta})-\widehat{f}(0)\right)=e^{-i\theta}\big{(}(I-\mathbf{K}_{0})f\big{)}(e^{i\theta}),\quad f\in H^{p}.\]
Clearly,
\[|\mathbf{B}f|=|(I-\mathbf{K}_{0})f|\Longrightarrow\|\mathbf{B}f\|_{H^{p}}=\|(I-\mathbf{K}_{0})f\|_{H^{p}}\ \ \text{for all}\ \ f\in H^{p}\Longrightarrow\|\mathbf{B}\|_{\mathcal{B}(H^{p})}=\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}.\]
In particular,
\[\|\mathbf{B}\|_{\mathcal{B}(H^{\infty})}=2\]
(see (5.3) and [10, Theorem 2.5]).

**Open problem 5.4**.: _Let \(1\leq p<\infty\). Find the exact value of the norm \(\|\mathbf{B}\|_{\mathcal{B}(H^{p})}\) of the backward shift operator._

### Exact values for \(M(H^{p})\) and \(M^{*}(H^{p})\)

It seems that estimates (1.6) and the estimate \(M(H^{1})\leq 2\), which follows from Theorem 1.3(a), are all that is known about the values of \(M(H^{p})\) and \(M^{*}(H^{p})\). So, it would be interesting to get nontrivial lower and better upper bounds for \(M(H^{p})\) and \(M^{*}(H^{p})\) and, moreover, to solve the following.

**Open problem 5.5**.: (a) _Find the exact value of \(M(H^{p})\), \(1\leq p<\infty\)._ (b) _Find the exact value of \(M^{*}(H^{p})\), \(1<p<\infty\)._

Given that \(M(L^{p})=\|I-\mathbf{K}_{0}\|_{\mathcal{B}(L^{p})}\) (see [25, Theorem 3.2]), it would be interesting to know whether \(M(H^{p})=\|I-\mathbf{K}_{0}\|_{\mathcal{B}(H^{p})}\).

### Estimates for \(M(H[L^{\varphi}])\) and \(M^{*}(H[L^{\varphi}])\) in the case of some Orlicz spaces \(L^{\varphi}\)

Let \(\varphi:[0,\infty)\to[0,\infty]\) be a convex nondecreasing left-continuous function that is not identically zero or infinity on \((0,\infty)\) and satisfies \(\varphi(0)=0\). For a measurable function \(f:\mathbb{T}\to\mathbb{C}\), define
\[I_{\varphi}(f):=\int_{\mathbb{T}}\varphi(|f(t)|)\,dm(t).\]
The Orlicz space \(L^{\varphi}\) is the set of all measurable functions \(f:\mathbb{T}\to\mathbb{C}\) such that \(I_{\varphi}(\lambda f)<\infty\) for some \(\lambda=\lambda(f)>0\). This space is a Banach space when equipped with either of the following two equivalent norms: the Luxemburg norm
\[\|f\|_{\varphi}:=\inf\{\lambda>0:I_{\varphi}(f/\lambda)\leq 1\}\]
and the Orlicz norm (in the Amemiya form)
\[\|f\|_{\varphi}^{0}:=\inf_{k>0}\frac{1}{k}(1+I_{\varphi}(kf)).\]
It is well known that
\[\|f\|_{\varphi}\leq\|f\|_{\varphi}^{0}\leq 2\|f\|_{\varphi}\quad\text{for all}\quad f\in L^{\varphi}.\]
We denote by \(\mathcal{P}\) the set of all quasi-concave functions \(\rho:[0,\infty)\to[0,\infty)\), that is, the functions \(\rho\) such that \(\rho(x)=0\) precisely when \(x=0\), the function \(\rho(x)\) is increasing and the function \(\rho(x)/x\) is decreasing on \((0,\infty)\). Let \(\widetilde{\mathcal{P}}\) denote the subset of all concave functions in \(\mathcal{P}\). It follows from [17, Lemma 3.2] that if \(1\leq p<q\leq\infty\) and \(\rho\in\widetilde{\mathcal{P}}\), then the function \(\varphi\), inverse to the function \(\varphi^{-1}\) defined by
\[\varphi^{-1}(0):=0,\quad\varphi^{-1}(x):=x^{1/p}\rho\left(x^{1/q-1/p}\right),\quad x\in(0,\infty), \tag{5.5}\]
is convex.
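For orientation, the simplest admissible choice \(\rho(t)=t\) in (5.5) recovers a Lebesgue space:
\[\varphi^{-1}(x)=x^{1/p}\cdot x^{1/q-1/p}=x^{1/q},\qquad\varphi(x)=x^{q},\]
and then \(I_{\varphi}(f/\lambda)=\int_{\mathbb{T}}(|f(t)|/\lambda)^{q}\,dm(t)\), so the Luxemburg norm reduces to the Lebesgue norm: \(\|f\|_{\varphi}=\|f\|_{L^{q}}\).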
Moreover, if \(1<p<q<\infty\), then \(\varphi\) and its complementary function \(\varphi^{*}\) defined by \[\varphi^{*}(x):=\sup_{y>0}(xy-\varphi(y)),\] satisfy the \(\Delta_{2}\)-condition for all \(x\geq 0\), that is, there exist \(K,K^{*}>0\) such that \(\varphi(2x)\leq K\varphi(x)\) and \(\varphi^{*}(2x)\leq K^{*}\varphi^{*}(x)\) for all \(x\geq 0\). Then \(L^{\varphi}\) is reflexive (see, e.g., [23, Corollary 15.4.2]). For \(1<p,q<\infty\), put \[\gamma_{p,q}:=\inf\left\{\gamma>0:\inf_{x+y=\gamma,\ x\geq 0,\ y\geq 0}(x^{p}+y^ {q})=1\right\}.\] It follows from [17, Proposition 4.3] that \(\gamma_{p,q}\) continuously increases in \(p\) and \(q\). Moreover, if \(p\leq q\), then \[2^{1-1/p}\leq\gamma_{p,q}\leq 2^{1-1/q}.\] For \(r\in(1,\infty)\), define \(r^{\prime}\) by \(1/r+1/r^{\prime}=1\). **Theorem 5.6** ([17, Theorem 5.1]).: _Let \(1<p<q<\infty\) and \(\rho\in\widetilde{\mathcal{P}}\). Suppose that \(\varphi^{-1}\) is defined by (5.5). If \(T\in\mathcal{B}(L^{p})\) and \(T\in\mathcal{B}(L^{q})\), then \(T\in\mathcal{B}(L^{\varphi})\) and_ \[\|T\|_{\mathcal{B}(L^{\varphi})}\leq C_{p,q}\max\left\{\|T\|_{\mathcal{B}(L^{ p})},\|T\|_{\mathcal{B}(L^{q})}\right\},\] _where \(L^{\varphi}\) is equipped with the Luxemburg norm or with the Orlicz norm, and_ \[1\leq C_{p,q}:=\min\left\{(2\gamma_{p,q})^{1/p},(2\gamma_{q^{\prime},p^{ \prime}})^{1/q^{\prime}}\right\}\leq 2^{1/(pq^{\prime})+\min\{1/p,1/q^{ \prime}\}}. \tag{5.6}\] Using this interpolation theorem, we can refine the results of Theorem 3.3(b) for some Orlicz spaces. **Theorem 5.7**.: _Let \(1<p<q<\infty\) and \(\rho\in\widetilde{\mathcal{P}}\). Suppose that \(\varphi^{-1}\) is defined by (5.5) and the corresponding Orlicz space \(L^{\varphi}\) is equipped with the Luxemburg norm or with the Orlicz norm. Then the Hardy-Orlicz space \(H[L^{\varphi}]\) has the BCAP and the DCAP with_ \[M(H[L^{\varphi}])\leq\min\{2,\Lambda_{p,q}\},\quad M^{*}(H[L^{\varphi}])\leq \min\{2,\Lambda_{p,q}\},\] _where_ \[\Lambda_{p,q}:=C_{p,q}\max\left\{2^{|1-2/p|},2^{|1-2/q|}\right\}, \tag{5.7}\] _and the constant \(C_{p,q}\) is defined by (5.6)._ Proof.: It is well-known and easy to check that each Orlicz space is translation-invariant. As it was mentioned above, \(L^{\varphi}\) is reflexive under the assumptions of the Theorem. Therefore, by Theorem 3.3(b), the Hardy-Orlicz space \(H[L^{\varphi}]\) has the BCAP and the DCAP with \(M(H[L^{\varphi}])\leq 2\) and \(M^{*}(H[L^{\varphi}])\leq 2\). It remains to show that \[M(H[L^{\varphi}])\leq\Lambda_{p,q},\quad M^{*}(H[L^{\varphi}])\leq\Lambda_{p,q}. \tag{5.8}\] It follows from (3.6), (5.2) and Theorem 5.6 that for all \(n\in\mathbb{Z}_{+}\), \[\|I-\mathbf{K}_{n}\|_{\mathcal{B}(H[L^{\varphi}])} \leq\|I-\mathbf{K}_{n}\|_{\mathcal{B}(L^{\varphi})}\] \[\leq C_{p,q}\max\left\{\|I-\mathbf{K}_{n}\|_{\mathcal{B}(L^{p})},\|I-\mathbf{K}_{n}\|_{\mathcal{B}(L^{q})}\right\}\] \[\leq C_{p,q}\max\left\{2^{|1-2/p|},2^{|1-2/q|}\right\}=\Lambda_{ p,q},\] where the Orlicz space \(L^{\varphi}\) is equipped with the Luxemburg norm or the Orlicz norm. As in the proof of Theorem 3.3(b), this implies (5.8). It follows from (5.6) and (5.7) that if \(p\) and \(q\) are sufficiently close to \(2\), then \(M(H[L^{\varphi}])<2\) and \(M^{*}(H[L^{\varphi}])<2\). Given that the value of \(M(H^{p})\) is not known, it would perhaps be too ambitious to ask about the exact values of \(M(H[L^{\varphi}])\) and \(M^{*}(H[L^{\varphi}])\). Nevertheless, we think it would be interesting to get more information on these quantities. 
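To illustrate the quantity \(\gamma_{p,q}\) appearing in the constant \(C_{p,q}\), consider the diagonal case \(p=q\), where it can be computed explicitly: for \(p>1\), the infimum of \(x^{p}+y^{p}\) over \(x+y=\gamma\), \(x,y\geq 0\), is attained at \(x=y=\gamma/2\) by convexity, so
\[\inf_{x+y=\gamma,\ x\geq 0,\ y\geq 0}(x^{p}+y^{p})=2\left(\frac{\gamma}{2}\right)^{p}=1\iff\gamma=2^{1-1/p},\]
that is, \(\gamma_{p,p}=2^{1-1/p}\), in agreement with the two-sided bound \(2^{1-1/p}\leq\gamma_{p,q}\leq 2^{1-1/q}\) for \(p\leq q\).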
### Estimates for \(M(H[L^{p,q}])\) and \(M^{*}(H[L^{p,q}])\) in the case of Lorentz spaces \(L^{p,q}\)

The distribution function \(m_{f}\) of a measurable a.e. finite function \(f:\mathbb{T}\to\mathbb{C}\) is given by
\[m_{f}(\lambda):=m\{t\in\mathbb{T}:|f(t)|>\lambda\},\quad\lambda\geq 0.\]
The non-increasing rearrangement of \(f\) is defined by
\[f^{*}(x):=\inf\{\lambda:m_{f}(\lambda)\leq x\},\quad x\geq 0.\]
We refer to [4, Ch. 2, Section 1] for properties of distribution functions and non-increasing rearrangements. One of the closest classes of translation-invariant spaces to the class of Lebesgue spaces \(L^{p}\), \(1\leq p\leq\infty\), consists of the Lorentz spaces \(L^{p,q}\) defined as follows. For \(1\leq q\leq p<\infty\), the Lorentz space \(L^{p,q}\) consists of all measurable functions \(f:\mathbb{T}\to\mathbb{C}\) for which
\[\|f\|_{p,q}:=\left(\int_{0}^{1}[t^{1/p}f^{*}(t)]^{q}\,\frac{dt}{t}\right)^{1/q}<\infty.\]
This is a rearrangement-invariant Banach function space with respect to the norm \(\|\cdot\|_{p,q}\) (see, e.g., [4, Ch. 4, Theorem 4.3]). The Lorentz space \(L^{p,p}\) is isometrically isomorphic to the Lebesgue space \(L^{p}\). It follows from Theorem 3.3 that if \(1\leq q\leq p<\infty\), then the Hardy-Lorentz space \(H[L^{p,q}]\) has the BCAP with
\[M(H[L^{p,q}])\leq 2, \tag{5.9}\]
because \(L^{p,q}\) is separable in this case. Moreover, if \(1<q\leq p<\infty\), then the Hardy-Lorentz space \(H[L^{p,q}]\) has the DCAP with
\[M^{*}(H[L^{p,q}])\leq 2, \tag{5.10}\]
since the Lorentz space \(L^{p,q}\) is reflexive in this case. Having in mind estimates (1.6), which can be stated as follows:
\[M(H[L^{p,p}])\leq 2^{|1-2/p|},\quad M^{*}(H[L^{p,p}])\leq 2^{|1-2/p|},\quad 1<p<\infty,\]
it seems natural to formulate the following.

**Open problem 5.8**.: (a) _Let \(1\leq q\leq p<\infty\). Find a nontrivial lower bound for \(M(H[L^{p,q}])\). Improve the upper bound for \(M(H[L^{p,q}])\) given by (5.9)._ (b) _Let \(1<q\leq p<\infty\). Find a nontrivial lower bound for \(M^{*}(H[L^{p,q}])\). Improve the upper bound for \(M^{*}(H[L^{p,q}])\) given by (5.10)._

_Acknowledgments._ This work is funded by national funds through the FCT - Fundação para a Ciência e a Tecnologia, I.P., under the scope of the projects UIDB/00297/2020 and UIDP/00297/2020 (Center for Mathematics and Applications).
2303.01278
Intrinsically Typed Sessions With Callbacks
All formalizations of session types rely on linear types for soundness as session-typed communication channels must change their type at every operation. Embedded language implementations of session types follow suit. They either rely on clever typing constructions to guarantee linearity statically, or on run-time checks that approximate linearity. We present a new language embedded implementation of session types, which is inspired by the inversion of control design principle. With our approach, all application programs are intrinsically session typed and unable to break linearity by construction. Linearity remains a proof obligation for a tiny encapsulated library that can be discharged once and for all when the library is built. We demonstrate that our proposed design extends to a wide range of features of session type systems: branching, recursion, multichannel and higher-order session, as well as context-free sessions. The multichannel extension provides an embedded implementation of session types which guarantees deadlock freedom by construction. The development reported in this paper is fully backed by type-checked Agda code.
Peter Thiemann
2023-03-02T14:04:37Z
http://arxiv.org/abs/2303.01278v1
# Intrinsically Typed Sessions With Callbacks

###### Abstract.

All formalizations of session types rely on linear types for soundness as session-typed communication channels must change their type at every operation. Embedded language implementations of session types follow suit. They either rely on clever typing constructions to guarantee linearity statically, or on run-time checks that approximate linearity. We present a new language embedded implementation of session types, which is inspired by the inversion of control design principle. With our approach, all application programs are intrinsically session typed and unable to break linearity by construction. Linearity remains a proof obligation for a tiny encapsulated library that can be discharged once and for all when the library is built. We demonstrate that our proposed design extends to a wide range of features of session type systems: branching, recursion, multichannel and higher-order session, as well as context-free sessions. The multichannel extension provides an embedded implementation of session types which guarantees deadlock freedom by construction. The development reported in this paper is fully backed by type-checked Agda code.

Keywords: Session types, domain specific languages, dependent types, Agda

Languages constructed around session types are usually special purpose languages that embrace linearity so that their type checker rejects violations thereof. Examples are plenty: Links [36], Sepi [14, 59], Sill [56], C0 [62], and so on. While these languages and their implementations have fostered research and encouraged experimentation, they are not widely used.
To boost the use of session types, a lot of work has been dedicated to embedding session types in mainstream languages, most of which do not have native support for linearity. There are plenty of examples for such embeddings for functional languages like Haskell [40, 45, 49] and OCaml [26, 43], as well as object-oriented languages like C# [29], Scala [50], and Java [23]. Most of these approaches ignore the issue of linearity at compile time; some ignore it entirely. Some (e.g., [43]) rely on run-time checks, others rely on encodings of linearity using lenses [25] or monads [45]. There are also extension languages with a separate checker that add sessions [41] or, more generally, typestate [32] to an underlying Java program. We comment on some recent implementations that rely on Rust in the related work (Section 9).

Recent work on multi-party session types [39, 64] suggests an alternative approach that does not rely on linearity. It is inspired by the design principle _inversion of control_ which is familiar to programmers from GUI programming. The systems described in those works translate a description of a multi-party session type into a library that encapsulates the implementation of all communication. For each communication action, the library provides an interface where the programmer specifies a callback function for this particular action. To clarify this idea, we give a very simple example, continued and extended in Section 2, in the context of a functional language. Unlike the cited work, our work as well as this example rely on _binary_ session types. We start with the following grammar for types \(T\) and session types \(S\).
\[T\coloneqq\mathsf{int}\mid\mathsf{bool}\qquad\qquad S\coloneqq\,!T\cdot S\mid\,?T\cdot S\mid\mathsf{end}\]
The session type \(!T\cdot S\) (\(?T\cdot S\)) describes a channel that is ready to send (receive) a value of payload type \(T\) and then continue as \(S\). The session type end describes a channel that can only be closed.

**Traditional setting** (cf. [17]) The traditional interface to session-typed communication consists of primitive operations like
\[\mathsf{send}:\,!T\cdot S\otimes T\multimap S\qquad\qquad\mathsf{recv}:\,?T\cdot S\multimap(T\otimes S)\qquad\qquad\mathsf{close}:\mathsf{end}\multimap()\]
that send on a channel, receive from a channel, and close a channel. The crucial observation is that the type system must treat channels linearly to ensure protocol fidelity. Programs typically look like this:

    negp-server : ?int·!int·end ⊸ ()
    negp-server c0 =                  -- c0 : ?int·!int·end
      let (x, c1) = recv c0 in        -- c1 : !int·end
      let c2 = send (c1, -x) in       -- c2 : end
      close c2

By linearity, the recv operation consumes c0; otherwise, another recv could be applied to c0, thus breaking the protocol. Analogous arguments apply to c1 and c2, e.g., once the channel c2 is closed, it cannot be closed again.

**Callback approach**: The callback interface to session-typed communication proposed in this work consists of two items. (1) A datatype of commands, Cmd, indexed by an application state \(A\) and a session type. This datatype constitutes an intrinsically session-typed encoding of communicating functional programs. (2) An encapsulated interpreter exec to execute commands.1

Footnote 1: The function \(\mathbb{T}[\_]\) : Type → Set maps type syntax to its interpretation as an Agda type.
For further details see Section 2.

    CLOSE : Cmd A end
    SEND  : (A → A × T[ T ]) → Cmd A S → Cmd A (! T · S)
    RECV  : (T[ T ] → A → A) → Cmd A S → Cmd A (? T · S)

The SEND command has a callback to obtain the value to be sent from the application state. Similarly, the RECV command has a callback to inject the received value into the application state. Both take a continuation command of type Cmd A S that deals with the continuation session \(S\). The CLOSE command signifies the end of the protocol. A program is expressed as a value of type Cmd. It looks similar to the traditional one where we choose ℤ, the integers, as the application state. The encoding relies strongly on the core idea of functional programming: functions (callbacks) as first-class values.2

Footnote 2: The operator \(\_\$\_\) stands for infix function application. It associates to the right. (The second \(\_\$\_\) could be omitted.)

    negp-command : Cmd ℤ (? int · ! int · end)
    negp-command = RECV (λ x a → x) $ SEND (λ a → ⟨ a , - a ⟩) $ CLOSE

The least sophisticated interpreter takes a command, a suitable initial application state, an untyped channel, and results in an IO action that produces the final application state.

    exec : Cmd A S → A → Channel → IO A

This interpreter is implemented once and for all in an encapsulated library. In a sense, it forms the trusted computing base of our approach, as we have the obligation to prove that it performs the commands on the channel according to the session type index of the channel.

### Contributions

* We introduce the callback approach to binary session types in the context of dependently-typed functional programming. We deploy it as a proof-of-concept specification in the language Agda, but we expect our development to be transferrable to Haskell, either via compilation or via translation [10].
* Linearity of session handling is ensured by verifying linear handling of command execution in a small interpreter that forms the trusted computing base of our approach. There is no need for linear types in the type system of the host language, nor is there a need for clever type constructions to simulate linearity.
* We demonstrate that the approach extends to most familiar session type constructions: branching (Section 3), recursion (Section 4), multichannel and higher-order sessions (Section 7). In Section 4.3 we offer a novel and significant improvement of the API-based treatment of recursion.
* The extension to operate on multiple channels is significant and mostly orthogonal to the other features. Our approach is inspired by Wadler's GV calculus [61] and thus yields deadlock-free programs by construction.
* We propose a new dynamic selection operation in the context of branching session types (Section 3).
* We extend the callback approach to context-free session types (with branching and recursion), which in turn requires a more sophisticated, dependently-typed encoding of commands than regular session types (Section 6).
* For monad lovers Section 5 describes a version with a monadic encoding of callbacks.
The source of this document includes a number of literate Agda scripts which will be submitted as an anonymized supplement (to be turned into an artifact). Every line of code that is typeset in color has been checked by Agda. At present, the interpreters are implemented against a small API of monadic IO operations to manipulate untyped channels. This API can be implemented in Haskell using Agda's foreign function interface.3

Footnote 3: An as-yet unfinished programming exercise that we plan to complete for artifact submission.

As a functional pearl, this paper concentrates on the library design; it contains no formal proofs of the proof obligations on the library interpreter (i.e., linearity and freedom of deadlock for the multichannel case). The discussion in Section 8 contains some suggestions how this task may be approached. Working knowledge of Agda is not a hard requirement for understanding the paper. We strive to make the code accessible to readers who are knowledgeable in Haskell by explaining features specific to Agda as they are encountered.

## 2. Finite non-branching session types

Let's start straight away with the simplest instance, finite non-branching simple session types, to convey the gist of the approach. Subsequent sections show how to add most of the usual features of session types. A binary session type describes a bidirectional communication between two peers, let's call them server and client. The session type is attached to the type of the communication channel.4

Footnote 4: Agda supports a mixfix syntax where underlines in the identifier indicate the position of the arguments. For example, !_·_ and ?_·_ are operators with two arguments. We declare these operators to associate to the right to save parentheses.

```
data Type : Set where
  int  : Type
  bool : Type

data Session : Set where
  !_·_ : Type → Session → Session
  ?_·_ : Type → Session → Session
  end  : Session
```

These Agda types correspond to the standard grammar of session types, where \(T\) is the type of payload values that can be transmitted and \(S\) is the type of sessions.
\[T\coloneqq\mathsf{int}\mid\mathsf{bool}\qquad\qquad S\coloneqq\,!T\cdot S\mid\,?T\cdot S\mid\mathsf{end}\]
The session type \(!T\cdot S\) (\(?T\cdot S\)) describes a channel that is ready to send (receive) a value of payload type \(T\) and then continue as \(S\). The session type end describes a channel that can only be closed. Here are two examples for session types: the types of the server for a binary operation and a unary operation, respectively.

```
binaryp = ? int · ? int · ! int · end
unaryp  = ? int · ! int · end
```

In GV, a widely studied functional session type theory (Gvak, 2017), there are primitives to send and receive values and to close a channel with types like this:

    send : (! T · S ⊗ T) ⊸ S        recv : ? T · S ⊸ (T ⊗ S)        close : end ⊸ ()

The types indicate that we must treat channel values of session type _linearly_: the send operation _consumes_ a channel, which is ready to send, paired with the payload and returns it in a state described by \(S\); the recv operation _consumes_ a channel, which is ready to receive, and returns a pair with the received value and the updated channel; the close operation _consumes_ the channel and returns a unit value. Enforcing this linearity is required for soundness. In this work, we take a different approach inspired by callback programming.
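Before turning to commands, note that a matching client communicates at the dual session type, obtained by flipping the direction of every transmission. The dual function itself is not part of this section's code; the following is a minimal sketch of it for the Session type just defined:

```
-- Sketch: duality flips the direction of each transmission.
dual : Session → Session
dual (! T · S) = ? T · dual S
dual (? T · S) = ! T · dual S
dual end       = end
```

For example, dual binaryp evaluates to ! int · ! int · ? int · end, the protocol of a client of the binary operation.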
Instead of providing send and recv primitives to the programmer, we ask the programmer to define the "application logic" by implementing a command value whose type Cmd is indexed by a session type. This definition relies on an interpretation of types as Agda types.

    T[_] : Type → Set
    T[ int ]  = ℤ
    T[ bool ] = Bool

    data Cmd (A : Set) : Session → Set where
      CLOSE : Cmd A end
      SEND  : (A → A × T[ T ]) → Cmd A S → Cmd A (! T · S)
      RECV  : (T[ T ] → A → A) → Cmd A S → Cmd A (? T · S)

In this definition, the type parameter \(A\) embodies the application state. Each SEND command takes a state transformer that extracts the value to send from the current application state; each RECV command takes a state transformer that is indexed by the received value; the CLOSE command terminates the session. In fact, we could provide the application logic by actions in a state monad over the application state \(A\). We defer the shift to a monadic interface to Section 5, when we have the full picture. Continuing our example, we define commands that implement a server for the protocols unaryp and binaryp with the operation instantiated to negation and addition, respectively.

    negp-command : Cmd ℤ (? int · ! int · end)
    negp-command = RECV (λ x a → x) $ SEND (λ a → ⟨ a , - a ⟩) $ CLOSE

    addp-command : Cmd ℤ binaryp
    addp-command = RECV (λ x a → x) $ RECV (λ x a → x + a) $ SEND (λ a → ⟨ a , a ⟩) $ CLOSE

The interpreter for commands works against a minimal API of primitives on untyped channels.

    postulate
      Channel    : Set
      primAccept : IO Channel
      primClose  : Channel → IO ⊤
      primSend   : ∀ {A : Set} → A → Channel → IO ⊤
      primRecv   : ∀ {A : Set} → Channel → IO A

This API should be self-explanatory.5 It declares an abstract type of untyped, raw channels with operations to accept a connection, close a channel, as well as send and receive a value over the channel. It glosses over issues like serialization, which can be addressed using type class constraints like Serialize \(A\) (in Haskell) on primSend and primRecv.

Footnote 5: The type ⊤ is like Haskell's unit type with single element tt.

The interpreter itself is defined by induction on the type Cmd. The actual computation takes place in the IO monad and is expressed using the do notation, both familiar from Haskell.

    exec : Cmd A S → A → Channel → IO A
    exec CLOSE state ch = do
      primClose ch
      pure state
    exec (SEND getx cmd) state ch = do
      let ⟨ state′ , x ⟩ = getx state
      primSend x ch
      exec cmd state′ ch
    exec (RECV putx cmd) state ch = do
      x ← primRecv ch
      let state′ = putx x state
      exec cmd state′ ch

To actually run a server, it remains to provide a wrapper that accepts a connection and invokes the interpreter.

    record Accepting A S : Set where
      constructor ACC
      field cmd : Cmd A S

    acceptor : Accepting A S → A → IO A
    acceptor (ACC cmd) a = primAccept >>= exec cmd a

Examining the interpreter, we finally see the full monadic structure. We need a stack of monad transformers starting with a state monad for the application state on top of a reader monad providing the channel on top of the IO monad. Our proof obligation for the interpreter boils down to verifying that the interpretation of each command executes the single communication action designated by the corresponding session type operator. The correct sequencing according to the session type is imposed by the sequencing constraint underlying the IO monad. Indeed, this observation was the reason to employ monads for APIs to state-based operations in pure functional languages (Haskell, 2018).

## 3. Selection and Choice

Adding branching to our development is straightforward. The standard theory of session types allows branching on a finite set of labels using this syntax:
\[S:=\cdots\mid\oplus\{\ell:S_{\ell}\mid\ell\in L\}\mid\&\{\ell:S_{\ell}\mid\ell\in L\}\]
Here \(L\) is a finite, non-empty set of labels, which can be chosen differently at every use of the type operator.
The type constructor \(\oplus\) corresponds to an _internal choice_ of the program. The select primitive sends one of the labels, say \(\ell\in L\), available in the type and continues according to \(S_{\ell}\):
\[\mathtt{select}\ \ell:\oplus\{\ell:S_{\ell}\mid\ell\in L\}\multimap S_{\ell}\]
The type constructor \(\&\) corresponds to an _external choice_. The primitive branch receives one of the labels mentioned in the type and chooses a continuation according to the label. In the presence of sum types and linearity, the primitive can be typed as follows [44].
\[\mathtt{branch}:\&\{\ell:S_{\ell}\mid\ell\in L\}\multimap+\{\ell:S_{\ell}\mid\ell\in L\}\]
Our modeling in Agda extends the definitions of Session, Cmd, and exec from Section 2. A label set of size \(k\) is modeled by the type \(\mathtt{Fin}\ k=\{0,\ldots,k-1\}\) and the alternative continuation sessions by functions from labels to Session (isomorphic to vectors of sessions, cf. Section 8.1).

```
data Session : Set where
  ⊕′ : (Si : (i : Fin k) → Session) → Session
  &′ : (Si : (i : Fin k) → Session) → Session
```

```
data Cmd (A : Set) : Session → Set where
  SELECT : ∀ {Si} → (i : Fin k) → Cmd A (Si i) → Cmd A (⊕′ Si)
  CHOICE : ∀ {Si} → ((i : Fin k) → Cmd A (Si i)) → Cmd A (&′ Si)
```

The SELECT command transmits a label that is fixed statically and continues with a command for the chosen branch; the CHOICE command receives a label and dispatches on it, providing one continuation command for every possible label. Instead of fixing the label at compile time, we can supply a dynamic selector command where the label is computed by a callback getl at run time:

```
data Cmd (A : Set) : Session → Set where
  DSELECT : ∀ {Si} → (getl : A → A × Fin k)
          → ((i : Fin k) → Cmd A (Si i))
          → Cmd A (⊕′ Si)
```

Extending exec to this command is straightforward. There is still room for improvement in the type of this command. We come back to this issue in Section 6.

## 4. Going in circles

Recursive types are a common feature of session types. They are required to model protocols for servers that perform the same functionality over and over again. Our running example will be a server that allows a client to repeatedly perform a unary operation until the client quits.
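In the pen-and-paper μ-notation introduced next, this running protocol can be written as follows (our own preview rendering, anticipating the syntax defined in the next paragraph):
\[\mathsf{many\text{-}unaryp}=\mu X.\,\&\{\,?\,\mathsf{int}\cdot{!}\,\mathsf{int}\cdot X,\ \mathsf{end}\,\}\]
The external choice lets the client either request one more unary operation and loop, or end the session.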
The pen-and-paper syntax of recursive types relies on type variables and a \(\mu\) operator like so:7
\[S:=\,!T\cdot S\mid\,?T\cdot S\mid\mathsf{end}\mid\mu\,X.S\mid X\mid\dots\]

Footnote 7: We gloss over the issue of guardedness (or contractiveness) for recursive types to avoid further complexity in the types.

The intended semantics of the \(\mu\) operator is that \(\mu\,X.S\) is equivalent to \(S[X\mapsto\mu\,X.S]\), the unfolding, where we substitute the recursive type itself for the variable \(X\) in its body. For the Agda formalization we choose the standard de Bruijn encoding of bound variables. The parameter \(n\) of the Session type denotes the number of type variables in scope. The \(\mu\) operator opens a new scope and ` \(i\) references the \(i\)th variable, the innermost binding being 0.

    data Session (n : ℕ) : Set where
      !_·_ : Type → Session n → Session n
      ?_·_ : Type → Session n → Session n
      end  : Session n
      ⊕′   : (Si : (i : Fin k) → Session n) → Session n
      &′   : (Si : (i : Fin k) → Session n) → Session n
      μ_   : Session (suc n) → Session n
      `_   : Fin n → Session n

Further extending our running example, we redefine the protocol unaryp as a function that takes the rest of the protocol and wraps it into a recursive type. The session type many-unaryp is a recursive type that either runs a unary function and recurses or just ends the protocol. We define & as a smart constructor as in Section 3.

    unaryp : Session n → Session n
    unaryp S = ? int · ! int · S

    many-unaryp : Session 0
    many-unaryp = μ & [ unaryp (` zero) , end ]

### Commands

The Cmd type obtains a new parameter \(n\) to match the parameter of the session type used as an index. We only show the two new cases.

    data Cmd (n : ℕ) (A : Set) : Session n → Set where
      LOOP     : Cmd (suc n) A S → Cmd n A (μ S)
      CONTINUE : (i : Fin n) → Cmd n A (` i)

With this type we are ready to implement a service that repeatedly adds numbers as they are received and sends the partial sum as a response each time.

    addup-command : Cmd n ℤ S → Cmd n ℤ (unaryp S)
    addup-command cmd = RECV (λ x a → x + a) $ SEND (λ a → ⟨ a , a ⟩) cmd

The interpreter for recursive commands keeps the pending LOOP commands in a store of type CmdStore n A, with one entry for each of the \(n\) type variables in scope.
The definitions are simple but omitted from the text as they require invoking some technical lemmas about injections (i.e., identity functions) from \(\mathsf{Fin}\;n\) to \(\mathsf{Fin}\;(\mathsf{suc}\;n)\). \(\mathsf{Gas}-\mathbb{N}\) \(\mathsf{exec}:\mathsf{Gas}\rightarrow\mathsf{Cmd}\;n\;A\;S\rightarrow\mathsf{ CmdStore}\;n\;A\rightarrow(init:A)\rightarrow\mathsf{Channel}\rightarrow\mathsf{IO}\;A\) \(\mathsf{exec}\;g\;(\mathsf{LOOP}\;\mathsf{cmd})\;\mathsf{cms}\;\mathsf{ state}\;ch\;\mathsf{=}\;\mathsf{exec}\;g\;\mathsf{cmd}\;(\mathsf{push}\;\mathsf{cms}\;(\mathsf{LOOP}\; \mathsf{cmd}))\;\mathsf{state}\;ch\) \(\mathsf{exec}\;\{\mathsf{suc}\;n\}\;\{A\}\;\mathsf{zero}\;(\mathsf{CONTINUE}\;i) \;\mathsf{cms}\;\mathsf{state}\;ch\;\mathsf{-}\;\mathsf{pure}\;\mathsf{state} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- 
}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-} \;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- }\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\; \mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{-}\;\mathsf{- ### What about the client? The commands we presented so far for recursive sessions are fine for servers that repeatedly perform the same action. However, a client might want to perform different actions on each iteration. While such a behavior may be encoded in the application state, it would not be an enjoyable experience for the programmer. Hence, we propose an UNROLL command that enables the specification of a command for one loop iteration at a time. \[\text{UNROLL}:\text{Cmd}\ (\text{suc }n)\ A\ S\rightarrow\text{Cmd}\ n\ A \ (\mu\ S)\rightarrow\text{Cmd}\ n\ A\ (\mu\ S)\] It specifies one command for the session type's loop body and a continuation command for the whole type to cover subsequent iterations. Executing this command means to execute its body and push its continuation on the stack. exec \(g\) (UNROLL _body-cmd next-cmd_) _cms st ch_ = exec \(g\) _body-cmd_ (push _cms next-cmd_) _st ch_ As an example, we write a client for the many-unaryp protocol that iterates the protocol two times before it terminates. In each round, it sends an integer and ignores the response. \[\text{runningsum-client}:\text{Cmd}\ 0\ T\ (\text{dual many-unaryp})\] \[\text{runningsum-client}=\] \[\text{UNROLL}\ (\text{SELECT zero}\ \$\text{S}\ \text{ But wait, using the Agda standard library, we have to state that \(M\) has a type that fits a monad and that it implements the interface \(\mathsf{RawMonad}\) (a record that contains the basic monadic operations). Fortunately, we can abstract from these issues and adopt a Haskell-inspired syntax with a straightforward Agda definition.8 Footnote 8: Agda’s syntax command defines a macro that enables abstraction over binders. The newly introduced syntax, the definien-dum, is _on the right_ of the equals sign. 
Putting the record RawMonad \(M\) in double braces enables overloading of the monadic operators [13].

    Monadic : ((Set → Set₁) → Set₁) → Set₂
    Monadic f = ∀ {M : Set → Set₁} → {{RawMonad M}} → f M

    syntax Monadic (λ M → X) = Monad M ⇒ X

Our running examples become (even more?) concise:

    addp-command : Cmd ℤ binaryp
    addp-command = RECV put $ RECV (modify ∘ _+_) $ SEND get $ CLOSE

The callbacks are now actions in the state monad over ℤ: put stores the first summand, modify adds the second, and get yields the sum to be sent. The interpreter runs in the anticipated stack of monad transformers, a state transformer for the application state over a reader transformer for the channel over IO, which the acceptor unwraps after accepting a connection:

    record Accepting A s : Set₂ where
      constructor ACC
      field pgm : Cmd A s

    acceptor : Accepting A s → A → IO A
    acceptor (ACC pgm) a = do
      ch ← primAccept
      ⟨ final , _ ⟩ ← runReaderT (runStateT (exec pgm) a) ch
      pure final

## 6. Context-free session types

Context-free session types (Bahdan et al., 2017; Chen et al., 2018; Chen et al., 2018) have been conceived to liberate session types from the restriction to tail recursion. Alleviating this restriction makes session-typed programming more compositional and enables low-level programming tasks like the serialization of tree structures. The basic idea (Chen et al., 2018) is to reorganize the type language of session types as follows.
\[S:=\,!T\mid\,?T\mid S\,;S\mid\mathsf{skip}\]
Now \(!T\) (\(?T\)) describes just the act of sending (receiving) a value of type \(T\). To combine two session types, we have to use sequential composition \(S\,;S\) with unit skip. The branching types and recursion are as before, so we do not repeat them here. The Agda encoding of this structure combines straightforwardly with the accumulated work of the previous sections.

    data Session (n : ℕ) : Set where
      ?_   : Type → Session n
      !_   : Type → Session n
      _;_  : Session n → Session n → Session n
      skip : Session n
      ⊕′   : (Si : (i : Fin k) → Session n) → Session n
      &′   : (Si : (i : Fin k) → Session n) → Session n
      μ_   : Session (suc n) → Session n
      `_   : Fin n → Session n

The revised command structure has a few catches that require explanation.
    variable V W : Vec Set n

    data Cmd : Set → Set → Vec Set n → Vec Set n → Session n → Set₁ where
      SKIP   : (A → B) → Cmd A B V W skip
      SEND   : (A → B × T[ T ]) → Cmd A B V W (! T)
      RECV   : (T[ T ] → A → B) → Cmd A B V W (? T)
      SELECT : ∀ {Si} {F : Fin k → Set}
             → (A → Σ (Fin k) F)
             → ((i : Fin k) → Cmd (F i) B V W (Si i))
             → Cmd A B V W (⊕′ Si)
      CHOICE : ∀ {Si}
             → ((i : Fin k) → Cmd A B V W (Si i))
             → Cmd A B V W (&′ Si)
      [_]_[_]_[_] : (split : A → A₁ × A′)
             → Cmd A₁ B₁ V W S₁
             → (cross : A′ → B₁ → A₂)
             → Cmd A₂ B₂ V W S₂
             → (join : B₁ → B₂ → B)
             → Cmd A B V W (S₁ ; S₂)
In the context-free case, the behavior of the LOOP command is more general than the simple tail-recursive iteration in Section 4. It takes a loop body that transforms \(A\)s into \(B\)s with these same types pushed on the stacks. The CONTINUE\(i\) command invokes the \(i\)th pending loop. To do so, the current type must match the typing of the loop, which we find by lookup up the input and output type on the stacks at position \(i\). ### Examples Before we delve into the interpreter, let's take stock what we achieved by reviewing our old examples as well as a new one that demonstrates the additional expressivity of context-free sessions. - service protocol for a binary function binary : Session \(n\) binary =? int ;? int ;! int - service protocol for a unary function unary : Session \(n\) unaryp =? int ;! int - service protocol for choosing between a binary and a unary function arithp = & [ binaryp, unaryp ] - many unary functions many-unaryp : Session \(n\) many-unaryp - \(\mu\) (& [ unaryp!'zero, skip ]) Compare these types with the corresponding ones from Section 4, where the protocol fragments are metafunctions that take a continuation session as a parameter. No such parameterization is needed with context-free session types. They are intrinsically compositional and modular. The final example many-unaryp contains a tail recursive part. The second component of the choice must make use of the skip type, because the arm of a choice cannot be empty. Let's consider servers that implement those protocols. addp-command : Cmd T T V W binary addp-command = RECV const!? RECV_+_?! SEND (l x \(\rightarrow\) ( tt, x )) negp-command : Cmd T T V W unary negp-command = RECV (const! - )!? SEND \(\lambda x\rightarrow\) ( tt, x ) arithp-command : Cmd T T V W arithp arithp-command = CHOICE \(\lambda\) where zero \(\rightarrow\) addp-command (suc zero) \(\rightarrow\) negp-command many-unaryp-command : Cmd Z Z V W many-unaryp many-unaryp-command = LOOP $ CHOICE \(\lambda\) where zero \(\rightarrow\) (RECV_+_? SEND! id, id?)! CONTINUE zero (suc zero) \(\rightarrow\) SKIP id The code of the servers does not look that different from before. However, unlike before, each server is now reusable as part of the implementation of a larger protocol. One indication is the use of the parameters \(V\) and \(W\) in the types, another is the lack of the CLOSE constructor which prescribes closing the connection. As these protocols are still tail-recursive, their implementation uses a binary composition operator for commands that is tailored for this use case. The split function feeds everything to the first command; the cross operation ignores the bypassed value, passing only the output of the first command to the second; and the join operation passes only the output of the second command. \(\stackrel{{\phi^{\prime}}}{{\rightarrow}}\) : Cmd \(A\) B V W S \(\mathsf{t}\mathsf{e}\mathsf{e}\mathsf{p}\) : Session \(n\)\(\mathsf{t}\mathsf{e}\mathsf{p}\) = \(\mu\) & [ \(\mathsf{e}\mathsf{e}\mathsf{p}\), \(\mathsf{b}\mathsf{a}\mathsf{r}\mathsf{p}\) ] The session types \(\mathsf{e}\mathsf{e}\mathsf{p}\) and \(\mathsf{b}\mathsf{a}\mathsf{r}\mathsf{p}\) encode receiving a leaf and a branch of the binary tree type \(\mathsf{IntTree}\). The session type \(\mathsf{t}\mathsf{e}\mathsf{p}\mathsf{p}\) provides the enclosing recursion and choice between the leaf and branch protocols. 
As branchp contains two recursive calls, the protocol treep is no longer tail recursive.

```
recvTreep : Cmd ⊤ IntTree V W treep
recvTreep = LOOP $ CHOICE λ where
  -- (body not recoverable from the source)
```

```
-- (exec signature and the SKIP and SEND cases elided)
exec (RECV putx) cms a = do
  x ← ask >>= liftIO ∘ primRecv
  pure (putx x a)
exec (SELECT getx cont) cms a with getx a
... | ⟨ i , ai ⟩ = do
  ask >>= liftIO ∘ primSend i
  exec (cont i) cms ai
exec (CHOICE cont) cms a = do
  i ← ask >>= liftIO ∘ primRecv
  exec (cont i) cms a
exec ([ split ] cmd₁ [ cross ] cmd₂ [ join ]) cms a = do
  let ⟨ a₁ , a′ ⟩ = split a
  b₁ ← exec cmd₁ cms a₁
  let a₂ = cross a′ b₁
  b₂ ← exec cmd₂ cms a₂
  pure (join b₁ b₂)
exec (LOOP cmd) cms a = exec cmd (push cms cmd) a
exec {suc n} (CONTINUE i) cms a with cms i
... | ⟨ s-i , cmd-i ⟩ = exec cmd-i (pop cms i) a
```

The implementation of SKIP just executes the action. Sending SEND, receiving RECV, and choice CHOICE are as usual. The dynamic selection is improved with respect to Section 3. Previously, there was no connection between the selected label and the executed continuation, so that a bug in the interpreter might introduce a mismatch. In the present interpreter, no such mismatch is possible because it would be a type error to invoke any other continuation than cont \(i\) with _ai_.

Composition first splits the input using the split callback. It performs the left command, obtains its final state in \(b_{1}\), combines it with the bypass value \(a^{\prime}\) using cross, and then performs the right command. It obtains its final state in \(b_{2}\) and returns join \(b_{1}\) \(b_{2}\). The implementations for LOOP and CONTINUE are as before in Section 4. The difference is that the CONTINUE operation may now appear in the context of a composition which provides a nontrivial continuation. In Section 4, the function exec is tail recursive, but here it is not!

Now we turn to the question of the monadic callback interface. Examination of the code reveals that the state monad is no longer the most appropriate model of a callback. At first sight, the type \(A\to M\ B\) is the same as ReaderT \(A\) \(M\) \(B\). However, the case for composition shows that a reader does not fit because the type \(A\) of the reader's source changes between recursive calls to the interpreter. Moreover, the callbacks for composition do not fit the pattern of the reader monad at all. Thus, we refrain from the monadic interface.

## 7. Handling Multiple Channels

We have to amend some final elements to fully encompass traditional binary session types: a thread can open and manipulate many channels at a time, and channels can be delegated. So far, our interfaces were restricted to single channels and to transmitting pure data (i.e., no channels). We now turn to lifting these restrictions. To concentrate on the new issues, we start out by slightly rephrasing session types as we know them from Section 3. In particular, we restrict to binary branching and leave the extension to finitary branching as well as the addition of recursion as an exercise to the reader. Generally, these types describe the communication on a single channel, as before. There are two novel aspects: we factorize the specification of the direction of communication, and we add a special type for channel delegation, i.e., sending or receiving a channel.

```
data Direction : Set where
  IN OUT : Direction

data Session : Set where
  transmit : (d : Direction) → Type → Session → Session
  delegate : (d : Direction) → Session → Session → Session
  branch   : (d : Direction) → Session → Session → Session
  end      : Session
```

An actual multichannel session type describes the interleaved communication on all channels at once. It comes with features and restrictions very similar to multiparty session types [20].
```
data MSession : ℕ → Set
variable M M₁ M₂ : MSession n

Causality  : Fin n → MSession n → MSession n → Set
CheckDual0 : MSession (suc m) → MSession (suc n) → Set

data MSession where
  transmit  : (d : Direction) → (c : Fin n) → (T : Type) → MSession n → MSession n
  branch    : (d : Direction) → (c : Fin n) → (M₁ : MSession n) → (M₂ : MSession n)
            → Causality c M₁ M₂ → MSession n
  close     : (c : Fin (suc n)) → MSession n → MSession (suc n)
  terminate : MSession zero
  connect   : Split m n → (M₁ : MSession (suc m)) → (M₂ : MSession (suc n))
            → CheckDual0 M₁ M₂ → MSession (m + n)
            -- assume new channel has address zero in both threads
  delegateOUT : (c j : Fin (suc n)) → c ≢ j → Session → MSession n → MSession (suc n)
  delegateIN  : (c : Fin n) → MSession (suc n) → MSession n
            -- received channel has address zero in continuation
```

A multichannel session type MSession \(n\), ranged over by \(M\), is indexed by the number \(n\) of channels that it governs. Multichannel session types are loosely based on Wadler's GV calculus [61]. In particular, the connection topology of multichannel programs is restricted in the same way as in Wadler's GV, with the pleasing consequence that they are guaranteed to be free of deadlocks. The names of the channels are represented by de Bruijn indices. Unlike in other embedded implementations, a multichannel type covers the full choreography of a communicating application: connecting (forking) to a new process is combined with channel creation, branching and transmitting values as usual, closing a channel, delegation, and terminating the application after all protocols are finished and their respective channels closed.

Each multichannel type, except connect and terminate, takes a parameter \(c\) that identifies the channel on which the command operates. The type to transmit a value (transmit) is as before, save the direction and channel parameters. Branching (branch) is as before except for the extra Causality argument: it ensures that a branch is reasonable by enforcing that the session on every channel except \(c\) is not affected by the branch. This restriction is called "Causality" in the context of multiparty session types. It is needed because only the party on the other end of channel \(c\) knows about the branching, but the others do not. The close type has a continuation parameter because a channel can be closed in the middle of a choreography without terminating the multichannel protocol. Once all channels are closed, the choreography can be concluded using the type terminate. The type connect indicates creation of a new thread which runs protocol \(M_{1}\) whereas the current thread continues with \(M_{2}\). The currently open channels are distributed among the threads according to the Split parameter. Communication with the new thread is established by creating a new channel, which is mapped to address zero in both threads. Moreover, the CheckDual0 predicate makes sure that the session types of the two ends of this channel are dual to one another.10

Footnote 10: We elide details about the Split type as well as duality from the paper for space reasons. See the supplement.
The delegation types deal with sending (receiving) a channel over another channel. Their types are sufficiently different so as not to factor out the direction. To send a channel \(j\) using delegateOUT, we need to know that the transmitted channel is not the same channel we are sending on, its simple session type, and that the continuation command has one fewer channel. To receive a channel using delegateIN, we just map it to address zero in the continuation session.

The function project is similar to the projection from global types to local types in multiparty session types. We make use of auxiliary functions that manipulate de Bruijn indices: locate-split sp-c c determines the target thread of channel \(c\) and its local index; adjust changes the index to account for a channel that is removed from the choreography. Causality and CheckDual0 are implemented using the obvious formulas.

```
project : Fin n → MSession n → Session
project c (connect sp-c M₁ M₂ _) with locate-split sp-c c
... | inj₁ x = project (suc x) M₁
... | inj₂ y = project (suc y) M₂
project c (branch d x M₁ M₂ _) = ...
-- (remaining cases elided)
```

Given the preceding discussion as well as the discussion in previous sections, the types of commands follow directly.

```
data Cmd (A : Set) : (n : ℕ) → MSession n → Set₁ where
  CLOSE  : ∀ c → (A → A) → Cmd A n M → Cmd A (suc n) (close c M)
  SEND   : ∀ c → (A → T⟦ T ⟧ × A) → Cmd A n M → Cmd A n (send c T M)
  RECV   : ∀ c → (T⟦ T ⟧ → A → A) → Cmd A n M → Cmd A n (recv c T M)
  SELECT : ∀ c → (causal : Causality c M₁ M₂) → (A → Bool × A)
         → Cmd A n M₁ → Cmd A n M₂ → Cmd A n (select c M₁ M₂ causal)
  CHOICE : ∀ c → (causal : Causality c M₁ M₂)
         → Cmd A n M₁ → Cmd A n M₂ → Cmd A n (choice c M₁ M₂ causal)
  CONNECT : ∀ {M₁ : MSession (suc m)} {M₂ : MSession (suc n)} (check : CheckDual0 M₁ M₂)
          → (split : A → A × A)
          → (sp : Split m n)
          → Cmd A (suc m) M₁ → Cmd A (suc n) M₂
          → Cmd A (m + n) (connect sp M₁ M₂ check)
  SENDCH : ∀ {s} → ...
  -- (END and the remaining delegation commands are elided)
```
```
exec (SELECT c _ getx cmd₁ cmd₂) state chns = do
  let ⟨ b , a ⟩ = getx state
  primSend (lookup chns c) b
  if b then (exec cmd₁ a chns) else (exec cmd₂ a chns)
exec (CHOICE c _ cmd₁ cmd₂) state chns = do
  b ← primRecv (lookup chns c)
  if b then exec cmd₁ state chns else exec cmd₂ state chns
exec END state [] = do
  pure state

runCmd : Cmd A 0 M → A → IO A
runCmd cmd init = do
  exec cmd init []
```

## 8. Discussion

### Selection and choice with vectors

The reader may wonder why we do not define the constructors for selection and choice using vectors of length \(n\), rather than functions from \(\mathsf{Fin}\ n\rightarrow\mathsf{Session}\) (which is isomorphic). Using a function extends directly to the definition of the Cmd type, where the \(\mathsf{Session}\) index of the continuation command depends on the function argument \(i\) : \(\mathsf{Fin}\ n\). We would have to define a special dependent vector type to achieve similar expressivity.

### Alternative representations

Instead of interpreting a command value, we could compile it to a custom library, following the lead of the related work [39, 64]. Such a compiler can be obtained by specializing the command interpreter with respect to single commands. The resulting _denotational implementation_ corresponds to a library implementation that exposes the commands, which are reified in a datatype in our approach, as functions. This approach has been pioneered by Reynolds [48] and subsequently applied, e.g., in the context of partial evaluation and program analysis [7, 53]. Here is an excerpt of such an implementation derived from the interpreter in Section 2.

```
XCmd : Set → Session → Set₁
XCmd A S = A → Channel → IO A

xclose : XCmd A end
xclose state ch = do
  primClose ch
  pure state

xsend : (A → A × T⟦ T ⟧) → XCmd A S → XCmd A (! T · S)
xsend f xcmd = λ state ch → do
  let ⟨ state′ , x ⟩ = f state
  primSend x ch
  xcmd state′ ch
```

The interpreters from other sections can be rephrased analogously without sacrificing the advantages of our approach. However, for program development, the command-based approach is more attractive because the Agda interactive programming support features autocompletion for commands, but not for combinators.

### Multiparty session types

We see no issues in extending the approach presented in this work to protocols with more than two participants. We refrained from doing so in this work to avoid the extra complication. Section 7 already gives some insight into the requirements for a full multiparty version.

### Verification

A fly in the ointment of our approach is the proof obligation that the interpreter does not break linearity. As the interpreters are very simple, it is tempting to rely on manual inspection.
To obtain a formal and possibly mechanized proof, we envision a semantics in terms of a free monad (Sill, 2017; Sill, 2017) with uninterpreted occurrences of the operations of our primitive channel IO API. The actual proof might be conducted using interaction trees (Sill, 2017), a mechanized framework for representing recursive and impure programs and reasoning about them.

## 9. Related work

In this section, we review how different library implementations of session types deal with linearity. Specifically, we do not consider dedicated language implementations like SILL (Sill, 2017), SEPI (Sill, 2017), FreeST (Brandes et al., 2017), or Links (Sill, 2017). These implementations come with dedicated type checkers that properly treat linearity at compile time.

There are libraries with fully dynamic enforcement of a session type discipline (Sill, 2017; Sill, 2017). They suffer from run-time overhead, as they have to check every communication operation, and they lack guarantees, as they terminate the protocol when an error is detected at some peer. Several libraries perform linearity checks at run time. While a check for at most one (affine) use of a resource can be performed with a single bit (Sill, 2017), checks for linearity are more expensive, but still deemed lightweight (Sill, 2017).

Several libraries statically enforce linearity by encoding it using parameterized monads (Sill, 2017; Sill, 2017), polymorphism (Sill, 2017), or higher-order abstract syntax (Sill, 2017). While these encodings cleverly exploit the facilities of the host language and support type inference, they are nontrivial to explain and yield types that are not easy on the programmer. No such cleverness is needed in our approach; types are human-readable and the interactive Agda system helps with constructing types and programs, but session types are not inferred.

None of the existing embeddings offers a feature like our dynamic selection. However, dynamic selection can be viewed as a special case of label-dependent session types (Sill, 2017), so that our approach implements part of that theory, too. While the dynamic and object-oriented libraries support session subtyping, our approach currently does not support subtyping.

The most closely related work is by Miu, Ferreira, Yoshida, and Zhou (Miu, 2017). They develop an implementation of multiparty session types in TypeScript by generating custom libraries from a protocol specification. Their implementation guarantees freedom from communication errors, including deadlocks, communication mismatches, channel usage violations, or cancellation errors. They generate TypeScript APIs in callback style, where the finite state machine underlying the communication endpoints is reified in terms of interfaces. Sending and receiving are both encoded via callbacks into the library or into the user program, as appropriate. The generated APIs encapsulate all primitive communication operations.

### Haskell

The earliest Haskell implementation by Neubauer (Neubauer, 2017) emphasizes the modeling of the type of a single session using phantom types. Duality is implemented using type classes with functional dependencies. Linearity is not considered. Sackman (Sackman, 2018) and Tov (Tov, 2019) model multiple channels using a parameterized monad that is indexed by a mapping from channel names to their current session types. The mapping changes with each operation to keep track of the current state of all channels. The monadic interface guarantees linearity.
This approach leaves more freedom to the programmer than our proposal in Section 7. While individual channel types are independent in their approach, our multi-channel session types impose a choreography on all connections, which is closer in spirit to multiparty session types.

Lindley and Morris (Lindley and Morris, 2019) extend a HOAS encoding of linear lambda calculus with monads and session primitives in the style of GV (Lindley and Morris, 2019). They call their approach "parameterized tagless" (Bartner and Morris, 2019), which means that they encode the syntax of lambda calculus and the session primitives in terms of parameterized functions in a type class, with implementations provided subsequently.

Orchard and Yoshida (Orchard and Yoshida, 2019) discuss connections between session types and effect systems. They present an implementation of session types in Haskell via an effect system encoding based on graded monads. Here the "grading" keeps track of the currently active channels and their session types, whereas the monadic structure provides proper sequencing.

### OCaml

The implementation of FuSe (Vaswani et al., 2017) consists of a typed layer on top of untyped channels (just like our approach). It provides an API inspired by GV, supports type inference, and checks linearity at run time. The approach is extensible to context-free session types (Vaswani et al., 2017) at the expense of some user annotations (while keeping type inference).

Session OCaml (Sakman, 2018) enforces linearity by parametric polymorphism based on a technique by Garrigue11 with significant extensions to deal with session types. It can handle a fixed number of channels at the same time with globally defined channel names (slot names). The technique hinges on the polymorphic types of the global slot names. Session types are inferred from programs that can be written in a notation similar to FuSe and GV.

Footnote 11: See [https://github.com/garrigue/safeio](https://github.com/garrigue/safeio).

### Java

The early interface proposed in "Session-based distributed programming in Java" (Java, 2018) relied on special syntax for session communication and a preprocessor to limit aliasing. Mungo (Mungo, 2018) and Bica (Bica, 2019) use similar ideas to implement typestate and sessions.

Hu and Yoshida (Hu and Yoshida, 2019) pioneered code-generating approaches for implementing multiparty session types. They generate protocol-specific endpoint APIs from multiparty session types for Java, but claim generality of their approach for mainstream languages. They start from the observation that the behavior of an endpoint of a communication can be represented by a finite state machine. Each of these states is reified as a state channel type with methods that imply communication operations corresponding to state transitions. As Java places no restrictions on object uses, they deploy "very light run-time checks in the generated API that enforce a linear usage discipline on instances of the channel types." Their abstract I/O state interfaces are closest to the facilities that we provide. While their approach reifies the different possible states of a communication endpoint, these states are implicit in our approach. Moreover, while an API based on finite state machines is entirely appropriate for servers, we can provide additional flexibility for implementing clients of recursive protocols through commands like UNROLL described in Section 4.3. Scalas' work for Scala (Scala, 2017) follows similar ideas with run-time checks for linearity.
More recent work (Cai et al., 2018) improves on the flexibility by relying on advanced typing features in Scala 3.

### Rust

Several recent implementations of session types (Cai et al., 2018; Datta et al., 2018; Datta et al., 2018; Datta et al., 2018; Datta et al., 2018; Datta et al., 2018) rely on Rust, a language with uniqueness types and ownership, all checked at compile time. While uniqueness types are quite similar to affine types (describing values that can be used at most once), they do not quite get to the level needed for sessions: having an affine session type for a channel means that an agent can drop the connection anytime without finishing the protocol, which leads to deadlock at the other end of the connection. With a proper linear session type, every agent has to fulfil the protocol up to the closing of the connection.

## 10. Conclusions

Our paper demonstrates that a callback-based approach to implementing session-typed programs is a perfect fit with (mildly) dependently-typed functional programming. The intrinsically session-typed way of writing programs guarantees protocol fidelity and, in some instances, deadlock freedom by construction. In connection with the callback approach, the host language does not have to support linearity, so that programs are statically safe once the encapsulated library is verified.

Interestingly, the changed angle of attack revealed the possibility of and the need for two novel constructions, the dynamic selection and the UNROLL command for recursive sessions. The first one is facilitated by our dependently-typed host language. The second one arose from the need to write client programs for recursive protocols. The discussion in Section 8 contains several pointers for future work. It would also be interesting to investigate ways to integrate subtyping (an obvious one being the insistence on explicit coercions).
2303.16704
TraVaG: Differentially Private Trace Variant Generation Using GANs
Process mining is rapidly growing in the industry. Consequently, privacy concerns regarding sensitive and private information included in event data, used by process mining algorithms, are becoming increasingly relevant. State-of-the-art research mainly focuses on providing privacy guarantees, e.g., differential privacy, for trace variants that are used by the main process mining techniques, e.g., process discovery. However, privacy preservation techniques for releasing trace variants still do not fulfill all the requirements of industry-scale usage. Moreover, providing privacy guarantees when there exists a high rate of infrequent trace variants is still a challenge. In this paper, we introduce TraVaG as a new approach for releasing differentially private trace variants based on Generative Adversarial Networks (GANs) that provides industry-scale benefits and enhances the level of privacy guarantees when there exists a high ratio of infrequent variants. Moreover, TraVaG overcomes shortcomings of conventional privacy preservation techniques such as bounding the length of variants and introducing fake variants. Experimental results on real-life event data show that our approach outperforms state-of-the-art techniques in terms of privacy guarantees, plain data utility preservation, and result utility preservation.
Majid Rafiei, Frederik Wangelik, Mahsa Pourbafrani, Wil M. P. van der Aalst
2023-03-29T13:54:32Z
http://arxiv.org/abs/2303.16704v1
# TraVaG: Differentially Private Trace Variant Generation Using GANs

###### Abstract

Process mining is rapidly growing in the industry. Consequently, privacy concerns regarding sensitive and private information included in event data, used by process mining algorithms, are becoming increasingly relevant. State-of-the-art research mainly focuses on providing privacy guarantees, e.g., differential privacy, for trace variants that are used by the main process mining techniques, e.g., process discovery. However, privacy preservation techniques for releasing trace variants still do not fulfill all the requirements of industry-scale usage. Moreover, providing privacy guarantees when there exists a high rate of infrequent trace variants is still a challenge. In this paper, we introduce TraVaG as a new approach for releasing differentially private trace variants based on Generative Adversarial Networks (GANs) that provides industry-scale benefits and enhances the level of privacy guarantees when there exists a high ratio of infrequent variants. Moreover, TraVaG overcomes shortcomings of conventional privacy preservation techniques such as bounding the length of variants and introducing fake variants. Experimental results on real-life event data show that our approach outperforms state-of-the-art techniques in terms of privacy guarantees, plain data utility preservation, and result utility preservation.

Keywords: Process Mining · Event Data · Differential Privacy · GANs · Machine Learning · Autoencoder

## 1 Introduction

Process mining is a family of data-driven techniques for business process discovery, analysis, and improvement. Process mining techniques require event data, which are widely available in most information systems, including ERP, SCM, and CRM systems. During the last decade, process mining has been successfully deployed in many industries, and it has become a crucial success factor for any type of business. As with any data-driven technique in the larger area of data science, concerns about the privacy of people whose data are processed by process mining algorithms grow as the amount of event data and their usage rise. Thus, privacy regulations, e.g., GDPR [1], restrict the storage and processing of data, which motivates the development of privacy preservation techniques.

Modern privacy preservation methods are mostly based on Differential Privacy (DP), which provides a privacy definition by introducing noise into data. This is because of its significant properties, including its ability to ensure mathematically proven privacy and protect against PSO (predicate-singling-out) attacks [6]. The purpose of DP-based approaches is to inject noise into the released output in order to conceal the involvement of an individual. State-of-the-art research in process mining leveraging privacy preservation techniques based on DP focuses on releasing distributions of trace variants, which serve as the foundation for core process mining techniques such as process discovery and conformance checking [2]. A trace variant refers to a complete sequence of activities performed for an individual that is considered to be sensitive and private information. In the healthcare context, for instance, a trace variant shows a complete sequence of treatment-related activities performed for a patient that is private information itself and can also be exploited to conclude other sensitive information, e.g., the disease of the patient. Table 1 shows a small sample of a trace variant distribution in the healthcare context.
\begin{table} \begin{tabular}{l|c} \hline Trace Variant & Frequency \\ \hline \(\langle register,visit,blood\text{-}test,visit,release\rangle\) & 15 \\ \(\langle register,blood\text{-}test,visit,release\rangle\) & 12 \\ \(\langle register,visit,hospitalization,surgery,release\rangle\) & 5 \\ \(\langle register,visit,blood\text{-}test,blood\text{-}test,release\rangle\) & 2 \\ \hline \end{tabular} \end{table} Table 1: A simple event log from the healthcare context, including trace variants and their frequencies.

Note that in a trace variant distribution, each trace variant is associated with an individual, a so-called case. Moreover, each case has precisely one trace variant. To achieve DP for trace variants, conventional so-called _prefix-based_ approaches inject noise drawn from a _Laplacian distribution_ into the variant distribution obtained from an event log [11, 21]. These approaches need to generate all possible unique variants based on a set of activities to provide differential privacy for the original distribution of variants. Since the set of possible variants that can be generated given a set of activities is infinite, prefix-based techniques need to limit the length of generated sequences. Also, to limit the search space, these approaches typically include a pruning parameter to exclude less frequent prefixes. Such a process to obtain DP has a high computational complexity and results in the following drawbacks: (1) _introducing fake variants_, (2) _removing frequent true variants_, and (3) _having limited length for generated variants_.

Several approaches have been proposed to partially or entirely address the aforementioned drawbacks. A method, called SaCoFa [11], aims to mitigate drawbacks (1) and (2) by gaining knowledge regarding the underlying process semantics from the original event data. However, the privacy quantification of all extra queries to gain knowledge regarding the underlying semantics is not discussed. Moreover, the third drawback still remains since this work itself is a prefix-based approach. In [10] and a technique called Libra [9], which is based on [10], trace variants are converted to a DAFSA (Deterministic Acyclic Finite State Automata) representation to avoid such drawbacks. However, Libra introduces a clipping parameter for removing infrequent variants. This clipping parameter grows based on the number of unique trace variants and the strength of privacy guarantees. Thus, depending on the number of unique trace variants and privacy parameters, Libra may even remove all the variants and return empty outputs. A recent work called TraVaS [25] proposes an approach based on _differentially private partition selection strategies_ to overcome the above-mentioned drawbacks. Similar to Libra, TraVaS also removes infrequent trace variants. However, in TraVaS, the threshold for removing infrequent variants is only dependent on the input privacy parameters and does not grow with the number of unique variants or the size of event data. Yet, for small event data with a high rate of unique trace variants, TraVaS may not be able to provide strong privacy guarantees.

In this paper, we introduce TraVaG to generate differentially private trace variants from an original variant distribution by means of GANs (Generative Adversarial Networks) [13]. The main idea of TraVaG is to privately learn important event data characteristics. The trained GAN enables the generation of new synthetic anonymized variants that are statistically similar to the original data.
Trained generative models work without data access. Thus, as long as the statistical characteristics of the original data do not significantly change, one does not need to apply DP directly to the original event data. For industry-scale big event data, this property can considerably reduce the computational burden [22]. Moreover, TraVaG is based on DP-SGD (Differentially Private - Stochastic Gradient Descent) [3] optimization techniques that avoid thresholding on training data or released network outputs. Hence, TraVaG can generate arbitrarily many anonymized synthetic trace variants even if the original variant frequencies are comparably small. Moreover, our experiments on real-life event logs demonstrate a better performance of TraVaG compared to state-of-the-art techniques in terms of data utility preservation for the same privacy guarantees.

The remainder of this paper is structured as follows. In Section 2, we provide a summary of related work. Preliminaries are provided in Section 3. In Section 4, we present the details of TraVaG. Section 5 discusses the experimental results based on real-life event logs, and Section 6 concludes the paper.

## 2 Related Work

Privacy-preserving process mining has recently been growing in importance. Several techniques have been proposed to address privacy issues in process mining. In the following, we provide a summary of the work focusing on _releasing differentially private event data_ and _generating differentially private event data_.

### Releasing Differentially Private Event Data

In [21], the authors apply an \((\epsilon,\delta)\)-DP mechanism to event logs to privatize _directly-follows relations_ and trace variants. The underlying principle uses a combination of an \((\epsilon,\delta)\)-DP noise generator and an iterative query engine that allows an anonymized publication of trace variants with an upper bound on their length. In [11], SaCoFa has been introduced as an extension of [21], where the goal is to optimize the query structures with the help of underlying semantics. All the aforementioned techniques follow the so-called prefix-based approach that suffers from the drawbacks explained in Section 1. To deal with such drawbacks, in [10], the authors introduced an approach that transforms a trace variant distribution into a DAFSA representation. This approach aims to keep all the original trace variants, which may result in high noise injection during the anonymization process. Libra [9] is a recent work that employs the approach proposed in [10] and aims to increase utility using subsampling and composing privatized subsamples to release differentially private event data. TraVaS [25] introduces a novel approach based on differentially private partition selection to address the drawbacks mentioned in Section 1.

### Generating Differentially Private Synthetic Data

Although DP-based generative Artificial Neural Networks (ANNs) have been quite extensively researched in the major field of data science and machine learning, they have not been used in the context of process mining. Thus, we mainly focus on some of the work outside the domain of process mining. In [5], the authors adopted a so-called _variational autoencoder_, DP-VAE, which assumes that the mapping from real data to the Gaussian distribution can be efficiently learned.
A different direction was then chosen by [12], where the authors used a _Wasserstein_ GAN (WGAN) to generate differentially private mixed-type synthetic outputs employing a Wasserstein-distance-based loss function. Finally, in [27], the concepts of WGAN and DP-VAE were combined to first learn a private data encoding and then generate respective encoded data. We adapted this principle for our work to cope with the large dimensionality of event data. Research in non-private generative models for process mining primarily focuses on exploiting ANNs and GANs to predict the next state of processes, such as [17] and [19]. Note that the approach in [17] only provides synthetic event data without any privacy guarantees.

## 3 Preliminaries

We start the preliminaries by introducing basic notations and mathematical concepts. Let \(A\) be a set. \(B(A)\) is the set of all multisets over \(A\). Given \(B_{1}\) and \(B_{2}\) as two multisets, \(B_{1}\uplus B_{2}\) is the sum over multisets, e.g., \([a^{2},b^{3}]\uplus[b^{2},c^{2}]=[a^{2},b^{5},c^{2}]\). We define a finite sequence over \(A\) of length \(n\) as \(\sigma=\langle a_{1},a_{2},\ldots,a_{n}\rangle\) where \(\sigma(i)=a_{i}\in A\) for all \(i\in\{1,2,\ldots,n\}\). The set of all finite sequences over \(A\) is denoted with \(A^{*}\).

### Event Log

Process mining techniques employ event data that are typically collections of unique events recorded per activity execution and characterized by their attributes, e.g., _activity_ and _timestamp_. Events in an event log have to be unique. A _trace_ is a single process execution represented as a sequence of events belonging to a case (individual) and having a fixed ordering based on timestamps. An event cannot appear in more than one trace or multiple times in one trace. Our work focuses on the control-flow aspect of an event log, which only considers the activity attribute of events in a trace, the so-called _trace variant_. Thus, we define a simple event log based on activity sequences, so-called _trace variants_.

Definition 1 (Simple Event Log): Let \(\mathcal{A}\) be the universe of activities. A simple event log \(L\) is defined as a multiset of trace variants \(\mathcal{A}^{*}\), i.e., \(L\in B(\mathcal{A}^{*})\). \(\mathcal{L}\) denotes the universe of simple event logs.

In a simple event log representing a distribution of trace variants, one case, which refers to an individual, cannot contribute to more than one trace variant.

### Differential Privacy (DP)

The main idea of DP is to inject noise into the original data in such a way that an observer who sees the randomized output cannot with certainty tell if the information of a specific individual is included in the data [8]. Considering simple event logs, as our sensitive event data, we define differential privacy in Definition 2.

Definition 2 ((\(\epsilon\),\(\delta\))-DP for Event Logs): Let \(L_{1}\) and \(L_{2}\) be two neighboring event logs that differ only in a single entry, i.e., \(L_{2}=L_{1}\uplus[\sigma]\) for any \(\sigma\in\mathcal{A}^{*}\). Also, let \(\epsilon\in\mathbb{R}_{>0}\) and \(\delta\in\mathbb{R}_{>0}\) be two privacy parameters. A randomized mechanism \(\mathcal{M}_{\epsilon,\delta}:\mathcal{L}\rightarrow\mathcal{L}\) provides (\(\epsilon,\delta\))-DP if for all \(S\subseteq B(\mathcal{A}^{*})\): \(Pr[\mathcal{M}_{\epsilon,\delta}(L_{1})\in S]\leq e^{\epsilon}\times Pr[\mathcal{M}_{\epsilon,\delta}(L_{2})\in S]+\delta\).
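As a minimal, hypothetical illustration of Definitions 1 and 2 (not code from the paper), a simple event log can be modeled as a multiset of variant tuples, and the neighboring relation can be checked directly; the helper name `is_neighbor` is ours:

```python
from collections import Counter

# A simple event log (Definition 1) as a multiset of trace variants:
# each key is a variant (tuple of activities), each value its frequency.
L1 = Counter({
    ("register", "visit", "blood-test", "visit", "release"): 15,
    ("register", "blood-test", "visit", "release"): 12,
    ("register", "visit", "hospitalization", "surgery", "release"): 5,
})

# Two logs are neighboring (Definition 2) if one equals the other plus
# the trace of exactly one additional case: L2 = L1 (+) [sigma].
def is_neighbor(l1, l2):
    diff = Counter(l2)
    diff.subtract(l1)
    added = +diff        # variants with positive count in l2 - l1
    removed = -diff      # variants with negative count in l2 - l1
    return not removed and sum(added.values()) == 1

L2 = L1 + Counter({("register", "blood-test", "visit", "release"): 1})
print(is_neighbor(L1, L2))  # True
```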
In Definition 2, \(\epsilon\) specifies the probability ratio, and \(\delta\) allows for a linear violation. In the strict case of \(\delta=0\), \(\mathcal{M}\) offers \(\epsilon\)-DP. The randomness of respective mechanisms is typically ensured by the noise drawn from a probability distribution that perturbs original variant-frequency tuples and results in non-deterministic outputs. The smaller the privacy parameters are set, the more noise is injected into the mechanism outputs, entailing a decreasing likelihood of tracing back the instance existence based on outputs. ### Generative Adversarial Networks (GANs) A generative adversarial network (GAN) represents a special type of ANN compound to synthesize similar data to its original input. It comprises two separate ANNs, a _generator_ and a _discriminator_[13]. The training principle follows a two-player game: a generator tries to fool the discriminator by generating authentic fake data while a discriminator tries to distinguish between real and fake results. A generator \(gen:\mathbb{Z}^{m}\rightarrow\mathbb{R}^{n}\) and a discriminator \(dis:\mathbb{R}^{n}\rightarrow\{0,1\}\) can be described as highly parametrizable functions. Here, a generator \(gen\) is seeded with random multivariate Gaussian noise \(z\in Z^{m}\) of user-defined dimension \(m\) that is translated into a synthetic desired output. A discriminator \(dis\) aims to determine whether its input originates from the generator's output. In a simple form, it outputs a binary decision variable, where 0 means the input is fake and 1 means the input is original data. In our work, we apply a GAN architecture to synthesize event data. ### Autoencoder An _autoencoder_ is a certain type of ANN structure used to learn efficient encodings of unlabeled data [15]. The respective encoding is validated and optimized by attempting to regenerate the input from the encoding by decoding. The autoencoder learns the encoding for a set of data to typically provide dimensionality reduction. As a result, an autoencoder always consists of two separate ANNs, an encoder \(enc:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\) and a decoder \(dec:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\). These components allow for transforming high-dimensional data \(x\in\mathbb{R}^{n}\) to a compact representation within the so-called latent space \(\mathbb{R}^{d}\) and vice versa (typically \(d\ll n\)). The specific mappings of \(enc\) and \(dec\) are characterized by the network's weights and learned from the data during the training phase. For our work, we employ an autoencoder structure to achieve a compressed encoding of input event data. ## 4 TraVaG As presented in Section 2, DP-based generative networks have been extensively researched outside of the process mining context. Typical approaches either adopt variational autoencoder architectures that leverage both encoder and decoder components or GAN architectures employing a discriminator and a generator part. When transferring these ideas to event data, one crucial aspect is the high-dimensional structure that turns out to be challenging during training, particularly if strong DP is added. Thus, we follow the approach of the novel work [27] and [14] that combines the compression functionality of autoencoders with the flexibility of GANs and demonstrated superior performance for general high-dimensional mix-type input data [27]. 
Instead of directly generating new event logs, we first learn a compressed encoding and then train a GAN to reproduce data within the encoded latent space. Final datasets are obtained by decoding back the dimension-reduced intermediate format. This principle mitigates the complication of GANs when extracting statistical properties from feature-rich data that is limited in size. Particularly, sparse features can be compressed without significant loss of information, while generator networks improve their learning performance due to the lower dimension. Moreover, no Gaussian Mixture distribution is enforced on the latent space, as is the case for typical generative stand-alone autoencoder methods [5].

### The TraVaG Framework

Different components and the workflow of our framework are shown in Figure 1.

Figure 1: A simplified workflow diagram of the TraVaG training and application processes.

We start with preprocessing a simple event log that contains variant distributions in the form of variant-frequency pairs. There are two common possibilities. The first option considers the activities within variants and extracts all subsequences of direct neighbors, i.e., Directly-Follows Relations (DFRs). These DFRs are then mapped to a binary or number space and either fed into a GAN as a single feature or as two features along with their frequencies. A downside of this method is that the generator serves as a sequence constructor, which allows the creation of artificial variants in the postprocessing phase where all generated activity pairs are linked back together. To avoid creating fake trace variants, we choose the second option, where only complete variants are considered as inputs. Therefore, a simple event log \(L\) with \(n\) variants and \(m\) cases is binary-encoded as follows. Within an \(m\times n\) matrix, each variant represents a binary feature column and each case denotes a row instance that contains 1 at the respective variant column and 0 elsewhere (sparse matrix). Analogously, this transformation can be inverted back to the original data space. Thus, TraVaG never produces fake trace variants. Also, one-hot encoding does not influence the data statistics and hence does not incur any privacy costs. We refer to this preprocessing procedure as _one-hot encoding_ and _one-hot decoding_.
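The following is a small, hypothetical sketch of this one-hot preprocessing, assuming the simple event log is given as a variant-to-frequency dictionary; the function names are ours:

```python
import numpy as np

# A log with m cases over n distinct variants becomes an m x n binary
# matrix with exactly one 1 per row (one variant per case).
def one_hot_encode(log):                     # log: {variant: frequency}
    variants = sorted(log)                   # fix a column order
    m, n = sum(log.values()), len(variants)
    X = np.zeros((m, n), dtype=np.uint8)
    row = 0
    for col, v in enumerate(variants):
        for _ in range(log[v]):
            X[row, col] = 1
            row += 1
    return X, variants

def one_hot_decode(X, variants):             # invert the encoding
    log = {}
    for row in X:
        v = variants[int(row.argmax())]
        log[v] = log.get(v, 0) + 1
    return log

log = {("a", "b"): 2, ("a", "c", "b"): 1}
X, variants = one_hot_encode(log)
assert one_hot_decode(X, variants) == log    # round-trip succeeds
```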
We perform two main training phases including autoencoder training (blue parts) and GAN training (purple parts). Since the focus of this work is on the privacy aspect, we describe the privately trained components in more detail. A detailed algorithmic explanation of the training components including the structure of the networks, parameter tuning, activation functions, loss measures, and optimizations is provided in our supplementary document.1 After the preprocessing, the sparse binary variant vectors \(x_{1}\dots x_{m}\in\mathbb{R}^{n}\) are forwarded to the autoencoder training phase, including an encoder and a decoder component. These components allow for transforming high-dimensional data \(x_{i}\in\mathbb{R}^{n}\) to a compact representation within the so-called latent space \(\mathbb{R}^{d}\) and vice versa, s.t., \(d\ll n\). The dimension \(d\) is a hyperparameter of the autoencoder and needs to be selected w.r.t. the GAN configuration. Since the encoder does not participate in the process of training the GAN or synthesizing new event data, it does not need to be optimized privately [4, 5]. The decoder is strongly involved in the anonymization process and is released to the public. Thus, the training of the decoder is performed privately by means of DP-SGD (see Section 4.2).

The same one-hot encoded data \(x_{1}\ldots x_{m}\in\mathbb{R}^{n}\) are used to train a GAN consisting of two feed-forward ANNs; a generator \(gen:Z\rightarrow\mathbb{R}^{d}\), with \(Z\) the Gaussian noise space, and a discriminator \(dis:\mathbb{R}^{n}\rightarrow\{0,1\}\). The goal of the generator \(gen\) is to construct synthetic data within the output space \(\mathbb{R}^{d}\) that are similar to the compressed variants. It is seeded with random multivariate Gaussian noise \(z\) of a user-defined dimension. The discriminator \(dis\) aims at determining whether its input originates from the decompressed generator output \(dec(gen(.))\) or from the original data source \(x_{i},1\leq i\leq m\). Both components are parameterized by their network weights and trained iteratively to outplay each other. Whereas the generator attempts to find latent space outputs that are hard to distinguish from real encoded data by the discriminator, the latter tries to expose these synthetic data records. Eventually, this principle enables the generator to learn and capture the statistical properties of the input variant distribution through the lens of the autoencoder. Note that due to the integrated autoencoder, the generator only targets the latent space \(\mathbb{R}^{d}\), which is much easier to achieve than constructing data in \(\mathbb{R}^{n}\). Also, it never accesses the real confidential data space and thus does not need to be trained with DP, as opposed to the discriminator, which is again privately optimized with DP-SGD algorithms [27].

Once both the autoencoder and the GAN are trained, one can generate new synthetic anonymized event data (orange parts). The underlying mechanism equals the training step of the generator. Starting with a random Gaussian noise sample \(z\), this noise is digested by the generator, yielding \(gen(z)\). From the latent space, the decoder then maps \(gen(z)\) to \(dec(gen(z))\). Finally, the synthetic one-hot encoded result is transformed back to the variant universe. One compelling advantage of TraVaG lies in the underlying data format. Since the feature space represents the different variants of the original data, TraVaG considers them as given and only has to learn their distribution during training. When applied, the framework reconstructs an anonymized version of this distribution over multiple runs without introducing new variants. The more synthetic data are created, the better the consolidated TraVaG output, i.e., new anonymized variants better approximate the original variant distribution. Note that this process does not converge to the true variant frequencies, but to the TraVaG-internal learned anonymous version. Thus, it is recommended to run TraVaG at least as often as the number of cases in the original event log. In case smaller privatized datasets are needed, the output can be down-sampled during postprocessing rounds.
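A minimal sketch of this sampling loop is shown below, assuming trained `gen` and `dec` networks are available as plain callables and `variants` is the column order fixed during preprocessing; all names are hypothetical:

```python
import numpy as np

# Gaussian noise is pushed through the trained generator and decoder;
# each decoded one-hot row is mapped back to a variant.
def sample_variants(gen, dec, variants, num_cases, noise_dim, seed=0):
    rng = np.random.default_rng(seed)
    log = {}
    for _ in range(num_cases):
        z = rng.standard_normal(noise_dim)   # random Gaussian seed
        row = dec(gen(z))                    # latent -> one-hot space
        v = variants[int(np.argmax(row))]    # pick the activated column
        log[v] = log.get(v, 0) + 1
    return log

# toy stand-ins for the trained networks, purely for demonstration
gen = lambda z: z
dec = lambda h: np.abs(h)
print(sample_variants(gen, dec, [("a", "b"), ("a", "c")], 5, noise_dim=2))
```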
### Differentially Private - Stochastic Gradient Descent (DP-SGD)

To render SGD differentially private, Abadi et al. [3] proposed the following two steps. Let \(X=\{x_{i}\in\mathbb{R}^{n}\mid 1\leq i\leq m\}\) be a dataset, \(f\) a loss function, and \(\theta\) the model parameters. First, the gradient \(g_{i}=\nabla_{\theta}f_{\theta}(x_{i})\) of each data sample \(x_{i}\) is clipped at some real value \(C\in\mathbb{R}_{>0}\) to ensure that its \(L^{2}\)-norm does not exceed the clipping value. For our work, we refer to the following clipping function2: \(\text{clip}(g_{i},C)=g_{i}\cdot\min{(1,C/||g_{i}||_{2})}\).

Footnote 2: Note that also other clipping strategies exist, as highlighted in [22].

Then, as Equation 1 shows, multivariate Gaussian noise parametrized by a noise multiplier \(\Phi\in\mathbb{R}\) is added to the clipped gradient vectors before averaging over the batch \(B\subseteq X\). We further denote the identity matrix as \(I\) and the Gaussian distribution of unspecified dimension as \(\mathcal{N}\).

\[g_{B}\leftarrow\tfrac{1}{|B|}\left(\sum_{i\in B}\text{clip}(\nabla_{\theta} f_{\theta}(x_{i}),C)+\mathcal{N}(0,C^{2}\Phi^{2}I)\right) \tag{1}\]

The noisy-clipped-averaged gradient \(g_{B}\) is now differentially private and can be used for conventional descent steps: \(\theta\leftarrow\theta-\eta\cdot g_{B}\), where \(\eta\) is the so-called _learning rate_. Note that clipping the individual gradients as in Equation 1 can also be replaced by clipping gradients of groups of more data points, so-called _microbatches_ [22]. Instead of the common DP parameters \(\epsilon\) and \(\delta\), DP-SGD uses the related noise multiplier \(\Phi\). When translating between these two types of settings, novel research has demonstrated a tighter privacy bound if the batch sampling process for \(B\) is conducted according to a specific procedure [3]. This procedure independently selects each data point of \(X\) with a fixed probability \(q\), the so-called _sampling rate_, in each step.
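The following NumPy sketch (ours, not the TraVaG implementation) illustrates one DP-SGD step according to Equation 1, using per-example clipping for simplicity rather than microbatches:

```python
import numpy as np

# Per-example gradients are clipped to L2-norm at most C, summed,
# perturbed with Gaussian noise of scale C*Phi, averaged over the
# batch, and used for a conventional descent step.
def clip(g, C):
    return g * min(1.0, C / (np.linalg.norm(g) + 1e-12))

def dp_sgd_step(theta, per_example_grads, C, phi, eta, rng):
    B = len(per_example_grads)
    g_sum = sum(clip(g, C) for g in per_example_grads)
    noise = rng.normal(0.0, C * phi, size=theta.shape)
    g_B = (g_sum + noise) / B          # noisy, clipped, averaged gradient
    return theta - eta * g_B           # descent step with learning rate eta

rng = np.random.default_rng(0)
theta = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.1, 0.2, 0.3])]
theta = dp_sgd_step(theta, grads, C=1.0, phi=1.1, eta=0.1, rng=rng)
```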
### Privacy Accounting

To evaluate the exact privacy guarantee provided by DP-SGD algorithms, we employ the so-called _Renyi Differential Privacy_ (RDP) [23], a different notion of DP typically used for private optimization. RDP is defined based on the concept of _Renyi divergence_. Given two probability distributions \(P\) and \(Q\), the Renyi divergence of order \(\alpha\) is defined as follows: \(D_{\alpha}(P||Q):=\frac{1}{\alpha-1}\log\mathbb{E}_{x\sim Q}\left(\frac{P(x)}{ Q(x)}\right)^{\alpha}\). Definition 3 ((\(\alpha,\epsilon\))-RDP for Event Logs): Let \(L_{1}\) and \(L_{2}\) be two neighboring event logs that differ only in a single entry, e.g., \(L_{2}\)=\(L_{1}\uplus[\sigma]\) for any \(\sigma\)\(\in\)\(\mathcal{A}^{*}\). Given \(\alpha>1\) and \(\epsilon\in\mathbb{R}_{>0}\), a randomized mechanism \(\mathcal{M}_{\alpha,\epsilon}\):\(\mathcal{L}\)\(\rightarrow\)\(\mathcal{L}\) provides \((\alpha,\epsilon)\)-RDP if \(D_{\alpha}(\mathcal{M}(L_{1})||\mathcal{M}(L_{2}))\leq\epsilon\). To obtain the final \((\epsilon,\delta)\)-DP parameters, we employ the following two propositions on the composition of \((\alpha,\epsilon)\)-RDP mechanisms and the conversion of \((\alpha,\epsilon)\)-RDP parameters to \((\epsilon,\delta)\)-DP parameters. Proposition 1 (Composition of RDP [23]): _If \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) are two \((\alpha,\epsilon_{1})\)-RDP and \((\alpha,\epsilon_{2})\)-RDP mechanisms for \(\alpha>1\), respectively, then the composition of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) satisfies \((\alpha,\epsilon_{1}+\epsilon_{2})\)-RDP._ Proposition 2 (RDP Parameter Conversion [23]): _If a mechanism \(\mathcal{M}\) satisfies \((\alpha,\epsilon)\)-RDP with \(\alpha>1\), then for all \(\delta>0\), \(\mathcal{M}\) satisfies \((\epsilon+(\log{1/\delta})/(\alpha-1),\delta)\)-DP._ During an iterative application of Gaussian mechanisms, as is the case in DP-SGD, the Renyi divergence captures the corresponding privacy loss more tightly than standard \((\epsilon,\delta)\)-DP. To compute the final \((\epsilon,\delta)\)-DP parameters from multiple runs of DP-SGD, the following three steps are followed.

1. **Subsampled RDP.** Given a sampling rate \(q\) and noise multiplier \(\Phi\), the RDP privacy parameters for one iteration of DP-SGD can be derived as a non-explicit integral function of \(\alpha\geq 1\) [23]. This function is standardized in many privacy-related optimization packages and will be referred to as \(\text{RDP}_{1}(q,\Phi)\) [3].
2. **RDP Composition.** Since DP-SGD runs iteratively, we need to compose Step 1 over all executions according to Proposition 1. Hence, the resulting RDP parameters of \(T\) iterations are obtained by computing \(\text{RDP}_{T}(q,\Phi,T):=\text{RDP}_{1}(q,\Phi)\cdot T\).
3. **Conversion to \((\epsilon,\delta)\)-DP.** After retrieving an expression for the overall RDP privacy parameters with \(\text{RDP}_{T}\), we need to convert the respective \((\alpha,\epsilon)\) tuple to an \((\epsilon,\delta)\) guarantee according to Proposition 2. Since the \(\epsilon\) parameter of RDP is also a function of \(\alpha\), Step 3 involves optimizing over \(\alpha\) to achieve minimal \(\epsilon\) and \(\delta\).

We apply this procedure to obtain the respective privacy guarantees \((\epsilon,\delta)\)-DP on both the autoencoder and the GAN-based discriminator of TraVaG. The resulting values are then combined into a final privacy cost by the _composition theorem_ of DP [8]. According to the composition theorem, different \((\epsilon,\delta)\)-DP mechanisms can be easily combined into more complex algorithms at the cost of a directly measurable cumulative privacy loss, and the result still promises \((\epsilon,\delta)\)-DP independent of the exact form of composition or query structure.
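The three steps translate directly into code. In the sketch below, `rdp_one_step` stands in for the standardized subsampled-Gaussian bound \(\text{RDP}_{1}(q,\Phi)\) provided by common privacy-accounting packages; the helper names are ours. Composition multiplies by \(T\) (Proposition 1), and the conversion minimizes the resulting guarantee over a grid of Renyi orders \(\alpha\) (Proposition 2).

```python
import numpy as np

def compose_rdp(rdp_one_step, q, phi, T, alphas):
    """Step 2: RDP_T(q, phi, T) = RDP_1(q, phi) * T at each order alpha."""
    return np.array([T * rdp_one_step(q, phi, a) for a in alphas])

def rdp_to_dp(alphas, rdp_eps, delta):
    """Step 3: eps_DP = eps_RDP + log(1/delta)/(alpha - 1), minimized over alpha."""
    alphas = np.asarray(alphas, dtype=float)
    eps = rdp_eps + np.log(1.0 / delta) / (alphas - 1.0)
    best = int(np.argmin(eps))
    return eps[best], alphas[best]

# Usage (with a library-provided rdp_one_step):
# alphas = 1.0 + np.arange(1, 600) / 10.0
# rdp = compose_rdp(rdp_one_step, q=0.01, phi=1.1, T=5000, alphas=alphas)
# eps, best_alpha = rdp_to_dp(alphas, rdp, delta=1e-5)
```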
## 5 Experiments

We evaluate the performance of TraVaG on real-life event logs. We select two event logs of varying sizes and trace uniqueness. As we discussed in Section 1 and as stated in other research such as [21], [11], and [9], infrequent variants are challenging to privatize. Thus, trace uniqueness is an important analysis criterion. The Sepsis log describes hospital processes for Sepsis patients and contains many rare traces [20]. In contrast, BPIC13 has significantly more cases at a four times smaller trace uniqueness [7]. BPIC13 describes an incident and problem management system called VINST. Both logs are realistic examples of confidential human-centered information where the case identifiers refer to individuals. Table 2 shows detailed log statistics. We perform our evaluation for a wide range of the main privacy parameters \(\epsilon\in\{0.01,0.1,1,2\}\) and \(\delta\in\{10^{-6},10^{-5},10^{-4},10^{-3},0.01\}\). These ranges are selected in accordance with typical values employed in industrial applications as well as state-of-the-art DP research [9, 11, 21, 26]. We particularly note that extreme settings such as \(\epsilon=2,\delta=0.5\) are chosen not for their practical relevance, but to demonstrate how the anonymization methods behave when starting from a weak- or non-private environment. Due to the probabilistic nature of \((\epsilon,\delta)\)-DP, we run the TraVaG generator 100 times on all input event logs and all privacy parameters and report the average values.

\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline Event Log & \#Events & \#Cases & \#Activities & \#Variants & Trace Uniqueness \\ \hline Sepsis & 15214 & 1050 & 16 & 846 & 80\% \\ BPIC13 & 65533 & 7554 & 4 & 1511 & 20\% \\ \hline \end{tabular} \end{table} Table 2: General statistics of the event logs used in our experiments.

We compare our results with TraVaS [25] as a state-of-the-art technique and the original prefix-based framework, called benchmark [21].3 The sequence cutoff for the benchmark method is set to the length that covers 80\% of the variants in each log, and the remaining pruning parameter is adjusted such that, on average, the anonymized logs contain a number of variants comparable to the given original log. The ANNs of TraVaG are configured by a semi-automated tuning approach w.r.t. the different input logs. Whereas most design decisions and hyperparameters are tweaked according to the results of manual tests as well as research experience, the settings _batch size_ \((B)\), _number of iterations_ \((I)\), and _noise multiplier_ \((\Phi)\) are automatically optimized via a grid search [18] for fixed privacy levels. A detailed list of the derived settings for each event log, the concrete network designs, and the configuration values are available on GitHub.4 Footnote 3: Note that in [25], TraVaS was already compared with SaCoFa [11] and benchmark [21] and showed better performance. Here, the benchmark method is included for easier comparison. Moreover, Libra [9] does not take \(\epsilon\) as an input parameter but computes it based on \(\alpha\) as an RDP parameter and its sampling strategy. This makes the comparison based on exact \(\epsilon\) and \(\delta\) parameters very difficult. Nevertheless, an important observation in contrast to TraVaG is that Libra returns an empty log for event logs with many infrequent variants, such as Sepsis when \(\delta\leq 10^{-3}\). Footnote 4: [https://github.com/wangelik/TraVaG/blob/main/supplementary/TraVaG.pdf](https://github.com/wangelik/TraVaG/blob/main/supplementary/TraVaG.pdf)

### Evaluation Measures

Suitable evaluation measures are required to assess the performance of an \((\epsilon,\delta)\)-DP mechanism in terms of data (result) utility preservation. The _data utility_ perspective measures the similarity between two logs independent of future applications. For evaluating data utility, we employ the following measures: _relative log similarity_ [24, 25] and _absolute log difference_ [25]. _Relative log similarity_ measures the _earth mover's distance_ between two trace variant distributions, where the normalized _Levenshtein_ string edit distance is used as a similarity function between trace variants. This measure quantifies the degree to which the variant distribution of an anonymized log matches the original variant distribution on a scale from 0 to 1. _Absolute log difference_ accounts for situations where distribution-based measures provide misleading expressiveness [25]. Exemplary cases are event logs possessing similar variant distributions, but significantly different sizes.
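For intuition, the sketch below shows the normalized Levenshtein distance that both measures build on; the relative log similarity plugs it into an earth mover's distance between the two variant distributions (the EMD solver itself is omitted here). The code and its names are ours, not the reference implementation of [24, 25].

```python
def levenshtein(a, b):
    """Edit distance between two trace variants (sequences of activities)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        cur = [i]
        for j, y in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,              # delete x
                           cur[j - 1] + 1,           # insert y
                           prev[j - 1] + (x != y)))  # substitute x -> y
        prev = cur
    return prev[-1]

def normalized_levenshtein(a, b):
    """Distance scaled to [0, 1] by the longer trace length."""
    longest = max(len(a), len(b))
    return levenshtein(a, b) / longest if longest else 0.0

# Example: normalized_levenshtein(("a", "b", "c"), ("a", "c")) == 1/3
```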
To calculate an absolute log difference value, we use the approach introduced in [25], where input logs are converted to a _bipartite graph_ of variants as vertices. Then, a _cost network flow_ problem is solved by setting demands and supplies to the absolute variant frequencies and utilizing the _Levenshtein_ distance between variants as the edge cost. Thus, the result of this measure is the minimal number of _Levenshtein_ operations needed to transform the variants of an anonymized log into the variants of the original log. Details of the exact algorithms are available.5 Footnote 5: [https://github.com/wangelik/TraVaG/blob/main/supplementary/metrics.pdf](https://github.com/wangelik/TraVaG/blob/main/supplementary/metrics.pdf) We additionally evaluate the performance of TraVaG in terms of _result utility preservation_ for _process discovery_ as a specific application of trace variant distributions. In this respect, we use the _inductive miner infrequent_ [16] with a default noise threshold of 20\% to discover process models from the privatized logs for all \((\epsilon,\delta)\) settings under investigation. Then, we compare the models with the original event log to obtain token-based replay _fitness_ and _precision_ scores [2].

### Data Utility Analysis

In this subsection, the results of the two aforementioned data utility metrics are presented for both real-life event logs. Figure 2 shows the average results on BPIC13 in a six-fold heatmap. The gray fields for the TraVaS and benchmark methods denote an unsuccessful algorithm execution. For \(\delta<10^{-3}\), the thresholding of TraVaS becomes too strict and removes many variants from the anonymized outputs. The benchmark method, on the contrary, introduces artificial variants and noise to an extent that is infeasible to average within reasonable time and accuracy. In contrast to both, TraVaG successfully manages to generate anonymized outputs for \(\delta<10^{-3}\). More importantly, neither the _relative log similarity_ nor the _absolute log difference_ results exhibit a clear decreasing trend for lower \(\delta\) within the investigated parameter range. We explain this expected observation by the fact that TraVaG avoids any pruning mechanism on its output and, via RDP, injects less \(\delta\)-dependent Gaussian noise into the gradients (see Section 4.3 and [23]). Whereas the absolute log difference results maintain a rather stable output for the different \((\epsilon,\delta)\) values, the TraVaG relative log similarity presents a strong positive \(\epsilon\)-dependency. As a result, the absolute statistics (absolute Levenshtein distances and absolute frequencies) of the anonymized event data appear more similar to the original logs than the variant distributions do. A rationale for this discrepancy lies in the still comparably small dataset of 7554 instances over 1511 variants (features). By construction, TraVaG accomplishes reproducing equally sized event logs containing many original variants, but fails to pick up some characteristics of the underlying distribution once the input data or the training iterations are limited. Hence, we expect this diverging trend to diminish with increasing training data.

Figure 2: The _relative log similarity_ and _absolute log difference_ results of anonymized BPIC13 logs generated by TraVaG, TraVaS, and the benchmark method. Each value represents the mean of 100 generations for TraVaG and 10 algorithm runs for TraVaS and the benchmark method.

The data utility results for the Sepsis log are presented in Figure 3.
With only 1050 instances over 846 variants (features), this dataset is even smaller and thus more difficult for TraVaG to train on than BPIC13. As a result, we observe similar, but more pronounced, behavior of the relative log similarity and absolute log difference metrics compared to Figure 2. An extreme example is given by the results at \(\epsilon=0.01,\delta<10^{-2}\), where the introduced gradient noise turned out to be too intense for the generative model to converge under the given training data size. For the remaining privacy settings, TraVaG again outperforms its competitors on the absolute log statistics, while the relative log similarity performs slightly better than TraVaS and is of the same order as the benchmark results for \(\epsilon>0.1\).

Figure 3: The _relative log similarity_ and _absolute log difference_ results of anonymized Sepsis logs generated by TraVaG, TraVaS, and the benchmark method. Each value represents the mean of 100 generations for TraVaG and 10 algorithm runs for TraVaS and the benchmark method.

### Process Discovery Analysis

Figure 4 illustrates the result utility analysis of TraVaG, TraVaS, and the benchmark on BPIC13. As discussed in Subsection 5.2, TraVaG successfully manages to produce results for \(\delta<10^{-3}\), where the other methods are not applicable. Except for the three outliers at \(\epsilon=0.1\), both fitness and precision show a stable distribution without considerable dependence on the different privacy parameters. In accordance with Figure 2, we thus conclude that the absolute log difference provides a better proxy for the process-discovery-based performance of TraVaG than the relative log similarity. Similarly, the strong scores on both metrics demonstrate a sufficient replay behavior between the model obtained from an anonymized log and the original log. Whereas the fitness values denote that the process model still captures most of the real underlying event data, the precision values show that only a small fraction of model decisions is not included in the anonymized event log. Consequently, TraVaG accomplishes learning the most important facets of the BPIC13 variant distribution for the discovery algorithm to produce a fitted model. When compared to the alternative methods, TraVaG achieves scores comparable to TraVaS and again outperforms the benchmark.

Figure 4: The _fitness_ and _precision_ results of anonymized BPIC13 event logs generated by TraVaG, TraVaS, and the benchmark method. Each value represents the mean of 100 generations for TraVaG and 10 algorithm runs for TraVaS and the benchmark method.

The result utility evaluation of the highly trace-unique Sepsis log is presented in Figure 5. With respect to fitness, TraVaG shows values similar to TraVaS but a slight under-performance compared to the benchmark method. The main cause for this observation again refers to the infrequent variants and the small log size. While TraVaS maintains a strong \(\delta\)-related threshold and TraVaG copes with the limited training data, the benchmark method introduces many artificial variants but tends to match the frequent traces. As a result, its discovered process models are able to replay most of the original behavior, in contrast to the TraVaG and TraVaS results. According to the aforementioned explanation, precision reflects an inverted trend. Here, the larger models of the benchmark method contain many possible decision paths that are nonexistent in the underlying event log. For TraVaS and TraVaG, we thus achieve more precise anonymized process models.

Figure 5: The _fitness_ and _precision_ results of anonymized Sepsis event logs generated using TraVaG, TraVaS, and the benchmark method. Each value represents the mean of 100 generations for TraVaG and 10 algorithm runs for TraVaS and the benchmark method.

## 6 Conclusion

TraVaG has shown that training a differentially private combination of autoencoders and GANs to synthesize anonymized event data from an underlying original variant distribution outperforms current state-of-the-art selection-based variant anonymization techniques and prefix-based approaches, particularly for strong privacy in the low-\(\delta\) range. Moreover, TraVaG has the unique advantages of outstandingly resource-efficient execution, the absence of distorting noise thresholds, a general acceptance of continuous data streams, and no fake variant generation. In combination, these characteristics allow TraVaG to efficiently operate on infrequent variant data in the low-\(\delta\) regime without real competitors. Nevertheless, we note that the framework comprises a more complex training procedure and privacy budget accounting than approaches that directly digest DP parameters, such as TraVaS [25]. We have to follow the one-way procedure of first obtaining the RDP parameters \((\epsilon,\alpha)\) from the noise multiplier \(\Phi\), sampling rate \(q\), and iterations \(T\), and then converting \((\epsilon,\alpha)\) to \((\epsilon,\delta)\). Note that a similar procedure is followed by other techniques that are based on RDP, such as Libra [9]. Consequently, specific privacy levels can only be ensured by repeatedly analyzing different TraVaG network settings until a successful match is found. This hyperparameter dependence could be studied in more detail and even coupled with a fully automated tuning strategy in future work.
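Because of this one-way procedure, a concrete \((\epsilon,\delta)\) target is reached by searching over network settings. Below is a minimal sketch of such a search for the noise multiplier \(\Phi\) at fixed \(q\) and \(T\); `accounted_epsilon` abbreviates the accounting pipeline sketched earlier, `rdp_one_step` again stands in for a library accountant, and the per-component targets would first be derived from the total budget via the composition theorem (the autoencoder and the discriminator each consume a share).

```python
import numpy as np

def accounted_epsilon(rdp_one_step, q, phi, T, delta, alphas):
    """Compose per-step RDP over T iterations and convert to (eps, delta)-DP."""
    rdp = np.array([T * rdp_one_step(q, phi, a) for a in alphas])
    return float(np.min(rdp + np.log(1.0 / delta) / (np.asarray(alphas) - 1.0)))

def calibrate_noise(rdp_one_step, eps_target, delta, q, T, alphas,
                    phi_lo=0.3, phi_hi=64.0, tol=1e-3):
    """Bisect the noise multiplier until the accounted epsilon just meets
    the target; epsilon decreases monotonically as phi grows."""
    while phi_hi - phi_lo > tol:
        phi = 0.5 * (phi_lo + phi_hi)
        if accounted_epsilon(rdp_one_step, q, phi, T, delta, alphas) > eps_target:
            phi_lo = phi   # privacy still too weak: increase the noise
        else:
            phi_hi = phi   # target met: try with less noise
    return phi_hi
```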
2302.02458
Precision of quantum simulation of all-to-all coupling in a local architecture
We present a simple 2d local circuit that implements all-to-all interactions via perturbative gadgets. We find an analytic relation between the values $J_{ij}$ of the desired interaction and the parameters of the 2d circuit, as well as the expression for the error in the quantum spectrum. For the relative error to be a constant $\epsilon$, one requires an energy scale growing as $n^6$ in the number of qubits, or equivalently a control precision up to $ n^{-6}$. Our proof is based on the Schrieffer-Wolff transformation and generalizes to any hardware. In the architectures available today, $5$ digits of control precision are sufficient for $n=40,~ \epsilon =0.1$. Comparing our construction, known as paramagnetic trees, to ferromagnetic chains used in minor embedding, we find that at chain length $>3$ the performance of minor embedding degrades exponentially with the length of the chain, while our construction experiences only a polynomial decrease.
Evgeny Mozgunov
2023-02-05T18:54:28Z
http://arxiv.org/abs/2302.02458v1
# Precision of quantum simulation of all-to-all coupling in a local architecture ###### Abstract We present a simple 2d local circuit that implements all-to-all interactions via perturbative gadgets. We find an analytic relation between the values \(J_{ij}\) of the desired interaction and the parameters of the 2d circuit, as well as the expression for the error in the quantum spectrum. For the relative error to be a constant \(\epsilon\), one requires an energy scale growing as \(n^{6}\) in the number of qubits, or equivalently a control precision up to \(n^{-6}\). Our proof is based on the Schrieffer-Wolff transformation and generalizes to any hardware. In the architectures available today, 5 digits of control precision are sufficient for \(n=40,\ \epsilon=0.1\). Comparing our construction, known as paramagnetic trees, to ferromagnetic chains used in minor embedding, we find that at chain length \(>3\) the performance of minor embedding degrades exponentially with the length of the chain, while our construction experiences only a polynomial decrease. ## I Introduction Quantum simulation can be performed on a future fault-tolerant gate-based computer with minimal overhead [1]. Yet, we believe there will always be use cases for analog devices that, instead of quantum gates, implement the simulated Hamiltonian directly. In the NISQ era, they are the only ones available at the system sizes of interest [2; 3], and in the future competition with the fault-tolerant gate-based approaches, they may still prove to be more economical. There are direct applications of such analog quantum simulators to the study of many-body physics in search for insights for material science [2], as well as the alternative computing approach where a physical system is driven to solve an abstract computational problem, best exemplified by quantum annealing and its application to binary optimization [3]. An obstacle on the path to those two applications is the inevitable difference between the hardware interaction graph of the quantum simulator and the desired interaction graph of the target system of interest. This obstacle can be circumvented by embedding the logical qubits of the target Hamiltonian into a repetition code in the hardware Hamiltonian [4; 5]. The performance of the quantum simulators after such an embedding suffers: the scaling of the time-to-solution of the embedded optimization problems becomes far worse [6] than that of the native ones [7], and the accessible range of transverse fields flipping the value stored in the repetition code becomes exponentially reduced [2] with the length of the repetition code. This has been a major obstacle to demonstrating a clear advantage of the analog quantum simulators on a problem of practical interest, despite large qubit numbers and a promising performance on the native problems [3]. We present a solution to the exponential decrease in performance with the length of the repetition code: instead of a ferromagnetic repetition code, one needs to use a paramagnetic chain in its ground state as the interaction mediator, together with a single well-isolated hardware qubit serving as a logical qubit. This idea has already appeared under the name of _paramagnetic trees_[8; 9], and here we provide a theoretical justification for this approach. We observe that such a mediator is a type of perturbative gadget [10], and analyze it via an exact version of perturbation theory: a Schrieffer-Wolff transformation [11]. 
Perturbative gadgets were previously used to implement a many-body interaction using only two-body terms [12]. Here we use them instead to implement a long-range two-body interaction using only nearest-neighbor two-body terms [9]. The mediator can be any physical system. We investigate several cases, focusing our attention on the transmission line with bosonic degrees of freedom, as all the relevant quantities can be found analytically. A fermionic or spin chain near its critical point would have worked just as well. We note that other methods [13; 14; 10] for the study of perturbative gadgets can provide better performance guarantees than the Schrieffer-Wolff transformation, but we choose to use it as its application is straightforward and it maintains the information about the basis change induced by the presence of the gadget. To the best of our knowledge, our result has not previously appeared in the literature. The works [15; 16] constructed a classical all-to-all system that would generally exhibit different quantum properties from the target system when the quantum terms are turned on. Schrieffer-Wolff has been applied to the circuit model of interactions between a pair of qubits [17], but not for long-range interactions or a large interaction graph. In Sec. II we define the problem of quantum simulation of an all-to-all coupling, and in Sec. III we present our solution to it: a physically realistic 2d layout of circuit elements on a chip. Our other results for variations of this problem are summarized in Sec. IV. Our method is a version of a perturbation theory introduced in Sec. V and proven in App. A. We illustrate its use in an example of the effect of non-qubit levels on a qubit quantum simulator in App. C, before stating in Sec. VI and proving in App. D the all-to-all gadget theorem at the center of this work. Sec. VII and App. H are the most practically relevant to the applications of our gadget on current quantum annealers such as D-Wave [18]. We discuss how to quantify the accuracy of a quantum simulator from the application perspective in App. B, and present an in-depth study of the circuits of our gadget in App. F.

## II Problem setting

The target qubit Hamiltonian of \(n\) qubits we wish to implement is the transverse field Ising model on arbitrary graphs \(G\) of degree \(2s\) that can be as big as the number of qubits \(n-1\) (such that the number of edges is \(ns\)): \[H_{\text{target}}=\sum_{ij\in G}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}+\sum_{i}h_{ i}\sigma_{i}^{z}+t_{i}\sigma_{i}^{x}. \tag{1}\] The fields and interactions \(h_{i},t_{i},J_{ij}\) can take any values in \([-1,1]\). Note that some of the \(J_{ij}\) can be \(0\), which means models that do not have a regular interaction graph can be cast into the form above. For the purposes of this work, the smallest nontrivial graph is the \(n=4\) complete graph with \(s=1.5\). We do not see the need for interaction gadgets for \(s=1\) graphs consisting of rings. Thus the range of \(n,s\) for this work is \(n\geq 4,\ s\geq 1.5\). By implementing Eq. (1) we mean that another quantum system will have all \(2^{n}\) levels of the quantum spectrum of Eq. (1) to some set precision. This faithful reproduction of the quantum spectrum of the desired Hamiltonian is the key difference from embedding methods [15; 16] that would only reproduce the \(\sigma^{z}\) part of \(H_{\text{targ}}\).
The implementation may also have additional levels above the \(2^{n}\) levels we use. We will propose several architectures that are possible on a chip, that is, in a 2d plane with elements that are qubits and circuit elements such as inductors and capacitors. It is not easy to formally define the set of allowed architectures in this lumped-element description. To justify the proposed architectures, we will draw parallels between them and the devices available today. The reason why the all-to-all coupling in Eq. (1) is impossible on a chip is that to implement it, the qubits on which \(H_{\text{target}}\) is defined would have to be extended objects, which would lead to the failure of the qubit approximation, as well as to uncontrollable levels of noise. Instead, we take inspiration from the idea of _paramagnetic trees_ [8; 9] where the qubits are well isolated and highly coherent, and the extended objects connecting different qubits are the mediators of interactions.

## III Analytic expressions for the gadget

We present a construction of a scalable paramagnetic tree [8; 9]. Specifically, we show that a circuit on a chip with a constant density of elements shown in Fig. 1 implements the desired all-to-all interaction (Eq. (1)) to a controlled precision. This circuit, illustrated in Fig. 1, contains \(n\) flux qubits and \(n\) transmission lines, each modeled as \(n\) segments of an inductance and a capacitance, with an extra inductance before closing the loop at the end.

Figure 1: (a) An explanation of the graphical notation for the circuit elements. Transmission lines are shown in blue, the black circles are flux qubits, and the black line indicates the inductive coupling between the two. The yellow lines are tunable inductive couplings between the segments of the transmission lines. (b) The layout for the analytic construction of the all-to-all gadget. Yellow couplings encode the desired values of the all-to-all coupling \(J_{ij}\) with a coefficient \(\alpha\chi^{-2}\).

A single harmonic oscillator can be associated with an LC circuit, and similarly, it can be shown (see App. F) that a transmission line is equivalent to the Hamiltonian of a chain of coupled harmonic oscillators: \[H_{ci}=\sum_{l=1}^{n}p_{ci,l}^{2}+2x_{ci,l}^{2}-2\sum_{l=1}^{n-1}x_{ci,l}x_{ci, l+1}+Z_{i}x_{ci,1}. \tag{2}\] Here the index \(i\) denotes which of the \(n\) transmission lines is considered. In our notation, \(Z_{i},\ X_{i}\) are Pauli operators on the \(i\)'th flux qubit, and \(x_{ci,l},p_{ci,l}\) are the canonically conjugate coordinate and momentum of the \(l\)'th harmonic oscillator of the \(i\)'th chain. The last term is responsible for the coupling of the \(i\)'th chain to its designated qubit. The degrees of freedom \(x_{ci,l}\) are such that the flux through the \(l\)'th inductor is \(x_{ci,l}-x_{ci,l-1}\), except for \(l=1\), where the flux coincides with \(x_{ci,1}\) so that we can couple the qubit to \(x\) directly. The coupling between such transmission lines is due to the mutual inductances, and the location in the \(j\)'th transmission line where it couples to the \(i\)'th transmission line is given by \[r_{j}(i)=\begin{cases}i,&i>j;\\ i+1,&i<j.\end{cases} \tag{3}\] Define the interaction term: \[I_{i,j}=x_{ci,r_{i}(j)}-x_{ci,r_{i}(j)-1}. \tag{4}\] Note that the number of harmonic oscillators in our model of the transmission line was set to \(n\) for this convenient definition of \(I_{i,j}\). More generally, since the length of the transmission line is \(\sim n\), the number of harmonic oscillators will be \(\sim n\) with some coefficient. That coefficient, together with the characteristic energy scales of the \(x^{2}\), \(p^{2}\), and \(Zx\) terms, needs to be informed by the hardware constraints outlined in App. F. Optimizing these parameters will give a prefactor improvement to the performance of our gadget, but we do not expect it to change the scaling we obtain below. We propose the following Hamiltonian \(H_{0}+V\) for our gadget: \[V=\alpha\sum_{i}(h_{i}Z_{i}+F^{-1}t_{i}X_{i}+\chi^{-2}\sum_{j>i}J _{ij}I_{i,j}I_{j,i}), \tag{5}\] \[H_{0}=\sum_{i}H_{ci}. \tag{6}\] The coefficients \(\alpha,F,\chi\) are the parameters of the gadget. The reduction factor \(\alpha\) corresponds to the reduction in the energy scale between the \(\sim 1\) terms in \(H_{\text{targ}}\) and the \(\sim\alpha\) terms of the effective Hamiltonian of our gadget. We assume that our implementation of \(H_{0}+V\) is imperfect, that is, we implement the Hamiltonian \(H_{0}+V+V_{n}\), and our control noise \(V_{n}\) is: \[V_{n}=\sum_{i}(\delta h_{i}^{c}Z_{i}+\delta t_{i}^{c}X_{i}+\delta_{zx_{i}}Zx_{ci,1}+ \tag{7}\] \[+\sum_{j>i}\delta f_{ij}I_{i,j}I_{j,i}+\sum_{l}\delta_{x,il}I_{i,l}). \tag{8}\] The individual errors are unknown, but their strength is characterized by \(\delta_{H,\text{loc}},\delta_{1},\delta\). Here \(\delta_{1}\geq|\delta_{zx_{i}}|\), \(\delta\geq|\delta h_{i}^{c}|,|\delta t_{i}^{c}|,|\delta f_{ij}^{c}|\), and \(\delta_{H,\text{loc}}\geq|\delta_{x,il}|\) is the local error of implementation of the transmission line \(H_{ci}\). The following statement specifies the values of these parameters that guarantee that the gadget fulfills the task: _Main result:_ The gadget effective Hamiltonian satisfies: \[\|H_{\text{eff}}-\alpha H_{\text{target}}\|\leq\alpha\epsilon ns\, \tag{9}\] if the error \(\epsilon\) and the control errors satisfy the inequality: \[\sqrt{2}n\delta_{H,\text{loc}}+\left(1+\sqrt{\ln n}\right)\delta_{1}+3\delta \leq\frac{0.01\epsilon^{2}}{n(n+1)^{5}}. \tag{10}\] We are free to choose any such \(\epsilon\), and we used the following values of the remaining parameters in our construction: \[\alpha_{o}=\frac{0.035\epsilon}{ns(n+1)^{5}}. \tag{11}\] The factor \(F\) is given via a sum: \[F^{-1}=\exp\!\left[\frac{1}{4(n+1)}\sum_{k=1}^{n}\frac{\cos^{2}\frac{k\pi}{2(n+1)}} {\sin\frac{k\pi}{2(n+1)}}\right]\leq e^{1/8}n^{1/4}. \tag{12}\] The extra factor for the interactions is: \[\chi=1/(n+1). \tag{13}\] The gap of \(H_{0}\) is \[\Delta=2\sin\frac{\pi}{2(n+1)}. \tag{14}\] This establishes theoretically that an all-to-all interaction of an arbitrary number \(n\) of qubits can be realized in 2D hardware at the cost of a polynomial \((1/n^{6})\) reduction in the interaction strength compared to the physical energy scale. Equivalently, to get unit interaction strength, the energy scale of the hardware should scale as \(n^{6}\). Moreover, the lowest \(2^{n}\) eigenvalues of the quantum spectrum match between the circuit Hamiltonian and the target, and the gap \(\sim\Delta\) separates them from the other eigenvalues. The rigorous meaning of the effective Hamiltonian is discussed in Sec. V and App. A. The control errors \(\delta_{H,\text{loc}},\delta_{1},\delta\) are required to be polynomially small as well (\(1/n^{7}\) for the elements of the transmission line, \(1/n^{6}\) up to logarithmic factors for everything else). The specific power of the scaling takes into account the chosen allowance for error \(\epsilon ns\), treating \(\epsilon\) as a constant.
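The closed-form parameters above are cheap to evaluate. The following sketch (ours, purely illustrative) computes Eqs. (11)-(14) and the control budget on the right-hand side of Eq. (10). Note that for \(n=40\), \(\epsilon=0.1\) this worst-case transmission-line budget evaluates to roughly \(2\cdot 10^{-14}\); the far milder five-digit figure quoted in the abstract corresponds to the more economical short-chain embeddings discussed in Sec. VII and App. H rather than to this bound.

```python
import numpy as np

def gadget_parameters(n, s, eps):
    """Evaluate Eqs. (10)-(14) for qubit count n, half-degree s, error eps."""
    alpha_o = 0.035 * eps / (n * s * (n + 1) ** 5)               # Eq. (11)
    theta = np.arange(1, n + 1) * np.pi / (2 * (n + 1))
    F_inv = np.exp(np.sum(np.cos(theta) ** 2 / np.sin(theta))
                   / (4 * (n + 1)))                              # Eq. (12)
    chi = 1.0 / (n + 1)                                          # Eq. (13)
    gap = 2.0 * np.sin(np.pi / (2 * (n + 1)))                    # Eq. (14)
    budget = 0.01 * eps ** 2 / (n * (n + 1) ** 5)                # Eq. (10), RHS
    return alpha_o, F_inv, chi, gap, budget

# gadget_parameters(40, 1.5, 0.1)[-1] is about 2.2e-14, illustrating the
# n^{-6} scaling; F^{-1} grows only as (n+1)^{1/(2*pi)}, cf. Eq. (16) below.
```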
The motivation for allowing this extensive error and the initial comparison with the gate-based approach to quantum simulation are presented in App. B. If, instead, we require a constant global error \(\|H_{\text{eff}}-\alpha H_{\text{target}}\|\leq\alpha\epsilon_{G}\), we can use \(\epsilon=\epsilon_{G}/ns\) in the inequalities of this paper to obtain the corresponding control precision requirements. For this gadget, one obtains \(n^{-9}\) and \(n^{-8}\) for the respective \(\delta\)'s. The powers of \(n\) in our rigorous result can also be obtained by the following back-of-the-envelope calculation. Each mediator is a distributed circuit element with \(n\) effective degrees of freedom. The linear response \(\chi\) of the ground state to a qubit attached to its end will be distributed evenly as \(1/n\) at each of the degrees of freedom. Since each interaction between qubits involves four elements, qubit-mediator-mediator-qubit, the interaction strength between two mediators needs to be \(\chi^{-2}\) times higher than its target value for qubits. We use the reduction factor \(\alpha\) to get into the range of applicability of the perturbation theory, s.t. the magnitude of the perturbation \(V\) can be estimated as \(\alpha\chi^{-2}sn\). Even without control errors, the second order of the perturbation theory \(\sim V^{2}/\Delta\) needs to be within our error budget \(\alpha\epsilon ns\). Plugging in \(V\to\alpha\chi^{-2}sn\), we obtain: \[\alpha\sim\frac{\Delta\chi^{4}\epsilon}{ns}. \tag{15}\] For a constant \(\Delta\) the response of most 1d mediators decays exponentially, so it is optimal to take \(\Delta\sim 1/n\) to get the response \(\chi\sim 1/n\), which leads to \(\alpha\sim 1/n^{6}\). With that, the error budget becomes \(\sim\epsilon^{2}/n^{5}\), and the control errors \(n^{2}\delta_{H,\text{loc}}+n(\delta_{1}+\delta)\) (estimated by counting the number of terms) need, at the very least, to be smaller than the error budget, resulting in \(\delta_{H,\text{loc}}\sim 1/n^{7}\) and \(\delta_{1},\delta\sim 1/n^{6}\). The main result of our work is making this back-of-the-envelope calculation rigorous and obtaining an analytic expression for the required controls. Note that while the expression for \(F\) is not analytically computable, there is a sequence of approximate analytic expressions for it that correspond to progressively smaller errors in \(t^{*}\). This error becomes smaller than \(\epsilon\) for some order of the analytic expression, or we can numerically compute \(F\) and get the exact value of \(t^{*}\) for that \(n\). Numerical investigation [19] shows that: \[c_{l}(n+1)^{1/2\pi}\leq F^{-1}\leq c_{u}(n+1)^{1/2\pi}\, \tag{16}\] \[c_{l}=e^{\frac{(\gamma-1-\sin(\pi/4))}{2\pi}}\approx 0.9716\, \tag{17}\] \[c_{u}=\frac{e^{\frac{1}{8\sqrt{2}}}}{2^{\frac{1}{2\pi}}}\approx 0.9783. \tag{18}\] Using either the left or the right bound instead of the exact expression for \(F^{-1}\) introduces only \(<1\%\) relative error in \(t^{*}\). So if \(\epsilon>0.01\), using the approximate analytic expression won't significantly change the overall error.

## IV List of other results

* First, as a warm-up exercise, we use our machinery to estimate the effect of the non-qubit levels present in every implementation of a qubit quantum simulator. We seek to reproduce the quantum spectrum of a problem native to the hardware graph for this example. Unlike the other problems studied in this work, no interaction mediators are involved.
For a hardware implementation with \(n\) qubits and a graph of degree \(2s\), let \(\delta\) be the usual control errors, \(r\) the norm of the term in the Hamiltonian connecting to the third level, and \(\omega_{p}\) the gap to the non-qubit levels. For the precise definitions, see App. C. As long as \(\omega_{p}\leq 32n(s+2)\), the best solution we found requires \(\delta=O(1/n)\) for any \(r\in[0,1]\). The dependence on \(\epsilon\) for \(r\geq\frac{16s}{7(2+s)}\) is \(r\delta=O(\epsilon^{2}/n)\). For the complete expressions and the solutions found for other values of \(\omega_{p},r\), see App. C. For realistic values of the parameters, we find that a rigorous reproduction of the quantum spectrum with \(\epsilon=0.1\) accuracy requires three digits of control precision \(\delta\leq 0.8\cdot 10^{-3}\) for \(n=4\) qubits and four digits of control precision \(\delta\leq 0.8\cdot 10^{-4}\) for \(n=40\).

* We also prove a general theorem (see Sec. VI) applicable for any mediators defined by their Hamiltonians \(H_{m,i}\) and their coupling to the qubits \(Z_{i}I_{m,i}\), as well as to other mediators \(I_{i,j}\). It is also applicable to any control errors as long as \(\|P\delta H_{m,i}\|\leq\delta_{H}\), \(\|P\delta I_{m,i}\|\leq\delta_{I}\), where \(P\) is the projector onto a \(2^{n}\)-fold degenerate ground state subspace of \(\sum_{i}H_{m,i}+X_{i}I_{m,i}\). We sometimes omit the index \(i\) when working with an individual mediator. The direct consequences of the theorem are, besides the above result for a transmission line, two simpler results for a qubit mediator and an LC circuit mediator presented in App. E:

* The qubit case is the simplest possible case, where each qubit of our quantum simulator is coupled to a qubit coupler as follows: \[H_{m}=\sqrt{1-J^{2}}X_{qc},\quad I_{m}=JZ_{qc}\. \tag{19}\] The qubit couplers are extended objects that have small mutual inductances where they overlap: \[V_{c}=\sum_{i>j}f_{ij}I_{i,j}I_{j,i},\quad I_{i,j}=Z_{qc,i}\. \tag{20}\] This is inspired by the Chimera and Pegasus architectures of D-Wave [20], with the only difference that here the qubits are only connected to one coupler each, while each coupler is coupled to \(2s\) other couplers. We obtain the following relationship between the control precision and the target precision: \[\delta\leq\max_{J}\frac{(s\epsilon)^{2}\,0.95\,(3+\sqrt{1-J^{2}}+sJ^{2})^{-1}}{12 \cdot 7n\left(1+(1-J^{2})^{-1/2}+sJ^{-2}\right)^{2}}\, \tag{21}\] where \(\delta\) is the control precision of all the qubit and qubit coupler parameters.

* We also consider a harmonic oscillator (LC-circuit) mediator. Define the Hamiltonian of each mediator: \[H_{m}=a^{\dagger}a,\quad I_{m}=J(a+a^{\dagger})\, \tag{22}\] and \(I_{i,j}=a_{i}+a_{i}^{\dagger}\) independent of \(j\). The errors \(\delta_{H},\delta_{I}\) defined in the theorem and the control errors \(\delta\) limiting the terms in \(V\) are related to the target precision as follows: \[\delta_{H}+\delta_{I}+\delta(1+e^{-2J^{2}}+s(2J)^{2})\leq \tag{23}\] \[\leq\frac{0.9936(s\epsilon)^{2}}{12\cdot 7n(1+e^{2J^{2}}+s(\frac{1}{ 2J}+1)^{2})^{2}}\. \tag{24}\] We can vary \(J\in[0,1]\) to find the best values of the \(\delta\)'s. We see that the \(\delta\)'s are still \(\sim 1/n\), which means the massive increase in the power of \(n\) is due to the distributed nature of the transmission line, not due to the difference between linear (LC) and nonlinear (qubit) elements.

* transmons, so a direct comparison of control precision is not available). The details of this calculation can be found in Appendix

* Finally, in Sec.
VII we discuss the application of our gadget to quantum annealing. We present the schedules required to operate our all-to-all gadget, concluding that the minimal required adjustment to the current capabilities of the D-Wave [18] is to allow for a third, constant anneal schedule on some of the terms. We also demonstrate how the minimal gap along the anneal of a commonly used minor embedding [4; 5] method decreases exponentially with the length of the chains \(k\) used in the embedding. In contrast, our method sees only a polynomial decrease in \(k\). The prefactors are such that our method is advantageous already for \(k=4\). We believe this approach will bridge the gap between the D-Wave performance on the native graph problems [7] and the highly-connected application-relevant problems [6]. ## V Perturbation Theory Used Let \(H_{0}\geq 0\) be a Hamiltonian over possibly infinite-dimensional Hilbert space, and choose the energy offset such that its (possibly degenerate) ground state has energy \(0\). Let \(0\) be an isolated eigenvalue of the spectrum of \(H_{0}\), separated by a gap \(\Delta\) from the rest of the spectrum. Denote the projector onto the finite-dimensional ground state subspace as \(P\), s.t. \(PH_{0}=0\). We will formulate a version of degenerate perturbation theory with explicit constants in the bounds on its applicability and accuracy. Allow the perturbation \(V\) to have unbounded operator norm (\(\|V\|=\infty\) is allowed). We will need another constraint to separate physical \(V\)'s from unphysical ones. We define a custom norm \(\|V\|_{c}\) for all operators \(V\) to be the smallest number s.t.: \[-\|V\|_{c}(1+H_{0})\leq V\leq\|V\|_{c}(1+H_{0}). \tag{25}\] Here \(1\) is the identity operator. Instead of the exact value \(\|V\|_{c}\), we will use its upper bound: some value \(v\) s.t. we can prove \(v\geq\|V\|_{c}\). More details on this norm can be found in App. A. Define the adjusted gap \(\Delta_{V}=\Delta-v(1+\Delta)\) and the projector \(Q=1-P\). We will use the following perturbation theory result: **Lemma 1**.: (properties of SW, simplified) _For any \(H_{0}+V\) as above, such that \(\Delta_{V}>0\) and \(\|PV\|/\Delta_{V}<1/32\), the following holds. There exists a rotation \(U_{SW}\) that makes the Hamiltonian block-diagonal_ \[U_{SW}(H_{0}+V)U_{SW}^{\dagger}=H_{SW}=PH_{SW}P+QH_{SW}Q. \tag{26}\] _The low-energy block is approximately \(PVP\):_ \[\|P(H_{SW}-V)P\|\leq 7\|PV\|^{2}/\Delta_{V}. \tag{27}\] While many rotations satisfy the above, \(U_{SW}\) possesses an additional property of being close to an identity (a bound \(\|U_{SW}-1\|=O(\|PV\|/\Delta_{V})\) is given in App. A), which means the physical measurements are close to the measurements done in the basis defined by \(U_{SW}\). We will interpret \(PH_{SW}P\) as the effective Hamiltonian in the subspace corresponding to \(P\). For a special case of a finite-dimensional Hamiltonian \(H_{0}+V\), one can use a simpler statement without requiring Eq. (25): **Lemma 2**.: (finite-dimensional case, simplified) _For any \(H_{0}\) and \(V\), let \(P\) be the projector onto the ground state subspace of \(H_{0}\). Let the ground state of \(H_{0}\) be separated by a gap \(\Delta\) from the rest of the spectrum, and shift the energy s.t. \(PH_{0}=0\). If \(\|V\|/\Delta<1/16\), the first order degenerate perturbation theory for states in \(P\) has the following error:_ \[\|P(H_{SW}-V)P\|\leq 3.5\|PVQ\|\|V\|/\Delta\leq 3.5\|V\|^{2}/\Delta. 
\tag{28}\] The statements of the finite-dimensional Lemma closely follow the results of [11]. We present a more detailed statement and proof of both in App. A. Though we formulated the perturbation theory for the case of \(PH_{0}=0\), these lemmas can be straightforwardly generalized to non-degenerate eigenvalues. Following [11], it is also possible to extend it to higher orders in \(V\) for finite-dimensional systems. We are unaware of a simple way to obtain higher orders in \(V\) for infinite-dimensional systems.

## VI Statement of the General Theorem

Consider the Hamiltonian \(H_{0}+V\), where: \[H_{0}=\sum_{i}H_{m,i}+Z_{i}I_{m,i}\, \tag{29}\] with the ground state subspace of states \(|g_{b},b\rangle\) labeled by a string \(b\) of \(\pm 1\) describing the corresponding qubit computational basis state. The projector onto the ground state subspace is \(P=\sum_{b}P_{b}P_{g_{b}}\). The perturbation is: \[V=\sum_{i}h_{i}^{c}Z_{i}+t_{i}^{c}X_{i}+\delta H_{m,i}+Z_{i}\delta I_{m,i}+ \sum_{i>j}f_{ij}I_{i,j}I_{j,i}. \tag{30}\] Here an operator \(I_{i,j}\) acts on mediator \(i\) and is responsible for the interaction with mediator \(j\). In the simple case of a qubit coupler or an LC circuit, \(I_{i,j}\sim I_{m,i}\) is independent of \(j\). Generally, we assume that for every mediator, the operators \(H_{m,i},I_{i,j},I_{m,i}\) have a symmetry \(S_{i}\) such that \(S_{i}H_{m,i}S_{i}^{\dagger}=H_{m,i}\) and \(S_{i}IS_{i}^{\dagger}=-I\) for all interaction operators \(I\) in the \(i\)'th mediator. We will use the gap of \(H_{0}\) denoted as \(\Delta\) (each \(H_{m,i}\pm I_{m,i}\) has the same gap) and its adjusted version \(\Delta_{V}=\Delta-v(1+\Delta)\) that depends on the chosen \(V\). We define the errors \(\delta\): \[\forall i:\quad\|P\delta H_{m,i}\|\leq\delta_{H},\quad\|P\delta I_{m,i}\|\leq \delta_{I}. \tag{31}\] Note that \(\delta_{H}\) and \(\delta_{I}\) are potentially nontrivial functions of \(n\). Determination of the quantity \(v\) in \(\Delta_{V}=\Delta-v(1+\Delta)\) will also require \(\|\delta H_{m,i}\|_{c},\ \|\delta I_{m,i}\|_{c}\) defined in Eq. (25) to be finite, but these norms will only appear in the following theorem through \(\Delta_{V}\). The parameters \(h_{i}^{c},t_{i}^{c},f_{ij}\) of the perturbation are considered to be implemented imprecisely, with the error \(\delta h=\delta t=\delta f=\delta\). For simplicity, we assume that their error never increases their magnitude beyond the maximum possible exact value within the context of our construction, so that we can use the exact expression for \(V\) in the second order of the error bound in App. D. Moreover, we consider the scenario where the graph is fabricated to match the degree \(2s\) graph of the specific problem, and it is possible to have the other couplings exactly \(0\) with no control error. This is the most optimistic expectation of the hardware since our architecture has every pair of mediators crossing each other, and realistically there would be some cross-talk. We will comment on the behavior in the realistic case at the end of App. D. The intermediate functions we use are as follows: \[\chi_{i,j}=\langle g_{b_{i}}|I_{i,j}|g_{b_{i}}\rangle|_{b_{i}=1}\,\quad\|PI_{i,j}P\|=|\chi_{i,j}|\, \tag{32}\] \[\|PI_{i,j}\|\leq i_{i,j}\,\quad F=\langle g_{b_{i}=1}|g_{b_{i}=-1} \rangle. \tag{33}\] Here \(i_{i,j}\) is any upper bound on \(\|PI_{i,j}\|\). One such bound can be derived as \(i_{i,j}=|\chi_{i,j}|+i_{m}\): \[\|PI_{i,j}\|\leq|\chi_{i,j}|+\|PI_{i,j}Q\|\,\quad\|PI_{i,j}Q\|\leq i_{m}\,\] where \(i_{m}\) is any upper bound on \(\|PI_{i,j}Q\|\).
_Theorem:_ For any \(\epsilon\leq 7/16\), choosing the parameters of the gadget as \(h_{i}^{c}=\alpha_{o}h_{i},\ t_{i}^{c}=\alpha_{o}F^{-1}t_{i},\ f_{ij}=\alpha_{o }J_{ij}^{*}/\chi_{i,j}\chi_{j,i}\) with the reduction factor \(\alpha_{o}\): \[\alpha_{o}=\frac{s\epsilon\Delta_{V}}{3\cdot 7n(1+F^{-1}+s\ \text{max}\ \frac{i_{i,j}i_{j,i}}{|\chi_{i,j}\chi_{j,i}|})^{2}}\, \tag{34}\] ensures that the error is rigorously bounded as \(\|H_{\text{targ}}-H_{\text{eff}}\|\leq\epsilon ns\) (for \(H_{\text{eff}}\) in the logical basis defined via the SW transformation; the bound on how close it is to the qubit computational basis can be obtained using the Lemma in App. A) as long as the following inequalities are satisfied by some choice of \(v\): \[\delta_{H}+\delta_{I}+\delta(2+s\ \text{max}\ |\chi_{i,j}\chi_{j,i }|)\leq \tag{35}\] \[\leq\frac{\Delta_{V}(s\epsilon)^{2}}{12\cdot 7n(1+F^{-1}+s\ \text{max}\ \frac{i_{i,j}i_{j,i}}{|\chi_{i,j}\chi_{j,i}|})^{2}}\,\] (36) \[\pm V\leq v(1+H_{0}). \tag{37}\] In practice, we will always be able to prove that the correction to \(\Delta\) is subleading, i.e., for the purposes of scaling, one may think of \(\Delta_{V}\) as \(\Delta/2\). We prove the theorem in App. D, and present a version of the theorem with an explicit choice of \(v\) in App. D.1.

## VII Comparison with Minor Embedding

For theory applications, it is sufficient that the control errors scale polynomially with the system size \(n\). For practical applications, the \(n^{-6}\) scaling of control errors required for the transmission line construction is unrealistic. We note that this scaling results from building a complete graph of extended mediators. For intermediate \(n=40\ldots 100\), there are more economical hardware graphs that effectively host a wide range of fixed degree \(2s\) random problem graphs. The Chimera and Pegasus architectures implemented in D-Wave [20] are prime examples of such graphs. Our construction applies to the following practical cases: (i) quantum simulation of an \(n=40\ldots 100\) system that requires a faithful reproduction of the quantum spectrum, for which we derive the bound on the control errors for a specific example of an \(n=40\), degree \(2s=4\) random graph in App. H; (ii) optimization of a classical \(n=100\ldots 1000\) problem that D-Wave was originally intended for, which is the focus of this section. In both cases, our construction enables a boost in performance compared to the existing method, colloquially referred to as _minor embedding_. For both, we need an embedding: an association between groups of qubits of the hardware graph and individual qubits of the problem graph, such that for interacting problem qubits, there is at least one interaction between the two corresponding groups in the hardware. In the case of minor embedding, hardware qubits within a group are used as a classical repetition code for the corresponding problem qubit, which we discuss in more detail later in this section. In our construction, there is an extra step where one qubit of a group is selected as a problem qubit, while the other qubits within that group are used as the mediator for that problem qubit. This, in principle, allows designing an architecture where the selected problem qubits have better coherence properties than the qubits used as mediators, at the cost of less flexibility during the embedding stage. For our estimates here, we will assume that all qubits are the same, as is the case in the current hardware.
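As a usage illustration (the function names are ours), the theorem's prescription reduces to evaluating two scalar formulas once the mediator quantities \(\chi\), \(i\), \(F\) and the adjusted gap \(\Delta_{V}\) are known; for scaling estimates one may substitute \(\Delta_{V}\approx\Delta/2\), as noted above.

```python
def reduction_factor(n, s, eps, delta_v, F_inv, ratio_max):
    """Eq. (34); ratio_max = max over couplings of
    i_{i,j} * i_{j,i} / |chi_{i,j} * chi_{j,i}|."""
    return s * eps * delta_v / (3 * 7 * n * (1 + F_inv + s * ratio_max) ** 2)

def control_budget(n, s, eps, delta_v, F_inv, ratio_max):
    """Right-hand side of Eq. (36): the allowed combined control error."""
    return delta_v * (s * eps) ** 2 / (12 * 7 * n * (1 + F_inv + s * ratio_max) ** 2)
```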
Let us first describe how to apply the result of our general theorem in practice. There is one straightforward generalization that our theorem needs: not all qubits will need a mediator, as some can be connected directly to all of their problem graph neighbors. Thus the couplings have three types: qubit-qubit, qubit-mediator, and mediator-mediator. The values of susceptibility \(\chi_{i,j}\) can be computed just by considering one connected component of the mediator (there may be noninteracting parts of one mediator), while the value of the overlap \(F\) requires considering all connected components of the mediator of the qubit in question. These computations are still feasible classically for the problem sizes we consider since the individual chain length (size of the group associated with one logical qubit) of the embedding stays within the exact diagonalization range. Even beyond that range, a method such as DMRG [21] can provide the values of \(\chi_{i,j}\) and \(F\). The hardware couplings for qubit-qubit, qubit-mediator, and mediator-mediator cases are set respectively to: \[f_{ij}=\{\alpha J_{ij},\quad\alpha J_{ij}/\chi_{i,j},\quad\alpha J_{ij}/(\chi_{ i,j}\chi_{j,i})\}. \tag{38}\] Only the last term previously appeared in our theorem. The other terms, such as the transverse field, are unchanged and still contain the appropriately defined overlap \(F\). According to our theorem, such a construction will work for sufficiently small control noise, giving a precision \(\epsilon\) as a function of control noise, assuming an appropriate choice of \(\alpha\). Knowing the control noise, one can estimate the range of possible \(\epsilon\) and the \(\alpha\) required for them by our theorem. In practice, it is expected that the inequalities in our theorem are not satisfied for the hardware control noise for any \(\epsilon<1\), and we have no guarantees on the gadget's performance. As our bounds are not tight, and \(\alpha\) is a free parameter, we argue that choosing it according to the formula with some arbitrary \(\epsilon^{\prime}>1\) may still demonstrate the physical effects of interest for the case of quantum simulation, or boost the success of optimization. To push the gadget to the limits of its performance, we note that the expression for the allowed control errors as the functions of \(\epsilon\) depends on the internal parameters of the gadget and can be maximized with respect to them. The optimal values obtained can be used for all \(\alpha\), including those outside the guaranteed performance region. We note that this parameter optimization only requires simulating a single mediator, not the whole gadget, which means it can be performed classically. For applications to optimization problems via quantum annealing, our method suggests a new schedule for controlling the device parameters. Let us use our method to implement the quantum spectrum of the traditional anneal schedule faithfully: \[H(s)=A(s)\sum_{i}X_{i}+B(s)(\sum_{i}h_{i}Z_{i}+\sum_{ij}J_{ij}Z_{i}Z_{j}). \tag{39}\] We note that this doesn't mean the effective Hamiltonian of the dynamics is as above since the geometric terms due to rotation of the effective basis need to be included, for which we refer to Sec. VI of our recent work on adiabatic theorem [22] and leave further developments to future work. We, however, have a guarantee on the spectrum at every point, thus on the minimal gap along the anneal. 
According to our method, the hardware Hamiltonian is \(H_{0}+V\), where: \[H_{0}={\sum_{i}}^{\prime}H_{m,i}+JZ_{i}Z_{m,q(i)}. \tag{40}\] Here the sum is over the qubits that have mediators, \(q(i)\) is the point of attachment of the qubit to the mediator, and \(H_{m,i}\) is some Hamiltonian on the coupler qubits that can in principle be optimized; for simplicity, we can take \(H_{m,i}=J^{*}\sum_{i,j\in m}Z_{m,i}Z_{m,j}+\sum_{i\in m}X_{m,i}\), where \(J^{*}\) corresponds to the approximate location of the critical point for this finite-size transverse field Ising model. In particular, if the mediator is a chain or a collection of chains, then \(J^{*}=1\). The perturbation is: \[V=\sum_{i}\alpha(s)(B(s)h_{i}Z_{i}+F_{i}^{-1}A(s)X_{i})+\sum_{i>j}(fZZ)_{i,j}. \tag{41}\] Here \(f_{i,j}\) is given by \[f_{ij}=B(s)\alpha(s)\{J_{ij},\quad J_{ij}/\chi_{i,j},\quad J_{ij}/(\chi_{i,j} \chi_{j,i})\}\, \tag{42}\] depending on the coupling type. The \((fZZ)_{i,j}\) is a shorthand notation for a weighted sum of the various couplings between qubits \(i\) and \(j\) or the coupler qubits in their respective mediators. The weights in the sum weakly affect the bound on \(\|PV\|\) that is used for our theorem and can thus be optimized. Intuitively, we always prefer to use direct couplings instead of mediators whenever possible. We observe that the following separate schedules are required for the \(X\), \(ZZ\), and \(Z\) terms: \begin{tabular}{|c|c|c|} \hline & problem & mediator \\ X & \(\alpha(s)F_{i}^{-1}A(s)\) & 1 \\ ZZ & \(\alpha(s)B(s)\{J_{i,j},J_{i,j}/\chi_{i,j}\ldots\}\) & \(J,J^{*}\) \\ Z & \(\alpha(s)B(s)h_{i}\) & 0 \\ \hline \end{tabular} We see that the mediator qubit controls must be kept constant while the problem experiences an anneal schedule. The transverse field controls generally have different overlap factors in front of them, but if the hardware constrains them to be the same, the change in the anneal schedule of the effective Hamiltonian is not substantial: \[H(s)=A(s)\sum_{i}F_{i}X_{i}+B(s)(\sum_{i}h_{i}Z_{i}+\sum_{ij}J_{ij}Z_{i}Z_{j}). \tag{43}\] That reduces the number of independent schedules to 3: \(\alpha(s)A(s)\), \(\alpha(s)B(s)\), and 1. As \(\alpha(s)\) is a free parameter in our construction that determines which error \(\epsilon\) we can guarantee, we can set \(\alpha(s)=\)const for simplicity. This highlights that the only capability missing from the current D-Wave devices is holding some of the \(X\) and \(ZZ\) terms constant throughout the anneal. For some polynomially small \(\alpha\) and control errors that satisfy our theorem, we guarantee that our construction preserves the polynomially small features of the spectrum. In particular, a polynomially small minimal gap above the ground state along the anneal is preserved by this construction, albeit polynomially reduced. As we will see below, the traditional minor embedding, in general, makes that gap exponentially small in the size of the mediator. We note that using our scheme for optimization also has a disadvantage: the final classical effective Hamiltonian at the end of the anneal has its energy scale reduced by a polynomially small factor of \(\alpha\). It only has an extensive error \(\epsilon\) for a polynomially small control noise. We lose all guarantees on the error past a certain system size for a constant control noise. In contrast, minor embedding retains an extensive error of the ground state of the classical Hamiltonian at the end of the anneal, even for a constant control noise.
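A small sketch (ours) of how the coupling assignment of Eq. (42) might be programmed; the susceptibilities `chi_i`, `chi_j` are assumed to be precomputed per attachment point, e.g., by exact diagonalization or DMRG as discussed above.

```python
def hardware_coupling(J_ij, kind, B_s, alpha_s, chi_i=None, chi_j=None):
    """Eq. (42): scale the problem coupling J_ij by B(s)*alpha(s) and by the
    inverse susceptibilities, depending on the coupling type."""
    scale = B_s * alpha_s
    if kind == "qubit-qubit":
        return scale * J_ij
    if kind == "qubit-mediator":
        return scale * J_ij / chi_i
    if kind == "mediator-mediator":
        return scale * J_ij / (chi_i * chi_j)
    raise ValueError(f"unknown coupling type: {kind}")

# Mediator-internal terms follow the constant schedules of the table above:
# transverse fields at 1, internal ZZ at J*, and qubit-mediator ZZ at J.
```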
We expect a tradeoff between the errors due to non-adiabatic effects and the errors of the implementation of the effective Hamiltonian to result in an optimal schedule that uses some combination of the two schemes. In minor embedding, a repetition code is used for each qubit, and the field \(X\) is applied with the same schedule \(A(s)\) everywhere. The repetition code is enforced by \(B(s)ZZ\) terms (the largest allowed scale in the problem), while the problem interactions and longitudinal fields are all reduced as \(B(s)J_{i,j}/M\) and \(B(s)h_{i}/M\), where \(M\) is a free parameter. The longitudinal fields and, when possible, the problem interactions are distributed between the hardware qubits representing one problem qubit. We note that minor embedding does not adjust the coupling depending on the location; thus, there are no factors of \(\chi\) in the hardware Hamiltonian, in contrast with our construction. For a special case where the factors of \(\chi\) are always the same in our construction, minor embedding becomes a special case of our construction at each \(s\), with an \(s\)-dependent factor \(M\). Our construction corresponds to \(M\leq 1\) since the hardware qubits for each individual problem qubit are in a paramagnetic state. We believe that when extended to \(k\)-qubit chains, the advantage of our paramagnetic gadget vs. the ferromagnetic repetition code is exponential in \(k\). For instance, the minimal gap of the logical problem will experience only polynomial in \(k\) reduction for our method, while the reduction will be exponential in \(k\) for minor embedding. While a naive extension of our perturbative results into the non-perturbative regime can be done by just increasing \(\alpha\) as described above, it is essential to push the gadgets to the limit of their performance. We investigate this for \(n=3,\ k=1,2,3,4\) when both the gadget and the system are only allowed to have terms limited in magnitude (\(|h|,|t|,|J|\leq 1\)), and the geometry is fixed as a ring of \(3k\) hardware qubits: which schedule on the gadget and the system leads to the best minimal gap? We use the minor embedding schedules to compare our results. For the method outlined above, an improvement over minor embedding is seen in Fig. 2 for \(k>3\). The code producing these results can be found in [19]. We note that interpolation between the two methods will likely produce even better improvement. For this example, we only optimized \(\alpha\) and kept \(J,J^{*}=1\). A full optimization will also likely improve these results. Here the optimization involved full system simulation, but we believe the mediator optimized for a collection of small examples like this will still perform well when used as a building block in a large \(n\) system. Such a generalization must, however, be wary that a high enough system scale \(\alpha\) (or \(M^{-1}\) for minor embedding) can change the ground state at the end of the anneal. In our example, the ground state was preserved well above the optimal values of \(\alpha\) and \(M^{-1}\). ## VIII Conclusions We have proven that a physical system can be an accurate quantum simulator. Specifically, we first made sure that the proposed architecture is realistic: it is a 2d layout with a fixed density of elements, and the elements we use are the standard building blocks of superconducting circuits today. We then presented rigorous proof that an all-to-all system is accurately simulated for all system sizes \(n\). 
The geometry of its interaction graph can be infinitely more complicated than 2d or 3d space, yet the low energy physics of our quantum simulator on a chip will reproduce it accurately. While the scaling of the required control errors \(n^{-6}\) is very costly, and there are likely practical limits to a control precision of a physical system, there are no immediate fundamental limits on it. Future theory work may rely on our construction whenever a low-energy model with complicated geometry is needed to exist in a 3d world. We studied our gadgets and perturbation theory in the context of superconducting qubits. However, the theorem we prove is more general: any type of qubit used in quantum simulators can be connected to a faraway qubit perturbatively using mediators, and our theorem will describe the highly connected limit of that system. In the current D-Wave architecture, relatively short chains can already embed large all-to-all graphs that are intractable classically. Other types of hardware for quantum simulation may be even more efficient than D-Wave for this task. Coupling via the transmission line has yet to be scaled to a large number of qubits, but we already have a promising demonstration of using qubits as couplers. We propose a minimal schedule adjustment needed for that: some of the terms are to be kept constant during the anneal.

Figure 2: Minimal gap along the anneal for a 3-qubit problem embedded in a ring of \(3k\) hardware qubits, with chains of length \(k\) for minor embedding and mediators of length \(k-1\) for our construction. We see that the minimal gap of minor embedding decreases exponentially with \(k\), and our construction is advantageous for \(k>3\). Inset: the overall problem energy scale also decreases for both constructions. Here we plot the optimal values of the problem energy scale used for the minimal gap plotted in the main plot.

Our method is expected to close the performance gap between native and application problems for quantum optimization, opening the way for quantum advantage on the latter. Another fruitful direction is to benchmark a variant of the Chimera and Pegasus graphs where the distinction between qubits and qubit couplers is fixed at fabrication and to propose better graphs with more economical embeddings in this setting. A surprising result of this work is that there is no apparent difference in performance between linear (bosons with a quadratic Hamiltonian) and nonlinear mediators (qubit couplers). Investigating it further is a promising direction for future work, along with improving the scaling and the value of the required control precision. The next step in developing all-to-all gadgets is to investigate qubit chain mediators, which are most likely the simplest to implement experimentally. It is an important future theoretical milestone to obtain a specification on circuit parameters required for qubit couplers. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112190071. Approved for public release; distribution is unlimited.
2305.06699
New results on MS-Lipschitz summing operators
This paper focuses on the study of MS-Lipschitz p-summing operators, which were initially defined by the authors in [14]. Our objective is to establish relationships between a Lipschitz mapping T and its linearizations. Additionally, we extend our investigation by introducing a new definition in the category of Lipschitz mappings defined on metric spaces, known as MS-Cohen Lipschitz p-summing. We provide several results and characterizations for this new concept.
Maatougui Belaala, Athmane Ferradi, Khalil Saadi
2023-05-11T10:16:21Z
http://arxiv.org/abs/2305.06699v2
# New results on MS-Lipschitz summing operators ###### Abstract. This paper focuses on the study of MS-Lipschitz \(p\)-summing operators, which were initially defined by the authors in [14]. Our objective is to establish relationships between a Lipschitz mapping \(T:X\to Y,\) where \(X\) and \(Y\) are pointed metric spaces, and its linearizations \(\widehat{T},\) \(\widetilde{T}\) and \(T^{\#}\) for certain notions of summability. Moreover, we extend our investigation by introducing new definitions in the category of Lipschitz mappings defined on metric spaces, known as MS-strongly Lipschitz \(p\)-summing and MS-Lipschitz \(p\)-nuclear. We provide several results and characterizations for these new concepts. Key words and phrases: Strictly Lipschitz \(p\)-summing; MS-Lipschitz \(p\)-summing; Lipschitz \(p\)-summing operators; MS-Lipschitz \(p\)-nuclear; \(p\)-summing operators; strongly \(p\)-summing operators; Pietsch factorization; factorization theorems ## 1. Introduction and preliminaries Throughout, \(Y\) and \(W\) denote pointed metric spaces, \(Y^{\#}=Lip_{0}\left(Y\right)\) is the space of Lipschitz functions vanishing at the base point, \(\mathcal{F}\left(Y\right)\) is the Lipschitz-free space over \(Y\) with its canonical isometric embedding \(\delta_{Y}:Y\to\mathcal{F}\left(Y\right)\), and \(\widetilde{T}:\mathcal{F}\left(Y\right)\to\mathcal{F}\left(W\right)\) is the linearization of a Lipschitz operator \(T:Y\to W\). Given \(x,y\in Y\), a Banach space \(G\) and \(h\in G\), one considers the linear functional \(\delta_{\left(x,y\right)}\boxtimes h\) on \(Lip_{0}\left(Y,G^{\ast}\right)\), where \(G^{\ast}\) denotes the topological dual of \(G\). The functional is defined as follows: For any \(s\in Lip_{0}\left(Y,G^{\ast}\right)\), \[\delta_{\left(x,y\right)}\boxtimes h\left(s\right)=\left\langle s\left(x\right)-s\left(y\right),h\right\rangle.\] When \(\mathfrak{n}\in\mathcal{F}\left(Y\right)\) is expressed as \(\mathfrak{n}=\sum_{j=1}^{m}\delta_{\left(x_{j},y_{j}\right)}\), we have the following for every \(s\in Y^{\#}\) \[\left\langle\mathfrak{n},s\right\rangle=\sum_{j=1}^{m}s(x_{j})-s(y_{j}).\] For a more detailed understanding of the properties of the space \(Y\boxtimes G\), we refer to [1]. Now, let's consider a Lipschitz operator \(T:Y\to W\) between pointed metric spaces, and \(z=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes f_{l}\in Y\boxtimes W^{\#}.\) The action of \(T\) on \(z\) is given by \[\left|\left\langle T,z\right\rangle\right|=\left|\sum_{l=1}^{k}f_{l}\left(T\left(x_{l}\right)\right)-f_{l}\left(T\left(y_{l}\right)\right)\right|.\] Let \(G\) be a Banach space. In the following definitions, \(B_{G}\) represents the closed unit ball of \(G\), and \(G^{\ast}\) denotes its (topological) dual.
For \(1\leq p\leq\infty\) and \(m\in\mathbb{N}^{\ast}\), we define two Banach spaces as follows: - \(\ell_{p}^{m}\left(G\right):\) This space consists of all sequences \(\left(x_{j}\right)_{j=1}^{m}\) in \(G\) with the norm given by \[\left\|\left(x_{j}\right)_{j}\right\|_{\ell_{p}^{m}\left(G\right)}=(\sum_{j=1}^{m}\|x_{j}\|^{p})^{\frac{1}{p}}.\] - \(\ell_{p}^{m,w}\left(G\right):\) This space comprises all sequences \(\left(x_{j}\right)_{j=1}^{m}\) in \(G\), with the norm defined as \[\left\|\left(x_{j}\right)_{j}\right\|_{\ell_{p}^{m,w}\left(G\right)}=\sup_{x^{\ast}\in B_{G^{\ast}}}(\sum_{j=1}^{m}\left|\left\langle x_{j},x^{\ast}\right\rangle\right|^{p})^{\frac{1}{p}}.\] If \(G\) is equal to \(\mathbb{K}\), we can simplify the notation as \(\ell_{p}^{m}\) and \(\ell_{p}^{m,w}.\) In particular, let \(\left(\mathfrak{n}_{j}\right)_{j=1}^{m_{1}}\in\mathcal{F}\left(Y\right)\) be such that \(\mathfrak{n}_{j}=\sum_{i=1}^{m_{2}}\delta_{\left(x_{j}^{i},y_{j}^{i}\right)}\) for \(1\leq j\leq m_{1}.\) In this case, we have \[\left\|\left(\mathfrak{n}_{j}\right)_{j}\right\|_{\ell_{p}^{m,w}\left(\mathcal{F}\left(Y\right)\right)} = \sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|\left\langle\mathfrak{n}_{j},s\right\rangle\right|^{p})^{\frac{1}{p}} = \sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p})^{\frac{1}{p}}.\] Let us review the following concepts: - The linear operator \(u:F\to G\) is said to be \(p\)-summing if there exists a positive constant \(K\) such that, for any sequence \(\left(x_{j}\right)_{j=1}^{m}\) belonging to \(F\), the following inequality holds \[\left\|\left(u\left(x_{j}\right)\right)_{j=1}^{m}\right\|_{\ell_{p}^{m}\left(G\right)}\leq K\left\|\left(x_{j}\right)_{j=1}^{m}\right\|_{\ell_{p}^{m,w}\left(F\right)}. \tag{1.3}\] The class of \(p\)-summing linear operators from the Banach space \(F\) into \(G\), denoted as \(\Pi_{p}(F,G)\), forms a Banach space itself when equipped with the norm \(\pi_{p}(u)\). This norm is defined as the smallest constant \(K\) for which the inequality (1.3) holds. - The linear operator \(u:F\to G\) is said to be (Cohen) strongly \(p\)-summing if there exists a positive constant \(K\) such that, for any sequence \(\left(x_{j}\right)_{j=1}^{m}\) belonging to \(F\) and any \(\left(y_{j}^{*}\right)_{j=1}^{m}\) belonging to \(G^{*}\), the following inequality holds \[\left\|\left(\left\langle u\left(x_{j}\right),y_{j}^{*}\right\rangle\right)_{j=1}^{m}\right\|_{\ell_{1}^{m}}\leq K\left\|\left(x_{j}\right)_{j=1}^{m}\right\|_{\ell_{p}^{m}\left(F\right)}\left\|\left(y_{j}^{*}\right)_{j=1}^{m}\right\|_{\ell_{p^{*}}^{m,w}\left(G^{*}\right)}. \tag{1.4}\] The class of strongly \(p\)-summing operators from the Banach space \(F\) into \(G\), denoted as \(\mathcal{D}_{p}(F,G)\), forms a Banach space itself when equipped with the norm \(d_{p}(u)\). This norm is defined as the smallest constant \(K\) for which the inequality (1.4) holds. - The linear operator \(u:F\to G\) is said to be (Cohen) \(p\)-nuclear if there exists a positive constant \(K\) such that, for any sequence \(\left(x_{j}\right)_{j=1}^{m}\) belonging to \(F\) and any \(\left(y_{j}^{*}\right)_{j=1}^{m}\) belonging to \(G^{*}\), the following inequality holds \[\left\|\left(\left\langle u\left(x_{j}\right),y_{j}^{*}\right\rangle\right)_{j=1}^{m}\right\|_{\ell_{1}^{m}}\leq K\left\|\left(x_{j}\right)_{j=1}^{m}\right\|_{\ell_{p}^{m,w}\left(F\right)}\left\|\left(y_{j}^{*}\right)_{j=1}^{m}\right\|_{\ell_{p^{*}}^{m,w}\left(G^{*}\right)}.
\tag{1.5}\] The class of (Cohen) \(p\)-nuclear operators from the Banach space \(F\) into \(G\), denoted as \(\mathcal{N}_{p}(F,G)\), forms a Banach space itself when equipped with the norm \(n_{p}(u)\). This norm is defined as the smallest constant \(K\) for which the inequality (1.5) holds. ## 2. MS-Lipschitz \(p\)-summing operators Consider a pointed metric space \(Y\) and let \(G\) be a Banach space. The concept of Lipschitz tensor product, denoted by \(Y\boxtimes G\), was introduced by Cabrera-Padilla et al. [1]. An element \(z\) in \(Y\boxtimes G\) can be represented as \(z=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes h_{l}\) and can be viewed as a linear functional on \(Lip_{0}\left(Y,G^{*}\right)\). The action of this linear functional is defined by \[\left\langle z,s\right\rangle=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes h_{l}\left(s\right)=\sum_{l=1}^{k}\left(s\left(x_{l}\right)-s\left(y_{l}\right)\right)h_{l}\text{ for every }s\in Lip_{0}\left(Y,G^{*}\right).\] The relationship between \(Y\boxtimes G\) and \(\mathcal{F}\left(Y\right)\otimes G\) is straightforward, where \(Y\boxtimes G\) is a vector subspace of \(\mathcal{F}\left(Y\right)\otimes G\). Given an element \(z\in Y\boxtimes G\), we can define the set \(A_{z}\) as the set of all representations of \(z\) in \(\mathcal{F}\left(Y\right)\otimes G\), that is, \[A_{z}:=\left\{\left(\left(\mathfrak{n}_{j}\right)_{j=1}^{m},\left(h_{j}\right)_{j=1}^{m}\right):m\in\mathbb{N},\ \mathfrak{n}_{j}\in\mathcal{F}\left(Y\right),\ h_{j}\in G,\ z=\sum_{j=1}^{m}\mathfrak{n}_{j}\otimes h_{j}\right\}. \tag{2.1}\] Let's consider \(\beta\) as a tensor norm defined on Banach spaces. Based on [14, Theorem 3.1], it has been established that there exists a corresponding Lipschitz cross-norm, denoted as \(\beta^{L},\) defined on \(Y\boxtimes G\) by: \[\beta^{L}(\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\boxtimes h_{l})=\beta(\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\otimes h_{l}), \tag{2.2}\] with \(\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\otimes h_{l}\) an element of \(\mathcal{F}\left(Y\right)\otimes G\). Before presenting the following definition, it is necessary to recall the norms of Chevet-Saphar \(d_{p}\) and \(g_{p}\)[2, 12, 16], which are defined on two Banach spaces \(F\) and \(G\) \[d_{p}\left(z\right)=\inf\left\{\left\|\left(x_{j}\right)_{j=1}^{m}\right\|_{\ell_{p^{\ast}}^{m,w}\left(F\right)}\left\|\left(y_{j}\right)_{j=1}^{m}\right\|_{\ell_{p}^{m}\left(G\right)}\right\}\text{,}\] and \[g_{p}\left(z\right)=\inf\left\{\left\|\left(x_{j}\right)_{j=1}^{m}\right\|_{\ell_{p^{\ast}}^{m}\left(F\right)}\left\|\left(y_{j}\right)_{j=1}^{m}\right\|_{\ell_{p}^{m,w}\left(G\right)}\right\},\] here the \(\inf\) is taken over all representations of \(z\) in the form of \(z=\sum_{j=1}^{m}x_{j}\otimes y_{j}\in F\otimes G.\) It is worth noting that we can utilize the Chevet-Saphar norms to provide equivalent definitions for (1.3) and (1.4) (see [12, p. 140]). Specifically, the linear operator \(u:F\to G\) is said to be \(p\)-summing if there exists a constant \(K>0\) such that for any \(z=\sum_{j=1}^{m}x_{j}\otimes y_{j}^{\ast}\in F\otimes G^{\ast},\) the following inequality holds: \[\left|\left\langle u,z\right\rangle\right|=\left|\sum_{j=1}^{m}\left\langle u\left(x_{j}\right),y_{j}^{\ast}\right\rangle\right|\leq Kd_{p}\left(z\right).\] If we replace \(d_{p}\left(z\right)\) with \(g_{p}\left(z\right)\), we obtain the definition of strongly \(p\)-summing.
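As a small numerical illustration of the two sequence norms used in (1.3) and (1.4), assume \(G=\mathbb{R}^{n}\) with the Euclidean norm. For \(p=2\), the weak norm \(\left\|\left(x_{j}\right)_{j}\right\|_{\ell_{2}^{m,w}(G)}\) is exactly the largest singular value of the matrix whose rows are the \(x_{j}\), which also makes visible that the weak norm is dominated by the strong one (the function and variable names below are ours):

```python
import numpy as np

def strong_lp_norm(xs, p):
    """|| (x_j) ||_{ell_p^m(G)} = ( sum_j ||x_j||^p )^{1/p} for G = R^n Euclidean."""
    return float(np.sum(np.linalg.norm(xs, axis=1) ** p) ** (1.0 / p))

def weak_l2_norm(xs):
    """sup_{||a|| <= 1} ( sum_j <x_j, a>^2 )^{1/2}: attained at the top right
    singular vector, hence equal to the largest singular value of xs."""
    return float(np.linalg.svd(xs, compute_uv=False)[0])

rng = np.random.default_rng(1)
xs = rng.normal(size=(5, 3))          # m = 5 vectors x_j in R^3
w, s = weak_l2_norm(xs), strong_lp_norm(xs, 2)
print(f"weak = {w:.4f} <= strong = {s:.4f}")
```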
In [14], the Lipschitz cross-norm \(d_{p}^{L}\) is defined as follows: \[d_{p}^{L}\left(z\right)=d_{p}(\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\otimes h_{l})=\inf_{\left(\left(\mathfrak{n}_{j}\right)_{j=1}^{m},\left(h_{j}\right)_{j=1}^{m}\right)\in A_{z}}\left\{\left\|\left(\mathfrak{n}_{j}\right)_{j=1}^{m}\right\|_{l_{p}^{m,w}\left(\mathcal{F}\left(Y\right)\right)}\left\|\left(h_{j}\right)_{j=1}^{m}\right\|_{l_{p^{\ast}}^{m}\left(G\right)}\right\}.\] Most definitions of summability for Lipschitz mappings are typically defined from a metric space to a Banach space. However, in the following definition of MS-Lipschitz \(p\)-summing, we consider Lipschitz operators defined on metric spaces. This new perspective allows us to establish a meaningful relationship between \(T\) and its linearization \(\widetilde{T}\). Before proceeding, let us recall the following definition introduced in [14]. **Definition 2.1**.: [14] Consider \(1\leq p\leq\infty,\) \(Y\) be a pointed metric space and \(G\) be a Banach space. The Lipschitz operator \(T:Y\to G\) is considered to be _strictly Lipschitz \(p\)-summing_ if there exists a constant \(K>0\) such that for every \(z=\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\boxtimes h_{l}^{\ast}\in Y\boxtimes G^{\ast}\) we have \[\left|\left\langle T,z\right\rangle\right|\leq Kd_{p}^{L}(z). \tag{2.3}\] Building upon the aforementioned idea, we have introduced the concept of MS-Lipschitz \(p\)-summing operators [15]. In contrast to considering elements from the dual space of \(G\), we now focus on elements from its Lipschitz space \(G^{\#}\). **Definition 2.2**.: [15] For \(1\leq p\leq\infty\) and \(Y,W\) being two pointed metric spaces, a Lipschitz operator \(T:Y\to W\) is considered to be MS-Lipschitz \(p\)-summing if there exists a constant \(K>0\) such that for every \(z=\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\boxtimes f_{l}\in Y\boxtimes W^{\#}\) we have \[|\langle T,z\rangle|\leq Kd_{p}^{L}(z). \tag{2.4}\] The set of all MS-Lipschitz \(p\)-summing operators from \(Y\) into \(W\) is denoted by \(\Pi_{p}^{MSL}\left(Y,W\right)\). It represents the collection of operators that satisfy the MS-Lipschitz \(p\)-summing property. The constant \(\pi_{p}^{MSL}\left(T\right)\) corresponds to the smallest value of \(K\) that satisfies inequality (2.4) for a given operator \(T\). It is important to note that if \(W\) is a Banach space, the set \(\Pi_{p}^{MSL}\left(Y,W\right)\) does not possess the structure of a vector space. **Proposition 2.3**. _Let \(1\leq p\leq\infty.\) Every MS-Lipschitz \(p\)-summing operator from a pointed metric space \(Y\) into a Banach space \(G\) is strictly Lipschitz \(p\)-summing._ **Proof**. Let \(T:Y\to G\) be an MS-Lipschitz \(p\)-summing operator. Let \(x_{l},y_{l}\in Y\) and \(h_{l}^{*}\in G^{*}\) \(\left(1\leq l\leq k\right),\) then \[\left|\langle T,z\rangle\right| = \left|\sum_{l=1}^{k}h_{l}^{*}\left(T\left(x_{l}\right)\right)-h_{l}^{*}\left(T\left(y_{l}\right)\right)\right| \leq Kd_{p}^{L}(z),\] where \(z=\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\boxtimes h_{l}^{*}\in Y\boxtimes G^{\#}.\) Since \(Y\boxtimes G^{*}\subset Y\boxtimes G^{\#}\), the infimum defining \(d_{p}^{L}(z)\) in \(Y\boxtimes G^{\#}\) runs over more representations and is therefore smaller than the one computed in \(Y\boxtimes G^{*}\); consequently, condition (2.3) is verified. \(\quad\blacksquare\) **Remark 2.4**.
In the case where \(F\) and \(G\) are Banach spaces, it is well-known that the definitions of strictly Lipschitz \(p\)-summing, Lipschitz \(p\)-summing, and \(p\)-summing coincide for linear operators from \(F\) to \(G\) (see [14, Proposition 3.8]). Furthermore, in our specific case, the definition of MS-Lipschitz \(p\)-summing implies \(p\)-summing; however, the converse is not true, as illustrated in the following example: Consider the identity operator \(id_{F}:F\to F\). It can be easily demonstrated that \(\widetilde{id_{F}}=id_{\mathcal{F}\left(F\right)}\), indicating that the following diagram is commutative \[\begin{array}{ccc}F&\stackrel{id_{F}}{\longrightarrow}&F\\ \downarrow\delta_{F}&&\downarrow\delta_{F}\\ \mathcal{F}\left(F\right)&\stackrel{id_{\mathcal{F}\left(F\right)}}{\longrightarrow}&\mathcal{F}\left(F\right)\end{array}\] If \(F\) is a finite-dimensional space, then \(id_{F}\) is indeed \(p\)-summing, and consequently, it is also strictly Lipschitz \(p\)-summing. However, \(id_{\mathcal{F}\left(F\right)}\) cannot be \(p\)-summing since \(\mathcal{F}\left(F\right)\) is not finite-dimensional. Therefore, \(id_{F}\) is not MS-Lipschitz \(p\)-summing. The following statement presents the main result of this section. **Theorem 2.5**. _Consider \(1\leq p\leq\infty.\) Let \(Y\) and \(W\) be two pointed metric spaces. Let \(T:Y\to W\) be a Lipschitz operator. The following properties are equivalent. 1) \(T\) is MS-Lipschitz \(p\)-summing. 2) \(\widetilde{T}:\mathcal{F}\left(Y\right)\rightarrow\mathcal{F}\left(W\right)\) is \(p\)-summing. 3) \(\delta_{W}\circ T:Y\rightarrow\mathcal{F}\left(W\right)\) is strictly Lipschitz \(p\)-summing. 4) There is a constant \(K>0\) such that for every \(\left(x_{j}^{i}\right)_{j=1}^{m_{1}},\left(y_{j}^{i}\right)_{j=1}^{m_{1}}\) in \(Y;(1\leq i\leq m_{2})\) and \(m_{1},m_{2}\in\mathbb{N}^{*}\), we have_ \[(\sum_{j=1}^{m_{1}}\left\|\sum_{i=1}^{m_{2}}\delta_{W}\circ T(x_{j}^{i})-\delta_{W}\circ T(y_{j}^{i})\right\|^{p})^{\frac{1}{p}}\leq K\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p})^{\frac{1}{p}}. \tag{2.5}\] **Proof**. \(1)\Leftrightarrow 2)\) See [15, Proposition 2.4.]. \(2)\Rightarrow 3):\) Suppose that \(\widetilde{T}\) is \(p\)-summing. Then \[(\sum_{j=1}^{m_{1}}\left\|\sum_{i=1}^{m_{2}}\delta_{W}\circ T(x_{j}^{i})-\delta_{W}\circ T(y_{j}^{i})\right\|^{p})^{\frac{1}{p}} = (\sum_{j=1}^{m_{1}}\left\|\widetilde{T}\left(\mathfrak{n}_{j}\right)\right\|^{p})^{\frac{1}{p}} \leq \pi_{p}\left(\widetilde{T}\right)\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|s\left(\mathfrak{n}_{j}\right)\right|^{p})^{\frac{1}{p}} \leq \pi_{p}\left(\widetilde{T}\right)\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p})^{\frac{1}{p}}\] \(3)\Rightarrow 2):\) We have \[(\sum_{j=1}^{m_{1}}\left\|\widetilde{T}\left(\mathfrak{n}_{j}\right)\right\|^{p})^{\frac{1}{p}} = (\sum_{j=1}^{m_{1}}\left\|\sum_{i=1}^{m_{2}}\delta_{W}\circ T\left(x_{j}^{i}\right)-\delta_{W}\circ T\left(y_{j}^{i}\right)\right\|^{p})^{\frac{1}{p}} \leq K\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p})^{\frac{1}{p}} \leq K\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|s\left(\mathfrak{n}_{j}\right)\right|^{p})^{\frac{1}{p}}.\] As a consequence, \(\widetilde{T}\) is \(p\)-summing, and by applying the result in [4, Theorem 2.12], we can obtain the desired result. \(3)\Leftrightarrow 4)\) It is immediate.
\(\quad\blacksquare\) By setting \(m_{2}=1\) in formula (2.5) and considering the isometric property of \(\delta_{W}\), we arrive at the precise formulation of Lipschitz \(p\)-summing as originally defined by Farmer [5], indeed \[(\sum_{j=1}^{m_{1}}\left\|\delta_{W}\circ T(x_{j})-\delta_{W}\circ T(y_{j})\right\|^{p})^{\frac{1}{p}} = (\sum_{j=1}^{m_{1}}d\left(T(x_{j}),T(y_{j})\right)^{p})^{\frac{1}{p}} \leq K\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|s(x_{j})-s(y_{j})\right|^{p})^{\frac{1}{p}}.\] **Corollary 2.6**. _Consider a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent. 1) \(T\) is MS-Lipschitz p-summing. 2) The Lipschitz adjoint \(T^{\#}:W^{\#}\to Y^{\#}\) is strongly p\({}^{*}\)-summing._ **Proof**. According to (1.2), the dual operator of \(\widetilde{T}\) is \(T^{\#}\). To establish the equivalence between the given properties, we can make use of the result mentioned in [3, Theorem 2.2.2]. \(\blacksquare\) **Proposition 2.7**. _Consider a Lipschitz operator \(T:Y\to W\) between pointed metric spaces such that \(Y\) or \(W\) is finite, then \(T\) is MS-Lipschitz p-summing._ **Proof**. Suppose \(Y\) is a finite metric space. According to [17, Example 2.3.6], we know that the space \(\mathcal{F}\left(Y\right)\) is finite-dimensional. Therefore, the linearization \(\widetilde{T}:\mathcal{F}\left(Y\right)\rightarrow\mathcal{F}\left(W\right)\) is \(p\)-summing, and consequently, \(T:Y\to W\) is MS-Lipschitz \(p\)-summing. If instead \(W\) is finite, then \(\mathcal{F}\left(W\right)\) is finite-dimensional, so \(\widetilde{T}\) has finite rank and is again \(p\)-summing. \(\blacksquare\) The Pietsch domination theorem is an intriguing characterization that is satisfied by the class of MS-Lipschitz \(p\)-summing operators. **Theorem 2.8**. _Consider a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. We have the following equivalent properties. 1) \(T\) is MS-Lipschitz p-summing. 2) There exist a constant \(K>0\) and a Radon probability \(\mu\) on \(B_{Y^{\#}}\) such that for every \(\left(x^{i}\right)_{i=1}^{m},\left(y^{i}\right)_{i=1}^{m}\subset Y,\) we have_ \[\left\|\sum_{i=1}^{m}\delta_{W}\circ T(x^{i})-\delta_{W}\circ T(y^{i})\right\|\leq K(\int_{B_{Y^{\#}}}\left|\sum_{i=1}^{m}s\left(x^{i}\right)-s\left(y^{i}\right)\right|^{p}d\mu\left(s\right))^{\frac{1}{p}}. \tag{2.6}\] _In this case, we have_ \[\pi_{p}^{MSL}\left(T\right)=\inf\left\{K:\text{ verifying (2.6)}\right\}.\] **Proof**. \(1)\Rightarrow 2):\) Since \(T\) is MS-Lipschitz \(p\)-summing, \(\widetilde{T}:\mathcal{F}\left(Y\right)\rightarrow\mathcal{F}\left(W\right)\) is \(p\)-summing. By the Pietsch domination theorem for \(p\)-summing linear operators [4, Theorem 2.12], we have \[\left\|\widetilde{T}\left(\sum_{i=1}^{m}\delta_{\left(x^{i},y^{i}\right)}\right)\right\| \leq \pi_{p}\left(\widetilde{T}\right)\left(\int_{B_{Y^{\#}}}\left|\left<\sum_{i=1}^{m}\delta_{\left(x^{i},y^{i}\right)},s\right>\right|^{p}d\mu\left(s\right)\right)^{\frac{1}{p}} \leq \pi_{p}\left(\widetilde{T}\right)\left(\int_{B_{Y^{\#}}}\left|\sum_{i=1}^{m}s\left(x^{i}\right)-s\left(y^{i}\right)\right|^{p}d\mu\left(s\right)\right)^{\frac{1}{p}}.\] On the other hand, \[\left\|\widetilde{T}\left(\sum_{i=1}^{m}\delta_{\left(x^{i},y^{i}\right)}\right)\right\| = \left\|\sum_{i=1}^{m}\widetilde{T}\left(\delta_{\left(x^{i},y^{i}\right)}\right)\right\| = \left\|\sum_{i=1}^{m}\widetilde{T}\circ\delta_{Y}\left(x^{i}\right)-\widetilde{T}\circ\delta_{Y}\left(y^{i}\right)\right\| = \left\|\sum_{i=1}^{m}\delta_{W}\circ T(x^{i})-\delta_{W}\circ T(y^{i})\right\|\] Therefore, we have obtained the desired result.
\(2)\Rightarrow 1):\) Similarly, we can apply the same argument. \(\qquad\blacksquare\) We conclude this section with a result concerning Lipschitz operators that have a finite image. The following Lemma establishes a relationship between the free space of \(T\left(Y\right)\) and the image \(\widetilde{T}\left(\mathcal{F}\left(Y\right)\right)\). **Lemma 2.9**.: _Let \(Y\) and \(W\) be two pointed metric spaces. Consider a Lipschitz operator \(T:Y\to W\) such that \(T\left(Y\right)\) is a closed subset of \(W\). Then, we have the following_ \[\widetilde{T}\left(\mathcal{F}\left(Y\right)\right)=\mathcal{F}\left(T\left(Y\right)\right).\] **Proof**. By [17, Theorem 2.2.6], we have \[\mathcal{F}\left(T\left(Y\right)\right) = \overline{span}\left\{\delta_{T\left(x\right)}:x\in Y\right\} = \overline{span}\left\{\delta_{W}\left(T\left(x\right)\right):x\in Y\right\} = \overline{span}\left\{\widetilde{T}\left(\delta_{x}\right):x\in Y\right\} = \widetilde{T}\left(\mathcal{F}\left(Y\right)\right).\qquad\blacksquare\] **Corollary 2.10**.: _Let \(Y\) and \(W\) be two pointed metric spaces. Suppose that \(T:Y\to W\) is a Lipschitz operator such that \(T\left(Y\right)\) is a finite set. Then, the linearization \(\widetilde{T}\) has finite rank. Consequently, we can conclude that every Lipschitz operator with finite image is MS-Lipschitz \(p\)-summing._ ## 3. Cohen MS-Lipschitz \(p\)-nuclear operators In [3], Cohen introduced the concepts of strongly \(p\)-summing and \(p\)-nuclear operators in the category of linear operators. Since then, many authors have explored and extended these notions in various directions, including multilinear, sublinear, and Lipschitz cases. We will further extend these concepts using a similar approach to the one presented in the previous section. **Definition 3.1**. Consider a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. For \(1\leq p\leq\infty,\) \(T\) is said to be _MS-strongly Lipschitz \(p\)-summing_ if there exists a constant \(K>0\) such that the following condition holds for any \(z=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes f_{l}\in Y\boxtimes W^{\#}\) \[\left|\left\langle T,z\right\rangle\right|\leq Kg_{p}^{L}(z). \tag{3.1}\] We denote the set of all MS-strongly Lipschitz \(p\)-summing operators from \(Y\) into \(W\) as \(\mathcal{D}_{p}^{MSL}\left(Y,W\right),\) and \(d_{p}^{MSL}\left(T\right)\) represents the smallest constant \(K\) that satisfies (3.1). If \(W\) is a Banach space, it's important to note that \(\mathcal{D}_{p}^{MSL}\left(Y,W\right)\) does not possess the structure of a vector space. **Theorem 3.2.**_Consider \(1\leq p\leq\infty\) and a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent. 1) \(T\) is MS-strongly Lipschitz \(p\)-summing. 2) There exists a constant \(K>0\) such that for every \(\left(x_{j}\right)_{j=1}^{m},\left(y_{j}\right)_{j=1}^{m}\) in \(Y\) and \(\left(f_{j}\right)_{j=1}^{m}\) in \(W^{\#};\) (\(m\in\mathbb{N}^{*}\)),_ \[\left|\sum_{j=1}^{m}f_{j}\left(T(x_{j})\right)-f_{j}\left(T(y_{j})\right)\right|\leq K(\sum_{j=1}^{m}d\left(x_{j},y_{j}\right)^{p})^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m}\right\|_{\ell_{p^{*}}^{m,w}\left(W^{\#}\right)}. \tag{3.2}\] _3) \(\delta_{W}\circ T:Y\rightarrow\mathcal{F}\left(W\right)\) is strongly Lipschitz \(p\)-summing. 4) \(\widetilde{T}:\mathcal{F}\left(Y\right)\rightarrow\mathcal{F}\left(W\right)\) is strongly \(p\)-summing._ **Proof**.
\(1)\Rightarrow 2):\) Let \(T\) be an MS-strongly Lipschitz \(p\)-summing operator. Let \(\left(x_{j}\right)_{j=1}^{m},\left(y_{j}\right)_{j=1}^{m}\) in \(Y\) and \(\left(f_{j}\right)_{j=1}^{m}\subset W^{\#}.\) We get \[\left|\left\langle T,z\right\rangle\right|=\left|\sum_{j=1}^{m}f_{j}\left(T(x_{j})\right)-f_{j}\left(T(y_{j})\right)\right|\leq d_{p}^{MSL}\left(T\right)g_{p}^{L}(z),\] where \(z=\sum_{j=1}^{m}\delta_{\left(x_{j},y_{j}\right)}\boxtimes f_{j}\in Y\boxtimes W^{\#}.\) Then \[\left|\left\langle T,z\right\rangle\right| = \left|\sum_{j=1}^{m}f_{j}\left(T(x_{j})\right)-f_{j}\left(T(y_{j})\right)\right| \leq d_{p}^{MSL}\left(T\right)\left(\sum_{j=1}^{m}\left\|\delta_{\left(x_{j},y_{j}\right)}\right\|^{p}\right)^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m}\right\|_{\ell_{p^{*}}^{m,w}\left(W^{\#}\right)} \leq d_{p}^{MSL}\left(T\right)\left(\sum_{j=1}^{m}d\left(x_{j},y_{j}\right)^{p}\right)^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m}\right\|_{\ell_{p^{*}}^{m,w}\left(W^{\#}\right)}\] \(2)\Rightarrow 3):\) We will show that \(\delta_{W}\circ T:Y\to\mathcal{F}\left(W\right)\) is strongly Lipschitz \(p\)-summing. Let \(\left(x_{j}\right)_{j=1}^{m},\left(y_{j}\right)_{j=1}^{m}\) in \(Y\) and \(\left(f_{j}\right)_{j=1}^{m}\subset W^{\#}\left(=\mathcal{F}\left(W\right)^{*}\right).\) Then \[\left|\sum_{j=1}^{m}\left\langle\delta_{W}\circ T(x_{j})-\delta_{W}\circ T(y_{j}),f_{j}\right\rangle\right| = \left|\sum_{j=1}^{m}f_{j}\left(\delta_{W}\circ T(x_{j})\right)-f_{j}\left(\delta_{W}\circ T(y_{j})\right)\right| = \left|\sum_{j=1}^{m}f_{j}\left(T(x_{j})\right)-f_{j}\left(T(y_{j})\right)\right| \leq K(\sum_{j=1}^{m}d\left(x_{j},y_{j}\right)^{p})^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m}\right\|_{\ell_{p^{*}}^{m,w}\left(W^{\#}\right)}\] Then \(\delta_{W}\circ T\) is strongly Lipschitz \(p\)-summing. \(3)\Rightarrow 4):\) We know that \[\widehat{\delta_{W}\circ T}=\widetilde{T}.\] Furthermore, according to [13, Proposition 3.1], the linearization \(\widehat{\delta_{W}\circ T}\) is strongly \(p\)-summing, which implies that \(\widetilde{T}\) is also strongly \(p\)-summing. \(4)\Rightarrow 1):\) Let \(z=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes f_{l}\in Y\boxtimes W^{\#}.\) Assuming that \(\widetilde{T}\) is strongly \(p\)-summing, we can deduce from [12, Proposition 6.12] that \(\widetilde{T}\) satisfies the following \[\left|\sum_{l=1}^{k}\left\langle\widetilde{T}\left(\mathfrak{n}_{l}\right),f_{l}\right\rangle\right|\leq Kg_{p}(\sum_{l=1}^{k}\mathfrak{n}_{l}\otimes f_{l}),\] If we put \(\mathfrak{n}_{l}=\delta_{\left(x_{l},y_{l}\right)}\) for \(1\leq l\leq k\), we find \[\left|\sum_{l=1}^{k}\left\langle\widetilde{T}\left(\delta_{\left(x_{l},y_{l}\right)}\right),f_{l}\right\rangle\right| = \left|\sum_{l=1}^{k}f_{l}\left(T\left(x_{l}\right)\right)-f_{l}\left(T\left(y_{l}\right)\right)\right| \leq Kg_{p}(z)=Kg_{p}^{L}(z).\qquad\blacksquare\] Using the same reasoning as in Corollary 2.6, we can establish the following result. **Corollary 3.3**. _Consider \(1\leq p\leq\infty\) and a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent. 1) \(T\) is MS-strongly Lipschitz \(p\)-summing. 2) The Lipschitz adjoint \(T^{\#}:W^{\#}\to Y^{\#}\) is \(p^{*}\)-summing._ The following integral characterization is an adaptation of the linear case. To prove it, we rely on the fact that \(T^{\#}\) is \(p^{*}\)-summing or \(\widetilde{T}\) is strongly \(p\)-summing. **Theorem 3.4**.
_Consider \(1\leq p\leq\infty\) and a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent. 1) \(T\) is MS-strongly Lipschitz p-summing._ _2) There exist a constant \(K>0\) and a Radon probability \(\mu\) on \(B_{Lip_{0}(W)^{*}}\) such that for every \(x,y\in Y\) and \(f\in W^{\#},\) we have_ \[\left|f\left(T(x)\right)-f\left(T(y)\right)\right|\leq Kd\left(x,y\right)\left(\int_{B_{Lip_{0}(W)^{*}}}\left|\left\langle f,\mathfrak{n}\right\rangle\right|^{p^{*}}d\mu\left(\mathfrak{n}\right)\right)^{\frac{1}{p^{*}}}.\] Let us now recall the definition of the tensor norm \(w_{p}\) on the product of two Banach spaces \(F\otimes G,\) which has been studied in [12, p. 180]. For \(p\in\left[1,\infty\right]\) we have \[w_{p}\left(z\right)=\inf\left\{\left\|\left(x_{j}\right)_{j=1}^{m}\right\|_{l_{p}^{m,w}(F)}\left\|\left(y_{j}\right)_{j=1}^{m}\right\|_{l_{p^{*}}^{m,w}(G)}\right\},\] where the infimum is taken over all representations of \(z\) of the form \(z=\sum_{j=1}^{m}x_{j}\otimes y_{j}\in F\otimes G.\) **Definition 3.5**. Consider a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. For \(1\leq p\leq\infty,\) \(T\) is said to be MS-Lipschitz \(p\)-nuclear if there is a constant \(K>0\) such that for every \(z=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes f_{l}\in Y\boxtimes W^{\#}\) we have \[\left|\left\langle T,z\right\rangle\right|\leq Kw_{p}^{L}(z), \tag{3.3}\] We define \(\mathcal{N}_{p}^{MSL}\left(Y,W\right)\) as the set of all MS-Lipschitz \(p\)-nuclear operators from \(Y\) into \(W\). Additionally, \(n_{p}^{MSL}\left(T\right)\) represents the smallest constant \(K\) that satisfies (3.3). If \(W\) is a Banach space, it is important to reiterate that \(\mathcal{N}_{p}^{MSL}\left(Y,W\right)\) does not possess the structure of a vector space. Suppose that \(W\) is a Banach space. By restricting the previous definition to the linear forms of \(W^{*},\) we obtain the definition of (Cohen) Lipschitz \(p\)-nuclear operators as introduced in [9]. Indeed, let \(z=\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\boxtimes a_{l}^{*}\in Y\boxtimes W^{*};\) we have \[\left|\left\langle T,z\right\rangle\right| = \left|\sum_{l=1}^{k}a_{l}^{*}\left(T\left(x_{l}\right)\right)-a_{l}^{*}\left(T\left(y_{l}\right)\right)\right|=\left|\sum_{l=1}^{k}\left\langle T\left(x_{l}\right)-T\left(y_{l}\right),a_{l}^{*}\right\rangle\right| \leq Kw_{p}^{L}(z)=Kw_{p}(\sum_{l=1}^{k}\delta_{\left(x_{l},y_{l}\right)}\otimes a_{l}^{*}) \leq K\left\|\left(\delta_{\left(x_{l},y_{l}\right)}\right)_{l=1}^{k}\right\|_{\ell_{p}^{k,w}(\mathcal{F}(Y))}\left\|\left(a_{l}^{*}\right)_{l=1}^{k}\right\|_{\ell_{p^{*}}^{k,w}(W^{*})} \leq K\sup_{f\in Y^{\#}}(\sum_{l=1}^{k}\left|f(x_{l})-f(y_{l})\right|^{p})^{\frac{1}{p}}\sup_{\left\|a\right\|_{W}=1}(\sum_{l=1}^{k}\left|\left\langle a_{l}^{*},a\right\rangle\right|^{p^{*}})^{\frac{1}{p^{*}}},\] Consequently, it follows that every MS-Lipschitz \(p\)-nuclear operator is also Lipschitz \(p\)-nuclear. **Theorem 3.6**. _Consider \(1\leq p\leq\infty\) and a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent.
1) \(T\) is MS-Lipschitz \(p\)-nuclear._ _2) There is a constant \(K>0\) such that for every \(\left(x_{j}^{i}\right)_{j=1}^{m_{1}},\left(y_{j}^{i}\right)_{j=1}^{m_{1}}\) in \(Y,\) \(\left(f_{j}\right)_{j=1}^{m_{1}}\subset W^{\#};\left(1\leq i\leq m_{2}\right)\) and \(m_{1},m_{2}\in\mathbb{N}^{\ast}\), we have_ \[\left|\sum_{j=1}^{m_{1}}\sum_{i=1}^{m_{2}}f_{j}\left(T(x_{j}^{i})\right)-f_{j}\left(T(y_{j}^{i})\right)\right|\leq K\sup_{s\in Y^{\#}}\left(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p}\right)^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m_{1}}\right\|_{\ell_{p^{\ast}}^{m_{1},w}\left(W^{\#}\right)}. \tag{3.4}\] _3) \(\widetilde{T}:\mathcal{F}\left(Y\right)\rightarrow\mathcal{F}\left(W\right)\) is \(p\)-nuclear._ **Proof**. \(1)\Rightarrow 2):\) Let \(T\) be an MS-Lipschitz \(p\)-nuclear operator. Let \(\left(x_{j}^{i}\right)_{j=1}^{m_{1}},\left(y_{j}^{i}\right)_{j=1}^{m_{1}}\) in \(Y\) and \(\left(f_{j}\right)_{j=1}^{m_{1}}\) in \(W^{\#};\left(1\leq i\leq m_{2}\right)\), we have \[\left|\sum_{j=1}^{m_{1}}\sum_{i=1}^{m_{2}}f_{j}\left(T(x_{j}^{i})\right)-f_{j}\left(T(y_{j}^{i})\right)\right|\leq Kw_{p}^{L}(z),\] where \(z=\sum_{j=1}^{m_{1}}\sum_{i=1}^{m_{2}}\delta_{\left(x_{j}^{i},y_{j}^{i}\right)}\boxtimes f_{j}\in Y\boxtimes W^{\#}.\) Then \[\left|\sum_{j=1}^{m_{1}}\sum_{i=1}^{m_{2}}f_{j}\left(T(x_{j}^{i})\right)-f_{j}\left(T(y_{j}^{i})\right)\right| \leq n_{p}^{MSL}\left(T\right)\sup_{s\in Y^{\#}}\left(\sum_{j=1}^{m_{1}}\left|s(\sum_{i=1}^{m_{2}}\delta_{\left(x_{j}^{i},y_{j}^{i}\right)})\right|^{p}\right)^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m_{1}}\right\|_{\ell_{p^{\ast}}^{m_{1},w}\left(W^{\#}\right)} \leq n_{p}^{MSL}\left(T\right)\sup_{s\in Y^{\#}}\left(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p}\right)^{\frac{1}{p}}\left\|\left(f_{j}\right)_{j=1}^{m_{1}}\right\|_{\ell_{p^{\ast}}^{m_{1},w}\left(W^{\#}\right)}.\] \(2)\Rightarrow 3):\) Let \(\left(\mathfrak{n}_{j}\right)_{j=1}^{m_{1}}\subset\mathcal{F}\left(Y\right)\) and \(\left(f_{j}\right)_{j=1}^{m_{1}}\subset W^{\#}\) such that \[\mathfrak{n}_{j}=\sum_{i=1}^{m_{2}}\delta_{\left(x_{j}^{i},y_{j}^{i}\right)}\in\mathcal{F}\left(Y\right),\text{ }\left(1\leq j\leq m_{1}\right).\] Then \[\left(\sum_{j=1}^{m_{1}}\left|\left\langle\widetilde{T}\left(\mathfrak{n}_{j}\right),f_{j}\right\rangle\right|^{p}\right)^{\frac{1}{p}} = \left(\sum_{j=1}^{m_{1}}\left|\left\langle\sum_{i=1}^{m_{2}}\left(\delta_{W}\circ T\left(x_{j}^{i}\right)-\delta_{W}\circ T\left(y_{j}^{i}\right)\right),f_{j}\right\rangle\right|^{p}\right)^{\frac{1}{p}} = \left(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}f_{j}\left(T(x_{j}^{i})\right)-f_{j}\left(T(y_{j}^{i})\right)\right|^{p}\right)^{\frac{1}{p}} \leq K\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|\sum_{i=1}^{m_{2}}s(x_{j}^{i})-s(y_{j}^{i})\right|^{p})^{\frac{1}{p}}\left\|(f_{j})_{j=1}^{m_{1}}\right\|_{\ell_{p^{*}}^{m_{1},w}\left(W^{\#}\right)} \leq K\sup_{s\in Y^{\#}}(\sum_{j=1}^{m_{1}}\left|s(\mathfrak{n}_{j})\right|^{p})^{\frac{1}{p}}\left\|(f_{j})_{j=1}^{m_{1}}\right\|_{\ell_{p^{*}}^{m_{1},w}\left(W^{\#}\right)}\] Then \(\widetilde{T}\) is \(p\)-nuclear. \(3)\Rightarrow 1):\) Now, we suppose that \(\widetilde{T}\) is \(p\)-nuclear.
Let \(x_{l},y_{l}\in Y\) and \(f_{l}\in W^{\#}\) \((1\leq l\leq k)\); we have \[\left|\sum_{l=1}^{k}f_{l}\left(T(x_{l})\right)-f_{l}\left(T(y_{l})\right)\right| = \left|\sum_{l=1}^{k}\left\langle\delta_{W}\circ T\left(x_{l}\right)-\delta_{W}\circ T\left(y_{l}\right),f_{l}\right\rangle\right| = \left|\sum_{l=1}^{k}\left\langle\widetilde{T}\left(\delta_{(x_{l},y_{l})}\right),f_{l}\right\rangle\right| \leq n_{p}\left(\widetilde{T}\right)w_{p}(z)=n_{p}\left(\widetilde{T}\right)w_{p}^{L}(z),\] where \(z=\sum_{l=1}^{k}\delta_{(x_{l},y_{l})}\boxtimes f_{l}\in Y\boxtimes W^{\#}.\) Finally, \(T\) is MS-Lipschitz \(p\)-nuclear and we have \[n_{p}^{MSL}\left(T\right)\leq n_{p}\left(\widetilde{T}\right).\qquad\blacksquare\] By utilizing the result presented in [3, Theorem 2.2.4], we can establish the following relationship between \(T\) and its Lipschitz adjoint \(T^{\#}\). **Corollary 3.7**. _Consider \(1\leq p\leq\infty\) and a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent. 1) \(T\) is MS-Lipschitz p-nuclear. 2) The Lipschitz adjoint \(T^{\#}:W^{\#}\to Y^{\#}\) is \(p^{*}\)-nuclear._ The following integral characterization is an adaptation of the linear case. Its proof will be omitted. **Theorem 3.8**. _Consider \(1\leq p\leq\infty\) and a Lipschitz operator \(T:Y\to W\) between pointed metric spaces. The following properties are equivalent. 1) \(T\) is MS-Lipschitz p-nuclear. _2) There exist a constant \(K>0,\) a Radon probability \(\mu\) on \(B_{Y^{\#}}\) and a Radon probability \(\eta\) on \(B_{Lip_{0}(W)^{*}}\) such that for every \(\left(x^{i}\right)_{i=1}^{m},\left(y^{i}\right)_{i=1}^{m}\) in \(Y\) and \(f\in W^{\#},\) we have_ \[\left|\sum_{i=1}^{m}f\left(T(x^{i})\right)-f\left(T(y^{i})\right)\right|\leq K(\int_{B_{Y^{\#}}}\left|\sum_{i=1}^{m}s(x^{i})-s(y^{i})\right|^{p}d\mu\left(s\right))^{\frac{1}{p}}\times(\int_{B_{Lip_{0}(W)^{*}}}|\langle f,\mathfrak{n}\rangle|^{p^{*}}\,d\eta\left(\mathfrak{n}\right))^{\frac{1}{p^{*}}}.\] **Theorem 3.9**. _Let \(Y,W\) and \(Z\) be three pointed metric spaces. Let \(u:Y\to W\) be an MS-Lipschitz p-summing operator and \(v:W\to Z\) be an MS-strongly Lipschitz p-summing operator. Then the composition \(T=v\circ u\) is MS-Lipschitz \(p\)-nuclear._ **Proof**. By [7, p. 124], we have \[\widetilde{T}=\widetilde{v}\circ\widetilde{u}.\] According to a result due to Cohen [3], the linear operator \(\widetilde{u}\) being \(p\)-summing and \(\widetilde{v}\) being strongly \(p\)-summing imply that \(\widetilde{T}\) is \(p\)-nuclear. Consequently, \(T\) is also MS-Lipschitz \(p\)-nuclear. \(\quad\blacksquare\) **Remark 3.10**. In the linear case, the converse of the previous statement is true. However, in our case, it is unknown whether every MS-Lipschitz \(p\)-nuclear operator can be expressed as the product of an MS-Lipschitz \(p\)-summing operator and an MS-strongly \(p\)-summing operator. **Declarations** **Conflict of interest.** The authors declare that they have no conflicts of interest.
2307.01957
Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane
Deep Implicit Functions (DIFs) have gained popularity in 3D computer vision due to their compactness and continuous representation capabilities. However, addressing dense correspondences and semantic relationships across DIF-encoded shapes remains a critical challenge, limiting their applications in texture transfer and shape analysis. Moreover, recent endeavors in 3D shape generation using DIFs often neglect correspondence and topology preservation. This paper presents HNDF (Hybrid Neural Diffeomorphic Flow), a method that implicitly learns the underlying representation and decomposes intricate dense correspondences into explicitly axis-aligned triplane features. To avoid suboptimal representations trapped in local minima, we propose hybrid supervision that captures both local and global correspondences. Unlike conventional approaches that directly generate new 3D shapes, we further explore the idea of shape generation with deformed template shape via diffeomorphic flows, where the deformation is encoded by the generated triplane features. Leveraging a pre-existing 2D diffusion model, we produce high-quality and diverse 3D diffeomorphic flows through generated triplane features, ensuring topological consistency with the template shape. Extensive experiments on medical image organ segmentation datasets evaluate the effectiveness of HNDF in 3D shape representation and generation.
Kun Han, Shanlin Sun, Xiaohui Xie
2023-07-04T23:28:01Z
http://arxiv.org/abs/2307.01957v1
# Hybrid Neural Diffeomorphic Flow for Shape Representation and Generation via Triplane ###### Abstract Deep Implicit Functions (DIFs) have gained popularity in 3D computer vision due to their compactness and continuous representation capabilities. However, addressing dense correspondences and semantic relationships across DIF-encoded shapes remains a critical challenge, limiting their applications in texture transfer and shape analysis. Moreover, recent endeavors in 3D shape generation using DIFs often neglect correspondence and topology preservation. This paper presents HNDF (Hybrid Neural Diffeomorphic Flow), a method that implicitly learns the underlying representation and decomposes intricate dense correspondences into explicitly axis-aligned triplane features. To avoid suboptimal representations trapped in local minima, we propose hybrid supervision that captures both local and global correspondences. Unlike conventional approaches that directly generate new 3D shapes, we further explore the idea of shape generation with deformed template shape via diffeomorphic flows, where the deformation is encoded by the generated triplane features. Leveraging a pre-existing 2D diffusion model, we produce high-quality and diverse 3D diffeomorphic flows through generated triplane features, ensuring topological consistency with the template shape. Extensive experiments on medical image organ segmentation datasets evaluate the effectiveness of HNDF in 3D shape representation and generation. ## 1 Introduction 3D geometry representation is critical for numerous computer vision tasks, including 3D model reconstruction, matching and manipulation. Deep implicit functions (DIFs) have emerged as promising alternatives to traditional representation methods such as voxel grids, point clouds and polygon meshes. DIFs offer several advantages such as compactness, continuity, and the ability to capture fine geometric details. They enable efficient computation while leveraging deep neural networks for end-to-end training, enhancing shape representation and understanding. However, despite the promising results in direct object modeling using DIFs, it is important to consider the common shape features and semantic correspondences shared among objects. Conventional DIFs face challenges in establishing correspondences between different shapes, limiting their applicability in domains like medical image segmentation [13, 19, 26] and texture transfer [8, 31]. Previous methods [6, 46, 52] have proposed shape modeling as conditional deformations of a template DIF to address this limitation. However, these methods still have limitations, such as being topology-agnostic or lacking the capability to capture correspondences for local details. Recent research has also explored the integration of DIFs for 3D shape generation [34, 37, 44, 51]. Compared to point clouds and polygon meshes, DIF-based generation offers continuous representations with high quality and resolution. However, existing approaches primarily focus on direct shape generation without considering underlying point correspondence and topology preservation. To overcome these challenges, we introduce Hybrid Neural Diffeomorphic Flow (HNDF) for shape representation and generation. HNDF models shapes as conditional deformations of a template DIF, similar to previous work [6, 46, 50, 52]. However, HNDF encodes diffeomorphic deformations into axis-aligned triplane features to enhance representation capability.
Local deformations are controlled through interpolation of triplane features with a shared feature decoder. Nevertheless, the direct application of triplanes may lead to local optimization issues and defective deformations, resulting in inaccurate representations. To address this, we propose a hybrid supervision approach that considers both local and global correspondences, along with additional modifications and regularization to preserve the diffeomorphism property of the represented deformations. This combination of triplane feature exploration and supervision enables high representation capabilities and accurate dense correspondences. Unlike conventional 3D shape generation works which primarily focus on direct shape generation, we explore the idea of deformation-based shape generation, where the template shape is deformed based on newly generated diffeomorphic deformations. This approach ensures that the newly generated shapes maintain the same topology as the template shape, preserving topological consistency while offering a wide range of diverse shapes. To achieve this, we represent deformations using optimized per-object triplane features, which encode diffeomorphic deformations as three axis-aligned 2D feature planes. We concatenate the triplane features as multi-channel images and leverage existing 2D diffusion models to generate new triplane features. By applying the new diffeomorphic deformations encoded in the triplane features, we deform the template shape to generate novel 3D shapes while preserving their topological characteristics. The contributions of this paper are as follows: 1. We propose HNDF, which leverages axis-aligned triplane features to provide high representation capability and capture dense correspondences accurately. 2. We demonstrate that hybrid supervision and regularization are essential for ensuring correct deformation representation and preventing the representation from being trapped in local optima. 3. Rather than directly generating 3D shapes, we explore the concept of shape generation through diffeomorphic deformations and provide a baseline method utilizing a 2D diffusion model. The topology and correspondences are preserved in newly generated 3D shapes. ## 2 Related Works **Deep Implicit Function** Deep implicit functions, or neural fields, have enabled the parameterization of physical properties and dynamics through simple neural networks [5, 32, 33, 38, 45, 49]. DeepSDF [38] serves as an auto-decoder model, commonly used as a baseline for shape representation [1, 17, 47]. NeRF [38] presents a novel approach for synthesizing photorealistic 3D scenes from 2D images. Occupancy Network [32] constructs solid meshes through the classification of 3D points, while Occupancy Flow [36] extends this idea to 4D with a continuous vector field in time and space. Recent trends incorporate locally conditioned representations [1, 5, 47, 17, 40], utilizing small MLPs that are computationally and memory-efficient while capturing local details effectively. One such representation is the hybrid triplane [2, 7, 29, 39], which represents features on axis-aligned planes and aggregates them using a lightweight implicit feature decoder. In our work, we adopt the expressive triplane representation. However, instead of decoding the 3D object itself, we utilize triplane features to decode complex diffeomorphic deformations, allowing us to represent new 3D objects by deforming the template shape using the encoded deformation.
**Point Correspondence and Topology Preservation** Capturing dense correspondences between shapes remains a significant challenge and a critical area of interest in the 3D vision community. Various approaches have been proposed to address point correspondence, including template learning, elementary representation, and deformation field-based methods. Among them, mesh-based methods [20, 21] face difficulties in handling topological changes, sensitivity to mesh connectivity, and challenges in capturing fine-grained details. Elementary-based methods [11, 17], on the other hand, may struggle with capturing high-level structural features due to the simplicity of the elements used. DIT [52] and NDF [46] exemplify deformation field-based methods, with DIT exhibiting smoother deformations using LSTM [16] and NDF employing NODE [3] for achieving diffeomorphic deformation. ImplicitAtlas [50] integrates multiple templates to improve the shape representation capacity at a negligible computational cost. In our work, we follow the NDF framework but enhance the representation's capacity to capture accurate correspondences by leveraging the more powerful triplane representation. Experimental results highlight the importance of incorporating triplane features with hybrid supervision, which prevents local optimization issues, provides significantly more accurate correspondences, and ensures the preservation of topology. **3D Shape Generation** Generative models, such as GANs, autoregressive models, score matching models, and denoising diffusion probabilistic models, have been extensively studied for 3D shape generation. However, GAN-based methods [2, 9, 10, 12, 30, 35, 37] still outperform alternative approaches. Voxel-based GANs [9, 14, 48], for example, directly extend the use of CNN generators from 2D to 3D settings, at the cost of high memory requirements and computational burden. In recent years, there has been a shift towards leveraging expressive 2D generator backbones, such as StyleGAN2 [18]. EG3D [2] combines a hybrid explicit-implicit triplane representation to improve computational efficiency while maintaining expressiveness. Get3D [10] incorporates the deformable tetrahedral grid for explicit surface extraction and triplane representation for differentiable rendering to generate textured 3D shapes. Compared to the existing GAN-based approaches for 3D generation, the development of 3D diffusion models is still in its early stages. Several notable works have explored the application of diffusion models in generating 3D shapes. PVD [53] proposed the use of a point-voxel representation combined with PVConv [24] to generate 3D shapes through diffusion. DPM [27] introduced a shape latent code to guide the Markov chain in the reverse diffusion process. MeshDiffusion [23] utilized the deformable tetrahedral grid parametrization for unconditionally generating 3D meshes. 3D-LDM [34] integrated DeepSDF [38] into diffusion-based shape generation, leveraging diffusion to generate a global latent code and improve the conditioning of the neural field. NFD [44] extended the use of 2D diffusion into 3D shape generation, exploring the potential of diffusion models in capturing and generating complex 3D shapes with Occupancy Network [32]. While existing approaches in shape generation focus on directly generating 3D shapes, they often neglect the preservation of underlying topology. This oversight can lead to artifacts in the generated shapes and limit their applicability in scenarios where topology is important.
In our work, we introduce a baseline diffusion-based method that deforms a template to generate new shapes. The diffeomorphic deformation is encoded by the generated triplane features. Our approach focuses on producing visually coherent and realistic shapes while preserving point correspondence and underlying topology. ## 3 Preliminaries **Diffeomorphic Flow** is a continuous and smooth mapping that transforms a given manifold or space while preserving its differentiable structure. In the context of 3D geometry, diffeomorphic flow plays a crucial role in establishing dense point correspondences between 3D shapes and ensuring the preservation of their underlying topology during deformation. Mathematically, the forward diffeomorphic flow \(\Phi(p,t):\mathbb{R}^{3}\times[0,1]\rightarrow\mathbb{R}^{3}\) describes the trajectory of a 3D point \(p\) over the interval \([0,1]\), where the starting point \(p\) is located in the space of instance shape \(S\) and the destination point corresponds to the target shape \(T\). The velocity field \(\mathbf{v}(p,t):\mathbb{R}^{3}\times[0,1]\rightarrow\mathbb{R}^{3}\) represents the derivative of deformation of 3D points. The diffeomorphic flow \(\Phi\) is obtained by solving the initial value problem (IVP) of an ordinary differential equation (ODE), \[\frac{\partial\Phi}{\partial t}(p,t)=\mathbf{v}\left(\Phi(p,t),t\right)\quad\text{ s.t. }\quad\Phi(p,0)=p \tag{1}\] Similarly, the inverse flow \(\Psi\) can be calculated by solving a corresponding ODE with negative velocity field \(-\mathbf{v}\), allowing for the transformation from the template space to the instance space \[\frac{\partial\Psi}{\partial t}(p,t)=-\mathbf{v}\left(\Psi(p,t),t\right)\quad\text{ s.t. }\quad\Psi(p,0)=p \tag{2}\] where \(p\) is the starting point on the target shape. The property of topology preservation is achieved through the Lipschitz continuity of the velocity field. The forward and backward diffeomorphic deformations can be calculated by integrating the velocity field, i.e., by solving Eqs. 1 and 2, respectively. **Diffusion Probabilistic Model** (DPM) [15] is a parameterized Markov chain designed to learn the underlying data distribution \(p(X)\). During the Forward Diffusion Process (FDP), the diffused data point \(X_{t}\) is obtained at each time step \(t\) by sampling from the conditional distribution: \[q\left(X_{t}\mid X_{t-1}\right)=\mathcal{N}\left(X_{t};\sqrt{1-\beta_{t}}X_{t-1},\beta_{t}I\right) \tag{3}\] where \(X_{0}\) is sampled from the initial distribution \(q(X_{0})\), and \(X_{T}\) follows a Gaussian distribution \(\mathcal{N}(X_{T};0,I)\). The parameter \(\beta_{t}\in(0,1)\) represents a variance schedule that gradually introduces Gaussian noise to the data. By defining \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\left(1-\beta_{s}\right)\), \(X_{t}\) can be sampled conditionally on \(X_{0}\) as \(q\left(X_{t}\mid X_{0}\right)=\mathcal{N}\left(X_{t};\sqrt{\bar{\alpha}_{t}}X_{0},\left(1-\bar{\alpha}_{t}\right)I\right)\), providing a distribution for sampling \(X_{t}\) from the initial data \(X_{0}\). In contrast, the Reverse Diffusion Process aims to approximate the posterior distribution \(p(X_{t-1}|X_{t})\) to recreate a realistic \(X_{0}\) starting from random noise \(X_{T}\).
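Before the reverse process is formalized, note that the closed-form forward sampling above is a one-liner in practice. Here is a minimal NumPy sketch; the linear schedule endpoints are common DDPM defaults, not values taken from this paper:

```python
import numpy as np

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule; alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def q_sample(x0, t, alpha_bar, rng):
    """Draw X_t ~ N(sqrt(alpha_bar_t) X_0, (1 - alpha_bar_t) I)."""
    ab = alpha_bar[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * rng.normal(size=x0.shape)

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
x0 = rng.normal(size=(3, 32, 32))   # stand-in for a stack of 2D feature maps
for t in (0, 500, 999):
    xt = q_sample(x0, t, alpha_bar, rng)
    print(t, float(np.corrcoef(x0.ravel(), xt.ravel())[0, 1]))  # signal fades with t
```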
The Reverse Diffusion Process is formulated as a trajectory of posterior distributions starting from \(X_{T}\): \[p\left(X_{0:T}\right)=p\left(X_{T}\right)\prod_{t=1}^{T}p_{\theta}\left(X_{t-1}\mid X_{t}\right) \tag{4}\] The conditional distribution \(p_{\theta}(X_{t-1}|X_{t})\) is approximated by a neural network with parameters \(\theta\): \[p_{\theta}\left(X_{t-1}\mid X_{t}\right)=\mathcal{N}\left(X_{t-1};\mu_{\theta}\left(X_{t},t\right),\Sigma_{\theta}\left(X_{t},t\right)\right) \tag{5}\] ## 4 Method In this section, we present our Hybrid Neural Diffeomorphic Flow (HNDF) for shape representation and generation. Section 4.1 reviews our baseline method [46]. In Section 4.2, we introduce the utilization of triplane features, and the hybrid supervision for capturing local and global correspondences. Finally, in Section 4.3, we describe our proposed method for generating topology-preserving shapes. ### Review of NDF NDF [46], similar to DeepSDF [38], represents a 3D shape \(S_{i}\) using a continuous signed distance field (SDF) \(\mathcal{F}\). Given a random 3D point \(p\) and a one-dimensional latent code \(c_{i}\) of length \(k\), \(\mathcal{F}\) outputs the distance from the point \(p\) to the closest surface of shape \(S_{i}\). However, unlike DeepSDF, which directly represents 3D shapes, NDF uses a deform code \(c_{i}\) to control the deformation of each instance shape from the template shape. As a result, the conditional continuous SDF \(\mathcal{F}\) can be decomposed into \(\mathcal{T}\circ\mathcal{D}\), where \(\mathcal{D}:\mathbb{R}^{3}\times\mathbb{R}^{k}\mapsto\mathbb{R}^{3}\) provides the deformation mapping from the coordinates of \(p\) in the instance space of \(S_{i}\) to a canonical position \(p^{\prime}\) in the template space. The function \(\mathcal{T}\) represents a single shape DeepSDF that models the implicit template shape. ### Hybrid Shape Representation via Triplane As shown in [1, 5, 47, 39, 17, 40], previous methods [38, 50, 52, 46] utilizing a single latent vector to control the entire shape or deformation space are unable to capture the details of complex 3D shapes or deformations. Motivated by recent advancements in hybrid representation [2], we propose to encode complex diffeomorphic deformations as a set of three axis-aligned 2D feature planes, as shown in Fig. 2. This enables us to capture fine-grained details and variations in the shape space more effectively. The triplane representation is a hybrid architecture for neural fields that combines explicit and implicit components [2]. For each instance shape \(S_{i}\), it employs three axis-aligned orthogonal feature planes \((X_{i}=[F_{xy}^{i},F_{xz}^{i},F_{yz}^{i}])\), each with a resolution of \(L\times L\times C\). These planes serve as the encoded representations of the deformation. To query a deformation, the position of a given point \(p_{i}\) is projected onto each of the feature planes, and the corresponding feature vectors are retrieved using bilinear interpolation. Subsequently, a lightweight multilayer perceptron (MLP) decoder is employed to interpret the aggregated features as the corresponding velocity vector \(v_{i}\). The diffeomorphic deformation \(d_{i}\) for point \(p_{i}\) can be calculated by integrating the velocity vector using an explicit Runge-Kutta solver [3], as defined in Eq. 1. In contrast to the approach in [2], where feature aggregation is performed through summation, we have found that concatenating the interpolated features from the triplane yields better results.
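The query-and-integrate path just described can be sketched in a few lines of PyTorch. The feature resolution, channel width, decoder architecture, and fixed-step Euler integrator below are illustrative stand-ins of ours: the paper integrates with an explicit Runge-Kutta solver, and the velocity field is generally time-dependent, whereas this sketch keeps it stationary for brevity.

```python
import torch
import torch.nn.functional as F

C, L = 32, 64
planes = torch.randn(3, C, L, L)            # F_xy, F_xz, F_yz for one shape
decoder = torch.nn.Sequential(              # lightweight shared velocity decoder
    torch.nn.Linear(3 * C, 128), torch.nn.Softplus(),
    torch.nn.Linear(128, 3))

def query_velocity(p):
    """p: (N, 3) points in [-1, 1]^3 -> (N, 3) velocity vectors."""
    coords = [p[:, [0, 1]], p[:, [0, 2]], p[:, [1, 2]]]   # xy, xz, yz projections
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                        # (1, N, 1, 2), in [-1, 1]
        f = F.grid_sample(plane[None], grid, mode='bilinear',
                          align_corners=True)              # (1, C, N, 1)
        feats.append(f[0, :, :, 0].t())                    # (N, C) per plane
    return decoder(torch.cat(feats, dim=-1))               # concatenate, then decode

def integrate(p, steps=8):
    """Fixed-step Euler integration of dp/dt = v(p) over t in [0, 1]."""
    dt = 1.0 / steps
    for _ in range(steps):
        p = p + dt * query_velocity(p)
    return p

pts = torch.rand(1024, 3) * 2 - 1
print(integrate(pts).shape)   # (1024, 3): points mapped toward template space
```

Note that the three per-plane features are concatenated rather than summed before decoding, mirroring the aggregation choice discussed above.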
Figure 2: **Shape Representation** framework consists of a deformation module \(\mathcal{D}\), a template module \(\mathcal{T}\), and per-object triplane features \(X_{i}\). Given a point \(p\) in the instance space, we compute its corresponding destination point \(p^{\prime}\) in the template space using Eq. 1. The template module then provides the signed distance value \(s^{\prime}\) for this point. During training, we optimize the framework by minimizing the \(L_{1}\) loss between the represented \(s^{\prime}\) and the ground truth \(s\), while incorporating regularization terms.

#### 4.2.1 Training

In our method, we represent the instance shape \(S_{i}\) as a deformed template shape (\(\mathcal{T}\circ\mathcal{D}_{i}\)). To capture the continuous shape of \(S_{i}\), we employ two modules: a continuous diffeomorphic deformation module \(\mathcal{D}\) and a template shape representation \(\mathcal{T}\). As discussed in Sec. 4.2, the diffeomorphic deformation \(d_{i}\) of a point \(p_{i}\) is obtained by integrating the velocity field. The signed distance field (SDF) value of \(p_{i}\) is determined by evaluating the implicit template shape module \(\mathcal{T}\) at the transformed point \(p_{i}^{\prime}\), where \(p_{i}^{\prime}=p_{i}+d_{i}\). During training, our method jointly optimizes the deformation module \(\mathcal{D}\), the template DeepSDF shape \(\mathcal{T}\), and the per-object triplane features \(X_{i}\) to represent a training set of \(S\) objects. The triplane representation provides expressive representation power, allowing us to achieve accurate deformation and correspondence. Unlike NDF [46], which requires multiple deformation modules, our method only requires one deformation module. This not only enables a more accurate deformation representation but also reduces the memory and computation requirements. The training objective function includes a reconstruction loss and a regularization loss:

\[\mathcal{L}_{train}=\mathcal{L}_{\text{rec}}\,+\lambda_{\text{reg}}\,\mathcal{L}_{\text{reg}} \tag{6}\]

where \(\mathcal{L}_{\text{rec}}\) denotes the reconstruction loss between the ground truth SDF value \(s_{i}\) and the represented SDF value \(s^{\prime}_{i}\), and \(\mathcal{L}_{\text{reg}}\) includes a series of regularization terms. Specifically, the reconstruction loss \(\mathcal{L}_{\text{rec}}\) can be written as

\[\mathcal{L}_{\text{rec}}\,=\sum_{i=1}^{S}\sum_{j=1}^{N}L_{1}(\mathcal{T}\circ\mathcal{D}_{i}(p_{i,j}),s_{i,j}) \tag{7}\]

where \(S\) is the number of instance shapes in the training set, \(N\) is the number of sampling points for each shape, \(p_{i,j}\) is the \(j\)-th point on the \(i\)-th shape and \(s_{i,j}\) is the corresponding ground truth SDF value. In addition to the point-wise deformation regularization (\(\sum_{i,j}\left\|\mathcal{D}_{i}(p_{i,j})-p_{i,j}\right\|_{2}\)) and the \(L_{2}\)-norm feature regularization (\(\left\|F^{i}_{xy}\right\|_{2}+\left\|F^{i}_{yz}\right\|_{2}+\left\|F^{i}_{xz}\right\|_{2}\)), the inclusion of total variation (TV) regularization [42] is crucial for simplifying the triplane representation and ensuring smooth deformations.
The overall regularization term in the training objective is defined as:

\[\mathcal{L}_{\text{reg}}\,=\lambda_{\text{PW}}\,\mathcal{L}_{\text{PW}}\,+\,\lambda_{\text{L2}}\,\mathcal{L}_{\text{L2}}\,+\,\lambda_{\text{TV}}\,\mathcal{L}_{\text{TV}} \tag{8}\]

#### 4.2.2 Hybrid Supervision for Inference-Time Reconstruction

In contrast to previous methods [38, 46] that utilize a single latent vector for shape reconstruction, the incorporation of the triplane representation in our work introduces specific challenges when reconstructing new shapes. Specifically, during the optimization process, the features interpolated from the triplane representation for different positions \(p_{i}\) are optimized locally. Since the final diffeomorphic deformation is the integration of velocity vectors along the trajectory in the entire space, the optimized deformation can become trapped in local optima, leading to incorrect global correspondence, as shown in Fig. 3. As a consequence, the reconstructed shape and deformation may exhibit artifacts, and the overall correspondence may be compromised. Therefore, we introduce a hybrid supervision strategy that incorporates both global and local correspondence. In addition to randomly sampled points that provide local supervision, we downsample the entire \(N\times N\times N\) coordinate grid with a predefined step size and include these regularly sampled points for global supervision during optimization. The reconstruction loss during inference is defined as:

\[\mathcal{L}_{\text{rec}}\,=\mathcal{L}_{\text{rec}}^{\text{grid}}+\lambda_{\text{random}}\,\mathcal{L}_{\text{rec}}^{\text{random}} \tag{9}\]

where \(\lambda_{\text{random}}\) is initialized as 0 and is increased as the optimization continues. Once we obtain the grid-structured deformation \(\Phi\), we utilize two additional regularization terms to ensure the diffeomorphism of the deformation field and maintain structural integrity. The first term, selective Jacobian determinant regularization (\(\mathcal{L}_{\text{Jdet}}\)), enforces local orientation consistency:

\[\mathcal{L}_{\text{Jdet}}=\frac{1}{N}\sum_{p}\mathrm{relu}\left(-\left|J_{\Phi}(p)\right|\right) \tag{10}\]

where the Jacobian matrix \(J_{\Phi}\) is defined as:

\[J_{\Phi}(p)=\begin{bmatrix}\frac{\partial\Phi_{x}(p)}{\partial x}&\frac{\partial\Phi_{x}(p)}{\partial y}&\frac{\partial\Phi_{x}(p)}{\partial z}\\ \frac{\partial\Phi_{y}(p)}{\partial x}&\frac{\partial\Phi_{y}(p)}{\partial y}&\frac{\partial\Phi_{y}(p)}{\partial z}\\ \frac{\partial\Phi_{z}(p)}{\partial x}&\frac{\partial\Phi_{z}(p)}{\partial y}&\frac{\partial\Phi_{z}(p)}{\partial z}\end{bmatrix} \tag{11}\]

The second term, deformation regularization (\(\mathcal{L}_{\text{def}}\)), discourages excessively skewed deformations that may lead to unnatural shapes:

\[\mathcal{L}_{\text{def}}=\sum_{p}\left\|\nabla\Phi(p)\right\|^{2} \tag{12}\]

The combination of global and local supervision provides comprehensive guidance during optimization, enabling the model to capture both fine-grained details and global structural consistency.

#### 4.2.3 Point Correspondence and Shape Registration

During inference, our method utilizes the learned template shape from training and the diffeomorphic deformation encoded by the triplane feature to establish point correspondence and shape registration between different instance shapes. For each point \(p_{t}\) on the template shape, we apply the inverse diffeomorphic flow \(\Psi\), as defined in Eq. 2,
to obtain the corresponding points \(p_{i}\) and \(p_{j}\) on instance shapes \(S_{i}\) and \(S_{j}\), respectively, based on their respective triplane features \(X_{i}\) and \(X_{j}\). This process allows us to accurately capture point correspondence and establish registration between the instances, facilitating tasks such as shape comparison, shape synthesis, and texture transfer.

### Topology-preserving Shape Generation

In this section, we present our proposed method for topology-preserving shape generation. Rather than directly generating shapes from scratch, our approach focuses on generating new shapes by deforming a template shape using synthesized diffeomorphic deformations.

#### 4.3.1 Training a Diffusion Model

After the training of the diffeomorphic deformation module \(\mathcal{D}\) and the template shape representation \(\mathcal{T}\), as described in Section 4.2.1, we can leverage the hybrid supervision introduced in Section 4.2.2 to obtain the corresponding per-shape triplane features for the dataset. These optimized sets of triplane features, denoted as \(X\in\mathbb{R}^{N\times(L\times L\times 3C)}\), will be utilized to train our generative model, where \(N\) denotes the number of shapes in the dataset, \(L\) is the dimension of the triplane features and \(C\) is the number of channels for each 2D plane (\(F_{xy}^{i},F_{xz}^{i},F_{yz}^{i}\)).

Figure 3: Left is the reconstruction result with the proposed hybrid supervision. Middle is the ground truth. Right is the result from purely local supervision, which failed to capture the global correspondence.

In our framework, the triplane feature is composed of three 2D plane features. We concatenate these feature planes and take advantage of the strong generative capability of existing 2D diffusion models. Following Sec. 3, we train a diffusion model to learn the reverse diffusion process and predict the added noise from its noisy input by minimizing the following loss function:

\[\begin{split} Loss(\theta)=&\mathbb{E}_{X_{0}\sim q(X),\epsilon\sim\mathcal{N}(0,I),t}\\ &\left[\left\|\epsilon-\epsilon_{\theta}\left(\sqrt{\bar{\alpha}_{t}}X_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,t\right)\right\|^{2}\right]\end{split} \tag{13}\]

where \(\epsilon_{\theta}\) is the predicted noise and \(\theta\) represents the model parameters.

#### 4.3.2 New Shape Generation

During the inference phase, the generation of a new shape involves deforming the template shape based on the diffeomorphic deformation encoded by the sampled triplane features. Following [15], we initiate the process by sampling random Gaussian noise \(X_{T}\sim\mathcal{N}(0,I)\in\mathbb{R}^{L\times L\times 3C}\). Subsequently, we perform iterative denoising for a total of \(T\) steps as:

\[X_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(X_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{\theta}\left(X_{t},t\right)\right)+\sigma_{t}\mathbf{\epsilon} \tag{14}\]

where \(\epsilon\sim\mathcal{N}(0,I)\) if \(t>1\); otherwise \(\epsilon=0\). After sampling, the concatenated triplane feature is split into three axis-aligned 2D planes (\(F_{xy}^{i},F_{xz}^{i},F_{yz}^{i}\)). This generated triplane feature can be interpreted as the diffeomorphic deformation. By following the trajectory defined by the ODE function in Eq. 2, each point on the template shape is displaced towards its corresponding destination point in the instance space. Consequently, the newly generated shape, known as the deformed template, retains the same underlying topology as the template shape, ensuring consistent connectivity.
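A minimal PyTorch sketch of the diffusion training objective (Eq. 13) and the ancestral sampling update (Eq. 14) is given below. The denoiser `eps_model` is a placeholder for the actual 2D diffusion network operating on concatenated triplane features, and the schedule values are illustrative assumptions.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)             # variance schedule beta_t
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)          # \bar{alpha}_t

def train_loss(eps_model, x0):
    """Noise-prediction loss of Eq. 13 on a batch x0 of shape (B, 3C, L, L)."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # q(x_t | x_0)
    return ((eps - eps_model(x_t, t)) ** 2).mean()

@torch.no_grad()
def sample(eps_model, shape):
    """Ancestral sampling, i.e. the reverse update of Eq. 14."""
    x = torch.randn(shape)                        # X_T ~ N(0, I)
    for t in reversed(range(T)):
        eps_hat = eps_model(x, torch.full((shape[0],), t))
        coef = (1 - alphas[t]) / (1 - alpha_bar[t]).sqrt()
        x = (x - coef * eps_hat) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)  # sigma_t * eps
    return x  # split channels into (F_xy, F_xz, F_yz) downstream
```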
## 5 Experiments

In this section, we present the experiments conducted to evaluate our proposed Hybrid Neural Diffeomorphic Flow (HNDF) for shape **representation** and **generation** tasks.

**Datasets:** To assess the effectiveness of our shape representation, we utilize the same medical datasets as [46]: Pancreas CT [41] and Inhouse Liver [4], as these datasets exhibit a clear common topology while demonstrating shape variation, making them suitable for our evaluation. For the shape generation evaluation, we employ the Abdomen1k dataset [28], consisting of 573 valid liver samples and 693 pancreas samples after preprocessing and filtering. Please refer to the supplementary material for detailed data sources and preprocessing information.

**Shape Representation Evaluation:** We evaluate HNDF for shape representation through two experiments. First, we demonstrate the expressive power of the triplane representation and the importance of our hybrid supervision. Evaluation metrics include Chamfer distance (CD) and normal consistency (NC). Second, we evaluate point correspondence and shape registration accuracy, incorporating self-intersection (SI) as an additional metric for geometrical fidelity.

**Shape Generation Evaluation:** For the shape generation evaluation, following [44], we adopt an adapted version of the Fréchet inception distance (FID). This metric considers rendered shading images of our generated meshes, taking human perception into account. As discussed in [51], shading-image FID overcomes limitations of other mesh-based evaluation metrics. FID is computed across 20 views and averaged to obtain a final score

\[\mathrm{FID}=\frac{1}{20}\sum_{i=1}^{20}\left[\left\|\mu_{g}^{i}-\mu_{r}^{i}\right\|^{2}+\mathrm{Tr}\left(\Sigma_{g}^{i}+\Sigma_{r}^{i}-2\left(\Sigma_{r}^{i}\Sigma_{g}^{i}\right)^{\frac{1}{2}}\right)\right] \tag{15}\]

Additionally, precision and recall scores are reported using the method proposed by [43]. Precision reflects the quality of the rendered images, while recall measures the diversity of the generative model.

**Baseline Methods:** We compare our proposed Hybrid Neural Diffeomorphic Flow (HNDF) with several baselines for the shape representation task. This includes DIT [52], DIF-Net [6], and NDF [46], which share the same representation formula as ours, where the shape is represented as a deformed template. We also include AtlasNet [11], which uses explicit mesh parameterization for shape reconstruction. Additionally, we compare with DeepSDF [38] and NFD [44], which directly represent 3D shapes from scratch. For the shape generation task, we explore different sampling strategies and generative models. We compare against DeepSDF [38] and NDF [46], which assume a Gaussian distribution for the global latent vector. We sample new shapes by randomly sampling global vectors from a Gaussian distribution or by performing PCA analysis on the optimized global latent vectors. We also compare with recent generative models such as the point-cloud-based PVD [53], and the neural-field-based 3D-LDM [34] and NFD [44]. However, it is important to note that these models do not consider the preservation of underlying topology.

Figure 4: The triplane feature can be represented as multi-channel images. In our work, we adopt the 2D diffusion model as our shape generation model. The generated triplane feature encodes the diffeomorphic deformation that deforms the template to produce the new shapes.
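For reference, the multi-view FID of Eq. 15 can be computed from per-view Inception statistics as in the following sketch; `stats_real` and `stats_gen` are assumed to hold one \((\mu,\Sigma)\) pair per rendered view, and the function name is illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_over_views(stats_real, stats_gen):
    """Average Frechet distance over 20 rendered views (Eq. 15).

    stats_* : list of 20 (mu, Sigma) pairs of Inception statistics,
              one pair per camera view.
    """
    scores = []
    for (mu_r, sig_r), (mu_g, sig_g) in zip(stats_real, stats_gen):
        covmean = sqrtm(sig_r @ sig_g).real       # drop tiny imaginary parts
        d = np.sum((mu_g - mu_r) ** 2) + np.trace(sig_r + sig_g - 2 * covmean)
        scores.append(d)
    return float(np.mean(scores))
```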
### Shape Representation

We evaluate our shape representation through two evaluations: **representation** on training data and **reconstruction** on unseen data, following the setting of [46]. For each point \(p\) in the instance space, according to Eq. 1, we can get the corresponding destination point \(p^{\prime}\) in the template space, and the trained template module will return the signed distance value for this point. After retrieving the signed distance values for all the grid points, we can then utilize the marching cubes algorithm [25] to extract the mesh for each instance. In the representation comparison, we utilize the trained per-object latent feature to assess the effectiveness of different representation methods. In the reconstruction comparison, we independently optimize the per-object latent feature while keeping the network parameters fixed to evaluate the generality of the methods in shape reconstruction.

Fig. 5 shows the reconstruction results of different methods. According to Table 1, DIF-Net achieves the best results on the training data representation but worse results on the shape reconstruction tasks, indicating overfitting on the training data. Our method and NFD achieve similar overall performance, benefiting from the enhanced representation power of the triplane feature. Compared with NDF, our method achieves superior performance even with a single deformation module, outperforming NDF with 4 consecutive deformation modules. The ablation study on supervision, shown in Tab. 4, demonstrates the significance of our proposed hybrid supervision in achieving accurate reconstruction of new shapes.

### Point Correspondence and Shape Registration

As the methods DeepSDF and NFD can only represent the shape without capturing point correspondence, we compare the remaining methods in Table 2 for the shape registration evaluation; the instance shape is represented by deforming the template, as described in Sec. 4.2.3. Following the trajectory defined by the ODE function in Eq. 2, each point on the template shape moves towards the corresponding destination point in the instance space. As a result, the instance shape, defined as the deformed template, shares the same underlying topology as the template shape, ensuring consistent connectivity. The diffeomorphic deformation from the template towards instance shapes is shown in the left half of Fig. 1. To evaluate the point correspondence and shape registration results, we compare the deformed template with the corresponding ground truth instance shape. We also utilize self-intersection as a metric to assess the preservation of topology and geometric fidelity during the deformation. To ensure a fair comparison, we remesh the template meshes to have the same number of vertices (5000), following the approach in [46].
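For completeness, a minimal sketch of the grid evaluation and marching-cubes extraction step described above is given below; `sdf_fn` is a placeholder for the composed \(\mathcal{T}\circ\mathcal{D}_{i}\) of Sec. 4.2.1, and the resolution and bounds are illustrative choices.

```python
import numpy as np
from skimage import measure

def extract_mesh(sdf_fn, res=128, bound=1.0, level=0.0):
    """Evaluate an SDF on a grid and extract its zero level-set mesh.

    sdf_fn : callable mapping (M, 3) points to (M,) signed distances.
    """
    xs = np.linspace(-bound, bound, res)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing='ij'), axis=-1)
    sdf = sdf_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=level)
    verts = verts / (res - 1) * 2 * bound - bound   # grid indices -> world coords
    return verts, faces, normals
```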
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Representation} & \multicolumn{4}{c}{Reconstruction} \\ \cline{2-9} & \multicolumn{2}{c}{CD Mean(\(\downarrow\))} & \multicolumn{2}{c}{NC Mean(\(\uparrow\))} & \multicolumn{2}{c}{CD Mean(\(\downarrow\))} & \multicolumn{2}{c}{NC Mean(\(\uparrow\))} \\ \cline{2-9} Model/Data & \multicolumn{1}{l}{Pancreas} & Liver & \multicolumn{1}{l}{Pancreas} & Liver & \multicolumn{1}{l}{Pancreas} & Liver & \multicolumn{1}{l}{Pancreas} & Liver \\ \hline DeepSDF & 0.342 & 0.232 & 0.927 & 0.876 & 0.711 & 0.539 & 0.898 & 0.866 \\ NFD & 0.200 & 0.168 & 0.969 & 0.884 & **0.080** & 0.118 & **0.982** & **0.898** \\ \hline AtlasNet & 4.5 & 1.76 & 0.733 & 0.836 & 8.08 & 3.46 & 0.703 & 0.823 \\ DIT & 0.349 & 0.303 & 0.929 & 0.878 & 0.63 & 0.509 & 0.903 & 0.87 \\ DIF-Net & 0.568 & **0.122** & **0.979** & **0.894** & 4.18 & 1.58 & 0.756 & 0.832 \\ NDF & 0.315 & 0.291 & 0.933 & 0.883 & 0.512 & 0.476 & 0.917 & 0.873 \\ \hline Ours & **0.133** & 0.266 & 0.965 & 0.889 & 0.082 & **0.116** & 0.961 & 0.885 \\ \hline \hline \end{tabular} \end{table} Table 1: **Shape Representation** results on Training Shapes and **Shape Reconstruction** results on Unseen Shapes. The chamfer distance results shown above are multiplied by \(10^{3}\).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CD Mean(\(\downarrow\))} & \multicolumn{2}{c}{NC Mean(\(\uparrow\))} & \multicolumn{2}{c}{SI Mean(\(\downarrow\))} \\ \cline{2-7} Model/Data & \multicolumn{1}{l}{Pancreas} & Liver & \multicolumn{1}{l}{Pancreas} & Liver & \multicolumn{1}{l}{Pancreas} & Liver \\ \hline DeepSDF & - & - & - & - & - & - \\ NFD & - & - & - & - & - & - \\ \hline AtlasNet & 8.08 & 3.46 & 0.703 & 0.823 & 5860 & 29.5 \\ DIT & 0.677 & 0.528 & 0.893 & 0.868 & 346 & 11.8 \\ DIF-Net & 10.5 & 2.06 & 0.694 & 0.832 & 2560 & 4.61 \\ NDF & 0.518 & 0.49 & 0.916 & 0.873 & **0** & **2** \\ \hline Ours & **0.099** & **0.125** & **0.946** & **0.882** & 15 & 8 \\ \hline \hline \end{tabular} \end{table} Table 2: **Shape Registration** results on unseen shapes. DeepSDF and NFD do not have scores as they cannot capture the point correspondences.

Figure 5: Reconstruction results on unseen data.

Based on the comparison presented in Table 2, our proposed method achieves better registration accuracy and correct dense correspondence, with only slight self-intersection, which can be considered negligible given the large number of vertices and faces in the template shape.

### Shape Generation

Table 3 presents the evaluation of shape generation across different methods. For DeepSDF and NDF, we sample global latent vectors from a Gaussian distribution and perform PCA analysis, where the parameters are determined by grid search. However, similar to the results in previous experiments, the shapes sampled from DeepSDF and NDF tend to be smoother compared to real instance shapes. PVD is capable of generating variable shapes, but it is limited by its nature to generating only coarse object shapes. 3D-LDM attempts to capture the distribution of the global latent vectors of DeepSDF, but still faces the smoothing issue from the global latent vector. NFD can also generate variable shapes. However, compared to our method, the shapes generated by NFD may not preserve topology, resulting in potentially separated components in the generated shapes, as shown in Fig. 6. In contrast, our method focuses on generating diffeomorphic deformations encoded by triplane features.
The new shapes are generated by deforming the template, allowing us to achieve high fidelity and variability while preserving the underlying topology.

### Ablation Study

**Supervision** Table 4 highlights the significance of our global supervision in shape reconstruction, mitigating the risk of local minima. While incorporating additional mesh supervision improved the results marginally, it also increased computational and memory demands. Thus, we opted to utilize global supervision in our approach.

**Feature Representation** We explored the use of 3D voxel-grid features as an alternative to triplane features, and found that they yielded similar results, as shown in Table 5. However, voxel-grid features required more computation and memory resources for representation and generation tasks. In contrast, the triplane feature representation achieved high reconstruction accuracy with improved memory and computation efficiency.

## 6 Conclusion

In this paper, we introduce Hybrid Neural Diffeomorphic Flow (HNDF) as a novel approach for topology-preserving shape representation and generation. Our method leverages the expressive power of the triplane representation, enabling accurate dense correspondence and high representation accuracy. The proposed hybrid supervision plays a crucial role in capturing both local and global correspondence. Unlike existing methods that primarily focus on directly generating shapes, we explore the concept of generating shapes using deformed templates to preserve the underlying topology. We present a baseline method for topology-preserving shape generation and will continue our exploration for more complex shapes and scenarios. By presenting our research, we aim to contribute to the 3D vision community and provide insights into the potential of topology-preserving shape representation and generation.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{CD Mean(\(\downarrow\))} & \multicolumn{2}{c}{NC Mean(\(\uparrow\))} \\ \cline{2-5} Model/Data & Pancreas & Liver & Pancreas & Liver \\ \hline Ours & **0.082** & 0.116 & **0.961** & 0.885 \\ Ours - Global Sup. & 0.264 & 0.368 & 0.932 & 0.877 \\ Ours + Mesh Sup. & **0.082** & **0.112** & 0.960 & **0.886** \\ \hline \hline \end{tabular} \end{table} Table 4: Shape Reconstruction with various supervision.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{FID Mean(\(\downarrow\))} & \multicolumn{2}{c}{Prec. Mean(\(\uparrow\))} & \multicolumn{2}{c}{Recall Mean(\(\uparrow\))} \\ \cline{2-7} Model/Data & Pancreas & Liver & Pancreas & Liver & Pancreas & Liver \\ \hline DeepSDF \(\bigtriangleup\) & 99.46 & 93.74 & 0.810 & 0.858 & 0.078 & 0.089 \\ DeepSDF \(\bigstar\) & 80.03 & 85.64 & 0.729 & 0.810 & 0.430 & 0.534 \\ NDF \(\bigtriangleup\) & 69.66 & 60.50 & 0.797 & 0.714 & 0.508 & 0.593 \\ NDF \(\bigstar\) & 69.66 & 66.45 & 0.844 & 0.821 & 0.505 & 0.571 \\ \hline PVD & 89.26 & 86.32 & 0.760 & 0.821 & 0.420 & 0.466 \\ 3D-LDM & 78.64 & 79.58 & 0.782 & 0.824 & 0.470 & 0.554 \\ NFD & 72.83 & 74.24 & 0.812 & 0.831 & 0.523 & 0.560 \\ \hline Ours & **52.01** & **48.54** & **0.992** & **0.994** & **0.661** & **0.613** \\ \hline \hline \end{tabular} \end{table} Table 3: **Shape generation** results. Our method achieves better performance according to the FID, precision and recall. \(\bigtriangleup\) denotes sampling from a Gaussian distribution while \(\bigstar\) denotes sampling from PCA.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{CD Mean(\(\downarrow\))} & \multicolumn{2}{c}{NC Mean(\(\uparrow\))} \\ \cline{2-5} Model/Data & Pancreas & Liver & Pancreas & Liver \\ \hline Vector & 0.512 & 0.476 & 0.917 & 0.873 \\ Triplane & **0.082** & 0.116 & **0.961** & **0.885** \\ Voxel & 0.146 & **0.112** & 0.957 & **0.885** \\ \hline \hline \end{tabular} \end{table} Table 5: Shape Reconstruction with various feature representations.

Figure 6: Visualization of generated 3D shapes.
2302.13555
Implementing any Linear Combination of Unitaries on Intermediate-term Quantum Computers
We develop three new methods to implement any Linear Combination of Unitaries (LCU), a powerful quantum algorithmic tool with diverse applications. While the standard LCU procedure requires several ancilla qubits and sophisticated multi-qubit controlled operations, our methods consume significantly fewer quantum resources. The first method (Single-Ancilla LCU) estimates expectation values of observables with respect to any quantum state prepared by an LCU procedure while requiring only a single ancilla qubit, and no multi-qubit controlled operations. The second approach (Analog LCU) is a simple, physically motivated, continuous-time analogue of LCU, tailored to hybrid qubit-qumode systems. The third method (Ancilla-free LCU) requires no ancilla qubit at all and is useful when we are interested in the projection of a quantum state (prepared by the LCU procedure) in some subspace of interest. We apply the first two techniques to develop new quantum algorithms for a wide range of practical problems, ranging from Hamiltonian simulation, ground state preparation and property estimation, and quantum linear systems. Remarkably, despite consuming fewer quantum resources they retain a provable quantum advantage. The third technique allows us to connect discrete and continuous-time quantum walks with their classical counterparts. It also unifies the recently developed optimal quantum spatial search algorithms in both these frameworks, and leads to the development of new ones that require fewer ancilla qubits. Overall, our results are quite generic and can be readily applied to other problems, even beyond those considered here.
Shantanav Chakraborty
2023-02-27T07:15:14Z
http://arxiv.org/abs/2302.13555v4
# Implementing Linear Combination of Unitaries on Intermediate-term Quantum Computers

###### Abstract

Over the years, the framework of Linear Combination of Unitaries (LCU) has been extremely useful for designing a plethora of quantum algorithms. In this work, we explore whether this widely applicable paradigm can be implemented on quantum computers that will be available immediately after the current NISQ stage. To this end, we develop three variants of LCU and apply each to quantum algorithms of practical interest. First, we develop a physically motivated, continuous-time analogue of LCU ("Analog LCU"). This technique, implementable on hybrid qubit-qumode systems, is simpler than its discrete-time counterpart. We use this method to develop analog quantum algorithms for ground state preparation and quantum linear systems. We also develop a randomized quantum algorithm to sample from functions of Hamiltonians applied to quantum states ("Single-Ancilla LCU"). This approach repeatedly samples from a short-depth quantum circuit and uses only a single ancilla qubit. We use this to estimate expectation values of observables in the ground states of a Hamiltonian, and in the solution of quantum linear systems. This method is suitable for early fault-tolerant quantum computers. Our third approach stems from the observation that for several applications, it suffices to replace LCU with randomized sampling of unitaries according to the distribution of the LCU coefficients ("Ancilla-free LCU"). This is particularly useful when one is interested in the projection of a quantum state implemented by an LCU procedure in some subspace of interest. We demonstrate that this technique applies to the spatial search problem and helps establish a relationship between discrete and continuous-time quantum walks with their classical counterparts. Our work demonstrates that generic quantum algorithmic paradigms, such as LCU, can potentially be implemented on intermediate-term quantum devices.

###### Contents

* I Introduction
* II Summary of our results
  * II.1 Analog LCU: Coupling discrete systems with continuous variable systems
  * II.2 Single-Ancilla LCU: Controlled application of randomly sampled unitaries followed by single-shot measurement
  * II.3 Ancilla free LCU: Randomized sampling of unitaries
  * II.4 Other results on quantum walks
  * II.5 Prior work
* III Preliminaries
  * III.1 Linear Combination of Unitaries
  * III.2 Block encoding and Quantum Singular Value Transformation
* IV New approaches for implementing linear combinations of unitaries
  * IV.1 Analog LCU: coupling a discrete primary system to a continuous-variable ancilla
  * IV.2 Robustness of expectation values of observables
  * IV.3 Single-Ancilla LCU: Controlled application of randomly sampled unitaries followed by single-shot measurement
  * IV.4 Ancilla-free LCU: Randomized unitary sampling
* V Applications to Ground state preparation
  * A. Applying Analog LCU: A continuous-time quantum algorithm for ground state preparation
  * B. Applying Single-Ancilla LCU: Sampling from the ground states of Hamiltonians
  * C. Ground state preparation using QSVT on fully fault-tolerant quantum computers
* VI Applications to Quantum linear systems
  * A. Applying Analog LCU: Continuous-time quantum linear systems algorithms
  * B. Applying Single-Ancilla LCU: Sampling from the solution of quantum linear systems
  * C. Quantum linear systems for fully fault-tolerant quantum computers
* VII Applications to quantum walks
  * A. Random and quantum walks: A very brief overview
  * B. Applying Ancilla-Free LCU: Optimal quantum spatial search by fast-forwarding discrete-time random walks
  * C. Applying Ancilla-Free LCU: Optimal quantum spatial search by fast-forwarding continuous-time random walks
  * D. Applying Ancilla-free LCU: Fast-forwarding continuous-time random walks
  * E. Other results: Relationship between discrete-time and continuous-time quantum walks
* VIII Discussion and Open problems
* Acknowledgments
* A - I Robustness of normalization factors
* A - II Proof of Lemma 9
* A - III Polynomial approximations of functions
* A - IV Basics on random walks
* A - V Proof of Lemma 32

## Introduction

We are currently in an era of quantum computing where theoretical advancements have been accompanied by drastic improvements in experimental capabilities [1; 2; 3; 4; 5]. With rapid progress being made, it is reasonable to envision a stage in the near future where quantum computing will transition away from the NISQ era [6; 7]. Quantum devices available immediately after the current NISQ stage will most likely not have the capabilities of a large-scale, fully-programmable, fault-tolerant quantum computer. These devices would have short circuit depth and only a limited number of logical qubits: the so-called _early fault-tolerant quantum computers_ [8; 9; 10; 11; 12]. On the other hand, for particular quantum technological platforms, it might be possible to engineer certain specific interactions more precisely, and for longer time-scales, than others. For instance, it might be easier to engineer hybrid qubit-qumode systems in the intermediate term [3; 13; 14; 15; 16; 17; 18], as many of the most promising quantum technological platforms, such as superconducting systems [13], ion-traps [19], and photonic systems [20], naturally have access to continuous variables. We refer to such devices, which will become available shortly after the current stage, as "intermediate-term quantum computers". It is thus crucial to develop quantum algorithms of practical interest that are implementable on intermediate-term quantum computers. Indeed, quantum algorithms tailored to early fault-tolerant quantum computers are already being developed [8; 9; 10; 11; 12]. With many quantum technological platforms vying for supremacy, it is also essential to develop physically motivated quantum algorithms that can exploit the degrees of freedom that are naturally available to such platforms. Over the years, only a few quantum algorithmic frameworks have found broad applicability: they can be used to solve various problems of interest. However, such frameworks are only implementable on fully fault-tolerant quantum computers, which might be decades away.
In this work, we ask the following question: _Can we bring generic quantum algorithmic paradigms closer to implementation on intermediate-term quantum devices?_

The framework of Linear Combination of Unitaries (LCU) is one such paradigm. Over the years, it has been widely applied and has been central to the development of a plethora of useful quantum algorithms ranging from Hamiltonian simulation [21; 22; 23; 24], quantum linear systems [25; 26] and differential equations [27; 28; 29], quantum walks [30; 31; 32], ground state preparation [33; 34; 35] and a large class of optimization problems [36; 37]. In this work, we significantly enhance the applicability of the LCU framework to intermediate-term quantum devices: namely, hybrid quantum systems and early fault-tolerant quantum computers.

Given a Hermitian matrix \(H\), LCU implements any function \(f(H)\) that can be well-approximated by a linear combination of unitaries. Despite its broad applicability, LCU has its drawbacks when it comes to being implementable in the intermediate term. First, for many problems of interest, there is a significant overhead in terms of the number of ancilla qubits needed. Additionally, the procedure requires implementing highly controlled unitary operations, which can be challenging for near- to intermediate-term quantum computers. In this work, we demonstrate that the issues concerning LCU can be addressed, and the framework can be used to develop new quantum algorithms that are more amenable to implementation. To this end, we introduce three new approaches to implementing LCU and demonstrate that each can be applied to develop quantum algorithms of practical interest.

Firstly, we develop "**Analog LCU**", a physically motivated, continuous-time analogue for implementing a linear combination of unitaries. This technique requires coupling the system Hamiltonian \(H\) to a continuous-variable ancilla system (such as a one-dimensional quantum Harmonic oscillator), initialized in some easy-to-prepare continuous-variable quantum state (such as a Gaussian). The overall system is then evolved according to the resulting interaction Hamiltonian. Although this approach requires a continuous-variable ancilla register, the overall algorithm is simpler. Moreover, this technique might be particularly useful for intermediate-term quantum computers, as such interactions can already be engineered on several quantum technological platforms. Examples of discrete systems coupled to continuous-variable ones include ion traps and superconducting systems [13; 14; 15; 16; 17]. We show that this approach can be used to develop novel analog quantum algorithms for ground state preparation and solving quantum linear systems.
Our randomized quantum algorithm requires only one ancilla qubit that acts as a control and requires implementing two (controlled) unitaries sampled according to the distribution of the LCU coefficients, followed by a single-shot measurement. By repeating this simple quantum circuit, one obtains samples whose average converges to the expectation value we seek to estimate. Another advantage of this method arises in the case where \(H\) can itself be expressed as a linear combination of unitaries, i.e. \(H=\sum_{j}c_{j}H_{j}\), where \(H_{j}\) is unitary. Then, in order to implement any \(f(H)\) that can be well-approximated by some Fourier series, one can also take advantage of the recently developed randomized Trotter methods [9, 38, 39, 40]. The overall decomposition of \(f(H)\) would be a linear combination of products of \(H_{j}\), which can be repeatedly sampled (according to the distribution of the resulting LCU coefficients) while adding no overhead in terms of ancilla qubits. These features also make it appealing for early fault-tolerant quantum computers. We apply this method to estimate the expectation values of observables in the ground states of a Hamiltonian and in the solution of quantum linear systems.

For several applications, it is sufficient to obtain only a projection of the LCU state \(f(H)\left|\psi_{0}\right\rangle\) in some subspace. In such scenarios, we show that the ancilla registers can be dropped entirely. We call this the "**Ancilla-free LCU**" technique. This approach involves randomly sampling unitaries according to the distribution of the LCU coefficients, without any ancilla registers. More precisely, for any matrix \(H\), if \(f(H)\approx\sum_{j}c_{j}U_{j}\), we sample \(U_{j}\) according to the distribution \(\mathcal{D}\sim\{c_{j}/\|c\|_{1}\}\). This results in some average density matrix \(\rho\), for which the projection in this subspace can be proven to be at least as large. This is typically the case for quantum walk-based problems, where we are interested in obtaining a state that has a good overlap with some subset of nodes in a graph. For instance, consider quantum spatial search algorithms, where we are interested in preparing a state that has a good overlap with the marked nodes of any ergodic, reversible Markov chain. Consequently, we use this technique to design optimal spatial search algorithms by discrete-time and continuous-time quantum walks, also placing recent results in this context [31, 32, 41]. Note that for these algorithms, we do not need any extra registers (other than the walk space). In addition to this, these techniques allow us to connect discrete and continuous-time quantum walks with their classical counterparts. We believe that this approach can be applied to a wide class of problems beyond quantum walks, such as quantum simulated annealing [42] and quantum Metropolis algorithms [43, 44].

Ever since the work of Childs [45], one of the long-standing open problems has been to obtain a relationship between discrete-time and continuous-time quantum walks. Childs showed that one can obtain a discrete-time quantum walk for any Hamiltonian. However, given access to the evolution operator of a continuous-time quantum walk, it is not known whether one can obtain a discrete-time quantum walk. Virtually no progress has been made in this direction in decades.
Along the way, using quantum singular value transformation (QSVT) [46, 47], we show how one can obtain discrete-time quantum walks from continuous-time quantum walks (and vice versa), thereby making significant progress on this problem.

The paper is organized as follows. We begin by providing a brief overview of the main results and also relate our work to prior results in Sec. II. In Sec. III, we review some basic definitions and techniques that we will be using in this article. We formally describe the three different approaches to implementing LCU in Sec. IV. The rest of the article involves applying these techniques to develop new quantum algorithms. In Sec. V, we make use of our techniques to develop new quantum algorithms for ground state preparation of Hamiltonians and also to sample from their ground states. We develop analog quantum linear systems algorithms and also show how to sample from the solution of quantum linear systems in Sec. VI. The quantum algorithms developed in Sec. V and Sec. VI mainly make use of the "Analog LCU" and the "Single-Ancilla LCU" frameworks, respectively. In Sec. VII, we make use of the "Ancilla-free LCU" technique to relate discrete and continuous-time quantum walks with their classical counterparts. In addition, we also show how one can obtain discrete-time quantum walks from continuous-time quantum walks and vice versa. Finally, we conclude and discuss open problems in Sec. VIII.

## Summary of our results

In this section, we outline the main results of this article. The LCU method involves implementing functions of matrices that can be approximated by linear combinations of unitaries. We begin by briefly outlining each of the three variants of implementing LCU that we develop in this article. Then we state the complexities of the various algorithms to which we apply these techniques. Finally, we also state some other related results that we develop along the way.

### Analog LCU: Coupling discrete systems with continuous variable systems

Firstly, we develop a more physical model for LCU in continuous-time. For any Hamiltonian \(H\), consider any \(f(H)\) that can be well approximated by a truncated Fourier transform, i.e.,

\[\left\|f(H)-\int_{a}^{b}\ dz\ c(z)\cdot e^{-iHzt}\right\|\leq\varepsilon,\]

where \(c:\mathbb{R}\mapsto\mathbb{R}\backslash\{0\}\). Then, by a purely continuous-time procedure, for any initial state \(\left|\psi_{0}\right\rangle\), we can prepare a state that is \(O\left(\varepsilon/\|c\|_{1}\right)\)-close to \(\frac{f(H)|\psi_{0}\rangle}{\left\|f(H)|\psi_{0}\rangle\right\|}\), where \(\|c\|_{1}=\int_{a}^{b}dz\ c(z)\). This approach requires coupling \(H\) to a continuous variable ancilla system (such as a one-dimensional quantum Harmonic oscillator) prepared in a continuous variable state. We show in this work that for several applications, this state is easy to prepare (such as a Gaussian). The overall system is then evolved according to the interaction Hamiltonian \(H^{\prime}=H\otimes\hat{z}\) for an appropriate time \(T\). This approach is not only simpler than its discrete-time counterpart, but such hybrid qubit-qumode coupling can be implemented in a number of quantum technological platforms such as trapped ions, cavity (or circuit) QED, photonic systems, and superconducting qubits [13; 14; 15; 16; 17].
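As a purely classical sanity check of the mechanism behind Analog LCU, the sketch below numerically verifies the Gaussian instance of such a decomposition: averaging the unitaries \(e^{-iz\sqrt{2t}H}\) with Gaussian weights \(c(z)\) reproduces \(e^{-tH^{2}}\), the filter used later for ground state preparation. This simulates the integral on a classical computer and is not the hybrid qubit-qumode protocol itself; all names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, t = 4, 1.5
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                                # a toy Hermitian H

# Gaussian weights c(z) = exp(-z^2/2)/sqrt(2*pi); eigenvalue by eigenvalue,
# int dz c(z) exp(-i*z*sqrt(2t)*lambda) = exp(-t*lambda^2).
zs = np.linspace(-8.0, 8.0, 4001)
dz = zs[1] - zs[0]
w = np.exp(-zs**2 / 2) / np.sqrt(2 * np.pi) * dz
approx = sum(wi * expm(-1j * z * np.sqrt(2 * t) * H) for z, wi in zip(zs, w))
err = np.linalg.norm(approx - expm(-t * H @ H))
print(err)                                       # small; shrinks as the z-grid is refined
```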
Figure 1: _Summary of the main results – The three approaches to LCU and their applications_

Our motivation is not only to provide an alternative approach to implementing LCU but to make this paradigm more implementable for intermediate-term quantum devices.

**Applications:** We apply these techniques to develop analog quantum algorithms for ground state preparation and for solving quantum linear systems. As mentioned previously, our aim is to couple the system Hamiltonian with an ancillary continuous-variable system.

1. **Ground state preparation:** Given a Hamiltonian \(H\), we couple this system with a continuous variable ancillary system via the interaction Hamiltonian \(H^{\prime}=H\otimes\hat{z}\). The ancilla system is prepared in an easy-to-prepare continuous variable state, namely a Gaussian. We show that given an initial state \(\ket{\psi_{0}}\) that has an overlap of at least \(\eta\) with the ground state, simply evolving the system according to \(H^{\prime}\) results in a state proportional to \(f(H)\ket{\psi_{0}}\) in the first register, where \(f(H)=e^{-tH^{2}}\). We show that, with probability \(\eta^{2}\), this state is \(\varepsilon\)-close to the ground state of \(H\) (provided its ground energy is known up to some precision). The overall time required is \[T=O\left(\frac{1}{\Delta}\sqrt{\log\left(\frac{1}{\eta\varepsilon}\right)}\right),\] where \(\Delta\) is the spectral gap of \(H\) (Lemma 8). This quantum algorithm appeared in [32] and also independently in Ref. [35]. Here we place this in the context of "Analog LCU", as it provides useful intuition for (i) the quantum linear systems algorithms we develop using similar techniques and (ii) the "Single-Ancilla LCU" method to solve this problem.

2. **Quantum linear systems:** We provide two quantum algorithms for this problem. For both algorithms, we couple \(H\) to two ancillary continuous variable systems (Harmonic oscillators), i.e. \(H^{\prime}=H\otimes\hat{y}\otimes\hat{z}\). The first approach works for any Hermitian matrix \(H\) with eigenvalues in the domain \([-1,-1/\kappa]\cup[1/\kappa,1]\), where \(\kappa\) is an upper bound on the condition number (the ratio between the maximum and the minimum non-zero eigenvalue) of \(H\). The first register contains the initial state \(\ket{b}\), the second register is prepared in the first excited state of the first Harmonic oscillator, while the third register is prepared in the ground state of a "particle in a ring" of unit radius [48]. This algorithm (see Sec. VI.1) can be seen as an analog variant of the quantum linear systems algorithm of Childs, Kothari and Somma [25]. In order to obtain a quantum state that is \(\varepsilon/\kappa\)-close to \(\ket{x}=\frac{H^{-1}\ket{b}}{\left\|H^{-1}\ket{b}\right\|}\) in the first register, with overlap at least \(1/\kappa\), we require evolving the system according to \(H^{\prime}\) for a time \[T=O\left(\kappa\sqrt{\log\left(\frac{\kappa}{\varepsilon}\right)}\right).\] Typically in continuous variable systems, Gaussian states are easier to prepare and engineer [49]. Thus, we also provide an analog quantum algorithm for solving quantum linear systems (for positive semidefinite Hamiltonians) in which both the ancilla registers are now prepared in Gaussian states.
Evolving this system according to \(H^{\prime}\) prepares a state that is \((\varepsilon/\kappa)^{3/2}\)-close to \(\ket{x}\), with overlap \(\Omega(1/T)\), in time \[T=O\left(\frac{\kappa^{3/2}}{\sqrt{\varepsilon}}\right).\] Although the complexity is worse than that of the first approach, this quantum algorithm requires preparing only Gaussian states, which we expect to be easier for intermediate-term quantum computers to implement.

### Single-Ancilla LCU: Controlled application of randomly sampled unitaries followed by single-shot measurement

Given any Hamiltonian \(H\), we develop a randomized quantum algorithm that estimates expectation values \(\operatorname{Tr}[Of(H)\rho_{0}f(H)^{\dagger}]\) to arbitrary accuracy, where \(O\) is any observable and \(f(H)\) can be well-approximated by a linear combination of unitaries. The quantum algorithm is suitable for implementation on early fault-tolerant quantum computers: it repeatedly runs the simple short-depth quantum circuit shown in Fig. 2 and requires only one ancilla qubit. Our approach is a generalization of the Hamiltonian simulation procedure of Faehrmann et al. [9], wherein the authors used this circuit to generate randomized multi-product formulas. We generalize this to implement any \(f(H)\) which can be approximated by an LCU. The quantum circuit applies controlled and anti-controlled versions of \(V_{1}\) and \(V_{2}\), respectively, where \(V_{1}\) and \(V_{2}\) are sampled according to \(\left\{c_{j}/\|c\|_{1}\,,U_{j}\right\}\). This is followed by a measurement of the observable \(X\otimes O\), which gives the output of a single run. This procedure is then repeated enough times so that the sample mean of the outcomes converges to \(\operatorname{Tr}[Of(H)\rho_{0}f(H)^{\dagger}]\). The overall procedure is formally stated in Algorithm 1, whose correctness we prove via the following theorem:

**Theorem 1** (Sampling from functions of Hamiltonians applied to quantum states).: _Let \(\varepsilon,\delta,\gamma\in(0,1)\) be some parameters. Let \(O\) be some observable and \(\left|\psi_{0}\right\rangle\) be some initial state. Suppose there is a Hermitian matrix \(H\in\mathbb{R}^{N\times N}\) such that \(\left\|f(H)-\sum_{j}c_{j}U_{j}\right\|\leq\gamma\), where each \(U_{j}\) is a unitary and_ \[\gamma\leq\frac{\varepsilon}{6\|O\|\|f(H)\|}.\] _Furthermore, let_ \[T\geq\frac{8\|O\|^{2}\ln(2/\delta)\|c\|_{1}^{4}}{\varepsilon^{2}}.\] _Then, Algorithm 1 estimates \(\mu\) such that_ \[\left|\mu-\operatorname{Tr}[Of(H)\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|f(H)^{\dagger}]\right|\leq\varepsilon,\] _with probability at least \(1-\delta\), using one ancilla qubit and \(T\) runs of the quantum circuit in Fig. 2._

Overall, \(T\) samples are required, and if \(\mathcal{C}_{j}\) is the cost of implementing \(U_{j}\) and the state \(\left|\psi_{0}\right\rangle\) can be prepared at cost \(\mathcal{C}_{\psi_{0}}\), then the cost of each run of the quantum circuit is upper bounded by \(\max_{j}\mathcal{C}_{j}+\mathcal{C}_{\psi_{0}}\). Furthermore, when \(f(H)\) has a well-defined Fourier series, and each \(U_{j}=e^{-ijH}\), the cost of each run is determined by the cost of the underlying Hamiltonian simulation algorithm. For instance, consider that \(H=\sum_{j}\alpha_{j}\Lambda_{j}\), where the \(\Lambda_{j}\)'s are local unitary operators (strings of Pauli matrices).

Figure 2: _Quantum circuit for the “Single-Ancilla LCU” procedure. For \(f(H)\approx\sum_{j}c_{j}U_{j}\), repeated runs of this short-depth quantum circuit can estimate \(\operatorname{Tr}[Of(H)\rho_{0}f(H)^{\dagger}]\) to arbitrary accuracy. For this, \(V_{1}\) and \(V_{2}\) are sampled at random according to \(\mathcal{D}\sim\left\{c_{j}/\|c\|_{1}\,,U_{j}\right\}\). Each run of the circuit outputs a random variable corresponding to the outcome of the measurement of the observable \(X\otimes O\). Overall, we need to repeat this circuit \(T\) times, which is large enough for the sample mean of the \(T\) observations to converge to the expectation value._
Then one can use any of the recently developed randomized Trotter-based approaches for Hamiltonian simulation, such as qDRIFT [38, 39], randomized multi-product formulas [9, 50], and others [40, 51]. These methods do not require ancilla qubits and could be easily incorporated into our algorithm.

**Applications:** We use this technique to sample from the ground states of Hamiltonians and also from the solution of quantum linear systems.

1. **Ground state preparation:** Suppose \(H\) is a Hamiltonian with ground state \(\left|v_{0}\right\rangle\) and spectral gap \(\Delta\), whose ground energy is known to a certain precision. Also, consider some initial state \(\left|\psi_{0}\right\rangle\) that can be prepared with cost \(\tau_{\psi_{0}}\) and has an overlap of at least \(\eta\) with \(\left|v_{0}\right\rangle\). For any observable \(O\), by considering the LCU of the function \(f(H)=e^{-tH^{2}}\), we can use Algorithm 1 to output \(\mu\) such that \[\left|\mu-\left\langle v_{0}|O|v_{0}\right\rangle\right|\leq\varepsilon,\] with probability at least \(1-\delta\), using \[T\geq O\left(\frac{\left\|O\right\|^{2}\ln(2/\delta)}{\varepsilon^{2}\eta^{4}}\right),\] runs of a quantum circuit where the cost of each run is \(\tau_{\max}+\tau_{\psi_{0}}\), where \[\tau_{\max}=O\left(\frac{1}{\Delta}\log\left(\frac{\left\|O\right\|}{\varepsilon\eta}\right)\right).\] This has been formally stated in Theorem 11.

2. **Quantum linear systems:** Suppose we have a Hermitian matrix \(H\) whose eigenvalues lie in \([-1,-1/\kappa]\cup[1/\kappa,1]\). Let us assume that the initial quantum state \(\left|b\right\rangle\) can be prepared at cost \(\tau_{b}\). Then, we show that we can use Algorithm 1 to output some \(\mu\) such that \[\left|\mu-\left\langle x|O|x\right\rangle\right|\leq\varepsilon,\] with probability at least \(1-\delta\), using \[T\geq O\left(\frac{\left\|O\right\|^{2}\kappa^{4}\log^{2}\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\ln(2/\delta)}{\varepsilon^{2}}\right),\] runs of the quantum circuit in Fig. 2, where the cost of each run is \(\tau_{\max}+\tau_{b}\), where \[\tau_{\max}=O\left(\kappa\log\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\right).\] This approach makes use of the LCU decomposition of \(f(H)=H^{-1}\) in Ref. [25] (see Theorem 17). We also develop an alternative approach that has a slightly better sample complexity (without the log factor). This makes use of the key insight of the recently developed adiabatic approaches for solving quantum linear systems [52, 53, 54, 55]: namely, given any \(H\), we can construct some Hamiltonian \(H^{\prime}\) such that the \(0\)-eigenstate of \(H^{\prime}\) corresponds to the solution of the quantum linear system, i.e. \(\left|x\right\rangle=H^{-1}\left|b\right\rangle/\left\|H^{-1}\left|b\right\rangle\right\|\). So, we are able to exploit this connection to use the procedure in item 1 above (for details, see Theorem 19).
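The statistics of Algorithm 1 can be mimicked classically on a toy example, which may help build intuition for the sample complexities above. In the sketch below, the pairs \((V_{1},V_{2})\) are sampled exactly as in the circuit of Fig. 2, but each run's expected outcome \(\operatorname{Re}\langle\psi_{0}|V_{2}^{\dagger}OV_{1}|\psi_{0}\rangle\) is computed directly instead of simulating single-shot measurements; the filter \(f(H)\), the coefficients (assumed non-negative), and all names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
psi0 = rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)
O = np.diag(rng.standard_normal(n))              # a toy observable

# A toy LCU with non-negative weights: f(H) = sum_j c_j exp(-i*t_j*H).
ts = np.linspace(-2.0, 2.0, 9)
cs = np.exp(-ts**2)
c1 = cs.sum()
Us = [expm(-1j * tj * H) for tj in ts]
fH = sum(c * U for c, U in zip(cs, Us))

# Each run has expected outcome Re<psi0|V2^dag O V1|psi0> with
# (V1, V2) ~ {c_j/||c||_1}; averaging and rescaling by ||c||_1^2
# converges to Tr[O f(H) rho_0 f(H)^dag].
vecs = [U @ psi0 for U in Us]
Ovecs = [O @ v for v in vecs]
T = 100_000
est = 0.0
for _ in range(T):
    j, k = rng.choice(len(cs), size=2, p=cs / c1)
    est += np.real(np.vdot(vecs[k], Ovecs[j]))
est *= c1**2 / T

exact = np.real(np.vdot(fH @ psi0, O @ (fH @ psi0)))
print(est, exact)                                # agree up to ~||c||_1^2 / sqrt(T)
```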
However, the construction of \(H^{\prime}\) requires an additional qubit.

### Ancilla-free LCU: Randomized sampling of unitaries

Suppose for some Hermitian matrix \(H\), we intend to implement \(f(H)\) such that \[\left\|f(H)-\sum_{j=1}^{M}c_{j}U_{j}\right\|\leq\gamma,\] where \(\gamma\in[0,1)\), \(c_{j}\in\mathbb{R}\) and \(U_{j}\) is some unitary. For instance, it may be the case that \(U_{j}=e^{-ijH}\). We show that dropping the ancilla register altogether and simply sampling \(U_{j}\) according to \(\mathcal{D}\sim\left\{c_{j}/\left\|c\right\|_{1}\right\}\) allows us to prepare the average density matrix \[\rho=\frac{1}{\left\|c\right\|_{1}}\sum_{j=1}^{M}c_{j}U_{j}\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|U_{j}^{\dagger},\] where \(\left\|c\right\|_{1}=\sum_{j=1}^{M}c_{j}\). In Sec. IV.4, we show that this suffices, for instance, if we are interested in the projection of \(f(H)\left|\psi_{0}\right\rangle\) in some subspace of interest. We may instead prepare \(\rho\) by sampling \(U_{j}\) according to \(\mathcal{D}\) (or evolving the initial state under \(H\) for a random time). The projection of \(\rho\) in this subspace is guaranteed to be at least as large. Formally, we prove the following theorem:

**Theorem 2** (Randomized unitary sampling).: _Let \(H\in\mathbb{R}^{N\times N}\) be a Hermitian matrix. Also let \(\varepsilon\in(0,1)\) and let \(f:\mathbb{R}\mapsto\mathbb{R}\) be some function such that_ \[\left\|f(H)-\sum_{j=1}^{M}c_{j}U_{j}\right\|\leq\frac{\varepsilon}{3\left\|f(H)\right\|},\] _for some unitaries \(U_{j}\) and \(c_{j}\in\mathbb{R}\backslash\{0\}\). For some initial state \(\rho_{0}\), define the density matrix_ \[\rho=\frac{1}{\left\|c\right\|_{1}}\sum_{j=1}^{M}c_{j}U_{j}\rho_{0}U_{j}^{\dagger},\] _where \(\left\|c\right\|_{1}=\sum_{j=1}^{M}c_{j}\). Then, for any projector \(\Pi\),_ \[\left\|c\right\|_{1}^{2}\mathrm{Tr}\left[\Pi\rho\right]\geq\mathrm{Tr}[\Pi f(H)\rho_{0}f(H)^{\dagger}]-\varepsilon.\]

We prove Theorem 2 in Sec. IV.4. For simplicity, let us assume that \(\left\|c\right\|_{1}^{2}<1\). Then the result of Theorem 2 can be interpreted as follows: if we are interested in the projection of \(f(H)\left|\psi_{0}\right\rangle\) in some subspace of interest, instead of implementing the LCU procedure, we can prepare \(\rho\) by random unitary sampling. The projection of \(\rho\) in this subspace is guaranteed to be at least as large. The same approach would also work in the continuous-time setting described previously. This makes this technique suitable for application to the spatial search problem, where we are interested in preparing a quantum state with a good overlap with the space spanned by the marked vertices. It can also be applied to quantum simulated annealing [42], where the goal is to prepare a state that has a good overlap with the ground state. In this work, we apply this procedure to the former problem.
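Before turning to the applications, the guarantee of Theorem 2 is easy to check numerically on a toy LCU, as in the sketch below. Here the LCU error \(\gamma\) is zero by construction, so the inequality holds with \(\varepsilon=0\); non-negative coefficients and all names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
psi0 = rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)
rho0 = np.outer(psi0, psi0)

# Toy LCU with c_j > 0: f(H) = sum_j c_j exp(-i*tau_j*H).
taus = np.linspace(0.0, 3.0, 8)
cs = np.exp(-taus)
c1 = cs.sum()
Us = [expm(-1j * tau * H) for tau in taus]
fH = sum(c * U for c, U in zip(cs, Us))

# Trace-one average state from sampling U_j with probability c_j/||c||_1.
rho = sum(c * U @ rho0 @ U.conj().T for c, U in zip(cs, Us)) / c1

Pi = np.zeros((n, n))
Pi[:2, :2] = np.eye(2)                           # projector onto a 2-dim subspace

lhs = c1**2 * np.real(np.trace(Pi @ rho))
rhs = np.real(np.trace(Pi @ fH @ rho0 @ fH.conj().T))
print(lhs >= rhs - 1e-9, lhs, rhs)               # Theorem 2's guarantee
```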
**Applications:** Given any ergodic, reversible Markov chain \(P\) of \(\left|X\right|=n\) nodes (out of which \(M\) are marked), the spatial search algorithm finds a marked vertex. We discuss two separate quantum algorithms by discrete-time quantum walks that solve this problem quadratically faster than classical random walks. The first relies on fast-forwarding discrete-time random walks and formalizes an unproven observation in [41]. The second quantum algorithm fast-forwards continuous-time random walks. Furthermore, we also briefly discuss the optimal spatial search algorithm by continuous-time quantum walk of Ref. [32]. The running time of all these algorithms scales as the square root of the hitting time of classical random walks (up to log factors), which is optimal. We demonstrate that the "Ancilla-free LCU" method captures all these quantum algorithms. We briefly describe the results we obtain in both the discrete and the continuous-time settings:

1. **Discrete-time quantum walks:** We use the fact that if any Hamiltonian \(H\) is encoded in the top-left block of a unitary \(U_{H}\), we can obtain a block encoding [26, 56] of \(H^{t}\) or \(e^{-t(I-H)}\) by implementing a linear combination of Chebyshev polynomials of \(H\). Both these procedures can be implemented at cost roughly \(O(\sqrt{t})\). When \(H=D\) (the discriminant matrix of \(P\)), \(D^{t}\) results in a fast-forwarding of discrete-time random walks [30]. On the other hand, implementing \(e^{-t(I-D)}\) yields a fast-forwarding of continuous-time random walks. Now, for the spatial search problem, we do not need to implement a full LCU procedure and can make use of Theorem 2 instead. Using the framework of interpolated Markov chains (see Sec. A - IV for details of these terms), and for a specific initial state \(|\sqrt{\pi_{U}}\rangle\) (related to the stationary distribution of the interpolated random walk), we can show that for the first spatial search algorithm, the "Ancilla-free LCU" procedure prepares an average density matrix \(\rho\) such that \[\mathrm{Tr}[(I\otimes\Pi_{M})\rho]\geq\left\|\Pi_{M}D(s)^{T}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}-\varepsilon,\] where \(\Pi_{M}\) is a projection onto the marked subspace. In Ref. [31], the authors proved that the RHS of the aforementioned inequality is \(\tilde{\Omega}(1)\) for \(T=\widetilde{O}\left(\sqrt{HT}\right)\) and some randomly chosen value of \(s\in[0,1)\), where \(HT\) is the hitting time of a random walk on \(P\) (see Algorithm 3). This also formalizes the observation of Ref. [41]. For our second spatial search algorithm by discrete-time quantum walk (see Algorithm 5), Theorem 2 prepares an average density matrix \(\rho\) such that \[\mathrm{Tr}[(I\otimes\Pi_{M})\rho]\geq\left\|\Pi_{M}e^{T(D(s)-I)}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}-\varepsilon,\] where again the RHS can be proven to be \(\widetilde{\Omega}(1)\) for \(T=\widetilde{O}\left(\sqrt{HT}\right)\).

2. **Continuous-time quantum walks:** The optimal quantum spatial search algorithm by continuous-time quantum walk [32] is yet another demonstration of ancilla-free LCU, where a quadratic speedup for this problem can be obtained simply by randomized time evolution. For this, the key idea is that there exists a Hamiltonian \(H\) that corresponds to a quantum walk on the edges of the underlying Markov chain \(P\) such that \(H^{2}\left|\psi\right\rangle\left|\widetilde{0}\right\rangle=[(D^{2}-I)\otimes I]\left|\psi\right\rangle\left|\widetilde{0}\right\rangle\). Then, by using the analog LCU procedure to implement \(f(H)=e^{-tH^{2}}\), one can implement \(e^{t(D^{2}-I)}\). As before, for the spatial search problem we can bypass the LCU itself and use Theorem 2 instead to obtain \(\rho\) such that \[\mathrm{Tr}[(I\otimes\Pi_{M})\rho]\geq\left\|\Pi_{M}e^{T(D^{2}(s)-I)}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}-\varepsilon,\] where in [32] the RHS is shown to be \(\widetilde{\Omega}(1)\) for \(T=\widetilde{O}\left(\sqrt{HT}\right)\).

These algorithms allow us to relate discrete-time and continuous-time random walks with their quantum counterparts.
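For readers less familiar with the classical objects appearing above, the following sketch constructs the discriminant matrix and an interpolated Markov chain for a toy reversible walk; the absorbing-chain form of \(P^{\prime}\) follows the standard interpolated-walk construction (see Sec. A - IV), and all names are illustrative.

```python
import numpy as np

def discriminant(P):
    """Discriminant matrix D(P)_{xy} = sqrt(P_{xy} * P_{yx}).

    For an ergodic, reversible P, D is symmetric and similar to P,
    so it carries the same spectrum.
    """
    return np.sqrt(P * P.T)

def interpolated(P, marked, s):
    """Interpolated walk P(s) = (1-s) P + s P', where P' makes every
    marked vertex absorbing (its row replaced by the identity row)."""
    Pp = P.copy()
    Pp[marked, :] = 0.0
    Pp[marked, marked] = 1.0
    return (1 - s) * P + s * Pp

# Random walk on a cycle of 8 nodes with node 0 marked.
n = 8
P = np.zeros((n, n))
for x in range(n):
    P[x, (x - 1) % n] = P[x, (x + 1) % n] = 0.5
Ds = discriminant(interpolated(P, [0], s=0.9))   # D(s) used in the bounds above
```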
### Other results on quantum walks Other than the aforementioned contributions, we also develop several additional results along the way. A long-standing open problem in this area has been to establish a relationship between discrete-time quantum walks and continuous-time quantum walks. In a seminal work, Childs showed that any dynamics generated by a continuous-time quantum walk can be simulated by a discrete-time quantum walk [45]. However, whether discrete-time quantum walks can be obtained from the continuous-time quantum walk evolution operator has been unknown. In this work, we also make progress towards answering this question. In fact, we complete the picture by also connecting random and quantum walks in both discrete and continuous-time settings. We first show that given a block encoding of any Hamiltonian \(H\), it is possible to obtain a discrete-time quantum walk on \(H\). This was also observed in [41]. As we have already discussed, from this block encoding, it is possible to fast-forward discrete-time and continuous-time random walks. Also, following the result of [32], we know that a continuous-time quantum walk can fast-forward a continuous-time random walk. * **From discrete-time quantum walks to continuous-time quantum walks:** In this work, we also show that from any given unitary \(U_{H}\), which is a block encoding of \(H\) implemented in cost \(T_{H}\), one can construct a block encoding of the Hamiltonian \(H_{P}=i[U_{H},\Pi_{0}]\) in cost \(O(T_{H})\), where \(\Pi_{0}=I\otimes|\bar{0}\rangle\bra{\bar{0}}\) such that \(|\bar{0}\rangle\) is some reference state. When \(H\) corresponds to the discriminant matrix of an ergodic, reversible Markov chain \(P\), \(e^{-iH_{P}t}\) corresponds to a continuous-time quantum walk on the edges of \(P\). Thus, using Hamiltonian simulation algorithms, we are able to obtain a continuous-time quantum walk, starting from a discrete-time quantum walk. * **From continuous-time quantum walks to discrete-time quantum walks:** Suppose \(H\) is some Hamiltonian that encodes the connectivity of some graph \(G\). Then \(U=e^{iH/2}\) is a continuous-time quantum walk on \(G\). Using quantum singular value transformation, we explicitly show that given access to \(U\), we can construct a \((1,3,\varepsilon)\) block encoding of \(H\) in cost \(O\left(\frac{1}{\varepsilon}\log(1/\varepsilon)\right).\) This allows us to obtain a discrete-time quantum walk from a continuous-time quantum walk. We discuss the subtleties of this approach and also possible improvements. The overall relationship between the different quantum and random walk frameworks is indicated in Fig. 3. Figure 3: _Summary of other results – Relationship between discrete and continuous-time quantum walks, and their classical counterparts._ ### Prior work In this section, we briefly sketch relevant prior work and relate it to the results we obtain. The linear combination of unitaries technique was first developed by Childs and Wiebe [21] to develop quantum Hamiltonian simulation algorithms based on multi-product formulas. Since then, LCU has been extensively used to develop improved quantum algorithms for Hamiltonian simulation [22; 23; 24]. Subsequently, LCU has been used to develop a wide variety of quantum algorithms for linear algebra, such as solving quantum linear systems and linear regression [25; 26], preparing ground states of Hamiltonians [33], and solving optimization problems [36; 37].
Many of these quantum algorithms are unified by the framework of quantum singular value transformation (QSVT) [46], which implements polynomial transformations to the singular values of a matrix. Although QSVT provides near-optimal query complexities for these problems and requires fewer ancilla qubits than LCU, the framework itself is not likely to be implemented on early fault-tolerant quantum computers. In order to implement polynomial transformations to the underlying matrix, QSVT requires a sequence of controlled operations and the overall quantum circuit can be of large depth. It is also unclear whether the QSVT-based techniques can be modified to make them more amenable for near/intermediate-term implementation. This remains an important open problem. The main contribution of this article is to demonstrate that the framework of implementing LCU can be modified so that this framework, which is also applicable to a wide variety of problems, is implementable on intermediate-term quantum computers. As discussed in the previous sections, we introduce three main variants of LCU. The first technique is a continuous-time variant of LCU, which is more physical. The key idea is to couple discrete systems with continuous-variable systems. Such interactions have been explored in the context of quantum phase estimation, where the system Hamiltonian is coupled with a one-dimensional free particle acting as the pointer variable, which is the so-called von Neumann measurement model [57; 58]. In Ref. [32], the continuous-time quantum walk Hamiltonian \(H\) was coupled to a one-dimensional quantum Harmonic oscillator to implement \(e^{-tH^{2}}\). This was a key ingredient of their spatial search algorithm by continuous-time quantum walk. In this work, we formalize this technique and show that it is more widely applicable and in fact, can serve as a continuous-time variant of any LCU-based quantum algorithm. We develop an analog variant of the quantum linear systems algorithm of Childs et al. [25] and a new quantum algorithm for this problem (using only Gaussian states) that is more suited for intermediate-term implementation. The "Single-Ancilla LCU" technique is a generalization of the work by Faehrmann et al. [9], where similar techniques were used for Hamiltonian simulation. We apply this technique to develop randomized quantum algorithms for ground state preparation and solving quantum linear systems that are suitable for implementation on early fault-tolerant quantum computers. For both these problems, several quantum algorithms have been developed over the years. The first quantum algorithm for ground state preparation involved using Hamiltonian simulation along with quantum phase estimation [59]. Subsequently, Refs. [33; 34] took advantage of the fact that functions of Hamiltonians can be expressed as a linear combination of unitaries to develop fast quantum algorithms for ground state preparation and estimation. A QSVT-based quantum algorithm has also been developed recently [60]. The problem of preparing ground states of Hamiltonians is considered to be one of the first problems to be solved on early fault-tolerant quantum computers. In this context, several quantum algorithms have been proposed for ground state preparation as well as for estimating properties of ground states [11; 12; 61]. Ever since the seminal algorithm by Harrow, Hassidim and Lloyd [62], the quantum linear systems problem has been analyzed extensively.
In particular, the LCU-based approach of Childs, Kothari, and Somma [25] provided a linear dependence on the condition number of the underlying sparse matrix and an exponentially improved dependence on the error. This algorithm was improved to also work for non-sparse matrices [63] and in the more general block encoding framework [64]. Recently, QSVT-based approaches for this problem have also been developed [46; 47; 64]. Another direction of research has been to develop quantum algorithms for this problem in the adiabatic quantum computing framework [52; 53; 55]. The possibility of applying quantum linear systems algorithms on near-term quantum devices has been explored in [65]. Finally, we apply the "Ancilla-free LCU" technique to develop optimal quantum spatial search algorithms. Recently, LCU techniques have been used to develop several quantum walk algorithms. The quantum fast-forwarding scheme by Apers and Sarlette [30] quadratically fast-forwards the dynamics of a discrete-time random walk by implementing a linear combination of discrete-time quantum walk steps. Recently, Ambainis et al. [31] proved that for the spatial search problem, fast-forwarding \(T\) random walk steps on an interpolated Markov chain prepares a quantum state that has a \(\tilde{\Omega}(1)\) overlap with the marked space for \(T=\widetilde{O}(\sqrt{HT})\), where \(HT\) is the classical hitting time of the random walk. Thus, their LCU-based quantum spatial search algorithm for discrete-time quantum walks completely solves the spatial search problem quadratically faster than classical random walks, for any number of marked nodes. This closed a long line of work which made partial progress towards solving this problem. Subsequently, Apers et al. [41] provided a unified framework that connected the different variants of discrete-time quantum walk search. Therein, the authors observed that the LCU procedure of [31] could be replaced by randomly sampling quantum walk steps but no proof was provided. In the continuous-time quantum walk framework, whether the spatial search problem offered a generic quadratic speedup was also open for a long time, and was only recently solved in [32]. Their analog quantum algorithm indeed managed to bypass the LCU procedure by evolving the system under the quantum walk Hamiltonian for a random time. Establishing a relationship between discrete-time quantum walks and continuous-time quantum walks from both directions has been a long-standing open problem in the field. Ever since Childs' seminal work in this direction [45], virtually no progress has been made. We show how one can obtain discrete-time quantum walks from continuous-time quantum walks and vice versa, making significant progress on this problem. In the following section, we briefly define some of the key concepts that we use to derive our results. ## III Preliminaries In this section, we introduce some of the topics that we deal with in this article as well as discuss the key algorithmic primitives required to develop our results. We begin by introducing the LCU framework. ### Linear Combination of Unitaries We now state the general framework of Linear Combination of Unitaries (LCU). Let us define an operator \(V=\sum_{j=1}^{M}c_{j}U_{j}\), such that each \(U_{j}\) is a unitary matrix.
Without loss of generality, let us take the parameters \(c_{j}\in\mathbb{R}\backslash\{0\}\). Note that these parameters can take complex values as well, but in such cases, we can always absorb the phase into the definition of the corresponding \(U_{j}\) itself. Note that \(V\) is not necessarily unitary and the LCU approach allows us to implement any such \(V\). Let \(R\) be a state preparation unitary such that \[R\left|\bar{0}\right\rangle=\sum_{j=1}^{M}\sqrt{\frac{c_{j}}{\left\|c\right\|_{1}}}\left|j\right\rangle,\] where \(c=(c_{1},c_{2},\ldots,c_{M})^{T}\). Furthermore, define the controlled unitary \[W=\sum_{j=1}^{M}\left|j\right\rangle\left\langle j\right|\otimes U_{j}.\] Then, it is easy to verify that the state \[\ket{\psi_{t}}=(R^{\dagger}\otimes I)W(R\otimes I)\ket{\bar{0}}\ket{\psi}=\frac{1}{\left\|c\right\|_{1}}\ket{\bar{0}}\sum_{j=1}^{M}c_{j}U_{j}\ket{\psi}+\ket{\Phi}^{\perp}, \tag{1}\] where \(\left(\ket{\bar{0}}\bra{\bar{0}}\otimes I\right)\ket{\Phi}^{\perp}=0\). Thus, by post-selecting on obtaining \(\ket{\bar{0}}\) in the first register, we obtain the state \(V\ket{\psi}/\left\|V\ket{\psi}\right\|\) in the second register, with probability \(\left\|V\ket{\psi}\right\|^{2}/\left\|c\right\|_{1}^{2}\). The dominant factor in the complexity of implementing this procedure is \(C_{j}\), the cost of implementing controlled \(U_{j}\). This is then upper bounded by \(C_{\max}=\max_{j}C_{j}\). Let us now explore the applicability of this procedure. Given any Hermitian matrix \(H\in\mathbb{R}^{N\times N}\) with spectral decomposition \(H=\sum_{j=1}^{N}\lambda_{j}\ket{v_{j}}\bra{v_{j}}\), define \(f(H)=\sum_{j=1}^{N}f(\lambda_{j})\ket{v_{j}}\bra{v_{j}}\). Now suppose \(f(H)\) can be well approximated by linear combinations of unitaries. More precisely, for some \(\varepsilon\in(0,1)\) suppose \[\left\|f(H)-\sum_{j=1}^{M}c_{j}U_{j}\right\|\leq\varepsilon.\] For instance, \(f(H)\) could be well approximated by a Fourier series, in which case, \(U_{j}=e^{-ijH}\). Since this is indeed the case for several functions \(f(x)\), LCU provides a versatile framework to implement these matrix functions. Consequently, several near-optimal quantum algorithms have been designed in this framework that have wide applicability, such as quantum algorithms for linear systems [25], ground state preparation [33; 34], sampling from thermal states [36; 66], Hamiltonian simulation [21; 22; 23; 24] and others. However, as observed from the construction of \(R\) and \(W\), these quantum algorithms will be implementable only on a full-scale fault-tolerant quantum computer. ### Block encoding and Quantum Singular Value Transformation For many of the algorithms in this work, we will consider the framework of _block encoding_, wherein it is assumed that the input matrix \(H\) (up to some sub-normalization) is stored in the left block of some unitary. The advantage of the block encoding framework, which was introduced in a series of works [67; 46; 26], is that it can be applied to a wide variety of input models. **Definition 3** (Block Encoding [26]).: _Suppose that \(H\) is an \(s\)-qubit operator, \(\alpha,\varepsilon\in\mathbb{R}^{+}\) and \(a\in\mathbb{N}\), then we say that the \((s+a)\)-qubit unitary \(U_{H}\) is an \((\alpha,a,\varepsilon)\)-block encoding of \(H\), if_ \[\left\|H-\alpha(\bra{0}^{\otimes a}\otimes I)U_{H}(\ket{0}^{\otimes a}\otimes I)\right\|\leq\varepsilon. \tag{2}\] Let \(\ket{\psi}\) be an \(s\)-qubit quantum state.
Then applying \(U_{H}\) to \(\ket{\psi}\ket{0}^{\otimes a}\) outputs a quantum state that is \(\frac{\varepsilon}{\alpha}\)-close to \[\frac{H}{\alpha}\ket{\psi}\ket{0}^{\otimes a}+\ket{\Phi^{\perp}},\] where \(\left(I_{s}\otimes\ket{0}^{\otimes a}\bra{0}^{\otimes a}\right)\ket{\Phi^{\perp}}=0\). Equivalently, if \(\tilde{H}:=\alpha\left(\bra{0}^{\otimes a}\otimes I_{s}\right)U_{H}\left(\ket{0}^{\otimes a}\otimes I_{s}\right)\) denotes the actual matrix that is block-encoded into \(U_{H}\), then \(\left\|H-\tilde{H}\right\|\leq\varepsilon\). Quantum Singular Value Transformation (QSVT) applies a polynomial transformation to the singular values of a block-encoded matrix [46]. Formally, let \(P\in\mathbb{C}[x]\) be a polynomial of degree \(d\geq 2\), such that * \(P\) has parity-\((d\mod 2)\), * \(\forall x\in[-1,1]:\left|P(x)\right|\leq 1\), * \(\forall x\in(-\infty,-1]\cup[1,\infty):\left|P(x)\right|\geq 1\), * if \(d\) is even, then \(\forall x\in\mathbb{R}:P(ix)P^{*}(ix)\geq 1\). Then, QSVT allows us to implement any polynomial \(P(x)\) that satisfies the aforementioned requirements. Next, we introduce QSVT formally via the following theorem. **Theorem 4** (Quantum Singular Value Transformation [46]).: _Suppose \(A\in\mathbb{R}^{N\times d}\) is a matrix with singular value decomposition \(A=\sum_{j=1}^{d_{\min}}\sigma_{j}\ket{v_{j}}\bra{w_{j}}\), where \(d_{\min}=\min\{N,d\}\) and \(\ket{v_{j}}\) (\(\ket{w_{j}}\)) is the left (right) singular vector with singular value \(\sigma_{j}\). Furthermore, let \(U_{A}\) be a unitary such that \(A=\widetilde{\Pi}U_{A}\Pi\), where \(\Pi\) and \(\widetilde{\Pi}\) are orthogonal projectors. Then, for any QSP polynomial \(P(x)\) of degree \(n\), there exists a vector \(\Phi=(\phi_{1},\phi_{2},\cdots\phi_{n})\in\mathbb{R}^{n}\) and a unitary_ \[U_{\Phi}=\begin{cases}e^{i\phi_{1}(2\widetilde{\Pi}-I)}U_{A}\left[\prod_{k=1}^{(n-1)/2}e^{i\phi_{2k}(2\Pi-I)}U_{A}^{\dagger}e^{i\phi_{2k+1}(2\widetilde{\Pi}-I)}U_{A}\right],&n\text{ is odd}\\ \left[\prod_{k=1}^{n/2}e^{i\phi_{2k-1}(2\Pi-I)}U_{A}^{\dagger}e^{i\phi_{2k}(2\widetilde{\Pi}-I)}U_{A}\right],&n\text{ is even},\end{cases} \tag{3}\] _such that_ \[P^{SV}(A)=\begin{cases}\widetilde{\Pi}U_{\Phi}\Pi,&n\text{ is odd}\\ \Pi U_{\Phi}\Pi,&n\text{ is even},\end{cases} \tag{4}\] _where \(P^{SV}(A)\) is the polynomial transformation of the matrix \(A\) defined as_ \[P^{SV}(A):=\begin{cases}\sum_{j}P(\sigma_{j})\ket{v_{j}}\bra{w_{j}},&P\text{ is odd}\\ \sum_{j}P(\sigma_{j})\ket{w_{j}}\bra{w_{j}},&P\text{ is even}.\end{cases} \tag{5}\] Theorem 4 tells us that for a \(P\) of degree \(n\), we can implement \(P^{SV}(A)\) using one ancilla qubit, \(\Theta(n)\) applications of \(U_{A}\), \(U_{A}^{\dagger}\) and controlled reflections \(I-2\Pi\) and \(I-2\widetilde{\Pi}\). Furthermore, if in some well-defined interval, some function \(f(x)\) is well approximated by an \(n\)-degree polynomial \(P(x)\), then Theorem 4 also allows us to implement a transformation that approximates \(f(A)\), where \[f(A):=\begin{cases}\sum_{j}f(\sigma_{j})\ket{v_{j}}\bra{w_{j}},&P\text{ is odd}\\ \sum_{j}f(\sigma_{j})\ket{w_{j}}\bra{w_{j}},&P\text{ is even}.\end{cases} \tag{6}\] The following theorem from Ref. [46] deals with the robustness of the QSVT procedure, i.e. how errors propagate in QSVT.
In particular, for two matrices \(A\) and \(\tilde{A}\), it shows how close their polynomial transformations (\(P^{SV}(A)\) and \(P^{SV}(\tilde{A})\), respectively) are, as a function of the distance between \(A\) and \(\tilde{A}\). **Lemma 5** (Robustness of Quantum Singular Value Transformation, [46]).: _Let \(P\in\mathbb{C}[x]\) be a \(d\)-degree polynomial that satisfies the requirements of QSVT. Let \(A,\tilde{A}\in\mathbb{C}^{N\times M}\) be matrices of spectral norm at most 1. Then,_ \[\left\|P^{SV}(A)-P^{SV}(\tilde{A})\right\|\leq 4d\sqrt{\left\|A-\tilde{A}\right\|}.\] Having discussed the preliminary concepts, in the next section, we explain the three variants of LCU we consider in this article. ## IV New approaches for implementing linear combinations of unitaries In this section, we present the key technical contributions of this work. We begin by describing a purely continuous-time variant of the LCU technique. ### Analog LCU: coupling a discrete primary system to a continuous-variable ancilla Suppose we have some Hermitian matrix \(H\in\mathbb{R}^{N\times N}\) of unit spectral norm and we wish to implement \(f(H)\) for some function \(f:[-1,1]\mapsto\mathbb{R}\) which satisfies: \[\left|f(x)-\int_{a}^{b}dz\ c(z)\cdot e^{-itxz}\right|\leq\varepsilon,\] where \(c:\mathbb{R}\mapsto\mathbb{R}^{+}\backslash\{0\}\). For instance, this can be a Fourier transform of \(f(x)\), in which case \(a=-\infty\), \(b=+\infty\) and \(\varepsilon=0\). Now suppose \(H\) is coupled to a continuous variable system such that the resulting interaction Hamiltonian is \(H^{\prime}=H\otimes\hat{z}\). Suppose the first register is prepared in some initial state \(\left|\psi_{0}\right\rangle\) and the ancilla system is prepared in the continuous-variable quantum state \[\left|\bar{0}\right\rangle_{c}=\int_{a}^{b}dz\ \sqrt{\frac{c(z)}{\left\|c\right\|_{1}}}\left|z\right\rangle,\] where \(\left\|c\right\|_{1}=\int_{a}^{b}dz\ \left|c(z)\right|\). For instance, \(\hat{z}\) can represent a degree of freedom (position or momentum) of a one-dimensional quantum Harmonic oscillator, and the state \(\left|\bar{0}\right\rangle_{c}\), could be its ground state (a Gaussian), a free resource state for continuous variable systems. For several of our applications, we shall see that this is indeed the case. Now we shall simply evolve the system according to the interaction Hamiltonian \(H^{\prime}\) to obtain \[\left|\eta_{t}\right\rangle=e^{-iH^{\prime}t}\left|\psi_{0}\right\rangle\left|\bar{0}\right\rangle_{c}=\int_{a}^{b}dz\ \sqrt{\frac{c(z)}{\left\|c\right\|_{1}}}e^{-iHtz}\left|\psi_{0}\right\rangle\left|z\right\rangle \tag{7}\] \[=\frac{1}{\left\|c\right\|_{1}}\int_{a}^{b}dz\ c(z)e^{-iHtz}\left|\psi_{0}\right\rangle\left|\bar{0}\right\rangle_{c}+\left|\Phi\right\rangle^{\perp}, \tag{8}\] where \(\left|\Phi\right\rangle^{\perp}\) is a quantum state (not normalized) such that \(\left(I\otimes\left|\bar{0}\right\rangle_{c}\left\langle\bar{0}\right|_{c}\right)\left|\Phi\right\rangle^{\perp}=0\). Thus, we have prepared a quantum state that is \(O(\varepsilon/\left\|c\right\|_{1})\)-close to \[\left|\psi\right\rangle=\frac{f(H)}{\left\|c\right\|_{1}}\left|\psi_{0}\right\rangle\left|\bar{0}\right\rangle_{c}+\left|\Phi\right\rangle^{\perp}.
\tag{9}\] Now post-selecting on having \(\left|\bar{0}\right\rangle_{c}\) in the second register we obtain a state that is \(\varepsilon\)-close to \(f(H)\left|\psi_{0}\right\rangle/\left\|f(H)\left|\psi_{0}\right\rangle\right\|\) in the first register with probability \(\left\|f(H)\left|\psi_{0}\right\rangle\right\|^{2}/\left\|c\right\|_{1}^{2}\). We will use this procedure to develop an analog quantum algorithm for preparing ground states of Hamiltonians in Sec. V. This continuous-time algorithm can be naturally generalized to the scenario where we want to implement \(f(H)\) for some function \(f:[-1,1]\mapsto\mathbb{R}\) such that \[\left|f(x)-\int_{a_{1}}^{b_{1}}dz_{1}\ c(z_{1})\int_{a_{2}}^{b_{2}}dz_{2}\ c(z_{2})\cdots\int_{a_{k}}^{b_{k}}dz_{k}\ c(z_{k})e^{-itxz_{1}z_{2}\cdots z_{k}}\right|\leq\varepsilon.\] This can be implemented by coupling the Hamiltonian \(H\) with \(k\) different ancillary continuous-variable systems such that the effective interaction Hamiltonian is \(\tilde{H}=H\otimes\hat{z}_{1}\otimes\cdots\otimes\hat{z}_{k}\). The \(j\)-th ancilla system is prepared in the quantum state \[\left|\bar{0}\right\rangle_{c_{j}}=\int_{a_{j}}^{b_{j}}dz_{j}\ \sqrt{\frac{c(z_{j})}{\left\|c_{j}\right\|_{1}}}\left|z_{j}\right\rangle.\] Then evolving the initial state according to \(\tilde{H}\) for time \(t\) results in a quantum state that is \(O\left(\frac{\varepsilon}{\Pi_{j=1}^{k}\left\|c_{j}\right\|_{1}}\right)\)-close to \[\left|\eta_{t}\right\rangle=\frac{f(H)}{\Pi_{j=1}^{k}\left\|c_{j}\right\|_{1}}\left|\psi_{0}\right\rangle\left|\bar{0}\right\rangle_{c_{1}}\cdots\left|\bar{0}\right\rangle_{c_{k}}+\left|\Phi\right\rangle^{\perp}. \tag{10}\] In Sec. VI, our analog quantum linear systems algorithm requires coupling the system Hamiltonian to two ancillary systems, which is captured by this generalization of analog LCU. Interestingly, for the applications we consider, the ancillary states are the ground or the first excited state of a one-dimensional quantum Harmonic oscillator or the ground state of a "particle in a ring". ### Robustness of expectation values of observables In this section, we develop general results on the robustness of expectation values of observables which we shall use for both the "Single-Ancilla LCU" (Sec. IV.3) and the "Ancilla-free LCU" (Sec. IV.4) approaches. Consider that there exist two operators \(P\) and \(Q\) such that \(\left\|P-Q\right\|\leq\varepsilon\). In this section, we demonstrate that the expectation value of an observable \(O\) with respect to \(P\rho P^{\dagger}\) is not far off from the expectation value of \(O\) with respect to \(Q\rho Q^{\dagger}\), for any density matrix \(\rho\). More precisely, we prove \[\left|\operatorname{Tr}[OP\rho P^{\dagger}]-\operatorname{Tr}[OQ\rho Q^{\dagger}]\right|\leq 3\|P\|\|O\|\,\varepsilon.\] In order to prove this result, we need to use the tracial version of Hölder's inequality, which is stated below for completeness: **Lemma 6** (Tracial version of Hölder's inequality).: _Define two operators \(A\) and \(B\) and parameters \(p,q\in[1,\infty)\) such that \(1/p+1/q=1\). Then the following holds:_ \[\left|\operatorname{Tr}[A^{\dagger}B]\right|\leq\left\|A\right\|_{p}\|B\|_{q}\,.\] Now we are in a position to formally state the main result of this section. **Theorem 7**.: _Suppose \(P\) and \(Q\) are operators such that \(\left\|P-Q\right\|\leq\varepsilon\) for some \(\varepsilon\in[0,1]\). Furthermore, let \(\rho\) be any density matrix and \(O\) be some Hermitian operator with spectral norm \(\left\|O\right\|\).
Then, if \(\left\|P\right\|\geq 1\), the following holds:_ \[\left|\operatorname{Tr}[OP\rho P^{\dagger}]-\operatorname{Tr}[OQ\rho Q^{\dagger}]\right|\leq 3\|O\|\|P\|\,\varepsilon.\] Proof.: For the operators \(P\) and \(Q\), we have \[(Q-P)\rho(P^{\dagger}-Q^{\dagger})=Q\rho(P^{\dagger}-Q^{\dagger})-P\rho(P^{\dagger}-Q^{\dagger}) \tag{11}\] Now adding and subtracting \(P\rho P^{\dagger}\) in the RHS we obtain \[(Q-P)\rho(P^{\dagger}-Q^{\dagger})+P\rho(P^{\dagger}-Q^{\dagger})=Q\rho(P^{\dagger}-Q^{\dagger})+P\rho P^{\dagger}-P\rho P^{\dagger} \tag{12}\] \[=P\rho P^{\dagger}-Q\rho Q^{\dagger}-(P-Q)\rho P^{\dagger} \tag{13}\] This gives us \[P\rho P^{\dagger}-Q\rho Q^{\dagger}=(Q-P)\rho(P^{\dagger}-Q^{\dagger})+P\rho(P^{\dagger}-Q^{\dagger})+(P-Q)\rho P^{\dagger}. \tag{14}\] We multiply \(O\) to the left of each term on both sides of this equation and then take trace. Thus, we have \[\left|\operatorname{Tr}[OP\rho P^{\dagger}]-\operatorname{Tr}[OQ\rho Q^{\dagger}]\right|=\left|\operatorname{Tr}[O(Q-P)\rho(P^{\dagger}-Q^{\dagger})]+\operatorname{Tr}[OP\rho(P^{\dagger}-Q^{\dagger})]+\operatorname{Tr}[O(P-Q)\rho P^{\dagger}]\right| \tag{15}\] \[\leq\left|\operatorname{Tr}[O(Q-P)\rho(P^{\dagger}-Q^{\dagger})]\right|+\left|\operatorname{Tr}[OP\rho(P^{\dagger}-Q^{\dagger})]\right|+\left|\operatorname{Tr}[O(P-Q)\rho P^{\dagger}]\right|, \tag{16}\] Now for each term in the RHS, we invoke Lemma 6 with \(p=q=2\) to obtain \[\left|\operatorname{Tr}[OP\rho P^{\dagger}]-\operatorname{Tr}[OQ\rho Q^{\dagger}]\right|\leq\left\|O\right\|\left\|P-Q\right\|^{2}+2\|O\|\left\|P\right\|\left\|P-Q\right\|\quad\left[\text{As }\|\rho\|=1\right] \tag{17}\] \[\leq\varepsilon^{2}\|O\|+2\|O\|\|P\|\,\varepsilon\quad\left[\text{As }\|P-Q\|\leq\varepsilon\right] \tag{18}\] \[\leq 3\varepsilon\|O\|\|P\|\quad\left[\text{As }\|P\|\geq 1\right] \tag{19}\] It is easy to see why Theorem 7 is useful to develop robust versions of the new approaches to LCU. Typically, \(f(H)\) is not exactly equal to a linear combination of unitaries but is \(\varepsilon\)-close to it. Formally, \(\left\|f(H)-f(H^{\prime})\right\|\leq\varepsilon\), where \(f(H^{\prime})\) can be exactly expressed as an LCU. Consequently, for the variants of LCU that we develop in the subsequent sections, we will be estimating \(\operatorname{Tr}[Of(H^{\prime})\rho f(H^{\prime})^{\dagger}]\). But, by Theorem 7, we can always bound \[\left|\operatorname{Tr}[Of(H)\rho f(H)^{\dagger}]-\operatorname{Tr}[Of(H^{\prime})\rho f(H^{\prime})^{\dagger}]\right|\leq 3\|O\|\left\|f(H)\right\|\varepsilon.\] We shall be using this result in the subsequent sections. ### Single-Ancilla LCU: Controlled application of randomly sampled unitaries followed by single-shot measurement In this section, we describe the "Single-Ancilla LCU" technique, which allows us to sample from quantum states obtained by applying LCU. This approach is a generalization of the Hamiltonian simulation algorithm of [9]. Suppose we are given a Hermitian matrix \(H\) and we wish to implement \(f(H)\), which can be approximated by a linear combination of unitaries. That is, for some \(\gamma\in[0,1)\), \[\left\|f(H)-\sum_{j=1}^{M}c_{j}U_{j}\right\|\leq\gamma,\] for unitaries \(U_{j}\) and \(c_{j}\in\mathbb{R}\). Let us define the \(\ell_{1}\)-norm of the LCU coefficients as \(\left\|c\right\|_{1}=\sum_{j=1}^{M}|c_{j}|\). Furthermore, suppose we have access to the quantum circuit for \(U_{j}\).
Then given any initial state \(\left|\psi_{0}\right\rangle\), by the addition of just one ancilla qubit, we can estimate the expectation values \(\operatorname{Tr}[Of(H)\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|f(H)^{\dagger}]\) up to arbitrary accuracy, for any observable \(O\). This is a randomized quantum algorithm that is applicable to early fault-tolerant quantum devices: we use a short-depth quantum circuit, from which we sample repeatedly. We depict the resulting quantum circuit in Fig. 2 and state the algorithm formally in Algorithm 1. The overall procedure is simple: The ancilla qubit is prepared in the state \(\left|+\right\rangle\) so that the overall initial state is \(\rho=\left|+\right\rangle\left\langle+\right|\otimes\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\). We sample two unitaries \(V_{1}\) and \(V_{2}\) according to \(\left\{U_{j},c_{j}/\left\|c\right\|_{1}\right\}\) and then implement controlled and anti-controlled versions of \(V_{1}\) and \(V_{2}\), respectively. Finally, we make a measurement of the observable \(X\otimes O\) and store the outcome. We then sample from this quantum circuit \(T\) times and calculate the mean of all the outcomes. This is the final output of Algorithm 1. We show formally, via Theorem 1, that if the number of samples obtained \(T\) is large enough, the sample mean of the outcomes converges to the expectation value we intend to estimate. Formally, we state the following theorem. **Theorem 1** (Sampling from functions of Hamiltonians applied to quantum states).: _Let \(\varepsilon,\delta,\gamma\in(0,1)\) be some parameters. Let \(O\) be some observable and \(\left|\psi_{0}\right\rangle\) be some initial state. Suppose there is a Hermitian matrix \(H\in\mathbb{R}^{N\times N}\) such that \(\left\|f(H)-\sum_{j}c_{j}U_{j}\right\|\leq\gamma\), where each \(U_{j}\) is a unitary and_ \[\gamma\leq\frac{\varepsilon}{6\|O\|\|f(H)\|}.\] _Furthermore, let_ \[T\geq\frac{8\|O\|^{2}\ln(2/\delta)\|c\|_{1}^{4}}{\varepsilon^{2}}.\] _Then, Algorithm 1 estimates \(\mu\) such that_ \[\left|\mu-\operatorname{Tr}[Of(H)\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|f(H)^{\dagger}]\right|\leq\varepsilon,\] _with probability at least \(1-\delta\), using one ancilla qubit and \(T\) runs of the quantum circuit in Fig. 2._ Proof.: Let \(f(H^{\prime})=\sum_{j}c_{j}U_{j}\) and write \(\rho_{0}=\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\). First observe from Algorithm 1 that the initial state \(\rho\) transforms to \[\rho^{\prime}=\tilde{V}_{1}\tilde{V}_{2}\rho\tilde{V}_{2}^{\dagger}\tilde{V}_{1}^{\dagger} \tag{20}\] \[=\frac{1}{2}\left[\left|0\right\rangle\left\langle 0\right|\otimes V_{2}\rho_{0}V_{2}^{\dagger}+\left|0\right\rangle\left\langle 1\right|\otimes V_{2}\rho_{0}V_{1}^{\dagger}+\left|1\right\rangle\left\langle 0\right|\otimes V_{1}\rho_{0}V_{2}^{\dagger}+\left|1\right\rangle\left\langle 1\right|\otimes V_{1}\rho_{0}V_{1}^{\dagger}\right]. \tag{21}\] So after measuring the observable \(X\otimes O\), we have \[\operatorname{Tr}\left[(X\otimes O)\rho^{\prime}\right]=\frac{1}{2}\operatorname{Tr}\left[O\left(V_{1}\rho_{0}V_{2}^{\dagger}+V_{2}\rho_{0}V_{1}^{\dagger}\right)\right].\] The expected values satisfy \[\mathbb{E}\left[V_{1}\right]=\mathbb{E}\left[V_{2}\right]=\frac{1}{\left\|c\right\|_{1}}\sum_{j}c_{j}U_{j}.\] So, the expected outcome of the \(j^{\text{th}}\) iteration is \[\mathbb{E}\left[\mu_{j}\right]=\mathbb{E}\left[\text{Tr}[(X\otimes O)\rho^{\prime}]\right]=\frac{1}{\|c\|_{1}^{2}}\text{Tr}[O\ f(H^{\prime})\rho_{0}f(H^{\prime})^{\dagger}].\] Next, we need to estimate two things: (a)
How fast does the sample mean \(\mu=\sum_{j}\|c\|_{1}^{2}\,\mu_{j}/T\) converge to its expectation value? For this, we use Hoeffding's inequality. (b) What is the accuracy of the observation with respect to \(f(H)\) as a function of the distance between \(f(H)\) and \(f(H^{\prime})\)? For this, we invoke Theorem 7. Furthermore, observe that the POVM measurement yields some outcome of \(O\) in the range \([-\|O\|\,,\|O\|]\). So each random variable lies in the range \[-\|O\|\|c\|_{1}^{2}\leq\|c\|_{1}^{2}\,\mu_{j}\leq\|O\|\|c\|_{1}^{2}.\] We evaluate (a) by using Hoeffding's inequality. We obtain \[Pr\left[\left|\mu-\text{Tr}[O\ f(H^{\prime})\rho_{0}f(H^{\prime})^{\dagger}]\right|\geq\varepsilon/2\right]\leq 2\exp\left[-\frac{T\varepsilon^{2}}{8\|c\|_{1}^{4}\|O\|^{2}}\right]. \tag{22}\] This immediately gives us that for \[T\geq\frac{8\|O\|^{2}\ln(2/\delta)\|c\|_{1}^{4}}{\varepsilon^{2}}, \tag{23}\] \[\left|\mu-\ Tr[O\ f(H^{\prime})\rho_{0}f(H^{\prime})^{\dagger}]\right|\leq\varepsilon/2,\] with probability at least \(1-\delta\). Now, in order to evaluate (b), we first apply the triangle inequality to obtain \[\left|\mu-\text{Tr}[O\ f(H)\rho_{0}f(H)^{\dagger}]\right|\leq\left|\mu-\text{Tr}[O\ f(H^{\prime})\rho_{0}f(H^{\prime})^{\dagger}]\right|+ \tag{24}\] \[\left|\text{Tr}[O\ f(H)\rho_{0}f(H)^{\dagger}]-\text{Tr}[O\ f(H^{\prime})\rho_{0}f(H^{\prime})^{\dagger}]\right|. \tag{25}\] The first term in the RHS of the above inequality is upper bounded by \(\varepsilon/2\). In order to bound the second term, note that \(\left\|f(H)-f(H^{\prime})\right\|\leq\gamma\). For any such operators that are at most \(\gamma\)-separated, we can use Theorem 7 to obtain: \[\left|\text{Tr}[O\ f(H)\rho_{0}f(H)^{\dagger}]-\text{Tr}[O\ f(H^{\prime})\rho_{0}f(H^{\prime})^{\dagger}]\right|\leq 3\|O\|\big{\|}f(H)\big{\|}\,\gamma\leq\varepsilon/2.\] So, overall we have \[\left|\mu-\text{Tr}[O\ f(H)\rho_{0}f(H)^{\dagger}]\right|\leq\varepsilon,\] which completes the proof. What is this procedure useful for? In Sec. V and Sec. VI, we show that this approach is useful to estimate expectation values \(\text{Tr}[O\,f(H)\rho f(H)^{\dagger}]\) for several functions of interest. In particular, we apply this to sample from the solution of quantum linear systems and also from the ground states of Hamiltonians. These quantum algorithms consume minimal quantum resources and rely on repeated classical sampling from a short-depth quantum circuit and hence, are applicable for early fault-tolerant quantum computers. Next, we discuss the "Ancilla-free LCU" procedure. ### Ancilla-free LCU: Randomized unitary sampling As in the previous section, suppose we are given a Hermitian matrix \(H\) and we wish to implement \(f(H)\), which can be approximated by a linear combination of unitaries. That is, for some \(\gamma\in[0,1)\), \[\left\|f(H)-\sum_{j=1}^{M}c_{j}U_{j}\right\|\leq\gamma,\] for unitaries \(U_{j}\), \(c_{j}\in\mathbb{R}\), and \(\left\|c\right\|_{1}=\sum_{j=1}^{M}|c_{j}|\). In this section, we formally prove that if we are interested in the projection of \(f(H)\left|\psi_{0}\right\rangle\) in some subspace of interest, then it suffices to sample \(U_{j}\) according to the distribution of the LCU coefficients, i.e. \(\mathcal{D}\sim\left\{c_{j}/\left\|c\right\|_{1}\right\}\). This is the key idea behind the "Ancilla-free LCU" technique.
For some initial state \(\rho_{0}=\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\), this procedure prepares the average density matrix \[\rho=\frac{1}{\left\|c\right\|_{1}}\sum_{j=1}^{M}c_{j}U_{j}\rho_{0}U_{j}^{\dagger}.\] Then, for some projector \(\Pi\) onto the subspace of interest, the projection of \(\rho\) is at least as large as the projection of \(f(H)\left|\psi_{0}\right\rangle\). We formally prove this via the following theorem: **Theorem 2** (Randomized unitary sampling).: _Let \(H\in\mathbb{R}^{N\times N}\) be a Hermitian matrix. Also let \(\varepsilon\in(0,1)\) and suppose \(f:\mathbb{R}\mapsto\mathbb{R}\) is some function such that_ \[\left\|f(H)-\sum_{j=1}^{M}c_{j}U_{j}\right\|\leq\frac{\varepsilon}{3\left\|f(H)\right\|},\] _for some unitaries \(U_{j}\) and \(c_{j}\in\mathbb{R}\backslash\{0\}\). For some initial state \(\rho_{0}\), define the density matrix_ \[\rho=\frac{1}{\left\|c\right\|_{1}}\sum_{j=1}^{M}c_{j}U_{j}\rho_{0}U_{j}^{\dagger},\] _where \(\left\|c\right\|_{1}=\sum_{j=1}^{M}|c_{j}|\). Then, for any projector \(\Pi\),_ \[\left\|c\right\|_{1}^{2}\mathrm{Tr}\left[\Pi\rho\right]\geq\mathrm{Tr}[\Pi f(H)\rho_{0}f(H)^{\dagger}]-\varepsilon.\] Proof.: Let \(f(H^{\prime})=\sum_{j=1}^{M}c_{j}U_{j}\). Then, the standard LCU procedure would have implemented the quantum state \[\left|\psi_{t}\right\rangle=\left|\bar{0}\right\rangle\frac{f(H^{\prime})}{\left\|c\right\|_{1}}\left|\psi_{0}\right\rangle+\left|\Phi^{\perp}\right\rangle.\] Then, we have \[\mathrm{Tr}[\Pi\rho]=\mathrm{Tr}[\left(I\otimes\Pi\right)\left|\psi_{t}\right\rangle\left\langle\psi_{t}\right|] \tag{26}\] \[=\left\langle\psi_{t}\right|\left(\left|\bar{0}\right\rangle\left\langle\bar{0}\right|\otimes\Pi\right)\left|\psi_{t}\right\rangle+\left\langle\psi_{t}\right|\left[\left(I-\left|\bar{0}\right\rangle\left\langle\bar{0}\right|\right)\otimes\Pi\right]\left|\psi_{t}\right\rangle \tag{27}\] \[=\frac{1}{\left\|c\right\|_{1}^{2}}\left\langle\psi_{0}\right|f(H^{\prime})^{\dagger}\Pi f(H^{\prime})\left|\psi_{0}\right\rangle+\left\langle\Phi^{\perp}\right|\left[\left(I-\left|\bar{0}\right\rangle\left\langle\bar{0}\right|\right)\otimes\Pi\right]\left|\Phi^{\perp}\right\rangle \tag{28}\] \[\geq\frac{1}{\left\|c\right\|_{1}^{2}}\left\langle\psi_{0}\right|f(H^{\prime})^{\dagger}\Pi f(H^{\prime})\left|\psi_{0}\right\rangle \tag{29}\] \[\geq\frac{1}{\left\|c\right\|_{1}^{2}}\left(\left\langle\psi_{0}\right|f(H)^{\dagger}\Pi f(H)\left|\psi_{0}\right\rangle-\varepsilon\right), \tag{30}\] where in the last line we have invoked Theorem 7. This completes the proof. A similar result naturally extends to the analog LCU framework as well. Thus, replacing the LCU procedure by random unitary sampling suffices in cases where we are interested in the projection of \(f(H)\ket{\psi_{0}}\) in some subspace, i.e. say we wish to measure this state in some basis. Then we may instead measure \(\rho\), for which the probabilities would be at least as large. This is typically the case for several quantum walk-based problems: the goal is to prepare a quantum state that has a good overlap with the marked nodes (some subset of the nodes) of the underlying graph. This problem can be tackled by discrete-time quantum walks as well as by continuous-time quantum walks. Briefly, \(f(D)=D^{t}\) represents \(t\)-steps of a discrete-time random walk on \(D\), the discriminant matrix of some (reversible) Markov chain \(P\). Let \(\Pi_{M}\) be a projector onto the space spanned by the marked nodes of \(P\).
Then, one can express \(f(D)=D^{T}\) as a linear combination of discrete-time quantum walk steps for some \(T=O(HT)\), which has roughly \(\sqrt{T}\) terms (\(HT\) is the hitting time of the discrete-time random walk on \(D\)). Then, instead of implementing the complete LCU procedure, one can apply \(k\)-steps of a discrete-time quantum walk, where \(k\) is sampled according to the distribution of the LCU coefficients. This results in \(\rho\) such that \(\mathrm{Tr}[\Pi_{M}\rho]\), the probability of observing a marked vertex after \(\sqrt{T}\) quantum walk steps, is at least as large as the classical quantity \(\langle\psi_{0}|D^{T}\Pi_{M}D^{T}|\psi_{0}\rangle\), which corresponds to the distribution after \(T\)-steps of a classical random walk. The classical quantity can be proven to be \(\tilde{\Omega}(1)\) for some specific \(\ket{\psi_{0}}\), using the framework of interpolated Markov chains (following the work of Ambainis et al. [31]), which proves optimality of the quantum spatial search algorithm. Similar results can also be shown with respect to continuous-time random walks, wherein the LCU decomposition \(f(D)=e^{T(D-I)}\) becomes relevant. For continuous-time quantum walks [32], on the other hand, \(f(D)=e^{T(D^{2}-I)}\). Thus, the "Ancilla-free LCU" approach is the broad framework to tackle quantum walk-based algorithms. We shall discuss these ideas in Sec. VII. ## V Applications to ground state preparation The ground state preparation (GSP) problem can be stated as follows: given access to a Hamiltonian and an initial state, find a quantum algorithm that outputs the ground state of the Hamiltonian with high precision. Generally, this problem is known to be computationally hard, even for a quantum computer [68]. However, it finds widespread applications across physics and computer science. As a result, novel quantum algorithms for GSP that improve upon existing ones are of extreme importance and interest. In this section, we will develop quantum algorithms for preparing ground states of Hamiltonians as well as for sampling from the ground state. We will find that both the "Analog LCU" and the "Single-Ancilla LCU" approaches, introduced in Sec. IV, are useful for this problem. This section is organized as follows. We begin by formally describing the GSP problem. Next, we develop an analog quantum algorithm for GSP using the "Analog LCU" framework. Then, we use the "Single-Ancilla LCU" technique to sample from the ground state. Finally, as a bonus, we provide a quantum algorithm for GSP for fully fault-tolerant quantum computers using the framework of QSVT. We start by describing the ground state preparation problem. **The ground state preparation problem:** The setup of the problem is similar to prior works [32, 34, 35]. Suppose we have a Hamiltonian \(H\) with ground state \(\ket{v_{0}}\) and ground energy \(\lambda_{0}\), and assume that we are given a lower bound on the gap between the ground state and the first excited state of \(H\) (spectral gap), i.e. we have knowledge of \(\Delta\) such that \(|\lambda_{1}-\lambda_{0}|\geq\Delta\). For clarity of exposition, we assume that the ground space of \(H\) is non-degenerate. If this is not the case, e.g.
if the degeneracy of the ground space is \(d\) and is spanned by mutually orthonormal eigenstates \(\{\ket{v_{0}^{(\ell)}}\}_{\ell=1}^{d}\), then we will be preparing a quantum state \(\ket{v_{0}}\) which is a projection onto the ground space given by \[\ket{v_{0}}=\frac{1}{\sqrt{\sum_{\ell=1}^{d}|c_{0}^{(\ell)}|^{2}}}\sum_{\ell=1}^{d}c_{0}^{(\ell)}\ket{v_{0}^{(\ell)}}.\] In addition, suppose we have access to some initial state \(\ket{\psi_{0}}\) and a lower bound on the overlap \(|\langle\psi_{0}|v_{0}\rangle|=c_{0}\geq\eta\). Furthermore, for some desired accuracy \(\varepsilon\in(0,1)\), we will assume that we know the value of the ground energy to some precision parameter \(\varepsilon_{g}\) such that \(\varepsilon_{g}=\mathcal{O}\left(\Delta/\sqrt{\log\frac{1}{\eta\varepsilon}}\right)\). That is, we know some \(E_{0}\) such that \[\left|\lambda_{0}-E_{0}\right|\leq\varepsilon_{g}. \tag{31}\] By implementing \(H-(E_{0}-\varepsilon_{g})I\), we ensure that \(0\leq\lambda_{0}\leq 2\varepsilon_{g}\). This transformation also ensures that the lower bound for the spectral gap of \(H\) remains \(\Delta\). If also an upper bound on the maximum eigenvalue of \(H\) is known, then we can actually assume that the spectrum of \(H\) is in \([0,1]\). ### Applying Analog LCU: A continuous-time quantum algorithm for ground state preparation In this section, we will use the "Analog LCU" framework to develop an analog quantum algorithm for the GSP problem. This algorithm was described in the Supplemental Material of [32]. Here, we place it in the broader context of the "Analog LCU" framework. Moreover, it will serve as useful intuition for the "Ancilla-free LCU" and the "Single-Ancilla LCU" approaches to the GSP problem, which we discuss in the subsequent sections. Consider some quantum system in state \(\left|\psi_{0}\right\rangle\) coupled to an ancillary system in a Gaussian state \[\left|\psi_{g}\right\rangle=\int_{-\infty}^{+\infty}\frac{dz}{(2\pi)^{1/4}}e^{-z^{2}/4}\left|z\right\rangle. \tag{32}\] The Gaussian state is typically easy to prepare in this setting. This state can be seen as the ground state of a one-dimensional quantum harmonic oscillator. The coupling is done via the interaction Hamiltonian \(H^{\prime}=H\otimes\hat{z}\), where \(\hat{z}\) corresponds to the position (or momentum) operator. Evolving \(\left|\psi_{0}\right\rangle\left|\psi_{g}\right\rangle\) under \(H^{\prime}\) for a time \(t\) results in the state \[\left|\eta_{t}\right\rangle=e^{-itH^{\prime}}\left|\psi_{0}\right\rangle\left|\psi_{g}\right\rangle \tag{33}\] \[=\int_{-\infty}^{+\infty}\frac{dz}{(2\pi)^{1/4}}e^{-z^{2}/4}e^{-itHz}\left|\psi_{0}\right\rangle\left|z\right\rangle\] \[=\int_{-\infty}^{+\infty}\frac{dz}{\sqrt{2\pi}}e^{-z^{2}/2}e^{-itHz}\left|\psi_{0}\right\rangle\left|\psi_{g}\right\rangle+\left|\Phi\right\rangle^{\perp},\] where \(\left|\Phi\right\rangle^{\perp}\) is a quantum state with the ancillary system being orthogonal to \(\left|\psi_{g}\right\rangle\). Now the Fourier transform of a Gaussian is a Gaussian, i.e. we have for any \(y\in\mathbb{R}\), \[e^{-y^{2}/2}=\int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}\ e^{-z^{2}/2}e^{-iyz}. \tag{34}\] So using this we obtain \[\left|\eta_{t}\right\rangle=e^{-t^{2}H^{2}/2}\left|\psi_{0}\right\rangle\left|\psi_{g}\right\rangle+\left|\Phi\right\rangle^{\perp}.
\tag{35}\] By post-selecting on obtaining \(\left|\psi_{g}\right\rangle\) in the second register, we are able to prepare a quantum state proportional to \(e^{-t^{2}H^{2}/2}\left|\psi_{0}\right\rangle\) in the first register. Now we formally state the ground state preparation algorithm and analyze its complexity, via the following lemma. **Lemma 8**.: _Suppose \(\varepsilon\in(0,1)\) and \(\eta\in(0,1/\sqrt{2}]\). Furthermore, suppose we have a Hamiltonian \(H\) with ground state \(\left|v_{0}\right\rangle\) with \(\Delta\) being a lower bound on the spectral gap. Also, the ground state energy of \(H\) is known up to a precision \(\varepsilon_{g}\in\mathcal{O}\left(\Delta/\sqrt{\log\frac{1}{\eta\varepsilon}}\right)\). Then, given an initial state \(\left|\psi_{0}\right\rangle\) satisfying \(\left|\left\langle\psi_{0}|v_{0}\right\rangle\right|\geq\eta\), we output, with probability \(\Omega(\eta^{2})\), a state \(\left|\phi\right\rangle\) such that \(\left\||\phi\right\rangle-\left|v_{0}\right\rangle\right\|\leq\varepsilon\) by evolving the Hamiltonian \(H^{\prime}=H\otimes\hat{z}\) for time_ \[T=\mathcal{O}\left(\frac{1}{\Delta}\sqrt{\log\left(\frac{1}{\eta\varepsilon}\right)}\right).\] Proof.: We shift the overall eigenvalues of \(H\) by \(-E_{0}+\varepsilon_{g}\) as explained previously. So now suppose \(H\) has the spectral decomposition \(H=\sum_{j}\lambda_{j}\left|v_{j}\right\rangle\left\langle v_{j}\right|\) with eigenvalues \(\lambda_{j}\in(0,1]\) and particularly \(0\leq\lambda_{0}\leq 2\varepsilon_{g}\). Then the input quantum state, when expressed in the eigenbasis of \(H\) is written as \(\left|\psi_{0}\right\rangle=\sum_{j}c_{j}\left|v_{j}\right\rangle\). Without loss of generality, we assume that \(c_{0}\) is real and positive and follow the analog procedure outlined above. Evolving under the interaction Hamiltonian for a time \(\sqrt{2t}\), we obtain from Eq. (35) that \[\left|\eta_{t}\right\rangle=e^{-tH^{2}}\left|\psi_{0}\right\rangle\left|\psi_{g}\right\rangle+\left|\Phi\right\rangle^{\perp}.\] By post-selecting on obtaining the Gaussian state \(\left|\psi_{g}\right\rangle\) in the second register, we are left with the following quantum state in the first register \[\left|\phi\right\rangle=\frac{e^{-tH^{2}}\left|\psi_{0}\right\rangle}{\sqrt{\left\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\right\rangle}}, \tag{36}\] with probability \(\left\|e^{-tH^{2}}\left|\psi_{0}\right\rangle\left|\psi_{g}\right\rangle\right\|^{2}=\left\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\right\rangle\). Expressing \(\left|\phi\right\rangle\) in the eigenbasis of \(H\), we obtain \[\left|\phi\right\rangle=\frac{c_{0}e^{-t\lambda_{0}^{2}}}{\sqrt{\left\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\right\rangle}}\left[\left|v_{0}\right\rangle+\sum_{j\geq 1}\frac{c_{j}}{c_{0}}e^{-t\left(\lambda_{j}^{2}-\lambda_{0}^{2}\right)}\left|v_{j}\right\rangle\right], \tag{37}\] where the normalization factor \[\left\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\right\rangle=|c_{0}|^{2}e^{-2t\lambda_{0}^{2}}\left[1+\sum_{j\geq 1}\frac{|c_{j}|^{2}}{|c_{0}|^{2}}e^{-2t\left(\lambda_{j}^{2}-\lambda_{0}^{2}\right)}\right]. \tag{38}\] Now, we intend to choose a value of \(t\) so that \(\left|\phi\right\rangle\) and the ground state \(\left|v_{0}\right\rangle\) are \(\varepsilon\)-close to each other in \(\ell_{2}\)-norm.
We have, \[\left\||\phi\right\rangle-\left|v_{0}\right\rangle\right\|^{2}=2-2\left\langle\phi|v_{0}\right\rangle, \tag{39}\] where we use the fact that \(c_{0}\geq 0\) which implies \(\left\langle\phi|v_{0}\right\rangle>0\). Thus we have, \[\left\langle\phi|v_{0}\right\rangle=\left[1+\sum_{j\geq 1}\frac{|c_{j}|^{2}}{|c_{0}|^{2}}e^{-2t(\lambda_{j}^{2}-\lambda_{0}^{2})}\right]^{-1/2}\] \[\geq\left[1+\frac{1-\eta^{2}}{\eta^{2}}e^{-2t(\lambda_{1}^{2}-\lambda_{0}^{2})}\right]^{-1/2} \tag{40}\] \[\geq 1-\frac{1-\eta^{2}}{2\eta^{2}}e^{-2t(\lambda_{1}^{2}-\lambda_{0}^{2})}, \tag{41}\] which gives \[\left\||\phi\right\rangle-\left|v_{0}\right\rangle\right\|^{2}\leq\frac{1-\eta^{2}}{\eta^{2}}e^{-2t(\lambda_{1}^{2}-\lambda_{0}^{2})}\leq\frac{1-\eta^{2}}{\eta^{2}}e^{-2t(\lambda_{1}-\lambda_{0})^{2}}. \tag{42}\] Thus by choosing any value of \(t\) such that \[t>\frac{1}{2\Delta^{2}}\log\left(\frac{1-\eta^{2}}{\eta^{2}\varepsilon^{2}}\right), \tag{43}\] we ensure that \(\left\|\left|\phi\right\rangle-\left|v_{0}\right\rangle\right\|\leq\varepsilon\). The total evolution time of the interaction Hamiltonian is \(T=\sqrt{2t}\), which is \[T=\mathcal{O}\left(\frac{1}{\Delta}\sqrt{\log\left(\frac{1}{\eta\varepsilon}\right)}\right). \tag{44}\] Now we have, \[\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\rangle=\frac{|c_{0}|^{2}e^{-2t\lambda_{0}^{2}}}{|\left\langle\phi|v_{0}\right\rangle|^{2}}\geq|c_{0}|^{2}e^{-2t\lambda_{0}^{2}}. \tag{45}\] So the success probability of our algorithm \[\langle\psi_{0}|e^{-T^{2}H^{2}}|\psi_{0}\rangle\geq|c_{0}|^{2}e^{-T^{2}\lambda_{0}^{2}}=\Omega(\eta^{2}), \tag{46}\] where we have used the fact that \(\lambda_{0}\leq 2\varepsilon_{g}\) and so \(T\lambda_{0}=\mathcal{O}(1)\). Overall, this physically motivated quantum algorithm is significantly simpler than implementing standard LCU in the circuit model. Moreover, hybrid qubit-qumode systems are currently being engineered in a number of quantum technological platforms. In the future, we intend to provide an experimental proposal to implement "Analog LCU" on experimental platforms such as ion traps or superconducting systems. Next, we move on to applications of the other approaches of LCU to the GSP problem: we describe how the "Single-Ancilla LCU" technique can be used to develop a randomized quantum algorithm for sampling from ground states. ### Applying Single-Ancilla LCU: Sampling from the ground states of Hamiltonians In this section, suppose that \(H\in\mathbb{R}^{N\times N}\) is a Hermitian matrix whose ground state is \(\left|v_{0}\right\rangle\). We intend to use the techniques of Sec. IV.3, in particular Algorithm 1 and Theorem 1, to estimate the quantity \(\langle v_{0}|O|v_{0}\rangle\) up to \(\varepsilon\)-accuracy for some observable \(O\). As promised, our quantum algorithm will require sampling from the short-depth quantum circuit of Fig. 2 and hence, will be applicable to early fault-tolerant quantum computers. The basic idea is to take advantage of the LCU decomposition of \(f(H)=e^{-tH^{2}}\). Following the analog quantum algorithm in Sec. IV.1, we already know that \(f(H)\left|\psi_{0}\right\rangle\) helps in preparing the ground state of \(H\), for an appropriate choice of \(t\). In this section, we consider a discretized version of this LCU decomposition, i.e. we express \(f(H)\approx\sum_{j}c_{j}e^{-ij\delta H}\) for a suitable step size \(\delta\), where the number of terms scales roughly as \(\sqrt{t}\) (up to logarithmic factors). This allows us to use Algorithm 1 to estimate \(\langle v_{0}|O|v_{0}\rangle\).
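Before constructing the discretized coefficients, it is instructive to see the Gaussian filter of Lemma 8 at work numerically. The sketch below is our own illustration, not part of the algorithm itself: it assumes Python with numpy, exact diagonalization stands in for every quantum subroutine, and the random instance and initial overlap are arbitrary choices. It verifies that the normalized state of Eq. (36) approaches \(|v_{0}\rangle\) at the rate guaranteed by Eq. (42).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Random Hermitian H, shifted so that the ground energy is 0 (Sec. V setup).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2
evals, V = np.linalg.eigh(H)
evals -= evals[0]                    # lambda_0 = 0, spectral gap = evals[1]
Delta = evals[1]

v0 = V[:, 0]                         # ground state
psi0 = (v0 + V[:, 1]) / np.sqrt(2)   # initial overlap eta = 1/sqrt(2)
eta = abs(np.vdot(v0, psi0))

for t in [0.0, 1.0, 4.0, 16.0]:
    # Apply exp(-t H^2) by diagonalization and renormalize: this is the
    # state |phi> of Eq. (36).
    phi = V @ (np.exp(-t * evals ** 2) * (V.conj().T @ psi0))
    phi /= np.linalg.norm(phi)
    ov = np.vdot(v0, phi)
    phi *= ov.conjugate() / abs(ov)  # fix the irrelevant global phase
    err = np.linalg.norm(phi - v0)
    bound = np.sqrt(1 - eta ** 2) / eta * np.exp(-t * Delta ** 2)
    print(t, err, bound)             # err <= bound, as in Eq. (42)
```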
Let us begin by obtaining a \(\gamma\)-accurate LCU decomposition of \(f(H)=e^{-tH^{2}}\). This decomposition has already appeared in prior works [36, 66]. We formally state this via the following Lemma which we prove, for completeness, in the Appendix. **Lemma 9** (LCU decomposition of \(e^{-tH^{2}}\)).: _Let \(0<\gamma<1\) and consider a Hamiltonian \(H\) of unit spectral norm. Furthermore, for any \(t>1\), let us define_ \[X_{M}=\sum_{j=-M}^{M}c_{j}e^{-ij\delta_{t}\sqrt{2t}H},\] _where \(M=\left\lceil\sqrt{2}\left(\sqrt{t}+\sqrt{\log(5/\gamma)}\right)\sqrt{\log(4/\gamma)}\right\rceil\), \(\delta_{t}=\left(\sqrt{2t}+\sqrt{2\log(5/\gamma)}\right)^{-1}\) and,_ \[c_{j}=\frac{\delta_{t}}{\sqrt{2\pi}}e^{-j^{2}\delta_{t}^{2}/2}.\] _Then,_ \[\left\|X_{M}-e^{-tH^{2}}\right\|\leq\gamma.\] Proof.: We prove this in the Appendix (see Sec. A - II). We shall use this lemma to develop our randomized quantum algorithm. First, observe that the \(\ell_{1}\)-norm of the LCU coefficients can be upper bounded by a constant. In fact, \[||c||_{1}=\sum_{j=-M}^{M}|c_{j}| \tag{47}\] \[\leq|c_{0}|+2\sum_{j=1}^{\infty}\frac{\delta_{t}}{\sqrt{2\pi}}e^{-j^{2}\delta_{t}^{2}/2}\] (48) \[\leq|c_{0}|+2\int_{0}^{\infty}\frac{e^{-x^{2}/2}}{\sqrt{2\pi}}dx=1+|c_{0}|\leq 1+\delta_{t}=O(1). \tag{49}\] We will now use these results to estimate \(\left\langle v_{0}|O|v_{0}\right\rangle\) by invoking Algorithm 1. First of all, notice that each iteration of our randomized quantum algorithm requires sampling \(V_{1},V_{2}\) according to \(\left\{U_{j},c_{j}/\|c\|_{1}\right\}\). From Lemma 9, we know that each \(U_{j}=e^{-ij\delta_{t}\sqrt{2t}H}\). Thus, the cost of implementing each \(U_{j}\) would be the cost of simulating \(H\). For this, any Hamiltonian simulation technique can be used. Some of the recent randomized Trotter-based approaches [9, 38, 40, 51] will be more suitable for implementing this algorithm on early fault-tolerant quantum computers. For simplicity and clarity of exposition, we consider the Hamiltonian simulation subroutine as a black box and assume that the cost of implementing the Hamiltonian simulation is upper bounded by the maximum time of evolution for \(H\). From Lemma 9, the largest evolution time is given by \[\tau_{\max}=M\delta_{t}\sqrt{2t}=O\left(\sqrt{t\log\left(1/\gamma\right)}\right), \tag{50}\] where, for the GSP problem, the choice of \(t\) can be obtained from Lemma 8. So, for any observable \(O\) and an initial state \(\rho_{0}=\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\), each run of Algorithm 1 estimates some \(\mu_{j}\), such that \[\mathbb{E}\left[\mu_{j}\right]=\frac{1}{\|c\|_{1}^{2}}\mathrm{Tr}[O\;X_{M}\rho_{0}X_{M}^{\dagger}].\] However, if \(\rho_{g}=\left|v_{0}\right\rangle\left\langle v_{0}\right|\) is the ground state of \(H\), we want the output of Algorithm 1 to be some \(\mu\) such that \[\left|\mu-\mathrm{Tr}[O\rho_{g}]\right|\leq\varepsilon.\] For this, the precision \(\gamma\) must be judiciously chosen, using Theorem 7, so that the output \(\mu\) is \(\varepsilon\)-close to the desired expectation value. Furthermore, from Lemma 8, we know that only the normalized quantum state \[\frac{e^{-tH^{2}}\left|\psi_{0}\right\rangle}{\left\|e^{-tH^{2}}\left|\psi_{0}\right\rangle\right\|}\] is close to the ground state. Consequently, we need information about \(\ell=\left\|e^{-tH^{2}}\left|\psi_{0}\right\rangle\right\|\), which from Eq. (46), we know to be at least \(\eta\), the overlap between the ground state and the initial state.
For now, let us assume that we have exact knowledge of \(\ell=\left\|e^{-tH^{2}}\left|\psi_{0}\right\rangle\right\|\). We will relax this assumption later on. Then, we divide the output \(\mu\) of Algorithm 1 by \(\ell^{2}\), i.e. obtain \(\mu/\ell^{2}\). In fact, we prove via the following theorem that \(\mu/\ell^{2}\) is an \(\varepsilon\)-accurate approximation of \(\operatorname{Tr}[O\rho_{g}]\). **Theorem 10**.: _Let \(\varepsilon,\delta,\gamma\in(0,1)\) and \(\eta\in(0,1/\sqrt{2}]\). Suppose \(H\) is a Hermitian matrix with ground state \(\left|v_{0}\right\rangle\) and let \(\left|\psi_{0}\right\rangle\) be some initial state, prepared in cost \(\tau_{\psi_{0}}\), such that \(\left|\left\langle v_{0}|\psi_{0}\right\rangle\right|=\eta\). Let \(O\) be some observable. Furthermore, for_ \[t=O\left(\frac{1}{\Delta^{2}}\log\left(\frac{\|O\|}{\eta\varepsilon}\right)\right),\] _define \(\ell^{2}=\left\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\right\rangle\). Then if_ \[\gamma\leq\frac{\varepsilon\eta^{2}}{12\|O\|}\] _is such that_ \[\left\|e^{-tH^{2}}-X_{M}\right\|\leq\gamma,\] _and_ \[T\geq O\left(\frac{\|O\|^{2}\ln(2/\delta)}{\varepsilon^{2}\eta^{4}}\right),\] _then Algorithm 1 outputs, with probability at least \(1-\delta\), a parameter \(\mu\) such that_ \[\left|\frac{\mu}{\ell^{2}}-\left\langle v_{0}|O|v_{0}\right\rangle\right|\leq\varepsilon,\] _using \(T\) calls to the quantum circuit in Fig. 2, with the cost of each such repetition at most \(\tau_{\max}+\tau_{\psi_{0}}\), where_ \[\tau_{\max}=O\left(\frac{1}{\Delta}\log\left(\frac{\|O\|}{\varepsilon\eta}\right)\right).\] Proof.: We have \[\left\|e^{-tH^{2}}-X_{M}\right\|\leq\gamma, \tag{51}\] where the upper bound on \(\gamma\) in the theorem statement ensures \(\gamma\leq\frac{\varepsilon\ell^{2}}{12\|O\|}\) as \(\ell\geq\eta\). From Theorem 1, we have that for this choice of \(\gamma\), \[\left|\frac{\mu}{\ell^{2}}-\frac{\operatorname{Tr}[Oe^{-tH^{2}}\rho_{0}e^{-tH^{2}}]}{\ell^{2}}\right|\leq\varepsilon/2. \tag{52}\] This implies that the sample complexity is \[T\geq O\left(\frac{\|O\|^{2}\log(2/\delta)\|c\|_{1}^{4}}{\varepsilon^{2}\ell^{4}}\right)=O\left(\frac{\|O\|^{2}\log(2/\delta)}{\varepsilon^{2}\eta^{4}}\right), \tag{53}\] where we have used the fact that \(\left\|c\right\|_{1}=O(1)\) (from Eq. (49)) and the bound \(\ell\geq\eta\) (from Eq. (46)). Now, Algorithm 1 outputs \(\mu\) such that \[\frac{\mu}{\ell^{2}}=\frac{\left\|c\right\|_{1}^{2}}{T\ell^{2}}\sum_{j=1}^{T}\mu_{j}. \tag{54}\] If \(\rho_{0}=\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\) and \(\rho_{g}=\left|v_{0}\right\rangle\left\langle v_{0}\right|\), by triangle inequality, \[\left|\frac{\mu}{\ell^{2}}-\mathrm{Tr}[O\rho_{g}]\right|\leq\left|\frac{\mu}{\ell^{2}}-\frac{\mathrm{Tr}[Oe^{-tH^{2}}\rho_{0}e^{-tH^{2}}]}{\ell^{2}}\right|+\left|\mathrm{Tr}[O\rho_{g}]-\frac{\mathrm{Tr}[Oe^{-tH^{2}}\rho_{0}e^{-tH^{2}}]}{\ell^{2}}\right|. \tag{55}\] We have already shown that the first expression in the RHS of Eq. (55) is bounded by \(\varepsilon/2\). In order to upper bound the second expression in the RHS of Eq. (55), let us consider the normalized quantum state \(\left|\phi\right\rangle\) as defined in Eq. (36), i.e.
\[\left|\phi\right\rangle=\frac{e^{-tH^{2}}\left|\psi_{0}\right\rangle}{\left\|e^{-tH^{2}}\left|\psi_{0}\right\rangle\right\|}.\] Then, we have: \[\left|\left\langle v_{0}|O|v_{0}\right\rangle-\left\langle\phi|O|\phi\right\rangle\right|\leq\left\|O\right\|\left\|\,\left|\phi\right\rangle\left\langle\phi\right|-\left|v_{0}\right\rangle\left\langle v_{0}\right|\,\right\|_{1}\qquad\text{[ H\"older's inequality ]} \tag{56}\] \[\leq 2\left\|O\right\|\left\|\,\left|\phi\right\rangle-\left|v_{0}\right\rangle\,\right\| \tag{57}\] \[\leq\varepsilon/2,\qquad\text{[ From Lemma 8, for the choice of $t$ in the Theorem statement ]} \tag{58}\] since Lemma 8 guarantees \(\left\|\,\left|\phi\right\rangle-\left|v_{0}\right\rangle\,\right\|\leq\varepsilon/(4\|O\|)\) for this choice of \(t\). Noting that \(\left\langle\phi|O|\phi\right\rangle=\mathrm{Tr}[Oe^{-tH^{2}}\rho_{0}e^{-tH^{2}}]/\ell^{2}\), the second expression in the RHS of Eq. (55) is thus also bounded by \(\varepsilon/2\). Combining the two bounds, we obtain \[\left|\frac{\mu}{\ell^{2}}-\mathrm{Tr}[O\rho_{g}]\right|\leq\varepsilon, \tag{59}\] which completes the proof.
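To make the mechanics of Theorem 10 concrete, the snippet below classically emulates the sampling step, reusing `H`, `U`, `c`, `js` and `t` from the previous snippet. It assumes that Algorithm 1 simply averages Hadamard-test estimates of \(\langle\psi_{0}|V_{2}^{\dagger}OV_{1}|\psi_{0}\rangle\) over pairs \((V_{1},V_{2})\) sampled according to \(\{U_{j},c_{j}/\|c\|_{1}\}\); here those inner products are computed by direct statevector arithmetic rather than on a quantum circuit.

```python
# Classical emulation of the sampling behind Theorem 10 (reuses the setup
# above). Each sample is an unbiased estimate of Tr[O X_M rho_0 X_M^dag]
# after rescaling by ||c||_1^2.
evals, evecs = np.linalg.eigh(H)
v0 = evecs[:, 0]                        # ground state (eigenvalue 0)
psi0 = v0 + 0.3 * rng.normal(size=n)    # initial state with overlap eta
psi0 /= np.linalg.norm(psi0)

O = np.diag(rng.normal(size=n))         # an arbitrary observable
ell2 = np.linalg.norm(expm(-t * H @ H) @ psi0) ** 2

c1 = c.sum()
T = 20000
samples = rng.choice(js, size=(T, 2), p=c / c1)
mu = np.mean([np.real(psi0.conj() @ U[k].conj().T @ O @ U[j] @ psi0)
              for j, k in samples]) * c1**2

print("mu / ell^2 =", mu / ell2)
print("<v0|O|v0>  =", np.real(v0.conj() @ O @ v0))
```

With \(T=2\times 10^{4}\) samples, the estimate \(\mu/\ell^{2}\) matches \(\langle v_{0}|O|v_{0}\rangle\) up to the expected \(O(\|O\|\|c\|_{1}^{2}/\sqrt{T})\) statistical fluctuation.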
In the previous theorem, we assumed exact knowledge of \(\ell^{2}\). This requirement can be relaxed. In Sec. A - I, we prove that we only require the knowledge of an approximation \(\tilde{\ell}\) of \(\ell^{2}\) (Theorem A1). We prove that if \[\left|\tilde{\ell}-\ell^{2}\right|\leq\frac{\varepsilon\ell^{2}}{6\|O\|}, \tag{60}\] and the estimate \(\mu\) in Theorem 10 is such that \[\left|\frac{\mu}{\ell^{2}}-\frac{\mathrm{Tr}[Of(H)\rho f(H)]}{\ell^{2}}\right|\leq\varepsilon/3,\] then we have \[\left|\frac{\mu}{\tilde{\ell}}-\frac{\mathrm{Tr}[Of(H)\rho f(H)]}{\ell^{2}}\right|\leq\varepsilon. \tag{61}\] Thus, we only need an approximation \(\tilde{\ell}\) of \(\ell^{2}\) that satisfies Eq. (60). Moreover, one can use the recently developed quantum algorithm in Ref. [12] to obtain an approximation to \(\ell^{2}\). In particular, the quantum algorithm in Ref. [12] obtains samples from a Hadamard test circuit, which is also applicable for early fault-tolerant quantum computers (short depth and needing only one ancilla qubit). So, one can make use of this algorithm to estimate \(\tilde{\ell}\), an approximation to the normalization factor \(\ell^{2}\), and then run our quantum algorithms. Hence, our assumption of the prior knowledge of \(\tilde{\ell}\) is not a strong one. For our results, however, we assume that such an estimate \(\tilde{\ell}\) has already been obtained. This gives us the main theorem of this section, which we state next:

**Theorem 11** (Sampling from ground states of Hamiltonians).: _Let \(\varepsilon,\delta,\gamma\in(0,1)\) and \(\eta\in(0,1/\sqrt{2}]\). Suppose \(H\) is a Hermitian matrix with ground state \(|v_{0}\rangle\) and let \(|\psi_{0}\rangle\) be some initial state, prepared in cost \(\tau_{\psi_{0}}\), such that \(|\left\langle v_{0}|\psi_{0}\right\rangle|=\eta\). Let \(O\) be some observable. Also, for_ \[t=O\left(\frac{1}{\Delta}\log\left(\frac{\|O\|}{\eta\varepsilon}\right)\right),\] _define \(\ell^{2}=\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\rangle\). Suppose we know \(\tilde{\ell}\), an approximation of \(\ell^{2}\) such that_ \[\left|\tilde{\ell}-\ell^{2}\right|\leq\frac{\varepsilon\ell^{2}}{6\|O\|}.\] _Then if_ \[\gamma\leq\frac{\varepsilon\eta^{2}}{36\|O\|},\] _such that_ \[\left\|e^{-tH^{2}}-X_{M}\right\|\leq\gamma,\] _and_ \[T\geq O\left(\frac{\|O\|^{2}\ln(2/\delta)}{\varepsilon^{2}\eta^{4}}\right),\] _then Algorithm 1 outputs, with probability at least \(1-\delta\), a parameter \(\mu\) such that_ \[\left|\frac{\mu}{\tilde{\ell}}-\langle v_{0}|O|v_{0}\rangle\right|\leq\varepsilon,\] _using \(T\) calls to the quantum circuit in Fig. 2, where the cost of each such repetition is at most \(\tau_{\max}+\tau_{\psi_{0}}\), where_ \[\tau_{\max}=O\left(\frac{1}{\Delta}\log\left(\frac{\|O\|}{\varepsilon\eta}\right)\right).\]

Proof.: The upper bound on \(\gamma\) ensures that \[\gamma\leq\frac{\varepsilon\ell^{2}}{36\|O\|},\] as \(\ell>\eta\).
So from Theorem 1, we have \[\left|\frac{\mu}{\ell^{2}}-\frac{\mathrm{Tr}[Oe^{-tH^{2}}\rho_{0}e^{-tH^{2}}]}{\ell^{2}}\right|\leq\varepsilon/6.\] Thus, combining this bound with Theorem A1, it follows that as long as \[\left|\tilde{\ell}-\ell^{2}\right|\leq\frac{\varepsilon\ell^{2}}{6\|O\|},\] we have \[\left|\frac{\mu}{\tilde{\ell}}-\frac{\mathrm{Tr}[Oe^{-tH^{2}}\rho_{0}e^{-tH^{2}}]}{\ell^{2}}\right|\leq\varepsilon/2.\] The rest of the proof follows exactly like the derivation of Theorem 10.

### Ground state preparation using QSVT on fully fault-tolerant quantum computers

In this section, we provide a quantum algorithm for the GSP problem for fully fault-tolerant quantum computers. The key idea is to implement the function \(f(H)=e^{-tH^{2}}\) in the circuit model. A straightforward approach would be to use the decomposition of \(f(H)\) in Lemma 9 and implement a standard LCU procedure. However, a more efficient approach is to implement a polynomial approximation of \(f(H)\) using QSVT. This is what we implement next. A polynomial approximation of \(e^{-tx^{2}}\) can be obtained for \(x\in[-1,1]\) by modifying the polynomial in Lemma 29. In the following Lemma, we prove that such a polynomial exists.

**Lemma 12** (Polynomial approximation to \(e^{-tx^{2}}\)).: _Suppose \(x\in[-1,1]\), \(\varepsilon\in[0,1/2)\) and \(t\in\mathbb{R}^{+}\). Furthermore, suppose \(d=\lceil\max\{te^{2}/2,\ln(2/\varepsilon)\}\rceil\). Then, there exists a polynomial \(\tilde{q}_{t,d,d^{\prime}}(x)\) of degree_ \[d^{\prime}=\lceil\sqrt{2d\ln(4/\varepsilon)}\rceil\in O\left(\sqrt{t}\log(1/\varepsilon)\right),\] _for which the following holds_ \[\sup_{x\in[-1,1]}\left|e^{-tx^{2}}-\tilde{q}_{t,d,d^{\prime}}(x)\right|\leq\varepsilon.\]

Proof.: This is proven in Sec. A - III of the Appendix.

We use Lemma 12 to obtain a block encoding of \(e^{-tH^{2}}\), given an approximate block encoding of \(H\). Subsequently, we shall show that this results in a robust quantum algorithm for preparing the ground state of \(H\) under the assumptions we have considered.

**Lemma 13**.: _Let \(H\) be a Hermitian matrix with eigenvalues in \([-1,1]\) and \(\varepsilon\in(0,1/2)\). Furthermore, suppose \(t\in\mathbb{R}^{+}\) and \(U_{H}\) is an \((1,a,\delta)\)-block encoding of \(H\), implementable in time \(T_{H}\). Also, let \(d=\lceil\max\{te^{2}/2,\ln(4/\varepsilon)\}\rceil\) and \(d^{\prime}=\lceil\sqrt{2d\ln(8/\varepsilon)}\rceil\). Then, provided_ \[\delta\leq\frac{\varepsilon^{2}}{128d\ \ln(8/\varepsilon)},\] _we can implement an \((1,a+1,\varepsilon)\)-block encoding of \(e^{-tH^{2}}\) in cost_ \[T=O\left(T_{H}\sqrt{t}\log(1/\varepsilon)\right).\]

Proof.: Suppose \(\tilde{H}=\left(\left\langle 0\right|^{\otimes a}\otimes I\right)U_{H}\left(\left|0\right\rangle^{\otimes a}\otimes I\right)\). Then, from the definition of block encoding of operators, \(\left\|H-\tilde{H}\right\|\leq\delta\). Also, from Lemma 12, we can use the polynomial of degree \(d^{\prime}=\lceil\sqrt{2d\log(8/\varepsilon)}\rceil\) to implement an \((1,a+1,\varepsilon/2)\)-block encoding of \(\tilde{q}_{t,d,d^{\prime}}(\tilde{H})\) using \(d^{\prime}\) queries to \(U_{H}\), i.e. in cost \[T=d^{\prime}T_{H}\in O\left(T_{H}\sqrt{t}\log(1/\varepsilon)\right).\] The number of ancilla qubits increases by one because of the QSVT procedure.
So, we have \[\left\|e^{-tH^{2}}-\tilde{q}_{t,d,d^{\prime}}\left(\tilde{H}\right)\right\|\leq\left\|e^{-tH^{2}}-\tilde{q}_{t,d,d^{\prime}}(H)\right\|+\left\|\tilde{q}_{t,d,d^{\prime}}(H)-\tilde{q}_{t,d,d^{\prime}}(\tilde{H})\right\| \tag{62}\] \[\leq\varepsilon/2+4d^{\prime}\sqrt{\delta} \tag{63}\] \[\leq\varepsilon/2+\varepsilon/2=\varepsilon.\qquad\text{[ Substituting the value of $\delta$ and $d^{\prime}$ ]} \tag{64}\]

Now that we have a procedure to implement a block encoding of \(e^{-tH^{2}}\), given an approximate block encoding of \(H\), we can use this to obtain a circuit model quantum algorithm for preparing the \(0\)-eigenstate of \(H\). As before, let us make some assumptions on the spectrum of \(H\). We assume that we are given a Hamiltonian \(H\) of unit norm with ground energy \(\lambda_{0}\), and we intend to prepare a state that is close to its ground state, \(\left|v_{0}\right\rangle\). We assume that the gap between the ground state and the rest of the spectrum is lower bounded by \(\Delta\). We also assume that we have knowledge of \(E_{0}\) such that \[\left|\lambda_{0}-E_{0}\right|\leq O\left(\Delta/\sqrt{\log\frac{1}{\eta\varepsilon}}\right).\]

**Lemma 14**.: _Let \(\varepsilon\in(0,1/2)\) and \(H\) be a Hamiltonian. Furthermore, suppose we are given \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of \(H\), implemented in time \(T_{H}\). Let \(\left|v_{0}\right\rangle\) be the ground state of \(H\) with eigenvalue \(\lambda_{0}\), such that the value of \(\lambda_{0}\) is known up to precision \(\varepsilon_{g}\in\mathcal{O}\left(\Delta/\sqrt{\log\frac{1}{\eta\varepsilon}}\right)\), where \(\Delta\) is a lower bound on the spectral gap of \(H\)._

_Additionally, let us assume access to a state preparation procedure \(B\) which prepares a state \(\left|\psi_{0}\right\rangle\) in time \(T_{\psi_{0}}\) such that \(\left|\left\langle\psi_{0}|v_{0}\right\rangle\right|\geq\eta\). Also, let_ \[\delta\leq\frac{\varepsilon^{2}\eta^{2}}{512d\ \ln\left(\frac{16}{\eta\varepsilon}\right)},\] _where \(d=\lceil\max\{te^{2}/2,\ln(8/\varepsilon)\}\rceil\), and_ \[t>\frac{1}{2\Delta^{2}}\log\left(\frac{4(1-\eta^{2})}{\eta^{4}\varepsilon^{2}}\right).\] _Then there exists a quantum algorithm that prepares a quantum state that is \(O(\varepsilon)\)-close to \(|v_{0}\rangle\) with \(\Omega(1)\) probability in cost_ \[T=O\left(\frac{T_{H}}{\eta\Delta}\log\left(\frac{1}{\eta\varepsilon}\right)+\frac{T_{\psi_{0}}}{\eta}\right). \tag{65}\]

Proof.: In Lemma 13, we replace \(\varepsilon\) with \(\varepsilon\eta/2\) to prepare an \((1,a+1,\varepsilon\eta/2)\)-block encoding of \(e^{-tH^{2}}\). Furthermore, we choose \[t\geq\frac{1}{2\Delta^{2}}\log\left(\frac{4(1-\eta^{2})}{\eta^{4}\varepsilon^{2}}\right)=O\left(\frac{1}{\Delta^{2}}\log\left(\frac{1}{\eta\varepsilon}\right)\right). \tag{66}\] To get an \(\varepsilon\eta/2\)-precision in the block encoding, the degree of the polynomial \(\tilde{q}_{t,d,d^{\prime}}(H^{\prime})\) is \[d^{\prime}=\left\lceil\sqrt{2d\ln\left(\frac{16}{\eta\varepsilon}\right)}\right\rceil,\] where \(d=\left\lceil\max\{te^{2}/2,\ln\left(\frac{8}{\varepsilon\eta}\right)\}\right\rceil\).
This implies that the block encoding of \(H\) needs to be \(\delta\)-precise, where \[\delta\leq\frac{\varepsilon^{2}\eta^{2}}{512d\ln\left(\frac{16}{\eta\varepsilon}\right)}.\] Thus, with cost \[O\left(\frac{T_{H}}{\Delta}\log\left(\frac{1}{\eta\varepsilon}\right)+T_{\psi_{0}}\right),\] we prepare a quantum state that is \(O\left(\varepsilon\eta/2\right)\)-close to \[|\eta_{t}\rangle=|\bar{0}\rangle\,e^{-tH^{2}}|\psi_{0}\rangle+|\Phi^{\perp}\rangle\,.\] Post-selecting on obtaining \(|\bar{0}\rangle\) in the first register, we obtain a quantum state that is \(O(\varepsilon\eta/2)\)-close to \[|\phi\rangle=\frac{e^{-tH^{2}}}{\sqrt{\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\rangle}}\,|\psi_{0}\rangle\,, \tag{67}\] with amplitude \(\sqrt{\langle\psi_{0}|e^{-2tH^{2}}|\psi_{0}\rangle}=\Omega(\eta)\), where the lower bound is obtained from Eq. (46). Now, by choosing \(t\) as in Eq. (66) and replacing it in Eq. (42), we have \[\big{\|}|v_{0}\rangle-|\phi\rangle\big{\|}\leq O(\varepsilon\eta/2).\] By the triangle inequality, this implies that the quantum state prepared is \(O(\varepsilon\eta)\)-close to \(|v_{0}\rangle\), with post-selection amplitude \(\Omega(\eta)\). So, by using \(O(1/\eta)\) rounds of amplitude amplification, we obtain a quantum state that is \(O(\varepsilon)\)-close to \(|v_{0}\rangle\) with probability \(\Omega(1)\). The overall cost will be \[T=O\left(\frac{T_{H}}{\eta\Delta}\log\left(\frac{1}{\eta\varepsilon}\right)+\frac{T_{\psi_{0}}}{\eta}\right).\]

Now suppose we have a generic Hamiltonian with eigenvalues in \([-1,1]\) such that the ground energy is unknown. Then the function \(e^{-tH^{2}}\) helps prepare the \(0\)-eigenstate of \(H\), even if this is not necessarily the ground state of \(H\). One application of this procedure is that it leads to an optimal quantum linear systems algorithm, as we shall see in the next section.

## VI Applications to quantum linear systems

The quantum linear systems problem can be stated as follows: given access to a Hermitian matrix \(H\) and some initial state \(\ket{b}\), prepare the quantum state \(\ket{x}=H^{-1}\ket{b}/\left\lVert H^{-1}\ket{b}\right\rVert\). Ever since the first quantum algorithm for this problem by Harrow, Hassidim, and Lloyd [62], the quantum linear systems problem has been widely studied, and the complexity of solving it has been progressively improved through a series of results [25; 26; 46]. Recently, adiabatic-inspired approaches have also been reported [52; 53; 55]. Just like in the previous section, we apply "Analog LCU" to develop two quantum linear systems algorithms in continuous time (Sec. VI.1): the first one is an analog variant of the direct approach in [25], while the second one is more amenable to near-term implementations. Following this, we use the "Single-Ancilla" approach to develop two randomized quantum algorithms for this problem that are implementable on early fault-tolerant quantum computers (Sec. VI.2). Finally, just like in the previous section, we provide an algorithm for solving quantum linear systems using QSVT on fully fault-tolerant quantum computers (Sec. VI.3). Let us begin by formally stating the quantum linear systems problem.

Quantum linear systems: Suppose we have access to a Hermitian matrix \(H\in\mathbb{R}^{N\times N}\) such that its eigenvalues lie in the interval \([-1,-1/\kappa]\cup[1/\kappa,1]\).
Then, given a procedure that prepares the \(N\)-dimensional quantum state \(\ket{b}\), a quantum linear systems algorithm prepares a quantum state that is \(O(\varepsilon)\)-close to \[\ket{x}=\frac{H^{-1}\ket{b}}{\left\lVert H^{-1}\ket{b}\right\rVert}.\] It is worth noting that the quantum linear systems algorithm is different from its classical counterpart in that, by preparing \(\ket{x}\), one does not have access to the entries of the classical vector \(\vec{x}\). Thus, in quantum linear systems, one is often interested in extracting useful information out of the state \(\ket{x}\), such as estimating the expectation value \(\bra{x}O\ket{x}\) for some observable \(O\). The assumption that \(H\) is a Hermitian matrix is without loss of generality. Given any non-Hermitian \(H\in\mathbb{R}^{N\times d}\), there exist efficient procedures to obtain a Hermitian matrix \(\tilde{H}\) of dimension \((N+d)\times(N+d)\), such that the absolute values of its (non-zero) eigenvalues are equal to the non-zero singular values of \(H\). Then, one may implement quantum linear systems with \(\tilde{H}\) instead of \(H\).

### Applying Analog LCU: Continuous-time quantum linear systems algorithms

In this section, we develop analog quantum algorithms for solving quantum linear systems. Following the exposition in Sec. IV.1, we shall assume that we are given a system Hamiltonian \(H\). We couple this Hamiltonian (the primary system) to two ancillary continuous-variable systems via the interaction Hamiltonian \[H^{\prime}=H\otimes\hat{y}\otimes\hat{z}. \tag{68}\] The primary system will be initialized in the quantum state \(\ket{b}\), while the two ancillary systems will be in some continuous-variable states. The quantum algorithms developed in this subsection involve evolving the overall system according to \(H^{\prime}\) for some time. Following this, we shall show that the primary system is in the state \(\ket{x}\) (or close to it) with an amplitude of \(\Omega(1/\kappa)\). We begin with the first quantum algorithm, which is an analog variant of the quantum linear systems algorithm of [25].

**Continuous-time quantum linear systems algorithm:** Consider the function \(f(y)=ye^{-y^{2}/2}\), where \(y\in\mathbb{R}\). As \[\int_{0}^{\infty}dy\;f(y)=1, \tag{69}\] we also have \[\int_{0}^{\infty}dy\;f(xy)=1/x, \tag{70}\] which holds for any \(x\neq 0\). For any function \(g(y)\), suppose its Fourier transform is \(\mathcal{F}(g(y))=F(\omega)\); then \(\mathcal{F}(g^{\prime}(y))=i\omega F(\omega)\). If \(g(y)=e^{-y^{2}/2}\), we have that \(g^{\prime}(y)=-ye^{-y^{2}/2}=-f(y)\). This implies \[\frac{i}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izy}=ye^{-y^{2}/2}, \tag{71}\] and \[\frac{1}{x}=\frac{i}{\sqrt{2\pi}}\int_{0}^{\infty}dt\int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izxt}.\] Next, we will prove via a lemma that the upper limit of the outer integral can be truncated at \(T=\widetilde{O}(\kappa)\) without introducing significant error.

**Lemma 15**.: _Suppose \(\varepsilon>0\), \(z\in\mathbb{R}\), and \(x\in\mathbb{R}\setminus\{0\}\). Then there exists \(T\in\Theta\left(\kappa\sqrt{\log(\kappa/\varepsilon)}\right)\), such that on the domain \([-1,-1/\kappa]\cup[1/\kappa,1]\),_ \[\left|\frac{1}{x}-\frac{1}{\sqrt{2\pi}}\int_{0}^{T}dt\ \int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izxt}\right|\leq\varepsilon.
\tag{72}\]

Proof.: We have to evaluate the quantity \[\left|\frac{1}{\sqrt{2\pi}}\int_{T}^{\infty}dt\ \int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izxt}\right|.\] We first evaluate the inner integral and obtain \[\left|\frac{1}{\sqrt{2\pi}}\int_{T}^{\infty}dt\ \int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izxt}\right|=\left|\int_{T}^{\infty}dt\ xt\ e^{-x^{2}t^{2}/2}\right|\qquad\text{[ Using Eq. (71) ]} \tag{73}\] \[=\left|\frac{1}{x}\int_{x^{2}T^{2}/2}^{\infty}dy\ e^{-y}\right|\qquad\text{[ }y=x^{2}t^{2}/2\text{ ]} \tag{74}\] \[=\left|\frac{1}{x}\cdot e^{-x^{2}T^{2}/2}\right| \tag{75}\] \[\leq\frac{1}{|x|}\left|e^{-x^{2}T^{2}/2}\right|. \tag{76}\] For \(T=\kappa\sqrt{2\log(\kappa/\varepsilon)}\) and \(|x|\geq 1/\kappa\), we have \(\left|e^{-x^{2}T^{2}/2}\right|\leq\varepsilon/\kappa\). As \(1/|x|\leq\kappa\), Eq. (76) is upper bounded by \(\varepsilon\). So finally, \[\left|\frac{1}{x}-\frac{1}{\sqrt{2\pi}}\int_{0}^{T}dt\ \int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izxt}\right|=\left|\frac{1}{\sqrt{2\pi}}\int_{T}^{\infty}dt\ \int_{-\infty}^{\infty}dz\ ze^{-z^{2}/2}e^{-izxt}\right|\leq\varepsilon. \tag{77}\]

In order to design the analog quantum algorithm, consider that the effective interaction Hamiltonian is \(H^{\prime}=H\otimes\hat{y}\otimes\hat{z}\). While the primary system is prepared in some input state \(\ket{b}\), the first ancilla system is prepared in the first-excited state of a one-dimensional quantum harmonic oscillator \[\ket{\psi_{h}}=\frac{1}{(2\pi)^{1/4}}\int_{-\infty}^{\infty}dy\ ye^{-y^{2}/4}\ket{y}. \tag{78}\] The second ancilla system is in the ground state of a "particle in a ring" of diameter 1, given by \[\ket{\tau}=\int_{0}^{1}dz\ \ket{z}. \tag{79}\] Then, evolving the overall system according to \(H^{\prime}\) for time \(T\), we obtain \[\ket{\eta_{t}}=e^{-iH^{\prime}T}\ket{b}\ket{\psi_{h}}\ket{\tau} \tag{80}\] \[=\int_{0}^{1}dz\ \int_{-\infty}^{\infty}\frac{dy}{(2\pi)^{1/4}}\ ye^{-y^{2}/4}e^{-iyzHT}\ket{b}\ket{y}\ket{z} \tag{81}\] \[=\frac{1}{T}\int_{0}^{T}dt\ \int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}\ ze^{-z^{2}/2}e^{-iztH}\ket{b}\ket{\psi_{h}}\ket{\tau}+\ket{\Phi}^{\perp}.\qquad\text{[ Change of variable }t=Ty\text{ ]} \tag{82}\] Now, by choosing time \(T=\Theta\left(\kappa\sqrt{\log(\kappa/\varepsilon)}\right)\), from Lemma 15, we obtain a quantum state that is \(O(\varepsilon/T)\)-close to \[\ket{\eta_{t}}=\frac{H^{-1}}{T}\ket{b}\ket{\psi_{h}}\ket{\tau}+\ket{\Phi}^{\perp}. \tag{83}\] By post-selecting on \(\ket{\psi_{h}}\) in the second register, one obtains \(\ket{x}\) in the first register with probability \(\widetilde{\Omega}(1/\kappa^{2})\). Alternatively, \(\widetilde{O}(\kappa)\) rounds of amplitude amplification (which is a circuit model procedure) can yield a quantum state \(O(\varepsilon)\)-close to \(\ket{x}\).

Although this procedure works in general, the quantum state \(\ket{\tau}\) might be difficult to prepare experimentally. In fact, for continuous-variable systems, Gaussian states are the easiest to prepare and manipulate [49]. So, next, we provide a quantum algorithm for which it suffices to prepare both the ancillary registers in Gaussian states.

**Continuous-time quantum linear systems algorithm using only Gaussian states:** The previous quantum algorithm requires preparing the non-Gaussian continuous-variable state \[\ket{\tau}=\frac{1}{\sqrt{T}}\int_{0}^{T}dz\ \ket{z}.\] Since Gaussian states are typically easier to generate and manipulate, let us design alternative algorithms using Gaussian states only.
The general idea is to approximate \(\int_{-\infty}^{+\infty}dt\) by \(\int_{-\infty}^{+\infty}dt\ e^{-t^{2}/2T^{2}}\) (rather than \(\int_{-T}^{T}dt\)) for large enough \(T\). The analogue of Lemma 15 becomes:

**Lemma 16**.: _Suppose \(\varepsilon>0\), \(z\in\mathbb{R}\), and \(x\in\mathbb{R}\setminus\{0\}\). Then, for any \(T\geq\kappa^{3/2}/\sqrt{\varepsilon}\), on the domain \([1/\kappa,1]\),_ \[\left|\frac{1}{x}-\frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\ e^{-t^{2}/2T^{2}}\ \int_{-\infty}^{+\infty}dz\ e^{-z^{2}/2}e^{-ixtz}\right|\leq\Theta(\varepsilon). \tag{84}\]

Proof.: Using the fact that the Fourier transform of a Gaussian is a Gaussian, we have \[\frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\ e^{-t^{2}/2T^{2}}\ \int_{-\infty}^{+\infty}dz\ e^{-z^{2}/2}e^{-ixtz}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}dt\ e^{-t^{2}/2T^{2}}\ e^{-x^{2}t^{2}/2}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}dt\ e^{-\left(x^{2}+1/T^{2}\right)t^{2}/2}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}dt\ e^{-\tilde{x}^{2}t^{2}/2}=\frac{1}{\tilde{x}},\] where we have set \(\tilde{x}=\sqrt{x^{2}+1/T^{2}}\). Therefore, it remains to bound \[\left|\frac{1}{x}-\frac{1}{\tilde{x}}\right|=\left|\frac{1}{x}\left(1-\frac{x}{\tilde{x}}\right)\right|\leq\frac{1}{|x|}\left|1-\frac{1}{\sqrt{1+1/x^{2}T^{2}}}\right|\leq\frac{1}{|x|}\cdot\frac{1}{x^{2}T^{2}}\leq\varepsilon,\] where the final inequality uses \(|x|\geq 1/\kappa\) and \(T\geq\kappa^{3/2}/\sqrt{\varepsilon}\).

Unfortunately, the scaling of \(T\) is worse than for the non-Gaussian approach, since \(T\) scales as \(\kappa^{3/2}\) (instead of linearly) and the dependence on the precision is \(1/\sqrt{\varepsilon}\) (rather than inverse-logarithmic). Moreover, as the Gaussian function is even, the procedure works only for positive semi-definite Hamiltonians. Nevertheless, this allows us to design a quantum linear systems algorithm using only Gaussian states as ancillae. Let us again consider the interaction Hamiltonian \(H^{\prime}=H\otimes\hat{y}\otimes\hat{z}\), where \(H\) is now some positive semidefinite Hamiltonian with its eigenvalues lying in \([1/\kappa,1]\). We prepare both the ancilla registers in a Gaussian state which, similarly to Sec. V.1, is defined as follows \[\left|\psi_{g}\right\rangle=\frac{1}{(2\pi)^{1/4}}\int_{-\infty}^{\infty}dz\ e^{-z^{2}/4}\left|z\right\rangle.\] Indeed, it suffices to let the state \(\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle\) evolve under the Hamiltonian \(H^{\prime}\) for time \(T\) to obtain \[e^{-iH^{\prime}T}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dz\ \int_{-\infty}^{\infty}dy\ e^{-(y^{2}+z^{2})/4}e^{-iyzHT}\left|b\right\rangle\left|y\right\rangle\left|z\right\rangle.\] If we choose some \(T\geq\kappa^{3/2}/\sqrt{\varepsilon}\), we have \[\left(I\otimes\left|\psi_{g}\right\rangle\left\langle\psi_{g}\right|\otimes\left|\psi_{g}\right\rangle\left\langle\psi_{g}\right|\right)e^{-iH^{\prime}T}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle=\frac{1}{2\pi}\int_{-\infty}^{+\infty}dz\ \int_{-\infty}^{+\infty}dy\ e^{-(y^{2}+z^{2})/2}e^{-iyzHT}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle \tag{85}\] \[=\frac{1}{2\pi T}\int_{-\infty}^{+\infty}dt\ e^{-t^{2}/2T^{2}}\ \int_{-\infty}^{+\infty}dz\ e^{-z^{2}/2}e^{-itzH}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle, \tag{86}\] where we have used the change of variable \(t=Ty\).
So, \[e^{-iH^{\prime}T}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle=\frac{1}{2\pi T}\int_{-\infty}^{+\infty}dt\ e^{-t^{2}/2T^{2}}\ \int_{-\infty}^{+\infty}dz\ e^{-z^{2}/2}e^{-itzH}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle+\left|\Phi\right\rangle^{\perp} \tag{87}\] \[=\frac{H^{-1}}{T}\left|b\right\rangle\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle+\left|\Phi\right\rangle^{\perp}+O(\varepsilon/T).\qquad\text{[ From Lemma 16 ]} \tag{88}\] Thus, by post-selecting on \(\left|\psi_{g}\right\rangle\left|\psi_{g}\right\rangle\) in the two ancilla registers, we obtain a quantum state that is \(O(\varepsilon)\)-close to \(\left|x\right\rangle\), with amplitude \(\widetilde{\Omega}\left(\sqrt{\varepsilon}/\kappa^{3/2}\right)\). Although the complexity of this algorithm is worse than that of the continuous-time quantum algorithm in the previous section, it requires only Gaussian states. Consequently, it is more suitable for near-term implementation.
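The trade-off just established is easy to see numerically. Using the closed forms derived in the proofs of Lemma 15 (truncation error \((1/x)e^{-x^{2}T^{2}/2}\), from Eq. (76)) and Lemma 16 (windowing error \(|1/x-1/\sqrt{x^{2}+1/T^{2}}|\)), the following sketch compares the evolution times both approaches need to reach the same precision; the values of \(\kappa\) and \(\varepsilon\) are arbitrary.

```python
import numpy as np

kappa, eps = 50.0, 1e-3
xs = np.linspace(1 / kappa, 1, 2000)

# Lemma 15: truncating the outer integral at T1 leaves error (1/x) e^{-x^2 T1^2/2}.
T1 = kappa * np.sqrt(2 * np.log(kappa / eps))
err_sharp = np.exp(-xs**2 * T1**2 / 2) / xs

# Lemma 16: the Gaussian window yields 1/sqrt(x^2 + 1/T2^2) instead of 1/x.
T2 = kappa**1.5 / np.sqrt(eps)
err_gauss = np.abs(1 / xs - 1 / np.sqrt(xs**2 + 1 / T2**2))

print(f"Lemma 15: T = {T1:10.1f}, max error = {err_sharp.max():.2e}")
print(f"Lemma 16: T = {T2:10.1f}, max error = {err_gauss.max():.2e}")
```

Both errors come out at roughly \(\varepsilon\), but the Gaussian-window construction needs an evolution time larger by a factor of about \(\sqrt{\kappa/\varepsilon}\), in line with the discussion above.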
One can improve the complexity of this quantum linear systems algorithm by replacing the Gaussian state in the second register with the flat state \(\ket{\tau}\). For positive semidefinite Hamiltonians, if the first ancillary system is in a Gaussian state while the second one is in \(\ket{\tau}\), we can still obtain a quantum state that is \(O(\varepsilon/\kappa)\)-close to the solution of the quantum linear systems problem in time \(\widetilde{O}(\kappa)\). This follows from observing \[\frac{1}{x}=\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}e^{-z^{2}/2}e^{-ixtz}=2\int_{0}^{\infty}\frac{dt}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}e^{-z^{2}/2}e^{-ixtz}. \tag{89}\] We can truncate the outer integral to \(T=\Theta(\kappa\sqrt{\log(\kappa/\varepsilon)})\) and introduce only an \(\varepsilon\) error. Finally, we believe several existing quantum technological platforms might already be able to engineer these interactions for system Hamiltonians of small dimensions. Our analog approach provides a more physically motivated model for implementing quantum linear systems. Next, we move on to the problem of sampling from the solution of quantum linear systems on early fault-tolerant quantum computers. For this, we make use of the "Single-Ancilla LCU" technique.

### Applying Single-Ancilla LCU: Sampling from the solution of quantum linear systems

In the "Single-Ancilla LCU" framework, we provide two randomized quantum algorithms for sampling from the solution of quantum linear systems. For the first algorithm, we consider the discrete LCU decomposition of \(H^{-1}\) of Ref. [25]. For the second method, we draw inspiration from the adiabatic approaches to this problem [52; 53; 54; 55].

#### Method 1: Direct implementation of \(H^{-1}\)

In this approach, we will use the discrete version of the LCU decomposition of \(H^{-1}\) by Childs, Kothari, and Somma [25]. We will directly use their results here. We begin by stating the discretized version of the expression in Lemma 15. Let \[g(x)=\frac{i}{\sqrt{2\pi}}\sum_{j=0}^{J-1}\Delta_{y}\sum_{k=-K}^{K}\Delta_{z}z_{k}e^{-z_{k}^{2}/2}e^{-ixy_{j}z_{k}}, \tag{90}\] where \(y_{j}=j\Delta_{y}\) and \(z_{k}=k\Delta_{z}\), for some \(J\in\Theta(\frac{\kappa}{\varepsilon}\log(\kappa/\varepsilon))\), \(\Delta_{y}=\Theta(\varepsilon/\sqrt{\log(\kappa/\varepsilon)})\) and \(\Delta_{z}=\Theta((\kappa\sqrt{\log(\kappa/\varepsilon)})^{-1})\). Then, Childs et al. [25] proved that \(\left|1/x-g(x)\right|\leq\varepsilon\) in the domain \([-1,-1/\kappa]\cup[1/\kappa,1]\). From this LCU decomposition it is clear that in order to approximate \(H^{-1}\) in this domain, the time parameter of the Hamiltonian simulation is at most \[t=\Theta(y_{J}z_{K})=\Theta\left(\kappa\log(\kappa/\varepsilon)\right).
\tag{91}\]

Furthermore, from [25], the \(\ell_{1}\)-norm of the LCU coefficients was shown to be \(\left\|c\right\|_{1}=\Theta(\kappa\sqrt{\log(\kappa/\varepsilon)})\). The normalization factor \(\ell=\left\|H^{-1}\left|b\right\rangle\right\|\) is important, as we have seen in Sec. V.2. In this case \(1\leq\ell\leq\kappa\). As before, given an approximate estimate \(\tilde{\ell}\) of \(\ell^{2}\), for any observable \(O\), we would want Algorithm 1 to output some \(\mu\) such that \[\left|\frac{\mu}{\tilde{\ell}}-\langle x|O|x\rangle\right|\leq\varepsilon. \tag{92}\] We show via the following Theorem that this is indeed the case for appropriate choices of \(T\) and \(\tau_{\max}\).

**Theorem 17** (Sampling from the solution of quantum linear systems).: _Let \(H\) be a Hermitian matrix such that its non-zero eigenvalues lie in \([-1,-1/\kappa]\cup[1/\kappa,1]\). Suppose there exists a procedure that prepares the quantum state \(\left|b\right\rangle\) in cost \(\tau_{b}\), let \(O\) be an observable, and let \(\varepsilon,\delta,\gamma\in(0,1)\) be parameters. Define \(\ell=\left\|H^{-1}\left|b\right\rangle\right\|\). Furthermore, assume that we know \(\tilde{\ell}\), an approximation of \(\ell^{2}\) such that_ \[\left|\tilde{\ell}-\ell^{2}\right|\leq\frac{\varepsilon\ell^{2}}{6\|O\|}.\] _Then if_ \[\gamma\leq\frac{\varepsilon\ell^{2}}{18\|O\|\|H^{-1}\|},\] _such that_ \[\left\|H^{-1}-g(H)\right\|\leq\gamma,\] _and_ \[T\geq O\left(\frac{\left\|O\right\|^{2}\kappa^{4}\log^{2}\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\ln(2/\delta)}{\varepsilon^{2}}\right),\] _then Algorithm 1 outputs, with probability at least \(1-\delta\), a parameter \(\mu\) such that_ \[\left|\frac{\mu}{\tilde{\ell}}-\left\langle x|O|x\right\rangle\right|\leq\varepsilon,\] _using \(T\) calls to a Hamiltonian simulation procedure and only one ancilla qubit, where the cost of each such repetition is at most_ \[\tau_{\max}=O\left(\kappa\log\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\right).\]

Proof.: First observe that we have \[\left\|g(H)-H^{-1}\right\|\leq\gamma,\] where \[\gamma\leq\frac{\varepsilon}{18\kappa^{2}\|O\|},\] which in fact implies \[\gamma\leq\frac{\varepsilon\ell^{2}}{18\|O\|\|H^{-1}\|},\] as \(\ell\geq 1\) and \(\left\|H^{-1}\right\|\leq\kappa\). If \(\rho_{b}=\left|b\right\rangle\left\langle b\right|\), this upper bound on \(\gamma\) ensures, from Theorem 1, that \[\left|\frac{\mu}{\ell^{2}}-\frac{\mathrm{Tr}[O\ g(H)\rho_{b}g(H)^{\dagger}]}{\ell^{2}}\right|\leq\varepsilon/6.\] From Theorem A1, and from the upper bound on \(|\tilde{\ell}-\ell^{2}|\), this implies \[\left|\frac{\mu}{\tilde{\ell}}-\frac{\mathrm{Tr}[O\ g(H)\rho_{b}g(H)^{\dagger}]}{\ell^{2}}\right|\leq\varepsilon/2. \tag{93}\] Overall, Algorithm 1 outputs \(\mu\) such that \[\left|\frac{\mu}{\tilde{\ell}}-\left\langle x|O|x\right\rangle\right|\leq\left|\frac{\mu}{\tilde{\ell}}-\frac{\mathrm{Tr}[O\ g(H)\rho_{b}g(H)^{\dagger}]}{\ell^{2}}\right|+\left|\frac{\mathrm{Tr}[O\ g(H)\rho_{b}g(H)^{\dagger}]}{\ell^{2}}-\left\langle x|O|x\right\rangle\right|. \tag{94}\] The first expression in the RHS of Eq. (94) is bounded by \(\varepsilon/2\) (from Eq. (93)). We bound the second expression by \(\varepsilon/2\) using Theorem 1. For this, the sample complexity is \[T\geq O\left(\frac{\ln(2/\delta)\|O\|^{2}\|c\|_{1}^{4}}{\varepsilon^{2}\ell^{4}}\right)=O\left(\frac{\left\|O\right\|^{2}\kappa^{4}\log^{2}\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\ln(2/\delta)}{\varepsilon^{2}}\right).
\tag{95}\]

The cost of each repetition is bounded by \(\tau_{\max}+\tau_{b}\), where \(\tau_{\max}\) is determined by the upper bound on \(\gamma\) as \[\tau_{\max}=O\left(\kappa\log\left(\frac{\left\|O\right\|\left\|H^{-1}\right\|\kappa}{\varepsilon\ell^{2}}\right)\right)=O\left(\kappa\log\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\right),\] where we have used \(\ell\geq 1\) and \(\left\|H^{-1}\right\|\leq\kappa\).

For our next method, we draw inspiration from the problem of sampling from ground states. This approach has a slightly better sample complexity (up to log factors) but needs an extra ancilla qubit.

#### Method 2: Quantum linear systems as a ground state preparation problem

In Sec. V.2, we have discussed that given a Hamiltonian \(H\), the function \(f(H)=e^{-tH^{2}}\) can be used to prepare the ground state of \(H\) when \(H\) is positive semi-definite or when the ground energy is known to a certain precision. However, for general Hamiltonians with eigenvalues in \([-1,1]\), \(f(H)\) helps prepare the 0-eigenstate of \(H\). We shall exploit this fact in this section. In fact, following the adiabatic-inspired approaches for solving quantum linear systems in Refs. [52, 53, 54, 55], we know that, given any Hamiltonian \(H\), one can construct a Hamiltonian \(H^{\prime}\) such that its 0-eigenstate encodes the solution to the linear systems problem. For simplicity, we restrict our attention to the case where \(H\) is positive definite, i.e. its eigenvalues are in \([1/\kappa,1]\), but this is without loss of generality: our analysis holds for any \(H\), as from any such \(H\) one can construct \(H^{\prime}\). Furthermore, we consider that \(H\) is accessible via a block encoding. Consider the Hamiltonian \[H^{\prime}=\left|0\right\rangle\left\langle 1\right|\otimes HQ_{b}+\left|1\right\rangle\left\langle 0\right|\otimes Q_{b}H, \tag{96}\] where \(Q_{b}=I-\left|b\right\rangle\left\langle b\right|\). \(H^{\prime}\) can be constructed efficiently from an \((1,a,\varepsilon)\)-block encoding of \(H\). Let us consider the spectrum of \(H^{\prime}\) following Refs. [52, 53, 54] and relate it to the spectrum of \(H\). When \(H\) is not positive definite, there also exists a different encoding of \(H^{\prime}\) which can be applied (by adding two ancilla qubits), and the 0-eigenstate of that Hamiltonian also encodes the solution to the quantum linear systems problem.

It is easy to see that the null space of \(H^{\prime}\) is \(\mathrm{Span}\{\left|0,x\right\rangle,\left|1,b\right\rangle\}\). In other words, the state \(\left|0,x\right\rangle\) is a 0-eigenstate of \(H^{\prime}\). Furthermore, if we start with the state \(\left|0,b\right\rangle\), the dynamics never leaves the space spanned by states having \(\left|0\right\rangle\) in the first register. Hence, \(\left|0,x\right\rangle\) is the unique 0-eigenstate of \(H^{\prime}\) in this subspace. Also, one can verify that the minimum non-zero eigenvalue of \(H^{\prime}\) is at least \(1/\kappa\) away from zero, i.e. \[\Delta=\min_{\lambda\neq 0}\left|\lambda\right|\geq 1/\kappa.\] We will first show how to construct an \((1,a+2,\varepsilon)\)-block encoding of \(H^{\prime}\) given an \((1,a,\varepsilon)\)-block encoding of \(H\), prepared with cost \(T_{H}\).
Moreover, if \(T_{b}\) is the cost of implementing a procedure \(B\) such that \(B\left|\bar{0}\right\rangle=\left|b\right\rangle\), then the cost of preparing this block encoding of \(H^{\prime}\) is \(O(T_{H}+T_{b})\). We formally prove this via the following lemma.

**Lemma 18**.: _Let \(H\) be a Hermitian matrix such that its eigenvalues lie in \([1/\kappa,1]\). Suppose \(U_{H}\), which is a \((1,a,\varepsilon)\)-block encoding of \(H\), can be implemented with cost \(T_{H}\). Also, suppose we have access to the unitary \(B\) such that \(B\ket{\bar{0}}=\ket{b}\). Furthermore, define \(Q_{b}=I-\ket{b}\bra{b}\). Then we can implement \(U_{H^{\prime}}\), which is an \((1,a+2,\varepsilon)\)-block encoding of \(H^{\prime}=\ket{0}\bra{1}\otimes HQ_{b}+\ket{1}\bra{0}\otimes Q_{b}H\), in cost \(O\left(T_{H}+T_{b}\right)\)._

Proof.: First, observe that \(H^{\prime}\) can be written as a product of three matrices. That is, \[H^{\prime}=\begin{pmatrix}I&0\\ 0&Q_{b}\end{pmatrix}\begin{pmatrix}0&H\\ H&0\end{pmatrix}\begin{pmatrix}I&0\\ 0&Q_{b}\end{pmatrix}. \tag{97}\] Also, the matrix \(Q_{b}\) can be written as \[Q_{b}=B\left(\frac{I+e^{i\pi\Pi_{0}}}{2}\right)B^{\dagger},\] where \(\Pi_{0}=\ket{0}\bra{0}\). A \((1,1,0)\)-block encoding of \(Q_{b}\) is implemented by the circuit \[V_{Q}=(I\otimes B)(\mathsf{H}\otimes I)\left(\ket{0}\bra{0}\otimes I+\ket{1}\bra{1}\otimes e^{i\pi\Pi_{0}}\right)(\mathsf{H}\otimes I)(I\otimes B^{\dagger}),\] where \(\mathsf{H}\) denotes the Hadamard gate on the ancilla qubit (not to be confused with the Hamiltonian \(H\)). The second matrix in the RHS of Eq. (97) is simply \(H\otimes\sigma_{x}\). This can be implemented using \(V_{H}=U_{H}\otimes\sigma_{x}\), which is a \((1,a,\varepsilon)\)-block encoding of \(H\otimes\sigma_{x}\). The first and the third matrix can be implemented by \(\tilde{V}_{Q}=\ket{0}\bra{0}\otimes I+\ket{1}\bra{1}\otimes V_{Q}\), which is a \((1,2,0)\)-block encoding of these matrices. Thus, we have the product of three block encodings, and the unitary \(U_{H^{\prime}}=(I\otimes\tilde{V}_{Q})(I\otimes V_{H})(I\otimes\tilde{V}_{Q})\) is an \((1,a+2,\varepsilon)\)-block encoding of \(H^{\prime}\), requiring cost \(O(T_{H}+T_{b})\).

Note that \(H\) need not be accessed via a block encoding. If \(H\) is a linear combination of Pauli matrices, \(H^{\prime}\) can be built from \(H\) using a short-depth quantum circuit. However, this requires one additional qubit, which is an overhead. Thus, we have now mapped the quantum linear systems problem to the problem of preparing the zero-eigenstate of the Hermitian matrix \(H^{\prime}\). So in order to estimate \(\bra{x}O\ket{x}\), we can directly use Theorem 11, where the initial state is \(\ket{0}\ket{b}\) and the \(0\)-eigenstate is \(\ket{0}\ket{x}\). Recall that using Algorithm 1 we can output \(\mu\) such that for any observable \(O\), \[\left|\frac{\mu}{\ell^{2}}-\bra{x}O\ket{x}\right|\leq\varepsilon, \tag{98}\] where \(\ell=\left\|e^{-tH^{\prime 2}}\ket{0,b}\right\|\geq 1/\kappa\). However, we only need an approximation \(\tilde{\ell}\) of \(\ell^{2}\). We formally state our result via the following theorem.

**Theorem 19**.: _Let \(H^{\prime}\) be the Hamiltonian defined in Eq. (96), constructed from a given Hermitian matrix \(H\) with its eigenvalues lying in \([1/\kappa,1]\). Suppose there exists a procedure that prepares the quantum state \(\ket{b}\) in cost \(\tau_{b}\). Let \(O\) be some observable. Also, for_ \[t=O\left(\kappa\log\left(\frac{\kappa\|O\|}{\varepsilon}\right)\right),\] _define \(\ell^{2}=\bra{0,b}e^{-2tH^{\prime 2}}\ket{0,b}\)._
_Suppose we know \(\tilde{\ell}\), an approximation of \(\ell^{2}\) such that_ \[\left|\tilde{\ell}-\ell^{2}\right|\leq\frac{\varepsilon\ell^{2}}{6\|O\|}.\] _Furthermore, let \(\varepsilon,\delta,\gamma\in(0,1)\) be parameters. Then if_ \[\gamma\leq\frac{\varepsilon}{36\kappa^{2}\|O\|},\] _and_ \[T\geq O\left(\frac{\left\|O\right\|^{2}\kappa^{4}\ln(2/\delta)}{\varepsilon^{2}}\right),\] _then Algorithm 1 outputs, with probability at least \(1-\delta\), a parameter \(\mu\) such that_ \[\left|\frac{\mu}{\tilde{\ell}}-\langle x|O|x\rangle\right|\leq\varepsilon,\] _using \(T\) calls to the quantum circuit in Fig. 2, where the cost of each such repetition is at most \(\tau_{\max}+\tau_{b}\), where_ \[\tau_{\max}=O\left(\kappa\log\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\right).\]

Proof.: First, observe that the upper bound on \(\gamma\) ensures that \[\gamma\leq\frac{\varepsilon\ell^{2}}{36\|O\|}.\] The rest of the proof follows from Theorem 11, where we simply replace \(\Delta=1/\kappa\) and \(\eta\geq 1/\kappa\). So, we output such a \(\mu/\tilde{\ell}\), with probability at least \(1-\delta\), using \[T\geq O\left(\frac{\left\|O\right\|^{2}\kappa^{4}\ln(2/\delta)}{\varepsilon^{2}}\right)\] samples of the quantum circuit in Fig. 2, where the cost of each such repetition is at most \(\tau_{\max}+\tau_{b}\), where \[\tau_{\max}=O\left(\kappa\log\left(\frac{\left\|O\right\|\kappa}{\varepsilon}\right)\right).\]

Overall, the sample complexity of this method is slightly better than that of the first method. However, constructing \(H^{\prime}\) requires one extra qubit as compared to the previous approach. Nevertheless, it connects a ground state preparation procedure with the quantum linear systems problem, and this connection serves as an inspiration for our next quantum algorithm, which uses QSVT to implement \(e^{-tH^{\prime 2}}\) and thereby prepare the \(0\)-eigenstate of \(H^{\prime}\), given a block encoding of \(H\).

### Quantum linear systems for fully fault-tolerant quantum computers

The quantum algorithm is quite simple and exploits the connection observed in Method 2 of the previous section: given a block encoding of a Hermitian matrix \(H\) whose eigenvalues are in \([-1,-1/\kappa]\cup[1/\kappa,1]\), one can efficiently construct a block encoding of a Hamiltonian \(H^{\prime}\) (see Lemma 18) whose \(0\)-eigenstate is \(\left|0\right\rangle\left|x\right\rangle\). We use QSVT to apply a polynomial approximation of the Gaussian function (Lemma 13) to the block encoding of \(H^{\prime}\). Formally,

**Lemma 20**.: _Let \(\varepsilon\in(0,1/2)\). Suppose \(H\) is a Hermitian matrix with eigenvalues in \([1/\kappa,1]\) and_ \[\delta\in o\left(\frac{\varepsilon^{2}}{\kappa^{4}\log^{2}(\kappa/\varepsilon)}\right).\] _Furthermore, suppose that \(U_{H}\) is an \((1,a,\delta)\)-block encoding of \(H\), implementable in cost \(T_{H}\). Also, let \(B\) be a unitary procedure that prepares \(\left|b\right\rangle\) in time \(T_{b}\). Then there exists a procedure that prepares a state that is \(O(\varepsilon)\)-close to_ \[\left|x\right\rangle=\frac{H^{-1}\left|b\right\rangle}{\left\|H^{-1}\left|b\right\rangle\right\|}\] _with cost \(O\left(\kappa^{2}(T_{H}+T_{b})\log(\kappa/\varepsilon)\right)\)._

Proof.: Given an \((1,a,\delta)\)-block encoding of \(H\), we can obtain, in time \(O(T_{H}+T_{b})\), an \((1,a+2,\delta)\)-block encoding of the Hamiltonian \(H^{\prime}\) using Lemma 18.
We now simply apply Lemma 14 by replacing \(\Delta=1/\kappa\), \[\eta=\frac{\langle b|H^{-1}|b\rangle}{\left\|H^{-1}\left|b\right\rangle\right\|}\geq 1/\kappa,\] and by taking \(O(T_{H}+T_{b})\), the cost of implementing \(U_{H^{\prime}}\), as the block-encoding cost. Note that the required precision of the block encoding is \[\delta\in o\left(\frac{\varepsilon^{2}\eta^{2}}{t\log^{2}(1/\varepsilon\eta)}\right),\] where \(\sqrt{t}=O\left(\Delta^{-1}\log(\eta^{-1}\varepsilon^{-1})\right)\) and \(\eta\geq 1/\kappa\). The complexity is obtained by the aforementioned substitutions in Eq. (65).

## VII Applications to quantum walks

So far, we have seen applications of the "Analog LCU" and the "Single-Ancilla LCU" approaches. In this section, we will show that the "Ancilla-free" LCU can be applied to the framework of quantum walks. Recall from Sec. IV.4 that this approach is useful when we are interested in the projection of the LCU state \(f(H)\left|\psi_{0}\right\rangle\) in some subspace of interest. In such scenarios, it suffices to prepare an average density matrix \(\rho\) by sampling the unitaries \(U_{j}\) according to the distribution of the LCU coefficients. This is because the projection of \(\rho\) in this subspace is at least as large (see Theorem 2). We will show this is precisely the case for spatial search by quantum walks.

We first discuss the optimal quantum spatial search algorithm by discrete-time quantum walks, for which we provide two quantum algorithms. The first relies on fast-forwarding discrete-time quantum walks [31]. For this algorithm, we formalize the unproven observation of Ref. [41], where the authors stated that the LCU could indeed be bypassed. Our second quantum algorithm relies on fast-forwarding continuous-time random walks, which also fits nicely in the "Ancilla-free LCU" framework. For completeness, we also briefly outline the recent optimal spatial search algorithm by continuous-time quantum walk [32]. Finally, we show how one can obtain a discrete-time quantum walk from a continuous-time quantum walk (and vice versa) using the frameworks of block encoding and QSVT. Similar to the previous sections, here too, we shall present our results for generic Hamiltonians and invoke quantum (or random) walks only as particular cases. We begin with a very brief review of random and quantum walks, and refer the readers to the Appendix (Sec. A - IV) for an introduction to the basic concepts related to random walks.

### Random and quantum walks: A very brief overview

Consider any ergodic, reversible Markov chain \(P\) defined on a vertex space \(X\) with \(\left|X\right|=n\) nodes. One can think of such chains as a weighted graph of \(n\) nodes (for detailed definitions of these terms, refer to the Appendix). Then \(P\) is an \(n\times n\) stochastic matrix. Let \(p_{x,y}\) be the \((x,y)\)-th entry of \(P\). We shall consider that the singular values of \(P\) lie in \([0,1]\). This is without loss of generality: one can implement the transformation \(P\mapsto(I+P)/2\) to always ensure this. Then, starting from any initial probability distribution over \(X\), represented by the row vector \(v_{0}\), \(t\) steps of a classical random walk result in the distribution \(v_{t}=v_{0}P^{t}\) over \(X\). For any such \(P\) there exists a stationary distribution \(\pi=(\pi_{1},\pi_{2},\cdots,\pi_{n})\) such that \(\pi=\pi P\). From any \(P\) one obtains a continuous-time random walk by using the generator \(Q=I-P\) (under fairly general conditions). A continuous-time random walk, starting from \(v_{0}\), evolves to \(v_{t}=v_{0}e^{-Qt}\).
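As a quick illustration of these classical objects, the snippet below builds the transition matrix of a lazy random walk on a 4-cycle (an assumed toy example), checks its stationary distribution, and verifies that both the discrete-time walk \(v_{0}P^{t}\) and the continuous-time walk \(v_{0}e^{-Qt}\) converge to \(\pi\):

```python
import numpy as np
from scipy.linalg import expm

# Lazy random walk on a 4-cycle: P is symmetric and stochastic.
P = 0.5 * np.eye(4) + 0.25 * (np.roll(np.eye(4), 1, axis=0)
                              + np.roll(np.eye(4), -1, axis=0))

# Stationary distribution: the left eigenvector of P with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print("pi P == pi:", np.allclose(pi @ P, pi))   # pi = (1/4, 1/4, 1/4, 1/4)

# Both the discrete-time walk v0 P^t and the continuous-time walk
# v0 e^{-Qt}, with Q = I - P, converge to pi.
v0 = np.array([1.0, 0.0, 0.0, 0.0])
print("v0 P^50    :", v0 @ np.linalg.matrix_power(P, 50))
print("v0 e^{-10Q}:", v0 @ expm(-10 * (np.eye(4) - P)))
```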
Since \(P\) is not symmetric in general, it would be useful to work with the Discriminant matrix \(D\) of \(P\). \(D\) is an \(n\times n\) symmetric matrix such that its \((x,y)\)-th entry is \(\sqrt{p_{xy}p_{yx}}\). The singular values of \(P\) are the same as the eigenvalues of \(D\). Moreover, the state \(\ket{\pi}=\sum_{x\in X}\sqrt{\pi_{x}}\ket{x}\) is the eigenstate of \(D\) with eigenvalue \(1\). In order to define a discrete-time quantum walk, define the unitary \(U_{P}\) such that \[U_{P}\ket{\bar{0}}\ket{x}=\sum_{y=1}^{n}\sqrt{p_{xy}}\ket{y,x},\] where \(\ket{\bar{0}}\) is some reference state. Let \(S\) be the swap operation such that \(S\ket{x,y}=\ket{y,x}\), and \(\Pi_{0}=\ket{\bar{0}}\bra{\bar{0}}\otimes I\). Then the unitary defined by \[V_{P}=(2\Pi_{0}-I\otimes I)U_{P}^{\dagger}SU_{P}, \tag{99}\] is a discrete-time quantum walk on the edges of \(P\). For details on these discrete-time quantum walks, we refer the reader to Refs. [31, 41, 69, 70]. Similarly, following Refs. [71, 72, 32], the Hamiltonian \[H_{P}=i[U_{P}^{\dagger}SU_{P},\Pi_{0}] \tag{100}\] defines a continuous-time quantum walk on the edges of \(P\).

We now describe the spatial search problem, which we shall deal with in the subsequent sections. Suppose a subset \(M\) of the \(n\) nodes of \(P\) is marked. That is, its state space is \(X=U\cup M\), where \(U\) is the set of unmarked nodes. Then, the spatial search problem can be defined as follows: suppose the random walk starts from the stationary distribution \(\pi\) of \(P\). What is the expected number of steps needed by the random walk to find some node \(v\in M\)? For random walks (both discrete and continuous-time), this is known as the hitting time (\(HT\)). Whether quantum walks can provide a quadratic advantage for the spatial search problem for any \(P\) and any number of marked nodes was open until recently. Ambainis et al. proved that discrete-time quantum walks solve the problem in \(\tilde{O}(\sqrt{HT})\) steps [31]. A similar result was shown for continuous-time quantum walks in [32]. Both these quantum algorithms make use of the so-called interpolated Markov chains framework. Let \(P^{\prime}\) be the absorbing Markov chain, obtained from \(P\) by replacing all outgoing edges from \(M\) with self-loops. Then, the interpolated Markov chain is defined as \(P(s)=(1-s)P+sP^{\prime}\), where \(s\in[0,1)\). One can define a Discriminant matrix \(D(s)\) for \(P(s)\), analogous to that of \(P\). The relationship between \(D(s)\) and \(P(s)\) is also analogous to the non-interpolated case.

In addition to interpolated Markov chains, the optimal quantum spatial search algorithms in [31, 32] made use of LCU-based techniques. We will formalize the results therein under the framework of "Ancilla-Free LCU" and show that it can lead to new optimal quantum algorithms quite naturally. We discuss the discrete-time quantum walks in this context in the next section. As mentioned previously, we will work with general operators (Hamiltonians) and only invoke quantum (or random) walks as particular cases.

### Applying Ancilla-Free LCU: Optimal quantum spatial search by fast-forwarding discrete-time random walks

We begin by considering any Hamiltonian \(H\) of unit spectral norm. Then, given a block encoding of \(H\), we first show that we can implement a block encoding of \(H^{t}\) using LCU. For this, we will make use of the well-known result that for \(x\) such that \(|x|\leq 1\), \(x^{t}\) can be expressed as a linear combination of Chebyshev polynomials.
For this, let us first define a \(d\)-degree polynomial \(p_{t,d}(x)\), which is a linear combination of Chebyshev polynomials. For any \(t\) and \(d\) of the same parity, define \[p_{t,d}(x)=\begin{cases}\dfrac{1}{2^{t}}\sum_{j=-d/2}^{d/2}\binom{t}{j+t/2}T_{2j}(x)&t,d\text{ are even}\\ \\ \dfrac{2}{2^{t}}\sum_{j=0}^{(d-1)/2}\binom{t}{\frac{t+1}{2}+j}T_{2j+1}(x)&t,d\text{ are odd.}\end{cases} \tag{101}\] Then, the following lemma in Ref. [73] states that for any \(t\in\mathbb{Z}\), the function \(f(x)=x^{t}\) can be well-approximated by truncating the polynomial \(p_{t,d}(x)\):

**Lemma 21**.: _[73] Suppose \(\varepsilon>0\), \(x\in[-1,1]\), \(q\geq 1\) and \(t\in\mathbb{R}^{+}\). Then there exists a polynomial \(p_{t,d}(x)\) of degree \(d=\lceil\sqrt{2t\ln(2q/\varepsilon)}\rceil\) such that_ \[\sup_{x\in[-1,1]}\left|x^{t}-p_{t,d}(x)\right|\leq 2e^{-d^{2}/2t}\leq\varepsilon/q.\]

So, given the block encoding of any Hamiltonian \(H\), we first show how one can obtain the even Chebyshev polynomials of \(H\).

**Lemma 22**.: _Consider any Hamiltonian \(H\) such that \(\left\|H\right\|\leq 1\). Suppose we have access to \(U_{H}\), which is a \((1,a,0)\)-block encoding of \(H\). Furthermore, define the reflection operator \(R=(2\left|\bar{0}\right\rangle\left\langle\bar{0}\right|-I)\otimes I=2\Pi_{0}-I\otimes I\) and the unitary \(V=R\,U_{H}^{\dagger}\,R\,U_{H}\). Then \(V^{t}\) is a \((1,a,0)\)-block encoding of \(T_{2t}(H)\), where \(T_{2t}(x)\) is the \(2t\)-th Chebyshev polynomial of the first kind._

Proof.: We will prove this by induction. For the base case, we have \[\left(\left\langle\bar{0}\right|\otimes I\right)V\left(\left|\bar{0}\right\rangle\otimes I\right)=\left(\left\langle\bar{0}\right|\otimes I\right)\left(2\Pi_{0}-I\otimes I\right)U_{H}^{\dagger}\left(2\Pi_{0}-I\otimes I\right)U_{H}\left(\left|\bar{0}\right\rangle\otimes I\right) \tag{102}\] \[=2\left(\left\langle\bar{0}\right|\otimes I\right)U_{H}^{\dagger}\Pi_{0}U_{H}\left(\left|\bar{0}\right\rangle\otimes I\right)-I \tag{103}\] \[=2H^{2}-I=T_{2}(H). \tag{104}\] Now let us assume that for any \(k\leq t\), we have that \[(\langle\bar{0}|\otimes I)V^{k}(\left|\bar{0}\right\rangle\otimes I)=T_{2k}(H).\] Then, \[V^{k+1}=V^{k}\left[(2\left|\bar{0}\right\rangle\left\langle\bar{0}\right|-I)\otimes I\right]U_{H}^{\dagger}RU_{H} \tag{105}\] \[=2V^{k}\Pi_{0}U_{H}^{\dagger}RU_{H}-V^{k-1}VU_{H}^{\dagger}RU_{H} \tag{106}\] \[=2V^{k}\Pi_{0}\Pi_{0}V-V^{k-1}\left(2\Pi_{0}-I\otimes I\right).\qquad\text{[ Using }\Pi_{0}U_{H}^{\dagger}RU_{H}=\Pi_{0}V\text{ and }VU_{H}^{\dagger}RU_{H}=R\text{ ]} \tag{107}\] So, \[(\langle\bar{0}|\otimes I)V^{k+1}(\left|\bar{0}\right\rangle\otimes I)=2(\langle\bar{0}|\otimes I)V^{k}\Pi_{0}\Pi_{0}V(\left|\bar{0}\right\rangle\otimes I)-(\langle\bar{0}|\otimes I)V^{k-1}\left(2\Pi_{0}-I\otimes I\right)(\left|\bar{0}\right\rangle\otimes I) \tag{108}\] \[=2T_{2}(H)\cdot T_{2k}(H)-T_{2k-2}(H) \tag{109}\] \[=T_{2k+2}(H),\qquad\text{[ Using }2T_{p}(x)T_{q}(x)=T_{p+q}(x)+T_{|p-q|}(x)\text{ ]} \tag{110}\] which completes the proof.

So, using this lemma, we obtain \(\left(\langle\bar{0}|\otimes I\right)V^{t}\left(|\bar{0}\rangle\otimes I\right)=T_{2t}(H)\). Now one can implement a linear combination of different powers of \(V\) to obtain \(p_{t,d}(H)\), the polynomial approximating \(H^{t}\). We will demonstrate via the following lemma that this is indeed possible, and that going up to roughly \(\sqrt{t}\) powers of \(V\) suffices. Furthermore, this lemma can be adapted to also incorporate the case when \(t\) is odd.
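Before using it inside a block encoding, the truncation of Lemma 21 can be checked directly in a few lines. The sketch below assembles the Chebyshev coefficients of \(p_{t,d}(x)\) from Eq. (101), using \(T_{-2j}=T_{2j}\) to fold negative indices, and evaluates the sup-norm error against \(x^{t}\); the choices \(t=100\) and \(\varepsilon=10^{-6}\) are arbitrary.

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb
from scipy.special import comb

def p_coeffs(t, d):
    """Chebyshev coefficients of the truncated expansion p_{t,d} of x^t."""
    coef = np.zeros(d + 1)
    if t % 2 == 0:                     # even case of Eq. (101); T_{-2j} = T_{2j}
        for j in range(-(d // 2), d // 2 + 1):
            coef[abs(2 * j)] += 2.0**(-t) * comb(t, j + t // 2, exact=True)
    else:                              # odd case of Eq. (101)
        for j in range((d - 1) // 2 + 1):
            coef[2 * j + 1] += 2.0**(1 - t) * comb(t, (t + 1) // 2 + j, exact=True)
    return coef

t, eps = 100, 1e-6
d = int(np.ceil(np.sqrt(2 * t * np.log(2 / eps))))   # degree from Lemma 21 (q = 1)
xs = np.linspace(-1, 1, 5001)
err = np.max(np.abs(xs**t - Cheb.chebval(xs, p_coeffs(t, d))))
print(f"t = {t}, degree d = {d}, sup error = {err:.2e} (target {eps:.0e})")
```

The degree comes out around \(d=54\) for \(t=100\), illustrating the quadratically shorter \(O(\sqrt{t\log(1/\varepsilon)})\) expansion that the fast-forwarding results below exploit.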
**Lemma 23**.: _Suppose \(\varepsilon\in(0,1)\) and we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) such that \(\left\|H\right\|=1\). Then, provided_ \[\delta\leq\frac{\varepsilon^{2}}{128t\ \ln(8/\varepsilon)},\] _for any \(t\in\mathbb{N}\), we can implement a \((1,O(a+\log t+\log\log(1/\varepsilon)),O(\varepsilon))\)-block encoding of \(H^{t}\) in cost \(O\left(\sqrt{t\log(1/\varepsilon)}\right)\)._

Proof.: We will implement the required block encoding of \(H^{t}\) by separating out the cases where \(t\) is even or odd. When \(t\) is even, we implement \(H^{t}\) by approximating it with the polynomial defined in Eq. (101). The two are guaranteed to be \(\varepsilon\)-close following Lemma 21. The odd case also follows through via similar arguments.

Let \(U_{H}\) be a \((1,a,0)\)-block encoding of \(H^{\prime}\). Then \(\left\|H-H^{\prime}\right\|\leq\delta\). From Lemma 22, the unitary \(V=RU_{H}^{\dagger}RU_{H}\) is a \((1,a,0)\)-block encoding of \(T_{2}(H^{\prime})\). We will use LCU to implement the polynomial \(p_{t,d}(H^{\prime})\) defined in Eq. (101). The degree of the polynomial is chosen to be \(d=\lceil\sqrt{2t\ln(8/\varepsilon)}\rceil\), which ensures (from Lemma 21) that \(\left\|x^{t}-p_{t,d}(x)\right\|\leq\varepsilon/4\). Consider the unitary \(Q\) such that \[Q\left|\bar{0}\right\rangle=\frac{1}{\sqrt{\alpha}}\sum_{l=0}^{d/2}\sqrt{c_{l}}\left|l\right\rangle, \tag{111}\] where, \[c_{l}=\begin{cases}2^{1-t}\binom{t}{t/2+l},&l>0\\ 2^{-t}\binom{t}{t/2},&l=0,\end{cases} \tag{112}\] and \(\alpha=\left\|c\right\|_{1}\), where \(c=(c_{0},\cdots,c_{d/2})\). Also, define the controlled unitary \[W=\sum_{j=0}^{d/2}\left|j\right\rangle\left\langle j\right|\otimes V^{j},\] where \(V=RU_{H}^{\dagger}RU_{H}\). Then, it is easy to see, using LCU, that the unitary \(\widetilde{W}=(Q^{\dagger}\otimes I)W(Q\otimes I)\) is a \((\alpha,a+\lceil\log_{2}d\rceil-1,0)\)-block encoding of \(p_{t,d}(H^{\prime})\). That is, \[\left(\langle\bar{0}|\otimes I\right)\widetilde{W}\left(|\bar{0}\rangle\otimes I\right)=\frac{p_{t,d}(H^{\prime})}{\alpha}, \tag{113}\] where the bound on \(\alpha\) is obtained by evaluating \(p_{t,d}\) at \(x=1\) (where \(T_{2j}(1)=1\) for every \(j\)): \[\alpha=p_{t,d}(1) \tag{114}\] \[\geq 1-\left|1^{t}-p_{t,d}(1)\right| \tag{115}\] \[\geq 1-\varepsilon/4\qquad\text{[ From Lemma 21 ]}. \tag{116}\]

Now by using the triangle inequality we obtain, \[\left\|H^{t}-p_{t,d}(H^{\prime})/\alpha\right\|\leq\left\|H^{t}-p_{t,d}(H^{\prime})\right\|+(1-\alpha)\left\|p_{t,d}(H^{\prime})/\alpha\right\| \tag{117}\] \[\leq\varepsilon/4+\left\|H^{t}-p_{t,d}(H)\right\|+\left\|p_{t,d}(H)-p_{t,d}(H^{\prime})\right\| \tag{118}\] \[\leq\varepsilon/4+\varepsilon/4+4d\sqrt{\delta}\qquad\text{[ From Lemma 21 and the robustness of QSVT [46] ]} \tag{119}\] \[\leq\varepsilon\qquad\left[\text{ As }\delta\leq\frac{\varepsilon^{2}}{64d^{2}}\right]. \tag{120}\]

Thus, \(\widetilde{W}\) implements a \((1,O(a+\log t+\log\log(1/\varepsilon)),O(\varepsilon))\)-block encoding of \(H^{t}\), using \(O(d)=O\left(\sqrt{t\log(1/\varepsilon)}\right)\) applications of \(V\), which completes the proof.

When the block-encoding unitary additionally satisfies \(U_{H}^{2}=I\) (as is the case for the quantum walk unitaries we consider), the construction simplifies further: the product of a single reflection with \(U_{H}\) generates all Chebyshev polynomials of \(H\), both even and odd.

**Lemma 24**.: _Consider any Hamiltonian \(H\) such that \(\left\|H\right\|=1\). Suppose we have access to \(U_{H}\) which is a \((1,a,0)\) block encoding such that \(U_{H}^{2}=I\). Furthermore define the reflection operator \(R=(2\left|\bar{0}\right\rangle\left\langle\bar{0}\right|-I)\otimes I=2\Pi_{0}-I\otimes I\) and the unitary \(V=R.U_{H}\). Then \(V^{t}\) is a \((1,a,0)\) block encoding of \(T_{t}(H)\), where \(T_{t}(x)\) is the \(t\)-th Chebyshev polynomial of the first kind._

Proof.: We will prove this by induction. For the basic step, we trivially obtain \[\left(\left\langle\bar{0}\right|\otimes I\right)V\left(\left|\bar{0}\right\rangle\otimes I\right)=H=T_{1}(H). \tag{127}\]

Now let us assume that for any \(k\leq t\), we have that \[(\left\langle\bar{0}\right|\otimes I)V^{k}(\left|\bar{0}\right\rangle\otimes I)=T_{k}(H).\] Then, \[V^{k+1}=V^{k}\left[(2\left|\bar{0}\right\rangle\left\langle\bar{0}\right|-I)\otimes I\right]U_{H} \tag{128}\] \[=2V^{k}\Pi_{0}U_{H}-V^{k-1}VU_{H} \tag{129}\] \[=2V^{k}\Pi_{0}\Pi_{0}V-V^{k-1}\left(2\Pi_{0}-I\otimes I\right)U_{H}^{2}\qquad\text{[ Using }\Pi_{0}U_{H}=\Pi_{0}V\text{ ]} \tag{130}\] \[=2V^{k}\Pi_{0}\Pi_{0}V-V^{k-1}\left(2\Pi_{0}-I\otimes I\right)\qquad\text{[ Using }U_{H}^{2}=I\text{ ]}. \tag{131}\]

So, \[(\left\langle\bar{0}\right|\otimes I)V^{k+1}(\left|\bar{0}\right\rangle\otimes I)=2(\left\langle\bar{0}\right|\otimes I)V^{k}\Pi_{0}\Pi_{0}V(\left|\bar{0}\right\rangle\otimes I)-(\left\langle\bar{0}\right|\otimes I)V^{k-1}\left(2\Pi_{0}-I\otimes I\right)(\left|\bar{0}\right\rangle\otimes I) \tag{132}\] \[=2T_{1}(H).T_{k}(H)-T_{k-1}(H) \tag{133}\] \[=T_{k+1}(H), \tag{134}\] which completes the proof.

The unitary \(V\), defined in Lemma 24, is reminiscent of the discrete-time quantum walk unitary of Eq. (99): a product of \(U_{D}=U_{P}^{\dagger}SU_{P}\), followed by a reflection around the reference state. This lemma then allows us to implement \(H^{t}\) whenever \(H\) is block-encoded by a unitary \(U_{H}\) satisfying \(U_{H}^{2}=I\). We state this via the following lemma:

**Lemma 25**.: _Suppose \(\varepsilon\in(0,1)\) and we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) such that \(\left\|H\right\|=1\) and \(U_{H}^{2}=I\). Then, provided_ \[\delta\leq\frac{\varepsilon^{2}}{128t\ \ln(8/\varepsilon)},\] _for any \(t\in\mathbb{N}\), we can implement a \((1,O(a+\log t+\log\log(1/\varepsilon)),\varepsilon)\)-block encoding of \(H^{t}\) in cost \(O\left(\sqrt{t\log(1/\varepsilon)}\right)\)._

Proof.: The proof is similar to that of Lemma 23. The degree of the polynomial is chosen to be \(d=\lceil\sqrt{2t\ln(8/\varepsilon)}\rceil\), which ensures (from Lemma 21) that \(\left\|x^{t}-p_{t,d}(x)\right\|\leq\varepsilon/4\). Let \(U_{H}\) be a \((1,a,0)\)-block encoding of \(H^{\prime}\). Then \(\left\|H-H^{\prime}\right\|\leq\delta\). For even or odd \(t\), we prepare \(Q\) appropriately with the correct amplitudes. More precisely, when \(t\) is even, \(Q\) is defined exactly as in Eq. (111) with the coefficients defined in Eq. (112). For odd \(t\), \[Q\left|\bar{0}\right\rangle=\frac{1}{\sqrt{\alpha}}\sum_{l=0}^{(d-1)/2}\sqrt{c_{l}}\left|l\right\rangle, \tag{135}\] where \[c_{l}=2^{1-t}\binom{t}{\frac{t+1}{2}+l}, \tag{136}\] and \[\alpha=\left\|c\right\|_{1}\geq 1-\varepsilon/4.\] The overall unitary is \(\widetilde{W}=(Q^{\dagger}\otimes I)W(Q\otimes I)\), where \[W=\begin{cases}\sum_{j=0}^{d/2}\ket{j}\bra{j}\otimes V^{2j},&\text{$t$ is even}\\ \\ \sum_{j=0}^{(d-1)/2}\ket{j}\bra{j}\otimes V^{2j+1},&\text{$t$ is odd}.\end{cases}\] In both cases, we implement a \((1,O(a+\log t+\log\log(1/\varepsilon)),\varepsilon)\)-block encoding of \(H^{t}\).

For fast-forwarding of discrete-time random walks, using this procedure, given an initial state \(\ket{\psi_{0}}\), we can prepare a quantum state that is \(O(\varepsilon\cdot\left\|D^{t}\ket{\psi_{0}}\right\|)\)-close to \[\ket{\psi_{t}}=\ket{\bar{0}}\frac{D^{t}\ket{\psi_{0}}}{\left\|D^{t}\ket{\psi_{0}}\right\|}+\ket{\Phi}^{\perp},\] in cost \(O\left(\sqrt{t}\log\left(\varepsilon^{-1}\left\|D^{t}\ket{\psi_{0}}\right\|^{-1}\right)\right)\) with success probability \(\Theta\left(\left\|D^{t}\ket{\psi_{0}}\right\|^{2}\right)\). Finally, by applying rounds of quantum amplitude amplification, we can prepare a quantum state that is \(O(\varepsilon)\)-close to \(\ket{\psi_{t}}\) in cost \[T=O\left(\frac{\sqrt{t}}{\left\|D^{t}\ket{\psi_{0}}\right\|}\log\left(\frac{1}{\varepsilon\cdot\left\|D^{t}\ket{\psi_{0}}\right\|}\right)\right).\] Now for the spatial search problem, we are interested in the projection of the quantum state \(D^{t}\ket{\psi_{0}}\) in the marked subspace.
As a result, we can drop the ancilla register and simply apply the "Ancilla-free LCU" technique to implement the unitary \(V\) for a random number of steps, sampled according to the distribution of the LCU coefficients. **Fast-forwarding by "Ancilla-free LCU":** The overall procedure is outlined via Algorithm 2.

```
Inputs:\(U_{H}\), which is a \((1,a,\delta)\)-block encoding of Hamiltonian \(H\), an initial state \(\ket{\psi_{0}}\) and parameters \(t\in\mathbb{R}^{+}\) and \(d\in\mathbb{N}\). Let \(V=R\cdot U_{H}\).
1. If \(t\) is even, (a) Pick \(\ell\in[0,d/2]\) according to \(c_{\ell}/\norm{c}_{1}\), where \(c_{\ell}=2^{1-t}\binom{t}{\frac{t}{2}+\ell}\) for \(\ell>0\) and \(c_{0}=2^{-t}\binom{t}{t/2}\), as in Eq. (112). (b) Apply \(2\ell\) steps of the unitary \(V\) to \(\ket{\psi_{0}}\).
2. If \(t\) is odd, (a) Pick \(\ell\in\left[0,\frac{d-1}{2}\right]\) according to \(c_{\ell}/\norm{c}_{1}\), where \(c_{\ell}=2^{1-t}\binom{t}{\frac{t+1}{2}+\ell}\). (b) Apply \(2\ell+1\) steps of the unitary \(V\) to \(\ket{\psi_{0}}\).
3. Measure the second register in the node basis.
```
**Algorithm 2**POW-HAM\((t,d,U_{H},\ket{\psi_{0}})\)

If the initial state is \(\rho_{0}=\ket{\psi_{0}}\bra{\psi_{0}}\), the average density matrix obtained from Algorithm 2 is \[\rho=\sum_{\ell=0}^{d/2}\frac{c_{\ell}}{\norm{c}_{1}}V^{2\ell}\rho_{0}V^{-2\ell},\] if \(t\) is even (an analogous expression is obtained when \(t\) is odd). There are still some issues to be considered in order to ensure that for any projector \(\Pi\), \[\operatorname{Tr}[\Pi\rho]\geq\operatorname{Tr}[\Pi H^{t}\rho_{0}H^{t}]-\varepsilon. \tag{137}\] For instance, the block encoding of \(H\) is not perfect. How should the precision in block encoding, \(\delta\), scale so that Eq. (137) holds? We formally state this, and prove the algorithmic correctness, via the following lemma:

**Lemma 26**.: _Suppose \(\varepsilon\in(0,1)\) and we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) such that \(\left\lVert H\right\rVert=1\) and \(U_{H}^{2}=I\). Then, provided \(d=\lceil\sqrt{2t\ln(24/\varepsilon)}\rceil\) and_ \[\delta\leq\frac{\varepsilon^{2}}{1152\ t\ln(24/\varepsilon)},\] _for any \(t\in\mathbb{R}^{+}\), projector \(\Pi\) and initial state \(\rho_{0}=\left\lvert\psi_{0}\right\rangle\left\langle\psi_{0}\right\rvert\), Algorithm 2 prepares the average density matrix \(\rho\) such that_ \[\operatorname{Tr}[\Pi\rho]\geq\operatorname{Tr}[\Pi H^{t}\rho_{0}H^{t}]-\varepsilon,\] _using \(O\left(\sqrt{t\log(1/\varepsilon)}\right)\) queries to \(V=R.U_{H}\)._

Proof.: Let \(H^{\prime}\) be the Hamiltonian for which \(U_{H}\) is a \((1,a,0)\)-block encoding. Then, by definition, \(\left\lVert H-H^{\prime}\right\rVert\leq\delta\). Let us choose the degree of the polynomial \(p_{t,d}(H^{\prime})\) to be \(d=\lceil\sqrt{2t\ln(24/\varepsilon)}\rceil\), which ensures that \(\left\lVert x^{t}-p_{t,d}(x)\right\rVert\leq\varepsilon/12\) (from Lemma 21). Now, from Lemma 25, the full LCU procedure would implement the state \[\left\lvert\psi_{t}\right\rangle=\left\lvert\bar{0}\right\rangle\frac{p_{t,d}(H^{\prime})}{\left\lVert c\right\rVert_{1}}\left\lvert\psi_{0}\right\rangle+\left\lvert\Phi\right\rangle^{\perp}.\] Now, from the choice of \(d\), we ensure that \(\alpha=\left\lVert c\right\rVert_{1}\geq 1-\varepsilon/12\).
Also, \[\left\lVert H^{t}-p_{t,d}(H^{\prime})/\alpha\right\rVert\] (138) \[\leq\varepsilon/12+\left\lVert H^{t}-p_{t,d}(H)\right\rVert+ \left\lVert p_{t,d}(H)-p_{t,d}(H^{\prime})\right\rVert\] (139) \[\leq\varepsilon/12+\varepsilon/12+4d\sqrt{\delta} \left[\text{ From Lemma \ref{lem: (obtained from \(P\) by replacing the outgoing edges from \(M\) with self-loops) to the stationary distribution of \(P\) (say \(\pi\)). At every step, one checks to see if the vertex obtained is marked. The expected number of steps needed to find some \(x\in M\) is known as the hitting time, denoted by \(HT\). Consider the interpolated Markov chain \(P(s)=(1-s)P+sP^{\prime}\), where \(s\in[0,1]\) and \(D(P(s))\) be the corresponding discriminant matrix. Let \(U_{D(s)}\), be a \((1,a,0)\)-block encoding of \(D(s)\), where \(a=\lceil\log_{2}(n)\rceil\). Then the quantum spatial search algorithm is stated below. ``` 1. Pick \(t\) uniformly at random from \([0,T]\), where \(T=\Theta\left(HT\right)\) and set \(d=\lceil\sqrt{T\log(T)}\rceil\). 2. Set \(s=1-1/r\), where \(r\) is picked uniformly at random from \(R=\{2^{0},2^{1},\cdots,2^{\lceil\log T\rceil}\}\). 3. Construct \(U_{D(s)}\), which is a \((1,\lceil\log_{2}(|X|)\rceil,0)\)-block encoding of \(D(s)\). 4. Prepare the quantum state \(\left|\bar{0}\right\rangle\left|\sqrt{\pi}\right\rangle=\left|\bar{0}\right\rangle \sum_{y\in X}\sqrt{\pi_{y}}\left|y\right\rangle\). 5. Measure in the basis \(\{\Pi_{M},I-\Pi_{M}\}\) in the second register. If the output is marked, measure in the node basis to output some \(x\in M\). Otherwise, we are in the state \(\left|\bar{0}\right\rangle\left|\sqrt{\pi_{U}}\right\rangle=\left|\bar{0} \right\rangle\sum_{y\in X\setminus M}\sqrt{\pi_{y}}\left|y\right\rangle\). 6. Call POW-HAM\((t,d,U_{D(s)},\left|\sqrt{\pi_{U}}\right\rangle)\). ``` **Algorithm 3**QSpatial Search - 1 Spatial search by DTQW **Lemma 27**.: _Algorithm 3 returns a marked element with success probability \(\Omega\left(\frac{1}{\log^{2}HT}\right)\)._ The details of the proof can be found in Refs. [31, 41]. From Lemma 26, we obtain that, for any \(s\in[0,1)\), the probability to observe a marked node after running Algorithm 3 is lower bounded as \[Tr[(I\otimes\Pi_{M})\rho_{t}]\geq\left\|\Pi_{M}D(s)^{T}\left|\sqrt{\pi_{U}} \right\rangle\right\|^{2}-\varepsilon, \tag{145}\] for a small enough \(\varepsilon\in\Theta(1/\log(T))\). While the exact value of \(s\) is difficult to obtain, Algorithm 3 shows that if we choose parameters \(s\in\{1-1/r:r=1,2,\cdots,2^{\lceil\log T\rceil}\}\) and \(T\in\Theta\left(HT\right)\) uniformly at random, \[\mathbb{E}\left[\left\|\Pi_{M}D(s)^{T}\left|\sqrt{\pi_{U}}\right\rangle\right\| ^{2}\right]\in\Omega\left(1/\log^{2}T\right).\] This is achieved by lower bounding the average success probability after \(T\) steps of DTQW on \(D(s)\) by a quantity related to a discrete-time random walk on the classical Markov chain \(P(s)\). What Equation (145) tells us is that if we run Algorithm 3, and measure the second register in the vertex basis, the probability of finding a marked element would be at least \(\Omega(1/\log^{2}T)\). Since Algorithm 3 requires at most \(O(\sqrt{T\log T})\) DTQW steps (where \(T=\Theta(HT)\)), the overall algorithm yields a quadratic improvement over its classical counterpart (up to a log factor). We shall use similar ideas to develop an alternative quantum algorithm for spatial search by discrete-time quantum walk. 
#### Fast-forwarding of discrete-time random walks using QSVT

For completeness, we also provide a QSVT-based procedure to implement the polynomial \(p_{t,d}(H)\) that approximates \(H^{t}\). This is formally stated in the following lemma:

**Lemma 28**.: _Suppose \(H\) is a Hermitian matrix such that \(\left\|H\right\|=1\) and \(\varepsilon\in(0,1/2)\). Furthermore suppose we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of \(H\), implemented in cost \(T_{H}\). Then provided_ \[\delta\leq\frac{\varepsilon^{2}}{128t\ \ln(4/\varepsilon)},\] _there exists a quantum algorithm which implements a \((1,a+1,\varepsilon)\)-block encoding of \(H^{t}\) in cost_ \[T=O\left(T_{H}\sqrt{t\log(1/\varepsilon)}\right).\]

Proof.: Suppose \(H^{\prime}=\left(\left\langle\bar{0}\right|\otimes I\right)U_{H}\left(\left|\bar{0}\right\rangle\otimes I\right)\). Then, \(\left\|H-H^{\prime}\right\|\leq\delta\). By using QSVT, we can implement the polynomial \(p_{t,d^{\prime}}(H^{\prime})\) of degree \(d^{\prime}=\lceil\sqrt{2t\log(4/\varepsilon)}\rceil\), which implies \[\left\|H^{t}-p_{t,d^{\prime}}(H^{\prime})\right\|\leq\left\|H^{t}-p_{t,d^{\prime}}(H)\right\|+\left\|p_{t,d^{\prime}}(H)-p_{t,d^{\prime}}(H^{\prime})\right\| \tag{146}\] \[\leq\varepsilon/2+4d^{\prime}\sqrt{\delta}\qquad\text{[ From Lemma 21 and the robustness of QSVT [46] ]} \tag{147}\] \[\leq\varepsilon\qquad\left[\text{ As }\delta\leq\frac{\varepsilon^{2}}{64d^{\prime 2}}\right] \tag{148}\] Thus, we have implemented a \((1,a+1,\varepsilon)\)-block encoding of \(H^{t}\) in cost \[T=O\left(T_{H}\sqrt{t\log(1/\varepsilon)}\right).\]

## Applying Ancilla-Free LCU: Optimal quantum spatial search by fast-forwarding continuous-time random walks

We develop a quantum algorithm for fast-forwarding the dynamics of a continuous-time random walk using the "Ancilla-free LCU" technique. Given the discriminant matrix \(D\), a continuous-time random walk is defined by the operator \(e^{t(D-I)}\), where \(Q=D-I\) is the continuous-time random walk kernel. As in the previous section, we will work with general Hamiltonians and discuss quantum walks as a particular case. We begin by making use of the polynomial approximation to \(x^{t}\) to obtain a low-degree polynomial that approximates \(e^{-t(1-x)}\). This is stated in the following lemma.

**Lemma 29** (Polynomial approximation of \(e^{t(x-1)}\)).: _Suppose \(t\in\mathbb{R}^{+}\) and \(\varepsilon\in(0,1/2]\) and \(d=\lceil\max\{te^{2},\ln(2/\varepsilon)\}\rceil\). Furthermore, let \(p_{j,d^{\prime}}(x)\) be the \(d^{\prime}\)-degree polynomial approximation to \(x^{j}\) defined in Lemma 21. Then there exists a polynomial_ \[q_{t,d,d^{\prime}}(x)=e^{-t}\sum_{j=0}^{d}\frac{t^{j}}{j!}p_{j,d^{\prime}}(x),\] _of degree_ \[d^{\prime}=\lceil\sqrt{2d\ln(4/\varepsilon)}\rceil\in O\left(\sqrt{t}\log(1/\varepsilon)\right),\] _such that_ \[\sup_{x\in[-1,1]}\Bigl{|}e^{-t(1-x)}-q_{t,d,d^{\prime}}(x)\Bigr{|}\leq\varepsilon.\]

Proof.: This is proven in Sec. A - III of the Appendix.

From Lemma 29, it is clear that given a block encoding of any Hermitian matrix \(H\) with unit spectral norm, the operator \(e^{-t(I-H)}\) can be implemented as a linear combination of unitaries. This is because the \(d^{\prime}\)-degree polynomial \[q_{t,d,d^{\prime}}(x)=e^{-t}\sum_{j=0}^{d}\frac{t^{j}}{j!}p_{j,d^{\prime}}(x)\] approximates \(e^{-t(1-x)}\) and is a linear combination of the \(d^{\prime}\)-degree polynomials \(p_{j,d^{\prime}}(x)\). So overall, by LCU, we can implement the polynomial \(q_{t,d,d^{\prime}}(x)\), approximating \(e^{-t(1-x)}\).
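Before the formal statement, a quick numerical check (ours). Since \(e^{-t(1-x)}=e^{-t}\sum_{j\geq 0}t^{j}x^{j}/j!\), truncating the series at \(d=\lceil\max\{te^{2},\ln(2/\varepsilon)\}\rceil\) already tracks the target function; in the sketch below we use \(x^{j}\) in place of \(p_{j,d^{\prime}}(x)\), which by Lemma 21 changes nothing at this accuracy.

```python
import numpy as np
from scipy.stats import poisson

t, eps = 10.0, 1e-3
d = int(np.ceil(max(t * np.e**2, np.log(2 / eps))))
xs = np.linspace(-1, 1, 2001)
# Truncated Poisson-weighted series; x**j stands in for p_{j,d'}(x).
approx = sum(poisson.pmf(j, t) * xs**j for j in range(d + 1))
err = np.max(np.abs(np.exp(-t * (1 - xs)) - approx))
print(err, "<=", eps)   # the truncation error is far below eps here
```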
We formally show this via the following lemma:

**Lemma 30**.: _Suppose \(\varepsilon\in(0,1)\) and we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) such that \(\left\|H\right\|=1\) and \(U_{H}^{2}=I\). Furthermore, let \(d=\lceil\max\{te^{2},\ln(8/\varepsilon)\}\rceil\) and \(d^{\prime}=\lceil\sqrt{2d\ln(16/\varepsilon)}\rceil\). Then, provided_ \[\delta\leq\frac{\varepsilon^{2}}{128d\ \ln(16/\varepsilon)},\] _for any \(t\in\mathbb{N}\), we can implement a \((1,O(a+\log t+\log\log(1/\varepsilon)),\varepsilon)\)-block encoding of \(e^{-t(I-H)}\) in cost \(O\left(\sqrt{t}\log(1/\varepsilon)\right)\)._

Proof.: The proof is similar to that of Lemma 25. Let \(U_{H}\) be a \((1,a,0)\)-block encoding of \(H^{\prime}\). By definition, \(\left\|H-H^{\prime}\right\|\leq\delta\). For the polynomial \(q_{t,d,d^{\prime}}(x)\), we choose \(d=\lceil\max\{te^{2},\ln(8/\varepsilon)\}\rceil\) and \(d^{\prime}=\lceil\sqrt{2d\ln(16/\varepsilon)}\rceil\). This ensures \(\left\|e^{-t(1-x)}-q_{t,d,d^{\prime}}(x)\right\|\leq\varepsilon/4\). As before, let \(\widetilde{W}\) be the unitary that implements the LCU. Then, \[\left(\left\langle\bar{0}\right|\otimes I\right)\widetilde{W}\left(\left|\bar{0}\right\rangle\otimes I\right)=\frac{q_{t,d,d^{\prime}}(H^{\prime})}{\left\|c\right\|_{1}}. \tag{149}\] For this choice of \(d,d^{\prime}\) we have \[\left\|c\right\|_{1}=e^{-t}\sum_{j=0}^{d}\frac{t^{j}}{j!}p_{j,d^{\prime}}(1) \tag{150}\] \[\geq e^{-t}\sum_{j=0}^{d}\frac{t^{j}}{j!}\left(1-\varepsilon/8\right)\qquad\text{[ As }d^{\prime}=\lceil\sqrt{2d\ln(16/\varepsilon)}\rceil\text{ ]} \tag{151}\] \[=\left(1-e^{-t}\sum_{j=d+1}^{\infty}\frac{t^{j}}{j!}\right)\left(1-\varepsilon/8\right) \tag{152}\] \[\geq\left(1-\varepsilon/8\right)\left(1-\varepsilon/8\right)\qquad\text{[ By the choice of }d\text{ ]} \tag{153}\] \[\geq 1-\varepsilon/4. \tag{154}\] This implies \(\left\|c\right\|_{1}\in[1-\varepsilon/4,1]\). Now, we will show that \(\widetilde{W}\) indeed implements a block encoding of \(e^{-t(I-H)}\): \[\left\|e^{-t(I-H)}-\frac{q_{t,d,d^{\prime}}(H^{\prime})}{\left\|c\right\|_{1}}\right\|\leq\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}(H^{\prime})\right\|+\left(1-\left\|c\right\|_{1}\right)\left\|\frac{q_{t,d,d^{\prime}}(H^{\prime})}{\left\|c\right\|_{1}}\right\| \tag{155}\] \[\leq\varepsilon/4+\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}(H)\right\|+\left\|q_{t,d,d^{\prime}}(H)-q_{t,d,d^{\prime}}(H^{\prime})\right\| \tag{156}\] \[\leq\varepsilon/2+\varepsilon/2\qquad\left[\text{ As }\delta\leq\frac{\varepsilon^{2}}{64d^{\prime 2}}\right] \tag{157}\]

It is easy to see that this leads to the fast-forwarding of continuous-time random walks. The unitary \(U_{D}=U_{P}^{\dagger}SU_{P}\) is a \((1,\lceil\log n\rceil,0)\)-block encoding of the random walk discriminant matrix \(D\), and satisfies \(U_{D}^{2}=I\). Then by using Lemma 30, given an initial state \(\left|\psi_{0}\right\rangle\), we can prepare a quantum state that is \(O\left(\varepsilon\cdot\left\|e^{-(I-D)t}\left|\psi_{0}\right\rangle\right\|\right)\)-close to \[\left|\psi_{t}\right\rangle=\left|\bar{0}\right\rangle\frac{e^{-t(I-D)}\left|\psi_{0}\right\rangle}{\left\|e^{-t(I-D)}\left|\psi_{0}\right\rangle\right\|}+\left|\psi^{\perp}\right\rangle, \tag{158}\] with success probability \(\Theta\left(\left\|e^{-(I-D)t}\left|\psi_{0}\right\rangle\right\|^{2}\right)\), in cost \(O\left(\sqrt{t}\log\left(\varepsilon^{-1}\cdot\left\|e^{-(I-D)t}\left|\psi_{0}\right\rangle\right\|^{-1}\right)\right)\).
Finally, by applying \(O\left(\left\|e^{-(I-D)t}\left|\psi_{0}\right\rangle\right\|^{-1}\right)\) rounds of amplitude amplification, we prepare, with \(\Omega(1)\) probability, a quantum state that is \(O(\varepsilon)\)-close to \(\left|\psi_{t}\right\rangle\) in cost \[T=O\left(\frac{\sqrt{t}}{\left\|e^{-(I-D)t}\left|\psi_{0}\right\rangle\right\|}\log\left(\frac{1}{\varepsilon\cdot\left\|e^{-(I-D)t}\left|\psi_{0}\right\rangle\right\|}\right)\right).\] In Lemma 30, we considered that the unitary \(U_{H}\) satisfies \(U_{H}^{2}=I\). While this is true for quantum walks, it need not be so in general. However, even when this condition is not satisfied, we can obtain a block encoding of \(e^{-t(I-H)}\) using LCU, analogous to Lemma 30.

**Fast-forwarding by "Ancilla-free LCU":** For the spatial search problem, we are concerned about the projection of \(e^{-t(I-H)}\left|\psi_{0}\right\rangle\) in the marked subspace. Thus, we can apply "Ancilla-Free LCU" instead. The overall procedure is outlined in Algorithm 4. We will be implementing \(q_{t,d,d^{\prime}}(H)\), which is itself a linear combination of the polynomials \(p_{j,d^{\prime}}(H)\). So, Algorithm 4 also calls Algorithm 2 as a subroutine. Consider the following algorithm:

```
Inputs:\(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) with unit norm, \(t\in\mathbb{R}^{+}\), \(d,d^{\prime}\in\mathbb{N}\), and an initial state \(\left|\psi_{0}\right\rangle\).
1. Pick some integer \(\ell\in[0,d]\) according to \(c_{\ell}/\|c\|_{1}\), where \(c_{\ell}=\frac{e^{-t}t^{\ell}}{\ell!}\).
2. Call POW-HAM\((\ell,d^{\prime},U_{H},\left|\psi_{0}\right\rangle)\).
```
**Algorithm 4****EXP-HAM\((t,d^{\prime},d,U_{H},\left|\psi_{0}\right\rangle)\)**

If \(\rho_{0}=\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\), for \(d,d^{\prime}\in\mathbb{N}\), Algorithm 4 implements the following average density matrix \[\rho=e^{-t}\sum_{j=0}^{d}\frac{t^{j}}{j!}\left[\sum_{j\in\mathrm{Even},k=0}^{d^{\prime}/2}2^{1-j}\binom{j}{j/2+k}V^{2k}\rho_{0}V^{-2k}+\sum_{j\in\mathrm{Odd},k=0}^{(d^{\prime}-1)/2}2^{1-j}\binom{j}{(j+1)/2+k}V^{2k+1}\rho_{0}V^{-(2k+1)}\right], \tag{159}\] where \(V=R\cdot U_{H}\) is the quantum walk operator. On average \(O(d^{\prime})\) queries are made to \(V\). However, in order to ensure that Algorithm 4 indeed results in a \(\rho\) such that \[\mathrm{Tr}[\Pi\rho]\geq\mathrm{Tr}[\Pi e^{-t(I-H)}\rho_{0}e^{-t(I-H)}]-\varepsilon,\] we need to choose the right values of \(\delta,d,d^{\prime}\). We do this via the following lemma:

**Lemma 31**.: _Suppose \(\varepsilon\in(0,1)\) and we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) such that \(\|H\|=1\) and \(U_{H}^{2}=I\). Then, provided \(d=\lceil\max\{te^{2},\ln(12/\varepsilon)\}\rceil\), \(d^{\prime}=\lceil\sqrt{2d\ln(48/\varepsilon)}\rceil\) and_ \[\delta\leq\frac{\varepsilon^{2}}{1152\ d\ln(48/\varepsilon)},\] _for any \(t\in\mathbb{R}^{+}\), projector \(\Pi\) and initial state \(\rho_{0}=\left|\psi_{0}\right\rangle\left\langle\psi_{0}\right|\), Algorithm 4 prepares the average density matrix \(\rho\) such that_ \[\mathrm{Tr}[\Pi\rho]\geq\mathrm{Tr}[\Pi e^{-t(I-H)}\rho_{0}e^{-t(I-H)}]-\varepsilon,\] _using \(O\left(\sqrt{t}\log(1/\varepsilon)\right)\) queries to \(V=R.U_{H}\)._

Proof.: Let \(H^{\prime}\) be the Hamiltonian for which \(U_{H}\) is a \((1,a,0)\)-block encoding. Then, by definition, \(\left\|H-H^{\prime}\right\|\leq\delta\).
Choosing the degree of the polynomial \(q_{t,d,d^{\prime}}(H^{\prime})\) to be \(d^{\prime}=\lceil\sqrt{2d\ln(48/\varepsilon)}\rceil\), where \(d=\lceil\max\{te^{2},\ln(12/\varepsilon)\}\rceil\), ensures that \(\left\|e^{-t(1-x)}-q_{t,d,d^{\prime}}(x)\right\|\leq\varepsilon/12\) (from Lemma 29). Now, from Lemma 30, the full LCU procedure would implement the state \[\left|\psi_{t}\right\rangle=\left|\bar{0}\right\rangle\frac{q_{t,d,d^{\prime}}(H^{\prime})}{\left\|c\right\|_{1}}\left|\psi_{0}\right\rangle+\left|\Phi\right\rangle^{\perp}.\] Now, from the choice of \(d^{\prime}\), we ensure that \(\alpha=\left\|c\right\|_{1}\geq 1-\varepsilon/12\). Also, \[\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}(H^{\prime})/\left\|c\right\|_{1}\right\|\leq\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}(H^{\prime})\right\|+\left(1-\left\|c\right\|_{1}\right)\left\|q_{t,d,d^{\prime}}(H^{\prime})/\left\|c\right\|_{1}\right\| \tag{160}\] \[\leq\varepsilon/12+\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}(H)\right\|+\left\|q_{t,d,d^{\prime}}(H)-q_{t,d,d^{\prime}}(H^{\prime})\right\| \tag{161}\] \[\leq\varepsilon/12+\varepsilon/12+4d^{\prime}\sqrt{\delta}\qquad\text{[ From Lemma 29 and the robustness of QSVT [46] ]} \tag{162}\] \[\leq\varepsilon/3\qquad\left[\text{ As }4d^{\prime}\sqrt{\delta}\leq\varepsilon/6\text{ for the stated choice of }\delta\right]. \tag{163}\]

Exactly as in the proof of Lemma 26, this implies \(\mathrm{Tr}[\Pi\rho]\geq\mathrm{Tr}[\Pi e^{-t(I-H)}\rho_{0}e^{-t(I-H)}]-\varepsilon\), which completes the proof.

**Spatial search by fast-forwarding continuous-time random walks:** We now apply this to the spatial search problem. As in the discrete-time setting, suppose we have access to \(U_{D(s)}\), which is a \((1,a,0)\)-block encoding of \(D(s)\), such that \(U_{D(s)}^{2}=I\). Consider Algorithm 5, which is very similar to the first spatial search algorithm (Algorithm 3), except that it calls Algorithm 4 (instead of Algorithm 2) as a subroutine. From Algorithm 5, we obtain the average density matrix \(\rho\) such that \[Tr[(I\otimes\Pi_{M})\rho]\geq\left\|\Pi_{M}e^{t(D(s)-I)}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}-\varepsilon,\] for a small enough \(\varepsilon\in\Theta(1/\log(T))\). It remains to show that Algorithm 5 succeeds with probability \(\tilde{\Omega}(1)\). We demonstrate this via the following lemma, which we prove in the Appendix.

**Lemma 32**.: _Consider an ergodic, reversible Markov chain \(P\) and a set of marked nodes \(M\). If we choose parameters \(s\in\{1-1/r:r=1,2,\cdots,2^{\lceil\log T\rceil}\}\) and \(T\in\Theta\left(HT(P,M)\right)\) uniformly at random, then the following holds_ \[\mathbb{E}\left[\left\|\Pi_{M}e^{(D(s)-I)T}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}\right]\in\Omega\left(1/\log^{2}T\right).\]

**Proof sketch:** The overall proof is similar to the result of [32]. The key idea is to show that the quantity we intend to estimate, i.e. \(\left\|\Pi_{M}e^{(D(s)-I)T}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}\), is related to the behaviour of the original Markov chain \(P(s)\) (which applies to any reversible Markov chain).
We provide a sketch of the proof here: * The first step is to show that \(\left\|\Pi_{M}e^{(D(s)-I)t}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}\) is lower bounded by the probability of the following event occurring in a continuous-time Markov chain, for any \(t\geq 0\): starting from a distribution over the unmarked elements, a continuous-time random walk is at some marked vertex after time \(t\) and is at an unmarked vertex after time \(t+t^{\prime}\), where \(t^{\prime}>0\). Let us call this event \(\mathcal{E}_{X}\). * The next step is to then show that the probability of this event occurring on a continuous-time Markov chain is lower bounded by the probability of the analogous event (say \(\mathcal{E}_{Y}\)) happening in a discrete-time Markov chain. * So, by these two steps we have related the quantity \(\left\|\Pi_{M}e^{(D(s)-I)T}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}\) to the probability of a specific event occurring on a discrete-time Markov chain \(P\). At this stage, we can make use of the results of Ambainis et al. [31], wherein the authors proved that for any reversible Markov chain \(P\), the probability of the event \(\mathcal{E}_{Y}\) occurring is \(\tilde{\Omega}(1)\), which allows us to prove \[\mathbb{E}\left[\left\|\Pi_{M}e^{(D(s)-I)T}\left|\sqrt{\pi_{U}}\right\rangle\right\|^{2}\right]=\tilde{\Omega}(1).\] For a formal proof, see Sec. A - V of the Appendix.

**Fast-forwarding continuous-time random walks using QSVT:** One can use QSVT to implement the function \(e^{-t(1-x)}\), \(x\in[-1,1]\), on a block-encoded Hamiltonian. We state this result via the following lemma:

**Lemma 33**.: _Let \(\varepsilon\in(0,1/2)\) and \(t\in\mathbb{R}^{+}\). Suppose we have access to \(U_{H}\), which is a \((1,a,\delta)\)-block encoding of a Hamiltonian \(H\) such that \(\|H\|=1\), implementable in time \(T_{H}\). Furthermore, let \(d=\lceil\max\{te^{2},\log(4/\varepsilon)\}\rceil\) and \(d^{\prime}=\lceil\sqrt{2d\ln(8/\varepsilon)}\rceil\). Then, provided_ \[\delta\leq\frac{\varepsilon^{2}}{128d\ \ln(8/\varepsilon)},\] _there exists a quantum algorithm that implements a \((1,a+1,\varepsilon)\)-block encoding of \(e^{-t(I-H)}\) in cost_ \[T=O\left(T_{H}\sqrt{t}\log(1/\varepsilon)\right).\]

Proof.: From Lemma 29, we find that by choosing \(d=\lceil\max\{te^{2},\log(4/\varepsilon)\}\rceil\) and \(d^{\prime}=\lceil\sqrt{2d\ln(8/\varepsilon)}\rceil\), we obtain a polynomial \(q_{t,d,d^{\prime}}(x)\) of degree \(d^{\prime}\) such that \(\left\|e^{-t(1-x)}-q_{t,d,d^{\prime}}(x)\right\|\leq\varepsilon/2\). Let \(H^{\prime}=\left(\left\langle\bar{0}\right|\otimes I\right)U_{H}\left(\left|\bar{0}\right\rangle\otimes I\right)\). Then \(\left\|H-H^{\prime}\right\|\leq\delta\). Thus, by using QSVT, we can implement the polynomial \(q_{t,d,d^{\prime}}(H^{\prime})\) using one extra ancilla qubit and \(O(d^{\prime})\) queries to (controlled versions of) \(U_{H}\) and its inverse.
Finally, \[\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}\left(H^{\prime}\right)\right\|\leq\left\|e^{-t(I-H)}-q_{t,d,d^{\prime}}\left(H\right)\right\|+\left\|q_{t,d,d^{\prime}}\left(H^{\prime}\right)-q_{t,d,d^{\prime}}\left(H\right)\right\| \tag{167}\] \[\leq\varepsilon/2+4d^{\prime}\sqrt{\left\|H-H^{\prime}\right\|}\qquad\text{[ From the robustness of QSVT [46] ]} \tag{168}\] \[\leq\varepsilon\qquad\left[\text{ As }\delta\leq\frac{\varepsilon^{2}}{64d^{\prime 2}}\right]. \tag{169}\] Thus, we obtain a \((1,a+1,\varepsilon)\)-block encoding of \(e^{-t(I-H)}\) in cost \(T=O\left(T_{H}\sqrt{t}\log(1/\varepsilon)\right)\), which completes the proof.

## Applying Analog LCU: Quantum spatial search by continuous-time quantum walk

Recall that the Hamiltonian \(H_{P}=i[U_{P}^{\dagger}SU_{P},\Pi_{0}]\) of Eq. (100) generates a continuous-time quantum walk, and that \(H_{P}^{2}\) is a block encoding of \(I-D^{2}\) (we prove this in Lemma 35 below). Consequently, applying \(e^{-tH_{P}^{2}}\) to \(\ket{\psi_{0}}\ket{\bar{0}}\) produces \(e^{t(D^{2}-I)}\ket{\psi_{0}}\) in the \(\ket{\bar{0}}\)-ancilla sector. The "Analog LCU" technique implements exactly this: one couples the walk register to a continuous-variable ancilla and uses the Gaussian identity \(e^{-tH_{P}^{2}}=\int_{-\infty}^{\infty}\frac{dz}{\sqrt{2\pi}}e^{-z^{2}/2}e^{-i\sqrt{2t}H_{P}z}\) (see Lemma 34 below). Thus, the "Analog LCU" procedure allows us to apply the operator \(e^{t(D^{2}-I)}\) to the initial state. Now, the transition \(P^{2}\) corresponds to two steps of a discrete-time random walk on any Markov chain \(P\).
Consequently, if we define the continuous-time random walk kernel \(Q=P^{2}-I\), then the operator \(e^{tQ}\) corresponds to a continuous-time random walk obtained from the transition matrix \(P^{2}\). So, the "Analog LCU" procedure allows us to implement a continuous-time random walk with respect to the kernel \(Q\). Since we are only interested in the projection of the state \(e^{-tH_{P}^{2}}\ket{\psi_{0}}\ket{\bar{0}}\) in the space spanned by the marked nodes, we can take advantage of the "Ancilla-free LCU" technique. Consider the following lemma from Ref. [32]:

**Lemma 34** (Lemma 1 in Ref. [32]).: _Evolving a quantum state \(\ket{\psi_{0}}\) under a Hamiltonian \(H\) for time \(\sqrt{2t}z\), where \(z\sim\mathcal{N}(0,1)\) is drawn from the standard normal distribution, results in a mixed state_ \[\rho=\int_{-\infty}^{+\infty}\frac{dz}{\sqrt{2\pi}}e^{-z^{2}/2}e^{-i\sqrt{2t}Hz}\ket{\psi_{0}}\bra{\psi_{0}}e^{i\sqrt{2t}Hz},\] _such that for any projector \(\Pi\),_ \[\operatorname{tr}[\Pi\rho]\geq\bra{\psi_{0}}e^{-tH^{2}}\Pi e^{-tH^{2}}\ket{\psi_{0}}.\]

Proof.: Let \(\ket{\eta_{t}}=e^{-tH^{2}}\ket{\psi_{0}}\ket{\bar{0}}+\ket{\Phi^{\perp}}\) denote the joint state of the system and the continuous-variable ancilla after the coupled ("Analog LCU") evolution, so that tracing out the ancilla yields \(\rho\). It can be seen that \[\operatorname{tr}[\Pi\rho]=\operatorname{tr}\left[\left(\Pi\otimes I\right)\ket{\eta_{t}}\bra{\eta_{t}}\right]\] \[=\bra{\psi_{0}}e^{-tH^{2}}\Pi e^{-tH^{2}}\ket{\psi_{0}}+\bra{\Phi^{\perp}}\Pi\otimes I\ket{\Phi^{\perp}}\] \[\geq\bra{\psi_{0}}e^{-tH^{2}}\Pi e^{-tH^{2}}\ket{\psi_{0}}. \tag{174}\]

The expected time required to obtain \(\rho\) is simply \(T=\sqrt{2t}\cdot\mathbb{E}\left[|z|\right]\). Since \(z\sim\mathcal{N}(0,1)\) this is \(T=2\sqrt{t/\pi}\).

Although this lemma works for any Hamiltonian \(H\), we will restrict our attention to the quantum walk Hamiltonian \(H_{P}\). Using Lemma 34, in expected time \(2\sqrt{t/\pi}\), we can prepare the average density matrix \(\rho\) such that the projection of \(\rho\) in the marked subspace is at least as large as the projection of \(e^{-tH_{P}^{2}}\ket{\psi_{0}}\ket{\bar{0}}\). This implies, \[\operatorname{tr}[\Pi\rho]\geq\bra{\psi_{0}}e^{t\left(D^{2}-I\right)}\Pi e^{t\left(D^{2}-I\right)}\ket{\psi_{0}}.\] This allows us, in \(O(\sqrt{t})\) time, to have access to the properties of the state of a continuous-time random walk (defined by \(Q=P^{2}-I\)) after time \(t\). Thus, if we want to find a marked node by measuring \(e^{t\left(D^{2}-I\right)}\ket{\psi_{0}}\), then we may instead measure \(\rho\), for which the probability of finding a marked node is at least as large. This allows us to directly state the spatial search algorithm of [32], wherein it was proven that for an interpolated Markov chain, the expected value of the quantity \(\left\|\Pi_{M}e^{t\left(D(s)^{2}-I\right)}\ket{\sqrt{\pi_{U}}}\right\|^{2}\) is at least \(\tilde{\Omega}(1)\).

Spatial search by continuous-time quantum walk: We directly state the quantum algorithm of [32]. The main result of [32] was to show that for uniformly random choices of \(s\in[0,1)\) and \(T\in\Theta(HT)\), the quantity \[\mathbb{E}\left[\left\|\Pi_{M}e^{T\left(D(s)^{2}-I\right)}\ket{\sqrt{\pi_{U}}}\right\|^{2}\right]=\tilde{\Omega}(1),\] which showed that Algorithm 6 finds a marked node in time scaling as \(O(\sqrt{HT})\). We refer the readers to [32] for details of the derivations. Thus, overall the "Ancilla-free" LCU framework is applicable to quantum walk-based problems. It also helped establish a connection between discrete and continuous-time quantum walks and their classical counterparts, which has been shown in Fig. 3.
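The averaging step of Lemma 34 is easy to test numerically. The following Monte-Carlo sketch (ours; the Hamiltonian, projector, and sample count are arbitrary choices) draws \(z\sim\mathcal{N}(0,1)\), applies \(e^{-i\sqrt{2t}Hz}\), and confirms that the projected mass of the averaged state dominates \(\bra{\psi_{0}}e^{-tH^{2}}\Pi e^{-tH^{2}}\ket{\psi_{0}}\).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n, t = 4, 0.7
A = rng.normal(size=(n, n)); H = (A + A.T) / 2
H /= np.linalg.norm(H, 2)                  # unit spectral norm

psi0 = np.zeros(n); psi0[0] = 1.0
Pi = np.diag([0.0, 0.0, 1.0, 1.0])         # arbitrary projector

rho = np.zeros((n, n), dtype=complex)
samples = 5000
for z in rng.normal(size=samples):
    v = expm(-1j * np.sqrt(2 * t) * H * z) @ psi0
    rho += np.outer(v, v.conj())
rho /= samples

lhs = np.real(np.trace(Pi @ rho))
w = expm(-t * H @ H) @ psi0
print(lhs, ">=", (w.conj() @ Pi @ w).real)  # holds up to Monte-Carlo error
```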
Additionally, this framework is applicable to other problems where we are interested in the projection of a state in some subspace. For instance, consider the Welded trees problem by continuous-time quantum walk [75, 76]. There the algorithm involves the time-evolution of a Hamiltonian \(H\) for a time \(t\) chosen uniformly at random in \([0,T]\), where \(T\) is related to the eigenvalue gaps of \(H\). This can be thought of as a special case of "Ancilla-free LCU", where the LCU coefficients correspond to a uniform distribution. General techniques such as quantum phase randomization [77] also fall under the umbrella of "Ancilla-free LCU". In the next section, we make use of the QSVT framework to establish a connection between discrete-time quantum walks and continuous-time quantum walks.

### Other results: Relationship between discrete-time and continuous-time quantum walks

The "Ancilla-free LCU" framework helped us relate discrete and continuous-time quantum walks to their classical counterparts. In this section, we establish a relationship between discrete-time quantum walks and continuous-time quantum walks, from both directions. In a seminal work, Childs [45] showed that, given any Hamiltonian \(H\), one can implement \(e^{-iHt}\) using a discrete-time quantum walk. However, this left open the problem of obtaining a discrete-time quantum walk, given access to a continuous-time quantum walk. In fact, since then there has been very little progress towards answering this question. In this section, we make significant progress in this direction. To this end, we will make use of two key observations from the previous sections: (i) The continuous-time quantum walk Hamiltonian \(H_{P}^{2}\) is a block encoding of \(I-D^{2}\). (ii) Given any \(U_{H}\), which is a block encoding of a Hamiltonian \(H\), we can obtain a discrete-time quantum walk (Lemma 22). Furthermore, this simplifies when \(U_{H}^{2}=I\) (Lemma 24). In order to obtain a continuous-time quantum walk from a discrete-time quantum walk, we first obtain a generalization of (i). We will show that given \(U_{H}\), a block encoding of any \(H\) such that \(U_{H}^{2}=I\), we can obtain a Hamiltonian \(H_{P}\) whose square is a block encoding of \(I-H^{2}\). Thus, when \(H=D\), simulating \(H_{P}\) allows us to obtain a continuous-time quantum walk from a block encoding of \(H\). For the other direction, that is, to obtain a discrete-time quantum walk from a continuous-time quantum walk, we will assume that we have access to some \(U=e^{iH}\). This corresponds to a continuous-time quantum walk with respect to the Hamiltonian \(H\). From this, using QSVT, we will show that we can obtain a block encoding of \(H\). So, using (ii), we obtain a discrete-time quantum walk. We discuss each of these approaches next. From discrete-time quantum walks to continuous-time quantum walks: We first show that given any block encoding of \(H\), we can obtain a Hamiltonian that block-encodes \(I-H^{2}\).

**Lemma 35**.: _Suppose \(\Pi_{0}=\left|\bar{0}\right\rangle\left\langle\bar{0}\right|\otimes I\) and \(R=2\Pi_{0}-I\otimes I\). Let \(U_{H}\) be any \((1,a,0)\)-block encoding of \(H\) such that \(U_{H}^{2}=I\). Then the Hamiltonian,_ \[H_{P}=i[U_{H},\Pi_{0}] \tag{175}\] _can be constructed from one query to the (controlled) discrete-time quantum walk unitary \(V=R\cdot U_{H}\) and its conjugate transpose._
_Furthermore, \(H_{P}^{2}\) is a \((1,a,0)\) block encoding of \(I-H^{2}\)._

Proof.: It is easy to see that \(H_{P}=i(V-V^{\dagger})/2\). So if \(W_{V}=\left|0\right\rangle\left\langle 0\right|\otimes e^{i\pi/2}V+\left|1\right\rangle\left\langle 1\right|\otimes e^{-i\pi/2}V^{\dagger}\), then \(Q=\left(H\otimes I\right)W_{V}\left(H\otimes I\right)\) (where \(H\) here denotes the single-qubit Hadamard gate) is a \((1,a+1,0)\)-block encoding of \(H_{P}\). \(Q\) is implemented using controlled versions of \(V\) and \(V^{\dagger}\), along with single-qubit Hadamard gates. It is easy to verify that \(H_{P}\) is a Hamiltonian (Hermitian operator) of unit norm. To prove that \(H_{P}^{2}\) is a \((1,a,0)\) block encoding of \(I-H^{2}\), observe \[\left(\left\langle\bar{0}\right|\otimes I\right)H_{P}^{2}\left(\left|\bar{0}\right\rangle\otimes I\right)=\left(\left\langle\bar{0}\right|\otimes I\right)\left[\Pi_{0}+U_{H}\Pi_{0}U_{H}-U_{H}\Pi_{0}U_{H}\Pi_{0}-\Pi_{0}U_{H}\Pi_{0}U_{H}\right]\left(\left|\bar{0}\right\rangle\otimes I\right) \tag{176}\] \[=I+H^{2}-2H^{2}=I-H^{2}. \tag{177}\]

From a \((1,a+1,0)\) block encoding of \(H_{P}\), using QSVT, we can implement a \((1,a+3,\varepsilon)\) block encoding of \(e^{-itH_{P}}\) using \(\Theta(t+\log(1/\varepsilon))\) queries to the controlled versions of the DTQW unitary \(V\) and its conjugate transpose [46, 56]. This implies that, from a block encoding of \(H\), we can simulate a continuous-time quantum walk on the vertices of \(H\) (by implementing \(e^{-iHt}\)) as well as on the edges of \(H\) (by implementing \(e^{-iH_{P}t}\)), requiring in both cases \(\Theta\left(t+\log(1/\varepsilon)\right)\) queries to the corresponding discrete-time quantum walk unitary.

From continuous-time quantum walks to discrete-time quantum walks: For this approach, we begin by assuming that we have access to a continuous-time quantum walk evolution operator \(U=e^{iH}\). Given \(U\), we first show one can obtain a block encoding of \(\sin(H)\) and then use QSVT to implement a polynomial approximation of \(\arcsin(x)\) to obtain a block encoding of \(H\). There are some subtle issues involved, which we shall also discuss. We begin by stating the following lemma:

**Lemma 36**.: _Suppose we are given \(U=e^{iH}\), where \(H\) is some Hamiltonian. Then we can implement a \((1,1,0)\)-block encoding \(\tilde{V}\) of \(\sin(H)\) with \(O(1)\) cost._

Proof.: With one additional qubit, we can obtain the controlled unitary \[V=\left|0\right\rangle\left\langle 0\right|\otimes I+\left|1\right\rangle\left\langle 1\right|\otimes U.\] Then consider the circuit \(\tilde{V}=(H\otimes I)V^{\dagger}(Y\otimes I)V(H\otimes I)\), where \(H\) again denotes the Hadamard gate. We have, \[\tilde{V}\left|0\right\rangle\left|\psi\right\rangle=(H\otimes I)V^{\dagger}(Y\otimes I)V\left|+\right\rangle\left|\psi\right\rangle \tag{178}\] \[=(H\otimes I)V^{\dagger}(Y\otimes I)\left(\frac{\left|0\right\rangle\left|\psi\right\rangle+\left|1\right\rangle e^{iH}\left|\psi\right\rangle}{\sqrt{2}}\right) \tag{179}\] \[=(H\otimes I)V^{\dagger}\left(\frac{i\left|1\right\rangle\left|\psi\right\rangle-i\left|0\right\rangle e^{iH}\left|\psi\right\rangle}{\sqrt{2}}\right) \tag{180}\] \[=(H\otimes I)\left(\frac{i\left|1\right\rangle e^{-iH}\left|\psi\right\rangle-i\left|0\right\rangle e^{iH}\left|\psi\right\rangle}{\sqrt{2}}\right) \tag{181}\] \[=\left|0\right\rangle\sin(H)\left|\psi\right\rangle-i\left|1\right\rangle\cos(H)\left|\psi\right\rangle. \tag{182}\]

Thus, \(\tilde{V}\) is a block encoding of \(\sin(H)\). Now we shall use QSVT to implement a polynomial approximation of \(\arcsin(x)\) on this block encoding, in order to recover \(H\).
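Before constructing the arcsin polynomial, here is a compact numerical check (ours; the random instances are arbitrary) of the two block-encoding identities just established: \(H_{P}=i[U_{H},\Pi_{0}]\) squares to a block encoding of \(I-H^{2}\) (Lemma 35), and the circuit of Lemma 36 block-encodes \(\sin(G)\) given \(e^{iG}\).

```python
import numpy as np
from scipy.linalg import expm, sinm

rng = np.random.default_rng(4)
n = 3

# (i) Lemma 35: any Hermitian unitary U_H satisfies U_H^2 = I and block-encodes
# some H (its top-left block) using one ancilla qubit.
A = rng.normal(size=(2 * n, 2 * n)) + 1j * rng.normal(size=(2 * n, 2 * n))
W, _ = np.linalg.qr(A)
U_H = W @ np.diag([1, 1, 1, -1, -1, -1]) @ W.conj().T
assert np.allclose(U_H @ U_H, np.eye(2 * n))
H = U_H[:n, :n]                                   # ( <0|⊗I ) U_H ( |0>⊗I )
Pi0 = np.diag([1.0] * n + [0.0] * n)
H_P = 1j * (U_H @ Pi0 - Pi0 @ U_H)
assert np.allclose((H_P @ H_P)[:n, :n], np.eye(n) - H @ H)

# (ii) Lemma 36: with V = |0><0|⊗I + |1><1|⊗e^{iG}, the circuit
# (Had⊗I) V† (Y⊗I) V (Had⊗I) has sin(G) as its top-left block.
G = rng.normal(size=(n, n)); G = (G + G.T) / 2
V = np.block([[np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), expm(1j * G)]])
Had = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Y = np.array([[0, -1j], [1j, 0]])
Vt = (np.kron(Had, np.eye(n)) @ V.conj().T @ np.kron(Y, np.eye(n))
      @ V @ np.kron(Had, np.eye(n)))
assert np.allclose(Vt[:n, :n], sinm(G))
```

We now return to the polynomial approximation of \(\arcsin(x)\) required above.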
The fact that such a polynomial exists can be proven by the following lemma:

**Lemma 37** (Lemma 9 of [67]).: _Let \(\delta,\varepsilon\in(0,1/2]\), there exists an efficiently computable odd real polynomial \(P\in\mathbb{R}[x]\) of degree \(O\left(\frac{1}{\delta}\log\left(\frac{1}{\varepsilon}\right)\right)\) such that_ \[\sup_{x\in[-1+\delta,1-\delta]}\bigl{|}P(x)-\arcsin x\bigr{|}\leq\varepsilon.\]

For any Hamiltonian \(H\) with \(\left\|H\right\|\leq 1/2\), combining Lemma 36 and Lemma 37, we can obtain a block encoding of \(H\), given access to \(U=e^{iH}\).

**Lemma 38**.: _Suppose we are given \(U=e^{iH}\), where \(H\) is some Hamiltonian with \(\left\|H\right\|\leq 1/2\). Let \(\varepsilon\in(0,1/2]\). Then we can implement a \((1,2,\varepsilon)\)-block encoding of \(H\) with cost \(O(\log 1/\varepsilon)\)._

Proof.: From Lemma 36, we obtain a \((1,1,0)\)-block encoding of \(\sin(H)\). Now, using quantum singular value transformation with another ancilla qubit, we can implement the polynomial \(P(\sin(H))\) of Lemma 37 with \(\delta=1/2\), which is \(\varepsilon\)-close to \(H\), at cost \(O(\log(1/\varepsilon))\). Thus, we obtain a \((1,2,\varepsilon)\)-block encoding of \(H\).

One issue here is that Lemma 38 does not work when \(\left\|H\right\|=1\). This is because the polynomial in Lemma 37 only approximates \(\arcsin(x)\) in the domain \([-1+\delta,1-\delta]\), for some \(\delta>0\). Also, for discrete-time quantum walks it is important that the sub-normalization factor of the block-encoded matrix is one. This is because the polynomials \(p_{t,d}(x)\) and \(q_{t,d,d^{\prime}}(x)\) approximate \(x^{t}\) and \(e^{t(x-1)}\) (respectively) on the entire domain \([-1,1]\). However, for block-encoded matrices with normalization \(\alpha>1\), we would need to approximate these functions in \([-1/\alpha,1/\alpha]\). Using \(p_{t,d}(x/\alpha)\) or \(q_{t,d,d^{\prime}}(x/\alpha)\) would lead to an exponential overhead of \(\alpha^{t}\) in the cost. One way to circumvent this problem is to instead consider access to the continuous-time evolution operator \(U=e^{iH/2}\), where now \(\left\|H\right\|=1\). Using Lemma 38, we obtain a \((2,2,\varepsilon/2)\)-block encoding of \(H\) in cost \(O(\log(1/\varepsilon))\). At this stage, we can make use of the procedure of uniform singular value amplification [Theorem 17 of Ref. [46]], which amplifies all the singular values (in our case the eigenvalues) of a block-encoded matrix. This allows us to obtain a \((1,3,\varepsilon)\) block encoding of \(H\), as we prove next.

**Theorem 39** (From continuous-time quantum walks to discrete-time quantum walks).: _Suppose \(\varepsilon\in(0,1)\) and \(H\) is a Hermitian operator. Suppose we have access to \(U=e^{iH/2}\). Then there exists a procedure that implements a \((1,3,\varepsilon)\) - block encoding of \(H\) in cost \(O\left(\frac{1}{\varepsilon}\log(1/\varepsilon)\right)\)._

Proof.: From \(U\), we obtain \(U_{H}\), which is a \((2,2,\delta)\) - block encoding of \(H\) in cost \(O(\log(1/\delta))\), using Lemma 38, for any \(\delta\leq\varepsilon/2\). Then, we use the uniform singular value amplification theorem [Theorem 17 of [46]]. In Theorem 17 of [46], set \(\gamma=2(1-\varepsilon)\). This gives us a \((1,3,\varepsilon)\) - block encoding of \(H\) in cost \(O(\frac{1}{\varepsilon}\log(1/\varepsilon))\).

Thus, given access to a continuous-time quantum walk \(U=e^{iH/2}\), we can obtain a block encoding of \(H\), which, from Lemma 22, can be used to generate a discrete-time quantum walk.
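As a sanity check of this pipeline (ours; an exact eigendecomposition stands in for the QSVT polynomial of Lemma 37), one can recover \(H\) from \(U=e^{iH/2}\) numerically: the Lemma 36 circuit exposes \(\sin(H/2)\), and applying \(\arcsin\) and rescaling returns \(H\).

```python
import numpy as np
from scipy.linalg import sinm

rng = np.random.default_rng(5)
n = 4
H = rng.normal(size=(n, n)); H = (H + H.T) / 2
H /= np.linalg.norm(H, 2)            # ||H|| = 1, so ||H/2|| <= 1/2

S = sinm(H / 2)                      # what the Lemma 36 circuit block-encodes from e^{iH/2}
vals, vecs = np.linalg.eigh(S)
H_rec = 2 * (vecs @ np.diag(np.arcsin(vals)) @ vecs.T)
assert np.allclose(H_rec, H)         # exact since ||H/2|| <= 1/2 < pi/2
```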
Some issues still remain if we want to use this block encoding to fast-forward random walks. For instance, from Lemma 23 and Lemma 28, we can see that the precision \(\delta\) required in the block encoding of \(H\) is \(\widetilde{O}(\varepsilon^{2}/t)\). Theorem 39 implies that implementing such a block encoding of \(H\) from \(U\) would require \(\widetilde{O}(t/\varepsilon^{2})\) cost. Thus, the advantage of quantum fast-forwarding would be lost. In order to avoid this, we need a polynomial of degree \(t\) that approximates the monomial \((2x)^{t}\) in the domain \(\mathcal{D}:=\left[-\frac{1}{2}(1-1/t),\frac{1}{2}(1-1/t)\right]\). The existence of such a polynomial \(P(x)\) of degree \(O(t\log(1/\varepsilon))\) can indeed be guaranteed by Corollary 66 of [78]. For this, set \(f(x)=(2x)^{t}\), \(x_{0}=0\), \(r=1/2\) and \(\delta=1/t\) in the corollary. We have not been able to find an explicit construction of this polynomial and leave it open for future work. Expressing \(P(x)\) as a linear combination of Chebyshev polynomials, we would obtain an \(\varepsilon\)-approximation of \((2x)^{t}\) in \(\mathcal{D}\) (having \(\sqrt{t}\) terms). Given access to \(U=e^{iH/2}\), we obtain a \((2,2,\varepsilon)\) - block encoding of \(H\) using Lemma 38. We can then apply QSVT directly to implement the polynomial \(P(x)\) on this block-encoded Hamiltonian. This would allow us to fast-forward discrete-time random walks, starting from continuous-time quantum walks. So, from the discussion on quantum walks and random walks, we establish a connection between these frameworks (in both discrete and continuous-time). Fig. 3 provides a pictorial representation of these relationships.

## VIII Discussion and open problems

We considered the framework of Linear Combination of Unitaries: a quantum algorithmic paradigm that has been used to develop several quantum algorithms of practical interest. However, standard techniques to implement LCU are only implementable on fully fault-tolerant quantum computers, which are perhaps decades away. In this work, our motivation was to explore whether a broadly applicable framework such as LCU can be implemented on quantum devices that will be available in the intermediate term, as we move beyond the NISQ era. To this end, we provided three variants of LCU, considering the different intermediate-term hardware possibilities. We refer to them as "Analog LCU", "Single-Ancilla LCU", and "Ancilla-free LCU". We have demonstrated that each of these techniques can be applied to various useful problems. "Analog LCU" is a physically motivated, continuous-time analogue of the LCU framework. It requires coupling a primary system to a continuous-variable ancilla. We apply this framework to develop continuous-time quantum algorithms for ground state preparation and also for solving quantum linear systems. This framework can be seen as a way to exploit qubit-qumode interactions to perform meaningful computational tasks. Such hybrid systems are currently being engineered in a number of quantum technological platforms such as photonics, trapped ions, Circuit (or Cavity) QED and superconducting systems [3, 13, 14, 15, 16, 17, 18]. In order to experimentally implement the quantum algorithms we discuss, it is crucial to undertake a detailed comparative analysis of the resource requirements for each of these platforms. In the future, we plan to develop an experimental proposal in this regard.
Our work could lead to further research into whether other, simpler interactions can be engineered on hybrid platforms [79]. This would help bring quantum algorithmic frameworks closer to realization. The "Single-Ancilla LCU" makes repeated use of a short-depth quantum circuit and only a single ancilla qubit, to sample from quantum states of the form \(f(H)\ket{\psi_{0}}\), where \(f(H)\) can be well-approximated by a linear combination of unitaries. The motivation behind our randomized quantum algorithm is to minimize the quantum resources consumed and use classical repetitions as much as possible. This makes our approach appealing for early fault-tolerant quantum computers. We apply this method to sample from ground states of Hamiltonians and also from the solution of quantum linear systems. One direction of future research would be to estimate the explicit cost (circuit depth and gate complexity) of also incorporating specific quantum simulation techniques into our method. The cost of each run of the algorithm would then be determined by the underlying quantum simulation algorithm. For this, it would be important to consider techniques that are amenable to intermediate-term implementation, such as the various randomized Trotter-based approaches [9, 38, 40, 51]. It would also be interesting to compute the algorithmic performance for specific Hamiltonians that find applications in areas such as quantum chemistry [80, 81] and condensed matter physics [82]. The "Ancilla-free LCU" approach is useful when we are interested in the projection of \(f(H)\ket{\psi_{0}}\) in some subspace of interest. Then one can randomly sample the unitaries according to the distribution of the LCU coefficients. We have shown that it is applicable to the framework of quantum walks, in particular, to spatial search algorithms. This technique has been useful to connect discrete and continuous-time quantum walks with their classical counterparts. We believe that this method is more widely applicable to quantum optimization and sampling algorithms such as quantum simulated annealing [42] and quantum Metropolis sampling [43, 44]. Along the way, we have also developed other results. In particular, we have established a novel connection between discrete-time and continuous-time quantum walks. We have shown that a discrete-time quantum walk can be implemented via any block encoding of a Hamiltonian, which was also observed in [41]. Given access to a continuous-time quantum walk evolution operator, we proved, using QSVT [46], that one can obtain a block encoding of the Hamiltonian corresponding to the walk. This allowed us to generate a discrete-time quantum walk, given access to any continuous-time quantum walk evolution operator. As previously discussed, the discrete-time quantum walk we obtain cannot fast-forward random walks. However, we provide insights into how to resolve this issue. Specifically, we have shown that there exists a polynomial that can be implemented (via QSVT) to solve the problem completely. That is, starting from a continuous-time quantum walk we could build a discrete-time quantum walk unitary that can fast-forward random walks. However, we leave open the explicit construction of such a polynomial for future work. Overall, an immediate direction of research would be to leverage our techniques to develop quantum algorithms for other problems (that make use of LCU), tailored to intermediate-term quantum computers.
Problems that make use of quantum linear systems such as quantum linear regression [26] and differential equation solving [27, 28, 29] are natural candidates in this regard. Our work also opens avenues to investigate whether variants of other generic quantum algorithmic paradigms, such as QSVT [46, 47], can be designed so that they are implementable on intermediate-term quantum computers. ###### Acknowledgements. I thank Andras Gilyen, Jeremie Roland, Simon Apers, and Leonardo Novo for helpful discussions. I also thank Samson Wang and Mario Berta for providing feedback on an early version of this manuscript. I am grateful to Leonardo Novo and Hamed Mohammady for proofreading this manuscript. I acknowledge funding from the Science and Engineering Research Board, Department of Science and Technology (SERB-DST), Government of India under Grant No. SRG/2022/000354. I also acknowledge support from the Faculty Seed Grant, IIIT Hyderabad. Finally, I would like to thank Tanima Karmakar for acting as a sounding board during the writing of this manuscript.
2305.17794
Stability and the equality case in the B-theorem
In this paper, we show the stability of, and characterize the equality cases in, the strong B-inequality of Cordero-Erausquin, Fradelizi and Maurey \cite{B-conj}. As an application, we establish uniqueness of Bobkov's maximal Gaussian measure position from \cite{Bobkov-Mpos}.
Orli Herscovici, Galyna V. Livshyts, Liran Rotem, Alexander Volberg
2023-05-28T18:39:32Z
http://arxiv.org/abs/2305.17794v1
# Stability and the equality case in the B-theorem ###### Abstract. In this paper, we show the stability of, and characterize the equality cases in, the strong B-inequality of Cordero-Erausquin, Fradelizi and Maurey [11]. As an application, we establish uniqueness of Bobkov's maximal Gaussian measure position from [5]. ## 1. Introduction Let \(\gamma\) denote the standard Gaussian measure on \(\mathbb{R}^{n}\) with density \((2\pi)^{-n/2}e^{-\frac{|x|^{2}}{2}}\). We say that a set \(K\) in \(\mathbb{R}^{n}\) is _symmetric_ if \(K=-K\), i.e. for all \(x\in K\) we have \(-x\in K\). The B-theorem of Cordero-Erausquin, Fradelizi and Maurey [11] states that for every symmetric convex set \(K\subset\mathbb{R}^{n}\), and every \(a,b>0\), \[\gamma\left(\sqrt{ab}K\right)\geq\sqrt{\gamma(aK)\gamma(bK)}. \tag{1}\] In other words, \(\gamma(e^{t}K)\) is log-concave in \(t\) when \(K\) is a convex symmetric set. The inequality (1) was first conjectured by Latala in [19], who attributed the question to Banaszczyk. It is a strengthening of the inequality \[\gamma\left(\frac{a+b}{2}K\right)\geq\sqrt{\gamma(aK)\gamma(bK)},\] which holds with no symmetry assumption and follows from the Prekopa-Leindler inequality [25], [21] - see also Borell [6, 7] and Brascamp and Lieb [10]. More generally, for a vector \(x\in\mathbb{R}^{n}\), we will use the notation \[e^{x}K=\{(e^{x_{1}}y_{1},...,e^{x_{n}}y_{n}):\ (y_{1},...,y_{n})\in K\}. \tag{2}\] The strong version of the B-theorem then states that for any pair \(x,y\in\mathbb{R}^{n}\), \[\gamma\left(e^{\frac{x+y}{2}}K\right)\geq\sqrt{\gamma(e^{x}K)\gamma(e^{y}K)}. \tag{3}\] Nayar and Tkocz [24] showed that the assumption that \(K\) is symmetric is necessary and (1) does not necessarily hold if one replaces it with the weaker assumption \(0\in K\). A natural question, raised in [11], is what other measures satisfy an inequality such as (1) or (3). In dimension \(n=2\) it follows from the works of Boroczky, Lutwak, Yang and Zhang [9] and of Saroglou [26] that all even log-concave measures satisfy (1). For uniform measures on convex bodies this was verified independently by Livne Bar-On [4]. In arbitrary dimensions, Eskenazis, Nayar and Tkocz [13] proved that (3) holds for certain Gaussian mixtures, and in [12] it was proved in particular that all rotation-invariant log-concave measures satisfy (3). In this paper we will concentrate solely on the Gaussian case, and prove a stability version of this celebrated inequality. Recall that the in-radius \(r(K)\) of a symmetric convex body \(K\) is the largest number \(r>0\) such that \(rB_{2}^{n}\subset K\), where \(B_{2}^{n}\) denotes the unit Euclidean ball centered at the origin. Our theorem then reads: **Theorem 1.1**.: _Suppose \(0\leq a<b<\infty\) and let \(K\) be a symmetric convex body. Suppose that_ \[\gamma(\sqrt{ab}K)\leq\sqrt{\gamma(aK)\gamma(bK)}(1+\epsilon)\] _for small enough \(\epsilon>0\). 
Then either the in-radius \(r(K)\) satisfies_ \[r(K)\geq\frac{1}{b}\sqrt{\log\left(\frac{c\log(b/a)^{2}}{n^{2}\epsilon}\right)},\] _or_ \[r(K)\leq\frac{C\sqrt{n}}{a}\epsilon^{\frac{1}{n+1}}\left(\log(b/a)\right)^{-\frac{2}{n+1}}.\] **Remark 1.2**.: _We note that our estimate for \(r\) is essentially sharp: indeed, consider first the case when \(K\) is a symmetric strip \(S_{R}\) of width \(2R\), that is_ \[S_{R}=\{x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\,:\,x_{1}\in[-R,R]\}.\] _Note that_ \[\gamma(S_{R})=\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}e^{-\frac{s^{2}}{2}}ds=1-\frac{2}{\sqrt{2\pi}}\int_{R}^{\infty}e^{-\frac{s^{2}}{2}}ds.\] _Suppose \(0<a\leq b\leq\infty\) and there exists \(\epsilon>0\) such that_ \[\gamma(\sqrt{ab}S_{R})=\sqrt{\gamma(aS_{R})\gamma(bS_{R})}(1+\epsilon).\] _Relaxing the upper and lower bounds obtained by Szarek and Werner [29] for the Komatsu inequality, we have that_ \[\frac{1}{R+1}\,e^{-\frac{R^{2}}{2}}\leq\int_{R}^{\infty}e^{-\frac{s^{2}}{2}}ds\leq\frac{1}{R}\,e^{-\frac{R^{2}}{2}}.\] _Let \(F(t)=\frac{2}{\sqrt{2\pi}}\int_{tR}^{\infty}e^{-\frac{s^{2}}{2}}\,ds\) for \(t>0\). Then_ \[\epsilon=\frac{\gamma(S_{\sqrt{ab}R})}{\sqrt{\gamma(S_{aR})\gamma(S_{bR})}}-1=\frac{1-F(\sqrt{ab})}{\sqrt{(1-F(a))(1-F(b))}}-1,\] _and, since \(F(a)>F(\sqrt{ab})>F(b)\), we get_ \[\epsilon\leq\frac{1-F(\sqrt{ab})}{1-F(a)}-1\leq\frac{F(a)}{1-F(a)},\] _which leads to_ \[\frac{\epsilon}{1+\epsilon}\leq F(a)=\frac{2}{\sqrt{2\pi}}\int_{aR}^{\infty}e^{-\frac{s^{2}}{2}}ds\leq\frac{2}{\sqrt{2\pi}aR}\,e^{-\frac{a^{2}R^{2}}{2}}.\] _Thus,_ \[\frac{\epsilon}{1+\epsilon}\leq\frac{2}{\sqrt{2\pi}}\frac{1}{aR}e^{-\frac{a^{2}R^{2}}{2}},\] _or,_ \[R\leq C\,\sqrt{\log\left(1+\frac{1}{\epsilon}\right)}.\] _However, the upper estimate on \(r\) in Theorem 1.1 is likely not sharp: indeed, by considering \(K=rB_{2}^{n}\) to be a very small ball, we only get that \(r(K)\geq C\sqrt{\epsilon}\), where \(\epsilon\) is the corresponding deficit in the B-inequality._ Theorem 1.1 implies: **Corollary 1.3**.: _Suppose \(0\leq a<b<\infty\) and \(K\) is a symmetric convex set such that \(\gamma(\sqrt{ab}K)=\sqrt{\gamma(aK)\gamma(bK)}\). Then either \(K=\mathbb{R}^{n}\) or \(K\) has an empty interior._ Proof.: We can apply Theorem 1.1 with an arbitrary \(\epsilon>0\). Letting \(\epsilon\to 0\) we see that either \(r(K)=\infty\) so \(K=\mathbb{R}^{n},\) or \(r(K)=0\) so \(K\) has an empty interior. We also prove a stability version for the strong B-theorem. Given a convex body \(K\) and a vector \(x\in\partial K\), the outer unit normal to \(\partial K\) (i.e. the Gauss map) at \(x\) will be denoted by \(n_{x}=(n_{x}^{1},...,n_{x}^{n})\). It is well known (see e.g. [27]) that \(n_{x}\) is uniquely defined almost everywhere on \(\partial K\). Recall that for an \((n-1)\)-dimensional surface \(M,\)\(\gamma^{+}(M)\) denotes the Gaussian perimeter of \(M.\) Our theorem reads: **Theorem 1.4**.: _Fix parameters \(\delta,\alpha,\beta>0\). Fix \(x,y\in\mathbb{R}^{n}\) such that \(|e^{x}|\leq|e^{y}|\) (where we use the notation \(e^{x}=(e^{x_{1}},...,e^{x_{n}})\), as well as (2)). Let \(K\) be a symmetric convex body, and suppose that_ \[\gamma(e^{\frac{x+y}{2}}K)\leq\sqrt{\gamma(e^{x}K)\gamma(e^{y}K)}(1+\epsilon)\] _for small enough \(\epsilon>0\). 
Consider_ \[\sigma^{\delta}=\{i\in[n]:\,|x_{i}-y_{i}|\geq\delta\},\] _and let_ \[\Omega_{\delta,\alpha}(K)=\big\{x\in\partial K:\,\sum_{i\in\sigma^{\delta}}(n_{x}^{i})^{2}\geq\alpha\big\}.\] _Then either there exists a vector \(z\in[x,y]\) such that_ \[\gamma^{+}(\Omega_{\delta,\alpha}(e^{z}K))\leq\beta\gamma^{+}(\partial(e^{z}K)),\] _or_ \[r(K)\geq|e^{y}|^{-1}\sqrt{\log\frac{\delta^{2}\alpha\beta}{\epsilon n^{2}}},\] _or_ \[r(K)\leq C|e^{x}|^{-1}\sqrt{n}\epsilon^{\frac{1}{n+1}}\left(\delta^{2}\alpha\beta\right)^{-\frac{1}{n+1}}.\] As a corollary, we shall deduce: **Corollary 1.5**.: _Let \(x,y\in\mathbb{R}^{n}\), set \(\sigma_{x,y}=\{j\in[n]:\,x_{j}\neq y_{j}\}\), and let \(K\) be a symmetric convex set in \(\mathbb{R}^{n}.\) Then_ \[\gamma\left(e^{\frac{x+y}{2}}K\right)=\sqrt{\gamma\left(e^{x}K\right)\gamma\left(e^{y}K\right)},\] _if and only if either \(K\) has an empty interior, or \(K=\mathbb{R}^{n},\) or (more generally) \(K=K_{0}\times H_{x,y}^{\perp},\) with \(K_{0}\subset H_{x,y},\) where \(H_{x,y}=\{z\in\mathbb{R}^{n}:z_{j}=0\ \forall j\in\sigma_{x,y}\}.\)_ We apply Corollary 1.5 to show uniqueness of the Bobkov maximal Gaussian measure position (MGM from now on) for a convex body \(K.\) Recall from [5] that a symmetric convex body \(K\) is said to be in Bobkov's MGM position if for any volume preserving linear operator \(T\) on \(\mathbb{R}^{n},\) we have \(\gamma(K)\geq\gamma(TK).\) Bobkov showed that \(K\) is in the MGM position if and only if the restriction of the Gaussian measure to \(K\) is isotropic (recall that a measure is isotropic if its barycenter is at the origin, and the covariance matrix is proportional to the identity). Isotropicity often arises when the measure is placed in some optimizing position, see e.g. [1], [2], [3], as well as [15], [16]. It is a natural question: _is the Bobkov MGM position unique for a symmetric convex body?_ We answer this question in the affirmative: **Theorem 1.6**.: _Let \(K\) be a symmetric convex body. The expression \(\sup_{T}\gamma(TK)\), where the supremum runs over all linear volume preserving operators \(T\) on \(\mathbb{R}^{n}\), is attained for a unique \(T.\)_ Proof.: Without loss of generality, assume that \(K\) is in the Bobkov MGM position. Suppose by contradiction that there exists a non-identity volume preserving linear map \(T\) such that \(\gamma(TK)=\gamma(K)\). Then there exists a traceless matrix \(D\) such that \(T=e^{D}\) (see e.g. [3] for the details), and by the rotation-invariance of the Gaussian measure we may assume that \(D\) is diagonal. Let us now consider the function \(F:[0,1]\to\mathbb{R}\) given by \(F(t)=\gamma(e^{tD}K)\). On one hand, by the strong B-property of the Gaussian measure [11], \(F\) is log-concave on \([0,1].\) On the other hand, \(F(0)=F(1)\), and both \(0\) and \(1\) are maximum points of \(F\); therefore, equality must be attained in the inequality \(F(\frac{1}{2})\geq\sqrt{F(0)F(1)}.\) By Corollary 1.5, we get a contradiction with the assumption that \(K\) is a convex _body_ (that is, a convex compact set with non-empty interior). 
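As a numerical aside, the strip computation of Remark 1.2 is easy to reproduce (a sketch; the dilations \(a,b\) and the widths \(R\) are illustrative choices of ours), using that \(\gamma(sS_{R})\) equals the Gaussian measure of the interval \([-sR,sR]\):

```python
from math import erf, log, sqrt

# Deficit of the B-inequality for the strip S_R, as in Remark 1.2. Since
# gamma(s S_R) = gamma_1([-sR, sR]) = erf(sR / sqrt(2)), everything is 1-D.
F = lambda x: erf(x / sqrt(2.0))
a, b = 0.5, 2.0
for R in [1.0, 2.0, 3.0, 4.0]:
    eps = F(sqrt(a * b) * R) / sqrt(F(a * R) * F(b * R)) - 1.0
    print(f"R = {R}: eps = {eps:.3e}, sqrt(log(1 + 1/eps)) = "
          f"{sqrt(log(1.0 + 1.0 / eps)):.2f}")
```

For fixed \(a\), the printed quantity \(\sqrt{\log(1+1/\epsilon)}\) grows roughly linearly in \(R\), in line with the bound \(R\leq C\sqrt{\log(1+1/\epsilon)}\) of Remark 1.2.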
We remark that the above proof is inspired by the works of Artstein-Avidan, Katzin [2] and Artstein-Avidan, Putterman [3], where in particular the authors consider _maximal intersection position_: a symmetric convex body \(K\) is said to be in the maximal intersection position if for any volume preserving linear operator \(T\) on \(\mathbb{R}^{n}\), we have \(\mu(K)\geq\mu(TK),\) where \(\mu\) is the uniform measure on the centered Euclidean ball of the same volume as \(K.\) In both of the aforementioned papers, this position is viewed as part of different families of positions. It was conjectured in [2], and reiterated in [3], that the maximal intersection position is indeed unique, and they explained (along the lines of the argument outlined above) that this conjecture would follow from the fact that the strong B-property for the uniform measure on the ball does not have non-trivial equality cases. We expect some of our ideas to be useful for studying this question, and leave it for future research. In Section 2 we outline some preliminaries. In Section 3 we discuss some estimates concerning special functions related to the Gaussian measure. In Section 4 we outline the stability for the Gaussian Poincare inequality restricted to a convex set, which is a result similar to what was obtained in [23]; however, we prove stability in a stronger distance. In Section 5 we outline the stability in the "even version" of the Gaussian Poincare inequality restricted to a convex set, for quadratic forms - this corresponds to stability in the "local versions" of Theorems 1.1 and 1.4. Lastly, in Section 6 we prove Theorems 1.1 and 1.4 and Corollary 1.5. **Acknowledgements.** The second named author is supported by NSF DMS-1753260. The third named author is supported by ISF grant 1468/19 and BSF grant 2016050. The second and third authors are supported by NSF-BSF DMS-2247834. The fourth named author is supported by NSF DMS 2154402. The authors are grateful to ICERM for hospitality during the program "Harmonic Analysis and Convexity". ## 2. Preliminaries Given a convex set \(K\) in \(\mathbb{R}^{n}\) and the standard Gaussian measure \(\gamma\) on \(\mathbb{R}^{n}\), we shall use the notation \[\frac{1}{\gamma(K)}\int_{K}d\gamma=\fint_{K}d\gamma.\] We will denote by \(L^{2}(K,\gamma)\) the class of functions \(u:\mathbb{R}^{n}\to\mathbb{R}\) such that \(\int_{K}|u|^{2}d\gamma<\infty,\) with the normalized \(L^{2}\) norm \(\|u\|_{L^{2}(K,\gamma)}^{2}=\fint_{K}|u|^{2}d\gamma.\) Given a convex set \(K\) in \(\mathbb{R}^{n}\), for a point \(x\in\partial K\) we denote by \(n_{x}\) the outward unit normal at \(x\); the vector field \(n_{x}\) is uniquely defined almost everywhere on \(\partial K\). We say that \(K\) is of class \(C^{2}\) if its boundary is locally twice differentiable. In this case, \(n_{x}\) is well-defined for all \(x\in\partial K\). For a \(C^{2}\) convex set, consider the second fundamental form of \(K\) to be the matrix \(\mathrm{II}=\frac{\mathrm{d}n_{x}}{\mathrm{d}x}\) (with a "plus" because the normal is outer) acting on the tangent space at \(x\). The Gauss curvature at \(x\) is \(\det(\mathrm{II})\) and the mean curvature is \(\mathrm{tr}(\mathrm{II})\). 
We denote by \(L\) the Ornstein-Uhlenbeck operator \(L:C^{2}(\mathbb{R}^{n})\to C(\mathbb{R}^{n})\), given by \[Lu=\Delta u-\langle\nabla u,x\rangle.\] The operator \(L\) satisfies the following integration by parts identity whenever it makes sense (as follows immediately from the classical divergence theorem): \[\int_{K}vLu\,d\gamma=-\int_{K}\langle\nabla v,\nabla u\rangle d\gamma+\int_{\partial K}v\langle\nabla u,n_{x}\rangle d\gamma_{\partial K}.\] Here by \(\gamma_{\partial K}\) we mean the measure on \(\partial K\) with density \((2\pi)^{-n/2}e^{-\frac{|x|^{2}}{2}}\) with respect to \(H_{n-1}\), the \((n-1)\)-dimensional Hausdorff measure. We denote by \(W^{k,2}(K,\gamma)\) the Sobolev space of all functions \(u\) such that \(u\) has weak partial derivatives up to order \(k\), and all those partial derivatives (including \(u\) itself) belong to \(L^{2}(K,\gamma)\). We will use the notation \[\|u\|_{W^{1,2}(K,\gamma)}^{2}=\fint_{K}|\nabla u|^{2}d\gamma\] (even though strictly speaking this is not a norm on \(W^{1,2}(K,\gamma)\), since \(\|u\|_{W^{1,2}(K,\gamma)}^{2}=0\) for constant functions). Recall that the trace operator is a continuous linear operator \[\mathrm{TR}:W^{1,2}(K,\gamma)\to L^{2}(\partial K,\gamma)\] such that for every \(u\in C^{1}(K)\), continuous up to the boundary, we have \[\mathrm{TR}(u)=u|_{\partial K}.\] We shall use the informal notation \(\int_{\partial K}ud\gamma\) to mean \(\int_{\partial K}\mathrm{TR}(u)d\gamma\). Similarly, we use the notation \(\int_{\partial K}\langle\nabla u,n_{x}\rangle d\gamma\) to mean \(\int_{\partial K}\langle\mathrm{TR}(\nabla u),n_{x}\rangle d\gamma\), where \(\mathrm{TR}(\nabla u)\) is the vector formed by the trace functions of the weak first partial derivatives of \(u\). We shall also use the notation \(\nabla\), \(\Delta\) and so on to denote the appropriate quantities in the sense of weak derivatives. We will also use the following notation for the Gaussian perimeter: \[\gamma^{+}(\partial K)=\int_{\partial K}d\gamma.\] The following result appears in [23]; we sketch its proof for completeness. **Theorem 2.1** (Gaussian Trace Theorem for convex sets containing the origin).: _Let \(K\) be a convex domain such that \(rB_{2}^{n}\subset K\) for some \(r>0\). Fix \(g\in W^{1,2}(K,\gamma)\). Then_ \[\int_{\partial K}g^{2}d\gamma_{\partial K}\leq\frac{1}{r}\int_{K}(ng^{2}+|\nabla g|^{2})d\gamma.\] Proof.: We use the estimate \(\langle x,n_{x}\rangle\geq r\), incorporate a trick similar to the ones from [14], [18], and use the divergence theorem. We get \[\int_{\partial K}g^{2}d\gamma_{\partial K}\leq\frac{1}{r}\int_{\partial K}\langle g^{2}x,n_{x}\rangle d\gamma_{\partial K}=\frac{1}{r}\int_{K}(\mathrm{div}(g^{2}x)-g^{2}|x|^{2})d\gamma.\] Note that \[\mathrm{div}(g^{2}x)=ng^{2}+2g\langle\nabla g,x\rangle\leq ng^{2}+|\nabla g|^{2}+g^{2}|x|^{2}.\] Combining the above yields the result. ## 3. Estimates on the special functions which concern the rate of the stability estimate In what follows, \(C,c,C_{1}\) etc denote positive absolute constants that do not depend on the dimension and whose value may change from line to line. Recall that the in-radius \(r(K)\) of a convex set \(K\subset\mathbb{R}^{n}\) is the largest number \(r>0\) such that \(rB_{2}^{n}\subset K\). 
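Before turning to the estimates, here is a quick numerical check of the integration-by-parts identity for \(L\) from Section 2 (a one-dimensional sketch; the test functions \(u,v\) and the interval \(K=[-1,1]\) are arbitrary choices of ours):

```python
import numpy as np
from scipy.integrate import quad

# 1-D check of  int_K v Lu dgamma = -int_K u'v' dgamma + boundary term
# for Lu = u'' - x u' on K = [-1, 1]; u, v are arbitrary smooth test functions.
phi = lambda x: np.exp(-x * x / 2) / np.sqrt(2 * np.pi)   # Gaussian density
u, du, ddu = np.sin, np.cos, lambda x: -np.sin(x)
v, dv = lambda x: x ** 2 + x, lambda x: 2 * x + 1

lhs, _ = quad(lambda x: v(x) * (ddu(x) - x * du(x)) * phi(x), -1, 1)
bulk, _ = quad(lambda x: dv(x) * du(x) * phi(x), -1, 1)
bdry = v(1) * du(1) * phi(1) - v(-1) * du(-1) * phi(-1)   # v <grad u, n> at +/-1
print(abs(lhs - (-bulk + bdry)))   # close to machine precision
```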
We also denote by \[\gamma^{+}\left(\partial K\right)=\int_{\partial K}\left(2\pi\right)^{-n/2}e^ {-|x|^{2}/2}\mathrm{d}H_{n-1}\] the Gaussian surface area of \(K\). The main goal of this section is to prove the following technical estimate: **Proposition 3.1**.: _Let \(K\) be a symmetric convex body in \(\mathbb{R}^{n}\) with in-radius \(r=r(K)>0\). Assume that for \(\delta<c_{0}\) we have_ \[\frac{\gamma(K)}{\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma}+\frac{ \gamma(K)}{r\gamma^{+}(\partial K)}\geq\frac{1}{\delta}\] _Then either \(r\geq\sqrt{\log\frac{1}{\delta}}\) or \(r\leq C\sqrt{n}\delta^{\frac{1}{n+1}}\)._ For the proof we need several lemmas. First, we will need the Gaussian isoperimetric inequality: Let \(\Phi\left(x\right)=\gamma\left(\left(-\infty,x\right]\right)\) denote the CDF of a standard normal random variable, and let \(\Phi^{-1}:\left[0,1\right]\rightarrow\mathbb{R}\) denote the inverse function. Define the isoperimetric profile \(I:\left[0,1\right]\rightarrow\mathbb{R}\) by \(I(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{\Phi^{-1}\left(x\right)^{2}}{2}}\). Then the Gaussian isoperimetric inequality ([8], [28]) states that for every \(K\subset\mathbb{R}^{n}\) we have \(\gamma^{+}\left(\partial K\right)\geq I\left(\gamma(K)\right)\). Note that \(I\) is concave and symmetric around \(x=\frac{1}{2}\) where it attains its maximum. We first give a lower bound on \(\gamma^{+}\left(\partial K\right)\) in terms of \(r\) instead of the measure \(\gamma(K)\): **Lemma 3.2**.: _Let \(K\) be a symmetric convex body in \(\mathbb{R}^{n}\) with in-radius \(r>0\)._ 1. _If_ \(\gamma(K)\geq\frac{1}{2}\) _then_ \(\gamma^{+}\left(\partial K\right)\geq\frac{1}{\sqrt{2\pi}}e^{-r^{2}/2}\)_._ 2. _If_ \(\gamma(K)\leq\frac{1}{2}\) _then_ \(\gamma^{+}\left(\partial K\right)\geq\left(\frac{c\pi}{\sqrt{n}}\right)^{n}e^{ -r^{2}/2}\)_._ Proof.: (1) By the definition of \(r\) we know that \(K\) is contained in a strip \[S=\left\{x:\ |\langle x,\theta\rangle|\leq r\right\}\] for some \(\theta\in S^{n-1}\). We therefore also have \(K\subset H\) where \(H=\left\{x:\ \langle x,\theta\rangle\leq r\right\}\), and therefore \(\frac{1}{2}\leq\gamma(K)\leq\gamma\left(H\right)=\Phi(r)\). Since \(I\) is decreasing on \(\left[\frac{1}{2},1\right]\) it follows from the Gaussian isoperimetric inequality that \(\gamma^{+}\left(\partial K\right)\geq I\left(\Phi(r)\right)=\frac{1}{\sqrt{2 \pi}}e^{-r^{2}/2}\). (2) By the concavity of \(I\) we have for all \(0\leq x\leq\frac{1}{2}\) \[I(x)=I\left(2x\cdot\frac{1}{2}+\left(1-2x\right)\cdot 0\right)\geq 2x\cdot I \left(\frac{1}{2}\right)=\sqrt{\frac{2}{\pi}}x.\] Therefore, by the Gaussian isoperimetric inequality \[\gamma^{+}\left(\partial K\right)\geq I\left(\gamma(K)\right)\geq\sqrt{\frac{ 2}{\pi}}\cdot\gamma(K)\geq\sqrt{\frac{2}{\pi}}\cdot\gamma\left(rB_{2}^{n} \right).\] Since the density of \(\gamma\) on \(rB_{2}^{n}\) is bounded from below by \(\frac{1}{\left(2\pi\right)^{n/2}}e^{-r^{2}/2}\) we have \[\gamma\left(rB_{2}^{n}\right)\geq\frac{1}{\left(2\pi\right)^{n/2}}e^{-r^{2}/2} \cdot r^{n}\omega_{n},\] where \(\omega_{n}\) denotes the volume of \(B_{2}^{n}\). Since \(\omega_{n}\geq\left(\frac{c}{\sqrt{n}}\right)^{n}\) we get \[\gamma^{+}\left(\partial K\right)\geq\left(\frac{cr}{\sqrt{n}}\right)^{n}e^{- r^{2}/2}.\] as claimed. Next, we need some rough estimates for Gaussian integrals over balls. Sharper estimates are definitely known, but this lemma will suffice for our needs: **Lemma 3.3**.: 1. 
_We have_ \(\gamma\left(2\sqrt{n}B_{2}^{n}\right)\geq\frac{3}{4}\) _and_ \(\int_{2\sqrt{n}B_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq cn\)_._ 2. _For every_ \(r>0\) _we have_ \(\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq\left(\frac{c}{\sqrt{n}}\right)^{n}r^{n+2}e^{-r^{2}/2}\)_._ Proof.: (1) We use simple probabilistic bounds. Let \(Z=\left(Z_{1},\ldots,Z_{n}\right)\) denote a standard normal random vector in \(\mathbb{R}^{n}\). Then \[\mathbb{E}\left|Z\right|^{2}=n\mathbb{E}Z_{1}^{2}=n\] \[\operatorname{Var}\left|Z\right|^{2}=n\operatorname{Var}Z_{1}^{2}=2n.\] It follows by Markov's inequality that \[\mathbb{P}\left(\left|Z\right|\geq 2\sqrt{n}\right)=\mathbb{P}\left(\left|Z\right|^{2}\geq 4n\right)\leq\frac{n}{4n}=\frac{1}{4},\] or \(\gamma\left(2\sqrt{n}B_{2}^{n}\right)\geq\frac{3}{4}\) as claimed. By Chebyshev's inequality we have, for \(n\geq 15\), \[\mathbb{P}\left(\left|Z\right|<\frac{\sqrt{n}}{2}\right)=\mathbb{P}\left(\left|Z\right|^{2}<\frac{n}{4}\right)\leq\mathbb{P}\left(\left|Z\right|^{2}<n-2\sqrt{2n}\right)\leq\frac{1}{4},\] and therefore \[\gamma\left(2\sqrt{n}B_{2}^{n}\setminus\frac{\sqrt{n}}{2}B_{2}^{n}\right)=\mathbb{P}\left(\left|Z\right|\leq 2\sqrt{n}\right)-\mathbb{P}\left(\left|Z\right|\leq\frac{\sqrt{n}}{2}\right)\geq\frac{3}{4}-\frac{1}{4}=\frac{1}{2}.\] It follows that \[\int_{2\sqrt{n}B_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq\int_{2\sqrt{n}B_{2}^{n}\setminus\frac{\sqrt{n}}{2}B_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq\frac{n}{4}\gamma\left(2\sqrt{n}B_{2}^{n}\setminus\frac{\sqrt{n}}{2}B_{2}^{n}\right)=\frac{n}{8},\] and we choose \(c>0\) so that the estimate will also hold for \(1\leq n\leq 14\). (2) The density of \(\gamma\) on \(rB_{2}^{n}\) is bounded from below by \(\frac{1}{\left(2\pi\right)^{n/2}}e^{-r^{2}/2}\) and so \[\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq\frac{1}{\left(2\pi\right)^{n/2}}e^{-r^{2}/2}\cdot\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}x=\frac{1}{\left(2\pi\right)^{n/2}}e^{-r^{2}/2}\cdot n\omega_{n}\cdot\frac{r^{n+2}}{n+2}.\] Again we use \(\omega_{n}\geq\left(\frac{c}{\sqrt{n}}\right)^{n}\) to get \(\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq\left(\frac{c}{\sqrt{n}}\right)^{n}r^{n+2}e^{-r^{2}/2}\). Now we can prove Proposition 3.1. Proof of Proposition 3.1.: Under the assumption of the proposition we have either \(\frac{\gamma\left(K\right)}{\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma}\geq\frac{1}{2\delta}\) or \(\frac{\gamma\left(K\right)}{r\gamma^{+}\left(\partial K\right)}\geq\frac{1}{2\delta}\). Assume first that \(\frac{\gamma(K)}{\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma}\geq\frac{1}{2\delta}\). For \(\delta\) small enough we must have \(r\leq 2\sqrt{n}\): if this is not the case then by Lemma 3.3(1) we have \[2\delta\geq 2\delta\gamma(K)\geq\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq\int_{2\sqrt{n}B_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\geq cn\geq c,\] which is clearly impossible for \(\delta\) small enough. Next, as in Lemma 3.2 we use the fact that \(K\) is contained in a strip \(S\) of width \(2r\) and hence \(\gamma(K)\leq\gamma(S)\leq Cr.\) Using Lemma 3.3(2) we obtain \[\left(\frac{c}{\sqrt{n}}\right)^{n}r^{n+2}e^{-r^{2}/2}\leq\int_{rB_{2}^{n}}\left|x\right|^{2}\mathrm{d}\gamma\leq 2\delta\cdot\gamma(K)\leq Cr\delta,\] and since \(r\leq 2\sqrt{n}\) we get \[\left(\frac{c}{\sqrt{n}}\right)^{n}r^{n+1}e^{-2n}\leq C\delta,\] or \(r\leq C\sqrt{n}\delta^{\frac{1}{n+1}}\) as we claimed. 
Finally, assume that \(\frac{\gamma(K)}{r\gamma^{+}(\partial K)}\geq\frac{1}{2\delta}\). Using again the fact that \(\gamma(K)\leq Cr\) we obtain \[\gamma^{+}\left(\partial K\right)\leq\frac{2\delta}{r}\gamma(K)=C\delta.\] If \(\gamma\left(K\right)\geq\frac{1}{2}\) we see from Lemma 3.2(1) that \(e^{-r^{2}/2}\leq C\delta\leq\sqrt{\delta}\) (assuming we take \(\delta\leq\frac{1}{C^{2}}\)), and then \(r\geq\sqrt{\log\frac{1}{\delta}}\). If on the other hand \(\gamma(K)\leq\frac{1}{2}\) we see from Lemma 3.2(2) that \[\left(\frac{cr}{\sqrt{n}}\right)^{n}e^{-r^{2}/2}\leq C\delta.\] However, again in this case it follows from Lemma 3.3(1) that \(r\leq 2\sqrt{n}\), so we have \(\left(\frac{cr}{\sqrt{n}}\right)^{n}e^{-2n}\leq C\delta\) or \(r\leq C\sqrt{n}\delta^{\frac{1}{n}}\). **Remark 3.4**.: _It is an interesting question - what (symmetric) convex set in \(\mathbb{R}^{n}\) has the smallest Gaussian perimeter if the largest ball centered at the origin which is contained in this set has radius \(r\)._ _In the non-symmetric case, it is natural to conjecture that the answer is the ball for smaller \(r\) and the half-space for larger \(r\). In fact, if \(\gamma(rB_{2}^{n})\geq\frac{1}{2}\) then indeed one can prove using the Gaussian isoperimetric inequality ([8], [28]) that the half-space minimizes the perimeter for a fixed \(r\)._ _In the symmetric case, it is natural to conjecture that the answer is the ball for smaller \(r\) and the symmetric strip for larger \(r.\) Once again, for large enough \(r\) this follows from the result of Latala and Oleszkiewicz [20], who showed that \(r(K)\gamma^{+}(\partial K)\) is minimized, among symmetric convex sets of a given Gaussian measure, when \(K\) is a symmetric strip. In other words,_ \[\gamma^{+}(\partial K)\geq\frac{2J_{0}^{-1}(\gamma(K))e^{-\frac{J_{0}^{-1}(\gamma(K))^{2}}{2}}}{\sqrt{2\pi}r}\] _where \(J_{0}(a)=2\Phi(a)-1\) denotes the Gaussian measure of a symmetric strip of width \(2a\). Let \(\tilde{R}_{n}\) be the radius of the ball whose Gaussian measure is \(J_{0}(1)\), and suppose that \(r\geq\tilde{R}_{n}.\) Then \(J_{0}(1)\leq\gamma(K)\leq J_{0}(r)\), and noting that \(J_{0}^{-1}(a)e^{-\frac{J_{0}^{-1}(a)^{2}}{2}}\) is decreasing for \(a\geq J_{0}(1),\) we conclude that_ \[\gamma^{+}(\partial K)\geq\frac{2re^{-\frac{r^{2}}{2}}}{\sqrt{2\pi}r}=\frac{2}{\sqrt{2\pi}}e^{-\frac{r^{2}}{2}},\] _and the inequality is sharp when \(K\) is a strip. We could have used this estimate, which improves Lemma 3.2, but the sharper result would only affect our outcome in terms of the value of the absolute constants, which we are not tracking._ ## 4. Stability in the Poincare inequality on the convex set The Gaussian Poincare inequality on a convex set (which follows e.g. from the Theorem of Brascamp and Lieb [10]) states that \[\gamma(K)\int_{K}f^{2}d\gamma-\left(\int_{K}fd\gamma\right)^{2}\leq\gamma(K)\int_{K}|\nabla f|^{2}d\gamma. \tag{4}\] The result below is similar to Theorem 1.5 in [23], although the \(W^{1,2}\) norm is replaced there by the \(L^{1}\) norm. 
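For a quick illustration of (4) in dimension one, the following sketch (with an arbitrary test function and interval of our choosing, not taken from [23]) compares the variance with the normalized Dirichlet energy:

```python
import numpy as np
from scipy.integrate import quad

# 1-D check of (4) on K = [-1.5, 1.5]: the variance of f under the restricted
# Gaussian measure is at most the normalized Dirichlet energy of f.
phi = lambda x: np.exp(-x * x / 2) / np.sqrt(2 * np.pi)
f, df = lambda x: np.sin(2 * x) + x, lambda x: 2 * np.cos(2 * x) + 1

gK, _ = quad(phi, -1.5, 1.5)
m1, _ = quad(lambda x: f(x) * phi(x), -1.5, 1.5)
m2, _ = quad(lambda x: f(x) ** 2 * phi(x), -1.5, 1.5)
e2, _ = quad(lambda x: df(x) ** 2 * phi(x), -1.5, 1.5)
print(m2 / gK - (m1 / gK) ** 2, "<=", e2 / gK)
```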
Using the \(W^{1,2}\) norm is crucial for us, and thus for completeness, we outline the full proof here: **Theorem 4.1** (the quantitative stability in the Gaussian Poincare).: _Suppose that for a convex set \(K\) containing \(rB_{2}^{n}\), a function \(f\in W^{1,2}(K)\cap C^{1}(K),\) and for \(\epsilon>0,\) we have_ \[\fint_{K}f^{2}d\gamma-\left(\fint_{K}fd\gamma\right)^{2}\geq\fint_{K}|\nabla f|^{2}d\gamma-\epsilon.\] _Then there exists a vector \(\theta\in\mathbb{R}^{n}\) (possibly zero), which depends only on \(K\) and \(f\), such that_ * \(\int_{\partial K}\langle\theta,n_{x}\rangle^{2}d\gamma_{\partial K}\leq\frac{2(n+1)\gamma(K)\epsilon}{r};\) * \(\|f-\langle x,\theta\rangle-\fint_{K}fd\gamma\|_{W^{1,2}(K,\gamma)}^{2}\leq 4\epsilon.\) Proof.: We first assume that the boundary of \(K\) is of class \(C^{\infty}\). Without loss of generality we may also assume that \(\int_{K}fd\gamma=0\). Consider the function \(u\in W^{2,2}(K)\cap C^{2}(K)\) such that \(Lu=f\) and \(\langle\nabla u,n_{x}\rangle=0\) on \(\partial K\) (see e.g. [23] for existence and regularity). By Bochner's formula (see e.g. [17]), we write \[\begin{split}\int_{K}f^{2}d\gamma=&-\int_{K}\left(2\langle\nabla f,\nabla u\rangle+|\nabla u|^{2}\right)d\gamma\\ &-\int_{K}\|\nabla^{2}u\|^{2}d\gamma-\int_{\partial K}\langle\operatorname{II}\nabla_{\partial K}u,\nabla_{\partial K}u\rangle\,d\gamma_{\partial K}(x).\end{split} \tag{5}\] Combining (5) with Cauchy's inequality \[\int_{K}\left(2\langle\nabla f,\nabla u\rangle+|\nabla u|^{2}\right)d\gamma\geq-\int_{K}|\nabla f|^{2}d\gamma, \tag{6}\] and an application of the convexity of \(K\) \[\int_{\partial K}\langle\operatorname{II}\nabla_{\partial K}u,\nabla_{\partial K}u\rangle\,d\gamma_{\partial K}\geq 0,\] we get \[\fint_{K}f^{2}d\gamma\leq\fint_{K}|\nabla f|^{2}d\gamma-\fint_{K}\|\nabla^{2}u\|^{2}d\gamma. \tag{7}\] Under the assumption of the theorem, this yields \[\fint_{K}\|\nabla^{2}u\|^{2}d\gamma\leq\epsilon. \tag{8}\] Therefore, there exists a vector \(\theta\in\mathbb{R}^{n}\) such that \[u=-\langle x,\theta\rangle+v\] with \(\fint_{K}\nabla vd\gamma=0\) and \(\fint_{K}\|\nabla^{2}v\|^{2}d\gamma\leq\epsilon.\) By the Poincare inequality (4), this implies that \(\fint_{K}|\nabla v|^{2}d\gamma\leq\epsilon\). By our choice of \(u\), we have \[\langle\nabla v,n_{x}\rangle=\langle\nabla u,n_{x}\rangle+\langle\theta,n_{x}\rangle=\langle\theta,n_{x}\rangle.\] In order to show the first assertion, we apply the Trace Theorem 2.1 to all the partial derivatives of \(v\) and sum up: \[\frac{1}{\gamma(K)}\int_{\partial K}|\langle\nabla v,n_{x}\rangle|^{2}d\gamma_{\partial K}\leq\frac{1}{\gamma(K)}\int_{\partial K}|\nabla v|^{2}d\gamma_{\partial K}\leq\] \[\frac{n+1}{r}\fint_{K}(|\nabla v|^{2}+\|\nabla^{2}v\|^{2})d\gamma\leq\frac{2(n+1)}{r}\epsilon.\] The desired inequality follows if we remember that \(\langle\theta,n_{x}\rangle=\langle\nabla v,n_{x}\rangle\), which proves the first assertion when \(K\) is smooth. In order to show the second assertion, note that we used (6) while proving the Poincare inequality, and therefore, the assumption of the theorem gives \[\fint_{K}\left(2\langle\nabla f,\nabla u\rangle+|\nabla u|^{2}\right)d\gamma-\epsilon\leq-\fint_{K}|\nabla f|^{2}d\gamma,\] which amounts to \[\fint_{K}|\nabla f+\nabla u|^{2}d\gamma\leq\epsilon. 
\tag{9}\] We write \[\|f-\langle x,\theta\rangle\|_{W^{1,2}(K,\gamma)}^{2}=\fint_{K}|\nabla f-\theta|^{2}d\gamma\leq\] \[2\fint_{K}|\nabla f+\nabla u|^{2}d\gamma+2\fint_{K}|\nabla v|^{2}d\gamma\leq 4\epsilon,\] where we used the properties of \(v\) together with (9), and the conclusion follows. Next, assume \(K\) is a general compact convex body. Choose a sequence \(\{K_{i}\}\) of \(C^{\infty}\)-smooth convex bodies such that \(K_{i}\subset K\) and \(K_{i}\to K\) in the Hausdorff distance. Clearly \[\fint_{K_{i}}f^{2}d\gamma-\left(\fint_{K_{i}}fd\gamma\right)^{2}\geq\fint_{K_{i}}|\nabla f|^{2}d\gamma-\epsilon_{i}\] for \(\epsilon_{i}\to\epsilon\), and \(K_{i}\supseteq r_{i}B_{2}^{n}\) for \(r_{i}\to r\). Let \(\theta_{i}\in\mathbb{R}^{n}\) be the vector constructed in the proof for the body \(K_{i}\). Since \[|\theta_{i}|^{2}\leq 2\left(\fint_{K_{i}}|\nabla f-\theta_{i}|^{2}\,d\gamma+\fint_{K_{i}}|\nabla f|^{2}\,d\gamma\right)\leq 8\epsilon_{i}+2\fint_{K_{i}}|\nabla f|^{2}\,d\gamma\to 8\epsilon+2\fint_{K}|\nabla f|^{2}\,d\gamma\] it follows that \(\{\theta_{i}\}\) is bounded. Therefore by passing to a subsequence we may assume without loss of generality that \(\theta_{i}\to\theta\in\mathbb{R}^{n}\). The conclusion \[\left\|f-\langle x,\theta\rangle-\fint_{K}fd\gamma\right\|_{W^{1,2}(K,\gamma)}^{2}\leq 4\epsilon\] now follows by continuity. For the other conclusion, we note that for a fixed \(\eta\in\mathbb{R}^{n}\) the convergence \[\int_{\partial K_{i}}\left\langle\eta,n_{K_{i},x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K_{i}}\to\int_{\partial K}\left\langle\eta,n_{K,x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\] follows e.g. from Proposition A.3 of [22]. Using this fact it is now straightforward to deduce that \[\int_{\partial K}\left\langle\theta,n_{K,x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}=\lim_{i\to\infty}\int_{\partial K_{i}}\left\langle\theta_{i},n_{K_{i},x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K_{i}}\leq\frac{2(n+1)\gamma(K)\epsilon}{r}\] as claimed. Finally, if \(K\) is not compact we approximate it by the bodies \(K_{m}=K\cap(mB_{2}^{n})\) for \(m=1,2,3,...\). This time the convergence \[\int_{\partial K_{m}}\left\langle\eta,n_{K_{m},x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K_{m}}\to\int_{\partial K}\left\langle\eta,n_{K,x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\] follows from the fact that on \(\partial K_{m}\cap\partial K\) we have \(n_{K_{m},x}=n_{K,x}\) almost everywhere, and the contribution of the integral on \(\partial K_{m}\setminus\partial K\subset m\mathbb{S}^{n-1}\) tends to zero as \(m\to\infty\). The rest of the argument is the same as before. ## 5. Stability in the "symmetric" Gaussian Poincare inequality for quadratic functions The main result of this section is stability (in some partial cases) of the "symmetric" Gaussian Poincare inequality due to Cordero-Erausquin, Fradelizi and Maurey: if \(f\) is an even function and \(K\) is a symmetric convex set, then \[\fint_{K}f^{2}d\gamma-\left(\fint_{K}fd\gamma\right)^{2}\leq\frac{1}{2}\fint_{K}|\nabla f|^{2}d\gamma.\] 
**Lemma 5.1**.: _Let \(K\) be a symmetric convex body such that \(K\supseteq rB_{2}^{n}\). Assume an odd function \(f:\mathbb{R}^{n}\to\mathbb{R}\) satisfies_ \[\fint_{K}f^{2}\,\mathrm{d}\gamma\geq\fint_{K}|\nabla f|^{2}\,\mathrm{d}\gamma-\epsilon\] _as well as_ \[\|f-\left\langle x,\eta\right\rangle\|_{L^{2}(K,\gamma)}^{2}<\epsilon\] _for some \(\eta\in\mathbb{R}^{n}\) and \(\epsilon>0\). Then_ \[\int_{\partial K}\left\langle\eta,n_{x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\leq C\left(\frac{\gamma^{+}(\partial K)}{\int_{rB_{2}^{n}}|x|^{2}\mathrm{d}\gamma}+\frac{1}{r}\right)n\epsilon\gamma(K).\] Proof.: By Theorem 4.1 there exists \(\theta\in\mathbb{R}^{n}\) such that the conclusions (1) and (2) hold. Since \(\fint_{K}fd\gamma=0\) we have \(\left\|f-\left\langle x,\theta\right\rangle\right\|_{L^{2}(K,\gamma)}^{2}\leq 4\epsilon\). Also, by our assumption we have \(\left\|f-\left\langle x,\eta\right\rangle\right\|_{L^{2}(K,\gamma)}^{2}\leq\epsilon\). Thus by the triangle inequality, we conclude \[\frac{1}{\gamma(K)}\int_{K}\left\langle x,\theta-\eta\right\rangle^{2}\mathrm{d}\gamma=\left\|\left\langle x,\theta\right\rangle-\left\langle x,\eta\right\rangle\right\|_{L^{2}(K,\gamma)}^{2}\leq\] \[\left\|\left\langle x,\theta\right\rangle-f\right\|_{L^{2}(K,\gamma)}^{2}+\left\|f-\left\langle x,\eta\right\rangle\right\|_{L^{2}(K,\gamma)}^{2}\leq(2\cdot 4+2)\epsilon=10\epsilon. \tag{10}\] Using the fact that \(K\supseteq rB_{2}^{n}\) we have from (10): \[\left|\theta-\eta\right|^{2}\cdot\frac{1}{n}\int_{rB_{2}^{n}}|x|^{2}d\gamma=\int_{rB_{2}^{n}}\left\langle x,\theta-\eta\right\rangle^{2}\mathrm{d}\gamma\leq\int_{K}\left\langle x,\theta-\eta\right\rangle^{2}\mathrm{d}\gamma\leq 10\gamma(K)\epsilon,\] and thus \[\left|\theta-\eta\right|^{2}\leq 10\frac{n\gamma(K)}{\int_{rB_{2}^{n}}|x|^{2}d\gamma}\epsilon. \tag{11}\] It follows from (11) and the conclusion (1) of Theorem 4.1: \[\int_{\partial K}\left\langle\eta,n_{x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\leq 2\left(\int_{\partial K}\left\langle\eta-\theta,n_{x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}+\int_{\partial K}\left\langle\theta,n_{x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\right)\] \[\leq 2\cdot\left(\frac{10n\gamma(K)}{\int_{rB_{2}^{n}}|x|^{2}d\gamma}\epsilon\cdot\gamma^{+}\left(\partial K\right)+\frac{2(n+1)}{r}\gamma(K)\epsilon\right)\] \[\leq\left(\frac{20\gamma^{+}(\partial K)}{\int_{rB_{2}^{n}}|x|^{2}d\gamma}+\frac{8}{r}\right)\gamma(K)n\epsilon\] Lemma 5.1 allows us to deduce: **Theorem 5.2**.: _Let \(K\) be a symmetric convex set with the in-radius \(r\). Let \(T\) be a positive definite matrix with columns \(t_{i}=Te_{i},\)\(i=1,...,n,\) and let \(s\geq 0\) denote the smallest eigenvalue of \(T\). 
Assume that_ \[\fint_{K}\left\langle Tx,x\right\rangle^{2}d\gamma-\left(\fint_{K}\left\langle Tx,x\right\rangle d\gamma\right)^{2}\geq 2\fint_{K}\left|Tx\right|^{2}\mathrm{d}\gamma-\epsilon\] _for small enough \(\epsilon>0\). Then for every \(i=1,...,n,\) we have_ \[\int_{\partial K}\left\langle t_{i},n_{x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\leq C\left(\frac{\gamma^{+}(\partial K)}{\int_{rB_{2}^{n}}|x|^{2}d\gamma}+\frac{1}{r}\right)n^{2}\epsilon\gamma(K).\] _Therefore, if \(s>0\) and \(\epsilon<c(\frac{s}{n})^{2}\) then either \(r\geq\sqrt{\log\frac{cs^{2}}{n^{2}\epsilon}}\), or \(r\leq C\sqrt{n}\left(\frac{\epsilon}{s^{2}}\right)^{\frac{1}{n+1}}.\)_ Proof.: Using the same approximation argument as in Theorem 4.1, we may assume \(\partial K\) is \(C^{\infty}\)-smooth. Write \(f=\left\langle Tx,x\right\rangle+c\) such that \(\fint_{K}f\mathrm{d}\gamma=0\). Consider the function \(u\in W^{2,2}(K)\cap C^{2}(K)\) such that \(Lu=f\) and \(\left\langle\nabla u,n_{x}\right\rangle=0\) on \(\partial K\) (again see e.g. [23] for existence and regularity). By Bochner's formula (again see e.g. [17]), we write \[\int_{K}f^{2}\mathrm{d}\gamma=-\int_{K}\left(\left\|\nabla^{2}u\right\|^{2}+\left|\nabla u\right|^{2}+2\left\langle\nabla u,\nabla f\right\rangle\right)\mathrm{d}\gamma-\int_{\partial K}\left\langle\mathrm{II}\,\nabla_{\partial K}u,\nabla_{\partial K}u\right\rangle\mathrm{d}\gamma_{\partial K}\] \[\leq-\int_{K}\left(\left\|\nabla^{2}u\right\|^{2}+\left|\nabla u\right|^{2}+2\left\langle\nabla u,\nabla f\right\rangle\right)\mathrm{d}\gamma,\] where in the last passage we used that \(\mathrm{II}\geq 0\) since \(K\) is convex. Since \(\nabla u\) is odd (as \(f\) is even and \(K\) is symmetric, \(u\) is even), we have \(\int_{K}\nabla u\,\mathrm{d}\gamma=0\), so by the Poincare inequality we have \[\delta_{1}=\int_{K}\left(\left\|\nabla^{2}u\right\|^{2}-\left|\nabla u\right|^{2}\right)\mathrm{d}\gamma\geq 0, \tag{12}\] and thus \[\int_{K}f^{2}\mathrm{d}\gamma\leq-\delta_{1}-\int_{K}\left(2\left|\nabla u\right|^{2}+2\left\langle\nabla u,\nabla f\right\rangle\right)\mathrm{d}\gamma=-\delta_{1}-\int_{K}\left(\left|\sqrt{2}\nabla u+\frac{1}{\sqrt{2}}\nabla f\right|^{2}-\frac{1}{2}\left|\nabla f\right|^{2}\right)\mathrm{d}\gamma. \tag{13}\] It follows from (13) that \[-\fint_{K}\left\langle Tx,x\right\rangle^{2}d\gamma+\left(\fint_{K}\left\langle Tx,x\right\rangle d\gamma\right)^{2}+2\fint_{K}\left|Tx\right|^{2}\mathrm{d}\gamma=\frac{1}{\gamma(K)}\int_{K}\left(\frac{1}{2}\left|\nabla f\right|^{2}-f^{2}\right)\mathrm{d}\gamma\geq\frac{\delta_{1}}{\gamma(K)}+\fint_{K}\left|\sqrt{2}\nabla u+\frac{1}{\sqrt{2}}\nabla f\right|^{2}\mathrm{d}\gamma\geq 0.\] Therefore our assumption implies that \[\fint_{K}\left|\sqrt{2}\nabla u+\sqrt{2}Tx\right|^{2}\mathrm{d}\gamma=\fint_{K}\left|\sqrt{2}\nabla u+\frac{1}{\sqrt{2}}\nabla f\right|^{2}\mathrm{d}\gamma\leq\epsilon,\] and so \(\fint_{K}\left|\nabla u+Tx\right|^{2}\mathrm{d}\gamma\leq\frac{\epsilon}{2}\). In particular, for every \(i\) we have \[\fint_{K}\left|\partial_{i}u+\left\langle x,t_{i}\right\rangle\right|^{2}\mathrm{d}\gamma\leq\epsilon; \tag{14}\] recall that \(t_{i}=Te_{i}\in\mathbb{R}^{n}\) denotes the \(i\)'th column of \(T\). 
However, our assumption also implies that \[\fint_{K}\left(\left\|\nabla^{2}u\right\|^{2}-\left|\nabla u\right|^{2}\right)\mathrm{d}\gamma=\frac{\delta_{1}}{\gamma(K)}\leq\epsilon,\] so in particular for all \(i\) we have \[\fint_{K}\left(\partial_{i}u\right)^{2}\mathrm{d}\gamma\geq\fint_{K}\left|\nabla\partial_{i}u\right|^{2}\mathrm{d}\gamma-\frac{\epsilon}{2}. \tag{15}\] The first conclusion now follows from (14) and (15), and Lemma 5.1: \[\int_{\partial K}\left\langle t_{i},n_{x}\right\rangle^{2}\mathrm{d}\gamma_{\partial K}\leq C\left(\frac{\gamma^{+}(\partial K)}{\int_{rB_{2}^{n}}\left|x\right|^{2}d\gamma}+\frac{1}{r}\right)n\epsilon\gamma(K).\] Summing over all \(i\), and using the bound \(\left|Tn_{x}\right|\geq s\), we obtain the second conclusion: \[s^{2}\gamma^{+}\left(\partial K\right)\leq\int_{\partial K}\left|Tn_{x}\right|^{2}\mathrm{d}\gamma_{\partial K}\leq C\left(\frac{\gamma^{+}(\partial K)}{\int_{rB_{2}^{n}}\left|x\right|^{2}d\gamma}+\frac{1}{r}\right)n^{2}\epsilon\gamma(K),\] or \[\frac{\gamma(K)}{\int_{rB_{2}^{n}}\left|x\right|^{2}d\gamma}+\frac{\gamma(K)}{r\gamma^{+}\left(\partial K\right)}\geq\frac{cs^{2}}{n^{2}\epsilon}.\] Applying Proposition 3.1 with \(\delta=\frac{n^{2}\epsilon}{cs^{2}}\) finishes the proof. From Theorem 5.2 we deduce **Corollary 5.3**.: _Let \(K\) be a symmetric convex set with the in-radius \(r\). Assume that_ \[\fint_{K}\left|x\right|^{4}d\gamma-\left(\fint_{K}\left|x\right|^{2}d\gamma\right)^{2}\geq 2\fint_{K}\left|x\right|^{2}\mathrm{d}\gamma-\epsilon\] _for \(\epsilon<\frac{c}{n^{2}}\). Then either \(r\geq\sqrt{\log\frac{c}{n^{2}\epsilon}}\), or \(r\leq C\sqrt{n}\epsilon^{\frac{1}{n+1}}\)._ ## 6. Proofs of the main results. We first point out the following very nice fact: **Lemma 6.1**.: _Suppose \(V\in C^{2}(\mathbb{R}^{n})\). Then_ \[V\left(\frac{z_{1}+z_{2}}{2}\right)+\beta(z_{1},z_{2})=\frac{V(z_{1})+V(z_{2})}{2}, \tag{16}\] _where, letting \(z(t)=\frac{(1-t)z_{1}+(1+t)z_{2}}{2},\) we have_ \[\beta(z_{1},z_{2})=\frac{1}{8}\cdot\int_{-1}^{1}(1-|t|)\langle\nabla^{2}V(z(t))(z_{1}-z_{2}),z_{1}-z_{2}\rangle dt. \tag{17}\] Proof.: Note that for \(b\in C^{2}[-1,1],\) \[\int_{-1}^{1}(1-|t|)b^{\prime\prime}(t)dt=\int_{0}^{1}(1-t)b^{\prime\prime}(t)dt+\int_{-1}^{0}(1+t)b^{\prime\prime}(t)dt=\] \[(1-t)b^{\prime}(t)|_{0}^{1}+\int_{0}^{1}b^{\prime}(t)dt+(1+t)b^{\prime}(t)|_{-1}^{0}-\int_{-1}^{0}b^{\prime}(t)dt=\] \[b(1)+b(-1)-2b(0). \tag{18}\] Let \(b(t)=V(z(t))\). Then \(b(-1)=V(z_{1})\), \(b(1)=V(z_{2})\), \(b(0)=V(\frac{z_{1}+z_{2}}{2})\), and \[b^{\prime\prime}(t)=\frac{1}{4}\langle\nabla^{2}V(z(t))(z_{1}-z_{2}),z_{1}-z_{2}\rangle. \tag{19}\] Combining (18) and (19) finishes the proof. 
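The identity (16)-(17) is easy to check numerically; a sketch (the test potential \(V\) and the points \(z_{1},z_{2}\) are arbitrary choices of ours, with a finite-difference Hessian and a trapezoid rule for (17)):

```python
import numpy as np

# Check of V((z1+z2)/2) + beta(z1, z2) = (V(z1) + V(z2)) / 2 for a smooth test V.
V = lambda z: np.log(1.0 + z @ z) + np.cos(z).sum()

def hessian(z, h=1e-4):                      # central finite-difference Hessian
    n = z.size
    I = np.eye(n) * h
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (V(z + I[i] + I[j]) - V(z + I[i] - I[j])
                       - V(z - I[i] + I[j]) + V(z - I[i] - I[j])) / (4 * h * h)
    return H

z1, z2 = np.array([0.3, -1.0]), np.array([1.2, 0.5])
d = z1 - z2
ts = np.linspace(-1.0, 1.0, 2001)
vals = np.array([(1 - abs(t)) * d @ hessian(((1 - t) * z1 + (1 + t) * z2) / 2) @ d
                 for t in ts])
beta = (vals.sum() - 0.5 * (vals[0] + vals[-1])) * (ts[1] - ts[0]) / 8.0
print(abs(V((z1 + z2) / 2) + beta - (V(z1) + V(z2)) / 2))   # small
```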
**Proof of Theorem 1.1.** Let \(F(t)=\frac{\gamma(e^{t}K)}{\gamma(K)}\) and let \(V(t)=\log F(t)\). Then \(F^{\prime}(0)=\fint_{K}(n-|x|^{2})d\gamma\), and, since \(F(0)=1\), \[V^{\prime\prime}(0)=F^{\prime\prime}(0)-F^{\prime}(0)^{2}=\fint_{K}(n-|x|^{2})^{2}d\gamma-2\fint_{K}|x|^{2}d\gamma-\left(\fint_{K}(n-|x|^{2})d\gamma\right)^{2}.\] On the other hand, by the assumption, letting \(\alpha=\log a\) and \(\beta=\log b\), we have \[V\left(\frac{\alpha+\beta}{2}\right)-\frac{1}{2}V(\alpha)-\frac{1}{2}V(\beta)\leq\log(1+\epsilon)\leq\epsilon. 
\tag{20}\] By Lemma 6.1, (20) implies that there exists a \(t\in[\log a,\log b]\) such that \[V^{\prime\prime}(t)=\fint_{e^{t}K}|x|^{4}d\gamma-2\fint_{e^{t}K}|x|^{2}d\gamma-\left(\fint_{e^{t}K}|x|^{2}d\gamma\right)^{2}\geq-\frac{8\epsilon}{(\log(b/a))^{2}}.\] An application of Corollary 5.3 with \(e^{t}K\) in place of \(K,\) and with \(\frac{8\epsilon}{(\log(b/a))^{2}}\) in place of \(\epsilon\) finishes the proof. **Proof of Theorem 1.4.** This time, we let \(F_{i}(t)=\frac{\gamma(e^{te_{i}}K)}{\gamma(K)}\), so that \(F_{i}^{\prime}(0)=\fint_{K}(1-x_{i}^{2})d\gamma\) and \(F_{i}^{\prime\prime}(0)=\fint_{K}(1-x_{i}^{2})^{2}d\gamma-2\fint_{K}x_{i}^{2}d\gamma\). Next, for \(x\in\mathbb{R}^{n}\), let \(V(x)=\log\frac{\gamma(e^{x}K)}{\gamma(K)}\). We have \(\frac{\partial^{2}V}{\partial x_{i}\partial x_{j}}=0\) whenever \(i\neq j\), and for all \(i=1,...,n\), \[\frac{\partial^{2}V}{\partial x_{i}^{2}}(0)=F_{i}^{\prime\prime}(0)-F_{i}^{\prime}(0)^{2}. \tag{21}\] On the other hand, by our assumption, \[V\left(\frac{x+y}{2}\right)-\frac{1}{2}V(x)-\frac{1}{2}V(y)\leq\log(1+\epsilon)\leq\epsilon. \tag{22}\] By Lemma 6.1, and in view of (21), (22) implies that there exists a \(z\in[x,y]\) such that for all \(i=1,...,n\), \[(x_{i}-y_{i})^{2}\frac{\partial^{2}V}{\partial x_{i}^{2}}(z)\geq-8\epsilon.\] By Theorem 5.2, we get, denoting \(\tilde{K}=e^{z}K\) and \(\tilde{r}=|e^{z}|r(K)\), and summing up: \[\int_{\partial\tilde{K}}\sum_{i=1}^{n}(x_{i}-y_{i})^{2}(n_{x}^{i})^{2}d\gamma_{\partial\tilde{K}}\leq C\epsilon n^{2}\gamma(\tilde{K})\left(\frac{1}{\tilde{r}}+\frac{\gamma^{+}(\partial\tilde{K})}{\int_{\tilde{r}B_{2}^{n}}|x|^{2}d\gamma}\right). 
\tag{23}\] On the other hand, recalling the notation from the statement of the theorem, we have \[\int_{\partial\tilde{K}}\sum_{i=1}^{n}(x_{i}-y_{i})^{2}(n_{x}^{i})^{2}d\gamma_{\partial\tilde{K}}\geq\delta^{2}\int_{\partial\tilde{K}}\sum_{i\in\sigma^{\delta}}(n_{x}^{i})^{2}d\gamma_{\partial\tilde{K}}\geq\delta^{2}\alpha\gamma^{+}(\tilde{\Omega}_{\delta,\alpha}), \tag{24}\] where \(\tilde{\Omega}_{\delta,\alpha}=e^{z}\Omega_{\delta,\alpha}\). Combining (23), (24) and Proposition 3.1 (applied with \(\tilde{K}\), \(\tilde{r}\) and \(\delta^{2}\epsilon\alpha\beta\)), and recalling that \(r(K)\in[|e^{x}|\tilde{r},|e^{y}|\tilde{r}]\), we get the conclusion. \(\square\) Finally, we move on to proving Corollary 1.5. First, we shall need the following **Lemma 6.2**.: _Let \(H\subset\mathbb{R}^{n}\) be a subspace. Let \(K\subset\mathbb{R}^{n}\) be a closed, convex set with non-empty interior. Assume that at almost every point \(x\in\partial K\) a unique normal \(n_{x}\) exists and \(n_{x}\in H\) (here "almost everywhere" is with respect to the \((n-1)\)-dimensional Hausdorff measure on \(\partial K\)). Then there exists a closed convex set \(K_{0}\subset H\) such that \(K=K_{0}\times H^{\perp}\)._ Proof.: By translating \(K\) we may assume without loss of generality that \(0\) is an interior point of \(K\). Define \(K_{0}\) to be the orthogonal projection \(K_{0}=\operatorname{Proj}_{H}K\). Then we clearly have \(K\subset K_{0}\times H^{\perp}\), and \(K_{0}\) also has \(0\) in its (relative) interior. We first show that \(\partial K\subset\partial K_{0}\times H^{\perp}\). Indeed, fix \(x\in\partial K\). If we define \[\Omega=\left\{y\in\partial K:\text{ $n_{y}$ is unique and $n_{y}\in H$}\right\}\] then by assumption \(\partial K\setminus\Omega\) has measure \(0\), so in particular \(\Omega\) is dense in \(\partial K\). Choose a sequence \(x_{k}\in\Omega\) such that \(x_{k}\to x\). By compactness we may pass to a subsequence and assume without loss of generality that \(n_{x_{k}}\xrightarrow{k\to\infty}v\in H\). We know that \(x_{k}+n_{x_{k}}^{\perp}\) is a supporting hyperplane for \(K\) for all \(k\). Therefore for every \(z\in K\) we have \[\left\langle z,n_{x_{k}}\right\rangle\leq\left\langle x_{k},n_{x_{k}}\right\rangle.\] Taking the limit as \(k\to\infty\) we see that for every \(z\in K\) we have \(\left\langle z,v\right\rangle\leq\left\langle x,v\right\rangle\). In particular, every point \(z_{0}\in K_{0}\) may be written as \(z_{0}=\operatorname{Proj}_{H}\left(z\right)\) for some \(z\in K\), and since \(v\in H\) we have \[\left\langle z_{0},v\right\rangle=\left\langle z,v\right\rangle\leq\left\langle x,v\right\rangle=\left\langle\operatorname{Proj}_{H}x,v\right\rangle.\] For every \(\epsilon>0\) the point \(z_{\epsilon}=\operatorname{Proj}_{H}x+\epsilon v\) satisfies \[\left\langle z_{\epsilon},v\right\rangle=\left\langle\operatorname{Proj}_{H}x,v\right\rangle+\epsilon\left|v\right|^{2}>\left\langle\operatorname{Proj}_{H}x,v\right\rangle,\] so \(z_{\epsilon}\notin K_{0}\). Therefore \(\operatorname{Proj}_{H}x\in\partial K_{0}\), so indeed \(x\in\partial K_{0}\times H^{\perp}\) as claimed. Now we prove that \(K_{0}\times H^{\perp}\subset K\), finishing the proof. Indeed, for every \(x\in K_{0}\times H^{\perp}\) consider \[A=\left\{t\in[0,1]:\ tx\in K\right\}.\] We know that \(0\in A\) and \(A\) is closed. Define \(t_{0}=\max A\). If \(t_{0}<1\) it follows that \(t_{0}x\in\partial K\subset\partial\left(K_{0}\times H^{\perp}\right)\), which is impossible since \(x\in K_{0}\times H^{\perp}\). 
Hence \(t_{0}=1\) and \(x\in K\) as claimed. **Proof of Corollary 1.5.** We may approximate \(K\) arbitrarily closely by convex sets whose boundary is \(C^{2}\). We let \(\sigma\) be such that \[\sigma^{c}=\{i\in[n]:\,x_{i}=y_{i}\},\] \[H=\operatorname{span}\{e_{i}:\,i\in\sigma^{c}\},\] and \[\Omega=\{x\in\partial K:\,n_{x}\not\in H\}.\] We see that the assumption of Theorem 1.4 is satisfied for an arbitrarily small \(\epsilon>0.\) Pick \(\alpha=\beta=\delta=\epsilon^{\frac{1}{16}}\) and let \(\epsilon\to 0.\) Note that as \(\epsilon\to 0\), we might get different \(z=z(\epsilon)\) in the conclusion of Theorem 1.4, but since \(z\in[x,y]\), by compactness we may select a sub-sequence of \(z(\epsilon)\) converging to some point \(z_{0}.\) We conclude that either \(r(e^{z_{0}}K)=0\) (in which case \(K\) has an empty interior), or \(r(e^{z_{0}}K)=\infty\) (in which case \(K=\mathbb{R}^{n}\)), or \(\gamma^{+}(\Omega)=0\). Since \(\gamma_{\partial K}\) is absolutely continuous with respect to the Hausdorff measure on \(\partial K,\) we conclude that in this case, for almost every \(x\in\partial K,\) we have \(n_{x}\in H.\) The conclusion follows from Lemma 6.2. \(\square\)
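The equality case of Corollary 1.5 can be seen concretely in a two-dimensional example (a sketch; the set and the vectors \(x,y\) are our own choices): take \(K=[-1,1]\times\mathbb{R}\), so that \(K=K_{0}\times H_{x,y}^{\perp}\) whenever \(x,y\) differ only in the second coordinate, and (3) holds with equality.

```python
import numpy as np
from math import erf, sqrt

# K = [-1, 1] x R in R^2: gamma(e^z K) = erf(e^{z_1} / sqrt(2)) depends only on
# z_1, so for x, y differing only in the second coordinate, (3) is an equality.
gamma_ezK = lambda z1: erf(np.exp(z1) / sqrt(2.0))
x, y = np.array([0.4, 1.0]), np.array([0.4, 3.0])   # x_1 = y_1, x_2 != y_2
lhs = gamma_ezK((x[0] + y[0]) / 2)
rhs = sqrt(gamma_ezK(x[0]) * gamma_ezK(y[0]))
print(lhs - rhs)   # zero up to rounding
```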
2308.09556
A Principle for Global Optimization with Gradients
This work demonstrates the utility of gradients for the global optimization of certain differentiable functions with many suboptimal local minima. To this end, a principle for generating search directions from non-local quadratic approximants based on gradients of the objective function is analyzed. Experiments measure the quality of non-local search directions as well as the performance of a proposed simplistic algorithm, of the covariance matrix adaptation evolution strategy (CMA-ES), and of a randomly reinitialized Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.
Nils Müller
2023-08-18T13:39:29Z
http://arxiv.org/abs/2308.09556v1
# A Principle for Global Optimization with Gradients ###### Abstract This work demonstrates the utility of gradients for the global optimization of certain differentiable functions with many suboptimal local minima. To this end, a principle for generating search directions from non-local quadratic approximants based on gradients of the objective function is analyzed. Experiments measure the quality of non-local search directions as well as the performance of a proposed simplistic algorithm, of the covariance matrix adaptation evolution strategy (CMA-ES), and of a randomly reinitialized Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. _Keywords_ Global Optimization \(\cdot\) Robust Optimization \(\cdot\) Continuous Optimization \(\cdot\) Mathematical Optimization \(\cdot\) Simulation-Based Optimization ## 1 Introduction This work motivates the use of gradients for solving the optimization problem \(\min_{x\in\mathbb{R}^{n}}f(x)\), where the differentiable function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is assumed to have many suboptimal local minima but possesses (unknown) global structure. Optimization methods based on the (quasi-)Newton method (see, e.g., [13, Chapter 8]) converge to suboptimal local minima when applied to the minimization of functions with many such minima. By contrast, in a formal setting, non-local information (as opposed to global) provides search directions for which iterative optimizers converge to the global minimum of certain functions with many suboptimal local minima [17, Theorem 4.1 & Theorem 5.2]. Considering in addition that non-local information based on objective evaluations has been successfully used in many practical optimization methods for decades, this work develops a (quasi-)Newton method that approximates search directions from non-local gradient information at practically realistic evaluation counts. To achieve this, the search direction, commonly based on a quadratic Taylor-approximant in (quasi-)Newton methods, is replaced with a non-local generalization, i.e., line 3 of Algorithm 1. Algorithm 1 is a simplistic method with the purpose of demonstrating the usefulness of gradients beyond local optimization. The interest in investigating the utility of gradients for the global optimization of functions with many suboptimal local minima lies in the hope of generalizing their established success from local optimization. Computer simulation, in particular in the context of the adjoint method [10, 11, 12] and automatic differentiation [13, 14], can provide gradient information on challenging simulation-based objectives from science and engineering [1, 15, 16]. Therefore, associated optimization methods based on non-local evaluations of the objective gradients may have a large impact on a wide range of real-world challenges. ```
Require: continuously differentiable function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), and its gradient \(\nabla f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\); initial point \(x_{0}\in\mathbb{R}^{n}\); initial scaling \(\sigma_{0}\in\mathbb{R}_{>0}\); sample size \(k\in\mathbb{N}\); a probability measure \(\mathbb{P}_{k}\) on \(\mathbb{R}^{n\times k}\); non-local line search method linesearch(\(\cdot\)); scaling adaption method scaling(\(\cdot\)); maximum iteration count \(C\in\mathbb{N}\).
Ensure: element \(x_{C}\in\mathbb{R}^{n}\) with "low" function value \(f(x_{C})\).
Define: \(q_{A,b}(x):=\langle x,(A+A^{T})x\rangle+b^{T}x\) for all \(x\in\mathbb{R}^{n}\), \(A\in\mathbb{R}^{n\times n}\), \(b\in\mathbb{R}^{n}\); \(t:=0\). 
1: sample \(z_{1},\ldots,z_{k}\in\mathbb{R}^{n}\) according to \(\mathbb{P}_{k}\) # independent of any other time-step 2: compute \(\nabla f(x_{t}+\sigma_{t}z_{1}),\ldots,\nabla f(x_{t}+\sigma_{t}z_{k})\in\mathbb{R}^{n}\) # sample a neighborhood of \(x_{t}\) 3: solve for \(\Delta x_{t}\in\mathbb{R}^{n}\) # determine non-local Newton direction2 \[\begin{cases}\Delta x_{t}\in\operatorname*{arg\,min}_{x\in\mathbb{R}^{n}}\,q_{A_{t},b_{t}}(x)\\ (A_{t},b_{t})\in\operatorname*{arg\,min}_{A\in\mathbb{R}^{n\times n},b\in\mathbb{R}^{n}}\sum_{j=1}^{k}\left\|\nabla q_{A,b}(z_{j})-\nabla f(x_{t}+\sigma_{t}z_{j})\right\|^{2}\end{cases}\] 4: \(x_{t+1}:=\text{linesearch}(f,\Delta x_{t},-b_{t},x_{t})\) # non-local linesearch based on \(\Delta x_{t}\) and \(-b_{t}\) 5: \(\sigma_{t+1}:=\text{scaling}(\sigma_{0},\sigma_{t},x_{t+1}-x_{t})\) # adapt scaling 6: \(t\gets t+1\) 7: if \(t<C\) go to line 1 # until the budget is used 8: return \(x_{C}\) ``` **Algorithm 1** Non-local quasi-Newton method1 for optimization of differentiable \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\)

**Related Work.** Non-local operators for optimization have been shown to impose favorable structure on objective approximations [1, 1, 15, 16] and can be asymptotically consistent with local operators [17], i.e., when the non-local kernel converges to the Dirac-measure. [1] point out the open question of sample-efficient approximations of non-local operators, while [11] indicate the superiority of first-order interpolations from function evaluations over empirical gradient averages. [12] propose smoothing with a non-local kernel with low-dimensional support. [17, 1] propose a similar approach to the one of this work using quadratic approximation from function instead of gradient evaluations. A (quasi-)Newton method roughly similar to Algorithm 1 for the different goal of locally solving systems of equations based on a least-squares principle has been described by [14]. Practical methods employing non-local operators of function evaluations have been widely used; a few examples include [1, 1, 15, 16, 17, 18, 19]. Specifically, attempts have been made to design methods that combine the advantages of non-local operators of function evaluations and gradient methods; however, in contrast to this work, they do not make use of explicit gradient information [11, 12, 13]. Under the assumption of a priori knowledge of non-local information about the objective function at hand, even local gradient methods have been developed for global optimization [10]. The proposed principle can also be considered a non-local gradient-based trust region method. Trust region methods are commonly restricted to finding local optima [14, Chapter 3.2], while non-local extensions have been restricted to using function evaluations, i.e., a black-box setting [1, 2, 1].

**Outline.** In Theorem 2.1 and Corollary 2.1, necessary and sufficient conditions that let us compute the proposed non-local search direction based on a solution to a Lyapunov-type equation are developed. Under the assumption of infinite sampling, it is then proven in Theorem 2.2 that the solution of line 3 of Algorithm 1 is a consistent approximator of the optimal search direction on a quadratic that is disturbed by a function with bounded first derivative. The theoretical analysis is concluded in Theorem 2.3 by a non-local residual bound for Rastrigin-type objective functions, which present an important model for functions with many suboptimal local minima. In the first experiment, in Section 3.1, gradient-based search directions for a Rastrigin-type objective function are compared. Further, Algorithm 1, the _Covariance matrix adaptation evolution strategy (CMA-ES)_, and a randomly reinitialized _Broyden-Fletcher-Goldfarb-Shanno method_ are benchmarked on selected functions with many local minima in Section 3.2. Lastly, it is shown in Section 3.3 that Algorithm 1 solves _Problem 4 of the SIAM News: A Hundred-dollar, Hundred-digit Challenge_. The work concludes with a discussion of the results and promising future work in Section 4.
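To make the principle concrete before the analysis, the following is a minimal numpy sketch of the core of one iteration of Algorithm 1. It is a sketch under assumptions: the quadratic model is parametrized as \(q(y)=\tfrac{1}{2}\langle y,My\rangle+\langle b,y\rangle\), which differs from \(q_{A,b}\) only by constant factors; the fit in line 3 is solved by direct least squares (Theorem 2.1 and Corollary 2.1 below give equivalent closed-form conditions); and the scaling update of line 5 is omitted.

```python
import numpy as np

def nonlocal_newton_step(grad_f, x, sigma, k, rng):
    """Core of one iteration of Algorithm 1: fit a quadratic model
    q(y) = <y, M y>/2 + <b, y> to k non-local gradient samples by least
    squares (line 3) and return both candidate directions."""
    n = x.shape[0]
    Z = rng.standard_normal((k, n))                    # z_1, ..., z_k
    G = np.stack([grad_f(x + sigma * z) for z in Z])   # non-local gradients

    # grad q(z_j) = M z_j + b is linear in (M, b): solve one shared
    # least-squares problem with design matrix [Z | 1].
    Phi = np.hstack([Z, np.ones((k, 1))])
    Theta = np.linalg.lstsq(Phi, G, rcond=None)[0]     # shape (n + 1, n)
    M = 0.5 * (Theta[:n] + Theta[:n].T)                # symmetrized model Hessian
    b = Theta[n]

    dx = np.linalg.lstsq(M, -b, rcond=None)[0]         # non-local Newton step
    return dx, -b                                      # (Delta x_t, -b_t)

def linesearch(f, x, directions, base=6 / 5, imax=10):
    """Non-local linesearch (line 4): best candidate along both directions."""
    cands = [x + base**i * d for d in directions for i in range(-imax, imax + 1)]
    return min(cands, key=f)
```

A usage example would draw \(x_{0}\), call `nonlocal_newton_step`, and feed both returned directions into `linesearch`, mirroring lines 1-4 of Algorithm 1.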
## 2 Analysis Initially, necessary and sufficient conditions that let us efficiently compute the approximant \(q_{A_{t},b_{t}}\) in Algorithm 1 are developed. To this end, consider the following Theorem 2.1. **Theorem 2.1** (Necessary and sufficient conditions for \((A_{t},b_{t})\)).: _In the setting of Algorithm 1, \(A_{t}\in\mathbb{R}^{n\times n}\) and \(b_{t}\in\mathbb{R}^{n}\) are the parameters of a best quadratic approximation in the sense of line 3 of Algorithm 1 if and only if_ \[\begin{cases}(A_{t}+A_{t}^{T})(Z-\overline{Z})Z^{T}+Z(Z-\overline{Z})^{T}(A_{t}+A_{t}^{T})=(G-\overline{G})Z^{T}+Z(G-\overline{G})^{T}\\ b_{t}=\overline{g}-(A_{t}+A_{t}^{T})\overline{z}\,,\end{cases}\] _where_ * \(G\in\mathbb{R}^{n\times k}\) _by_ \(G_{\cdot,j}:=\nabla f(x_{t}+\sigma_{t}z_{j})\) _for all_ \(j\in\mathbb{N}_{\leq k}\)_,_ * \(Z\in\mathbb{R}^{n\times k}\) _by_ \(Z_{\cdot,j}:=2\sigma_{t}z_{j}\) _for all_ \(j\in\mathbb{N}_{\leq k}\)_,_ * \(\overline{z}:=(1/k)\sum_{j=1}^{k}Z_{\cdot,j}\)_,_ * \(\overline{g}:=(1/k)\sum_{j=1}^{k}G_{\cdot,j}\)_,_ * \(\overline{Z}\in\mathbb{R}^{n\times k}\) _by_ \(\overline{Z}_{\cdot,j}:=\overline{z}\) _for all_ \(j\in\mathbb{N}_{\leq k}\)_, and_ * \(\overline{G}\in\mathbb{R}^{n\times k}\) _by_ \(\overline{G}_{\cdot,j}:=\overline{g}\) _for all_ \(j\in\mathbb{N}_{\leq k}\)_._ Proof.: In the setting of Algorithm 1, * one has \(\nabla q_{A,b}(\sigma_{t}z_{j})=(A+A^{T})2\sigma_{t}z_{j}+b\), * define \(B\in\mathbb{R}^{n\times k}\) by \(B_{\cdot,j}:=b\) for all \(j\in\mathbb{N}_{\leq k}\), and * let \(\left\lVert\cdot\right\rVert_{\mathrm{F}}\) be the Frobenius norm on \(\mathbb{R}^{n\times k}\). _i. Representation of the objective._ One observes that \[\sum_{j=1}^{k}\left\lVert\nabla q_{A,b}(z_{j})-\nabla f(x_{t}+\sigma_{t}z_{j})\right\rVert^{2}=\sum_{j=1}^{k}\left\lVert(A+A^{T})Z_{\cdot,j}+B_{\cdot,j}-G_{\cdot,j}\right\rVert^{2}=\left\lVert(A+A^{T})Z+B-G\right\rVert_{\mathrm{F}}^{2}=:W(A,b)\,,\] using the definitions of \(G\), \(Z\), \(B\), and of \(\left\lVert\cdot\right\rVert_{\mathrm{F}}\). _ii. 
First-order conditions._ For all \(A\in\mathbb{R}^{n\times n}\), one has a simple first-order criterion for an optimum \(b\in\mathbb{R}^{n}\) of \(W(A,\cdot)\), which reads \[\frac{(\partial W)(A,b)}{\partial b}=k\big((A+A^{T})\overline{z}+b-\overline{g}\big)\overset{!}{=}0\iff b=\overline{g}-(A+A^{T})\overline{z}\iff B=\overline{G}-(A+A^{T})\overline{Z}\,,\] by the definitions of \(\overline{G}\) and \(\overline{Z}\). Further, it is known that for all \(b\in\mathbb{R}^{n}\) a minimizer \(A\in\mathbb{R}^{n\times n}\) of \(W(\cdot,b)\) fulfills \[\frac{(\partial W)(A,b)}{\partial A}=2\big((A+A^{T})Z+B-G\big)Z^{T}+2Z\big((A+A^{T})Z+B-G\big)^{T}\overset{!}{=}0\] \[\iff(A+A^{T})ZZ^{T}+ZZ^{T}(A+A^{T})=(G-B)Z^{T}+Z(G-B)^{T}\] \[\iff(A+A^{T})ZZ^{T}+ZZ^{T}(A+A^{T})=\big(G-\overline{G}+(A+A^{T})\overline{Z}\big)Z^{T}+Z\big(G-\overline{G}+(A+A^{T})\overline{Z}\big)^{T}\] \[\iff(A+A^{T})ZZ^{T}+ZZ^{T}(A+A^{T})=(G-\overline{G})Z^{T}+(A+A^{T})\overline{Z}Z^{T}+Z(G-\overline{G})^{T}+Z\overline{Z}^{T}(A+A^{T})\] \[\iff(A+A^{T})(Z-\overline{Z})Z^{T}+Z(Z-\overline{Z})^{T}(A+A^{T})=(G-\overline{G})Z^{T}+Z(G-\overline{G})^{T}\,.\] _iii. Sufficiency argument._ As the symmetric matrices form a linear subspace of \(\mathbb{R}^{n\times n}\), in particular a closed set without boundary, any minimizer of \(W\) must be a critical point. Further, as the problem is also convex, it can be concluded that any critical point of \(W\) is also a minimizer. 

In fact, by simple first-order conditions, the search direction is also obtained. The second equation of the following Corollary 2.1 is an equation of Lyapunov type, for which a wide range of numerical methods exist. **Corollary 2.1** (Necessary and sufficient conditions for \(\Delta x_{t}\)).: _In the setting of Algorithm 1 and Theorem 2.1, \(\Delta x_{t}\) is a well-defined minimizer of a well-defined \(q_{A_{t},b_{t}}\) if there exists a positive definite matrix \(\tilde{A}_{t}\in\mathbb{R}^{n\times n}\) such that_ \[\begin{cases}(\tilde{A}_{t}+\tilde{A}_{t}^{T})\Delta x_{t}=-b_{t}\\ \tilde{A}_{t}(Z-\overline{Z})Z^{T}+Z(Z-\overline{Z})^{T}\tilde{A}_{t}=(G-\overline{G})Z^{T}+Z(G-\overline{G})^{T}\,,\end{cases}\] _where \(b_{t}=\overline{g}-(A_{t}+A_{t}^{T})\overline{z}\) and \(A_{t}=(\tilde{A}_{t}+\tilde{A}_{t}^{T})/4\)._ **Remark 1**.: _In case there is no positive definite matrix \(\tilde{A}_{t}\in\mathbb{R}^{n\times n}\) that fulfills the conditions of Corollary 2.1, consider finding a minimum in a trust region \(D^{n}:=\{x\in\mathbb{R}^{n}\mid\|x\|\leq 1\}\), i.e.,_ \[\Delta x_{t}\in\operatorname*{arg\,min}_{x\in D^{n}}q_{A_{t},b_{t}}(x)\,,\] _where \(A_{t},b_{t}\) are picked as specified in Corollary 2.1 for a possibly negative definite or indefinite matrix \(\tilde{A}_{t}\) that fulfills the remaining conditions._ Proof.: Define \(P:=(Z-\overline{Z})Z^{T}\) and \(V:=(G-\overline{G})Z^{T}\). _i. 
It is claimed that a matrix \(A_{t}\in\mathbb{R}^{n\times n}\) that fulfills the conditions of Theorem 2.1 exists if and only if there exists \(\tilde{A}_{t}\in\mathbb{R}^{n\times n}\) with \(\tilde{A}_{t}P+P^{T}\tilde{A}_{t}=V+V^{T}\)._ Assuming such \(\tilde{A}_{t}\) exists and \(A_{t}:=(\tilde{A}_{t}+\tilde{A}_{t}^{T})/4\), one has \[(A_{t}+A_{t}^{T})P+P^{T}(A_{t}+A_{t}^{T})=\big((\tilde{A}_{t}+\tilde{A}_{t}^{T})/2\big)P+P^{T}(\tilde{A}_{t}+\tilde{A}_{t}^{T})/2=(\tilde{A}_{t}P+P^{T}\tilde{A}_{t})/2+(\tilde{A}_{t}P+P^{T}\tilde{A}_{t})^{T}/2=(V+V^{T})/2+(V+V^{T})^{T}/2=V+V^{T}\,,\] using the definition and symmetry of \(A_{t}\), distributivity, commutativity of addition, the transpose of products, and the condition on \(\tilde{A}_{t}\). The converse statement is true as for \(A_{t}\in\mathbb{R}^{n\times n}\) that fulfills the condition of Theorem 2.1, the substitution \(\tilde{A}_{t}:=A_{t}+A_{t}^{T}\) yields the condition to be proved. Without loss of generality, \(A_{t}\) is symmetric--otherwise take \((A_{t}+A_{t}^{T})/2\) instead of \(A_{t}\). _ii. A first-order condition on \(\Delta x_{t}\in\operatorname*{arg\,min}_{x\in\mathbb{R}^{n}}q_{A_{t},b_{t}}(x)\) is necessary and sufficient due to convexity of \(q_{A_{t},b_{t}}\), and reads_ \[(\nabla q_{A_{t},b_{t}})(\Delta x_{t})=2(A_{t}+A_{t}^{T})\Delta x_{t}+b_{t}\stackrel{{!}}{{=}}0\,.\qed\] For simplicity, the following analysis is restricted to relating the non-local approximation objective of determining a quadratic model in line 3 of Algorithm 1 to a \(k\)-asymptotic setting via the following Assumption 2.1. The assumption models a setting where an infinite number of non-local gradient samples is available. Note that \(\mathbb{P}_{k}\) is a measure on \(\mathbb{R}^{n\times k}\), e.g., the \(k\)-product of an \(n\)-variate normal distribution or the \(k\)-product of (distinct) Dirac measures in \(\mathbb{R}^{n}\). **Assumption 2.1** (Glivenko-Cantelli).: _In Algorithm 1, let w.l.o.g. \(x_{t}=0\), otherwise transform \(f\). Assume that for all \(\sigma\in(0,\infty)\), some probability measure \(\mathbb{P}_{\sigma}:=\mathbb{P}(\sigma^{-1}\cdot)\) on \(\mathbb{R}^{n}\), probability measure \(\mathbb{P}_{\infty}\) on \(\mathbb{R}^{\mathbb{N}}\) with \((z_{1},z_{2},\dots)\sim\mathbb{P}_{\infty}\), and for all \(A\in\mathbb{R}^{n\times n},b\in\mathbb{R}^{n}\), one has the convergence in probability_ \[\frac{1}{k}\sum_{j=1}^{k}\left\|\nabla q_{A,b}(\sigma z_{j})-\nabla f(x_{t}+\sigma z_{j})\right\|^{2}\xrightarrow[k\to\infty]{\mathbb{P}_{\infty}}\int_{\mathbb{R}^{n}}\left\|\nabla q_{A,b}(z)-\nabla f(z)\right\|^{2}\mathbb{P}_{\sigma}(\mathrm{d}z)\,.\] The next result considers a model where the objective function is a sum of a quadratic and a function with bounded first derivative, i.e., a "target" function superimposed with a "disturbance". Theorem 2.2 describes conditions for which the quadratic approximation objective of Algorithm 1 can disregard solutions that are not equal to the underlying target model. **Theorem 2.2** (Consistency).: _Let \(\mathbb{P}\) be a probability measure with second moments on \(\mathbb{R}^{n}\) and \(\mathbb{P}_{\sigma}:=\mathbb{P}(\sigma^{-1}\cdot)\), where \(\sigma\in(0,\infty)\). 
Further, let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be continuously differentiable with_ * \(f\equiv r+g\)_, where_ \(r\) _is quadratic and_ \(\left\|\nabla g\right\|\) _is uniformly bounded by_ \(M\in(0,\infty)\)_,_ * \(\nabla f\) _square-_\(\mathbb{P}\)_-integrable,_ _and \(q^{*}:\mathbb{R}^{n}\to\mathbb{R}\) be quadratic, such that_ \[\int_{\mathbb{R}^{n}}\left\|\nabla(q^{*}-r)(z)-\nabla(q^{*}-r)(0)\right\|^{2}\,\mathbb{P}(\mathrm{d}z)>0\,.\] (CON) _Then there exists \(\sigma^{*}\in(0,\infty)\) such that for all \(\sigma\geq\sigma^{*}\) the function \(q^{*}\) is suboptimal, i.e.,_ \[q^{*}\notin\operatorname*{arg\,min}_{q\text{ quadratic}}\int_{\mathbb{R}^{n}}\left\|\nabla q(z)-\nabla f(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)\,.\] **Remark 2**.: 1. _The condition (CON) of Theorem 2.2 encodes both a failure of \(q^{*}\) to approximate the correct second derivative of \(r\) as well as a failure of the sampling distribution \(\mathbb{P}\) to measure this defect._ 2. _Among the candidates of the minimization problem of Theorem 2.2 that have the same second derivative as \(r\), only the optimal ones also have the same first derivatives if \(\int_{\mathbb{R}^{n}}\nabla g\,\mathrm{d}\mathbb{P}=0\). For this to hold, neither (CON) nor the boundedness of \(\left\|\nabla g\right\|\) is required._ Proof.: First, it can be seen that \(q\stackrel{{!}}{{=}}r\) has an objective value uniformly bounded in \(\sigma\), i.e., \[\int_{\mathbb{R}^{n}}\left\|\nabla q(z)-\nabla f(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)=\int_{\mathbb{R}^{n}}\left\|\nabla g(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)\leq M^{2}\,,\] by the definition of \(f\) with \(q=r\), by \(\left\|\nabla g\right\|\leq M\), and since \(\mathbb{P}_{\sigma}\) is normed. It is shown that there exists \(\sigma^{*}\in(0,\infty)\) such that \(q^{*}\) exceeds this objective value for all \(\sigma\geq\sigma^{*}\), which implies the result. Observe that \(\nabla(q^{*}-r)\) is affine, i.e., with \(w:=\nabla(q^{*}-r)(0)\in\mathbb{R}^{n}\) there exists a matrix \(L\in\mathbb{R}^{n\times n}\) such that \(\nabla(q^{*}-r)(z)=Lz+w\) for all \(z\in\mathbb{R}^{n}\). In fact, it can be seen that the objective value of \(q^{*}\) diverges to \(\infty\) in \(\sigma\); write \(q:=q^{*}\) in the following. 
One has \[\int_{\mathbb{R}^{n}}\left\|\nabla q(z)-\nabla f(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)=\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)-\nabla g(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (definition of \(f\), linearity of \(\nabla\)) \[\geq\int_{\mathbb{R}^{n}}\big(\left\|\nabla(q-r)(z)\right\|-\left\|\nabla g(z)\right\|\big)^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (the reverse triangle inequality) \[=\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)\right\|^{2}-2\left\|\nabla(q-r)(z)\right\|\left\|\nabla g(z)\right\|+\left\|\nabla g(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (distributivity) \[\geq\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)\right\|^{2}-2\left\|\nabla(q-r)(z)\right\|\left\|\nabla g(z)\right\|\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (by \(\left\|\cdot\right\|\geq 0\)) \[=\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)-2\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)\right\|\left\|\nabla g(z)\right\|\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (linearity of \(\int\)) \[\geq\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)-2M\int_{\mathbb{R}^{n}}\left\|\nabla(q-r)(z)\right\|\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (by \(\left\|\nabla g\right\|\leq M\)) \[=\int_{\mathbb{R}^{n}}\left\|Lz+w\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}z)-2M\int_{\mathbb{R}^{n}}\left\|Lz+w\right\|\,\mathbb{P}_{\sigma}(\mathrm{d}z)\] (by \(\nabla(q-r)\) affine) \[=\int_{\mathbb{R}^{n}}\left\|L\sigma z+w\right\|^{2}\,\mathbb{P}(\mathrm{d}z)-2M\int_{\mathbb{R}^{n}}\left\|L\sigma z+w\right\|\,\mathbb{P}(\mathrm{d}z)\] (by \(\mathbb{P}_{\sigma}:=\mathbb{P}(\sigma^{-1}\cdot)\)) \[\geq\int_{\mathbb{R}^{n}}\big(\left\|L\sigma z\right\|-\left\|w\right\|\big)^{2}\,\mathbb{P}(\mathrm{d}z)-2M\int_{\mathbb{R}^{n}}\left\|L\sigma z\right\|+\left\|w\right\|\,\mathbb{P}(\mathrm{d}z)\] (the (reverse) triangle inequality) \[\geq\sigma^{2}\int_{\mathbb{R}^{n}}\left\|Lz\right\|^{2}\,\mathbb{P}(\mathrm{d}z)-2\sigma\big(\left\|w\right\|+M\big)\int_{\mathbb{R}^{n}}\left\|Lz\right\|\,\mathbb{P}(\mathrm{d}z)+\left\|w\right\|^{2}-2M\left\|w\right\|\xrightarrow{\sigma\to\infty}\infty\,,\] (linearity and rearranging) since \(Lz=\nabla(q-r)(z)-\nabla(q-r)(0)\) and by the condition (CON). 

The analysis will be continued with an objective function model that is similar to that of Theorem 2.2: a quadratic superimposed by a function which is understood as a "disturbance". The error incurred when selecting the correct quadratic, i.e., the residual, and its dependency on \(\sigma\) can generate first insights into which finite values of the scaling \(\sigma\) of the non-local kernel are needed. Further, the residual may be informative about which error a good (or even perfect) non-local quadratic approximation of the objective function should incur, and it could be optimized in the design of advanced sampling, scaling, or linesearch methods. A Rastrigin-type model for objective functions has yielded insightful results in the analysis of non-local optimization algorithms in [13, Theorem 5.2] and [21, 22]. Therefore, in Theorem 2.3, a relatively concrete bound for the residual of an optimal approximant in the Rastrigin-type setting is determined when the samples are independently drawn from a multivariate normal distribution. The result highlights the role of amplitude modulations for this model class. 
**Theorem 2.3** (Residual Bound for a Rastrigin Model).: _Let \(f\equiv r+g:\mathbb{R}^{n}\to\mathbb{R}\), where_ * \(r\) _is a convex quadratic,_ * \(g(x)=\sum_{j=1}^{m}a_{j}\cos(\langle s_{j},x\rangle+\psi_{j})\) _for all_ \(x\in\mathbb{R}^{n}\)_, with_ \(s_{1},\ldots,s_{m}\in\mathbb{R}^{n}\)_,_ \(a\in\mathbb{R}^{m}\)_,_ \(\psi_{1},\ldots,\psi_{m}\in\mathbb{R}\)_,_ * _such that for some symmetric, positive definite_ \(\Sigma\in\mathbb{R}^{n\times n}\)_, one has_ \[0<\varepsilon:=\min_{\begin{subarray}{c}j,\ell\in\mathbb{N}_{\leq m}\\ j\neq\ell\end{subarray}}\left\{\left\|s_{j}+s_{\ell}\right\|_{\Sigma},\left\|s_{j}-s_{\ell}\right\|_{\Sigma}\right\}.\] _Setting \(q\stackrel{{!}}{{\equiv}}r\) and \(\mathbb{P}_{\sigma}\stackrel{{!}}{{=}}\mathcal{N}(0,\sigma^{2}\Sigma)\), and \(S:=(s_{1},\ldots,s_{m})\in\mathbb{R}^{n\times m}\), where \(\sigma>0\), one has_ \[\int_{\mathbb{R}^{n}}\left\|\nabla q(x)-\nabla f(x)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}x)\leq 2\left\|a\right\|^{2}\left\|S\right\|_{\mathrm{F}}^{2}\left(1+(m-1)\exp(-\sigma^{2}\varepsilon^{2}/2)\right).\] Proof.: One has \[\int_{\mathbb{R}^{n}}\left\|\nabla q(x)-\nabla f(x)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}x)=\int_{\mathbb{R}^{n}}\left\|\nabla g(x)\right\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}x)\] (by \(q\equiv r\) and \(f\equiv r+g\)) \[=\int_{\mathbb{R}^{n}}\Big\|\sum_{j=1}^{m}a_{j}\sin(\langle s_{j},x\rangle+\psi_{j})s_{j}\Big\|^{2}\,\mathbb{P}_{\sigma}(\mathrm{d}x)\] (derivative of \(g\)) \[=\sum_{j=1}^{m}\sum_{\ell=1}^{m}a_{j}a_{\ell}s_{j}^{T}s_{\ell}\int_{\mathbb{R}^{n}}\sin(\langle s_{j},x\rangle+\psi_{j})\sin(\langle s_{\ell},x\rangle+\psi_{\ell})\,\mathbb{P}_{\sigma}(\mathrm{d}x)\] \[=\frac{1}{4}\sum_{j=1}^{m}\sum_{\ell=1}^{m}a_{j}a_{\ell}s_{j}^{T}s_{\ell}\Big(-e^{-i\psi_{j}-i\psi_{\ell}}\varphi_{\sigma}(-s_{j}-s_{\ell})+e^{i\psi_{j}-i\psi_{\ell}}\varphi_{\sigma}(s_{j}-s_{\ell})+e^{-i\psi_{j}+i\psi_{\ell}}\varphi_{\sigma}(-s_{j}+s_{\ell})-e^{i\psi_{j}+i\psi_{\ell}}\varphi_{\sigma}(s_{j}+s_{\ell})\Big)\] (Euler formula, linearity, with \(\varphi_{\sigma}\) the characteristic function of \(\mathbb{P}_{\sigma}\)), where \(\varphi_{\sigma}(u)=\exp(-\sigma^{2}\left\|u\right\|_{\Sigma}^{2}/2)\) for \(\mathbb{P}_{\sigma}=\mathcal{N}(0,\sigma^{2}\Sigma)\), writing \(\left\|u\right\|_{\Sigma}^{2}:=u^{T}\Sigma u\). Taking absolute values with \(|e^{i\,\cdot}|=1\) and \(|s_{j}^{T}s_{\ell}|\leq\left\|s_{j}\right\|\left\|s_{\ell}\right\|\), the diagonal terms \(j=\ell\) contribute at most \(\sum_{j=1}^{m}a_{j}^{2}\left\|s_{j}\right\|^{2}\) since \(|\varphi_{\sigma}|\leq 1\); for the off-diagonal terms, \(\left\|\pm s_{j}\pm s_{\ell}\right\|_{\Sigma}\geq\varepsilon\) gives \(|\varphi_{\sigma}(\pm s_{j}\pm s_{\ell})|\leq\exp(-\sigma^{2}\varepsilon^{2}/2)\), and with \(|a_{j}|\left\|s_{j}\right\||a_{\ell}|\left\|s_{\ell}\right\|\leq(a_{j}^{2}\left\|s_{j}\right\|^{2}+a_{\ell}^{2}\left\|s_{\ell}\right\|^{2})/2\) they contribute at most \((m-1)\exp(-\sigma^{2}\varepsilon^{2}/2)\sum_{j=1}^{m}a_{j}^{2}\left\|s_{j}\right\|^{2}\). Combining both contributions with \(\sum_{j=1}^{m}a_{j}^{2}\left\|s_{j}\right\|^{2}\leq\left\|a\right\|^{2}\left\|S\right\|_{\mathrm{F}}^{2}\) yields the claimed bound (with a factor of 2 to spare). \(\qed\)
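Before moving to the experiments, note that Theorem 2.1 and Corollary 2.1 translate line 3 of Algorithm 1 into a short computation. A minimal sketch, assuming scipy is available and that the resulting Sylvester equation has a unique solution whose symmetric part is positive definite (otherwise the trust-region fallback of Remark 1 applies):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def closed_form_direction(Zs, G, sigma):
    """Search direction via Theorem 2.1 / Corollary 2.1.

    Zs: (n, k) raw samples z_1..z_k; G: (n, k) gradients at x_t + sigma * z_j.
    """
    Z = 2.0 * sigma * Zs                       # Z as defined in Theorem 2.1
    z_bar = Z.mean(axis=1, keepdims=True)      # column mean, i.e. a column of Z_bar
    g_bar = G.mean(axis=1, keepdims=True)
    P = (Z - z_bar) @ Z.T                      # (Z - Z_bar) Z^T
    C = (G - g_bar) @ Z.T + Z @ (G - g_bar).T  # right-hand side of Corollary 2.1
    # Lyapunov-type equation A~ P + P^T A~ = C is the Sylvester equation
    # P^T X + X P = C in the unknown X = A~.
    A_tilde = solve_sylvester(P.T, P, C)
    H = A_tilde + A_tilde.T                    # = 4 A_t, so A_t + A_t^T = H / 2
    b = (g_bar - (H / 2.0) @ z_bar).ravel()    # b_t = g_bar - (A_t + A_t^T) z_bar
    dx = np.linalg.solve(H, -b)                # (A~ + A~^T) dx_t = -b_t
    return dx, b
```

The Sylvester form arises because \(\tilde{A}_{t}P+P^{T}\tilde{A}_{t}=C\) is exactly \(P^{T}X+XP=C\) in the unknown \(X=\tilde{A}_{t}\).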
## 3 Experiments ### Comparing the search directions \(\Delta x_{0}\), \(-b_{0}\), and \(-\overline{g}\) The quality of search directions is determined by their deviation from the global optimum. The first experiment measures this deviation with the Euclidean angle between the search direction and the vector that would yield the global optimum (known in the experiment). In a practical setting, however, it should be noted that both search directions that consistently deviate little and those that deviate little in particularly adverse settings are useful--especially when used jointly. In Algorithm 1, \(\Delta x_{0}\) denotes the non-local Newton step and \(-b_{0}\) the first-order term of the non-local quadratic model, while \(-\overline{g}\) denotes the empirical average gradient from Theorem 2.1. For the objective of this experiment, an ill-conditioned quadratic superimposed with a disturbance modeled by several high-frequency cosine functions is chosen. For the selected objective function, second-order information influences the optimal search direction. As in Theorem 2.3, this model is an adverse setting that still offers a sufficient degree of interpretation and generality. Therefore, the **inputs of Algorithm 1 are set as** * \(n\stackrel{{!}}{{=}}20\), \(f(x)\stackrel{{!}}{{=}}\langle x,\operatorname{diag}(1,1+(100/19),\ldots,100)x\rangle-\sum_{j=1}^{n}a\cos(sx_{j})\) for all \(x\in\mathbb{R}^{n}\), where \(a=10\) and \(s=20\pi\), * \(x_{0}\stackrel{{!}}{{\sim}}\operatorname{Unif}([-U,U]^{n})\), where \(\sigma_{0},U\stackrel{{!}}{{\in}}\{10^{-2},10^{-1},1,10,10^{2},10^{3}\}\), * \((z_{1},\ldots,z_{k})\sim\mathbb{P}_{k}\stackrel{{!}}{{=}}\mathcal{N}(0,I_{n})^{\otimes k}\), i.e., the product measure of the standard normal distribution of order \(k\stackrel{{!}}{{=}}3n/2\) with \((z_{1},\ldots,z_{k})\) being independent of \(x_{0}\). * The quantities \(\Delta x_{0},-b_{0}\) and \(-\overline{g}\) do not depend on the other inputs of Algorithm 1. The results of this experiment are shown in Figure 1. An **interpretation of the results of the first experiment** is that the non-local least-squares gradient estimator \(-b_{0}\) is considerably more robust than the gradient average \(-\overline{g}\), as the empirical cumulative probability of \(\operatorname{angle}(-b_{0},-x_{0})\) is larger than or almost equal to that of \(\operatorname{angle}(-\overline{g}_{0},-x_{0})\) for all scales of \((\sigma_{0},U)\). On large scales of \((\sigma_{0},U)\), where the signal-to-noise ratio of gradients of \(f\) is large, the non-local Newton direction \(\Delta x_{0}\) is instructive. Here an interpretation is that a large \(\sigma_{0}\) improves the estimator, whereas a large \(U\) makes the estimation problem easier. It is to be noted that estimation is done on relatively few samples, \(k=3n/2\). Still, the search directions of Algorithm 1, i.e., \(\Delta x_{0}\) and \(-b_{0}\), are clearly more useful than a random search direction in all presented settings. While the experiment studies a particular setting, it is expected that the results generalize whenever the signal-to-noise ratio and the global structure are similar. 
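The measurement of this experiment can be reproduced with a short, self-contained script. A sketch under assumptions: the quadratic's diagonal is taken as evenly spaced between the definition's endpoints 1 and 100, and a single draw at one \((\sigma_{0},U)\) scale is shown:

```python
import numpy as np

n, k, a, s = 20, 30, 10.0, 20 * np.pi
D = np.diag(np.linspace(1.0, 100.0, n))   # assumed even spacing of the diagonal

def grad_f(x):
    # f(x) = <x, D x> - sum_j a cos(s x_j)  =>  grad f(x) = 2 D x + a s sin(s x)
    return 2.0 * D @ x + a * s * np.sin(s * x)

def angle(u, v):
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

rng = np.random.default_rng(0)
sigma0, U = 10.0, 100.0                   # one of the tested scales
x0 = rng.uniform(-U, U, n)
Z = rng.standard_normal((k, n))
G = np.stack([grad_f(x0 + sigma0 * z) for z in Z])

Phi = np.hstack([Z, np.ones((k, 1))])     # least-squares fit as in the sketch of Section 1
Theta = np.linalg.lstsq(Phi, G, rcond=None)[0]
M = 0.5 * (Theta[:n] + Theta[:n].T)
b = Theta[n]
dx = np.linalg.lstsq(M, -b, rcond=None)[0]

for name, d in [("dx_0", dx), ("-b_0", -b), ("-g_bar", -G.mean(axis=0))]:
    print(name, f"{angle(d, -x0):6.1f} deg")  # -x0 points to the global minimum at 0
```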
Figure 1: Empirical cumulative probability (ecp) over \(\text{angle}(\cdot,-x_{0}):\equiv\arccos\left(\langle\cdot,-x_{0}\rangle/(\left\|\cdot\right\|\left\|x_{0}\right\|)\right)\) for different estimators of descent directions in the setting of Section 3.1 (an ill-conditioned Rastrigin-type function in dimension \(n=20\)), where \(100\) samples of the initial element \((x_{0},z_{1},\ldots,z_{k})\) are drawn and \(k=30\) gradients are used to estimate the search direction. The vector \(-x_{0}\) is the direction that points towards the global minimum. If the ecp of \(\text{angle}(-\overline{g}_{0},-x_{0})\) is not visible, it is almost equal to that of \(\text{angle}(-b_{0},-x_{0})\). **Not shown:** Gradient estimators (\(-b_{0}\) and \(-\overline{g}\)) degrade for \(U\leq 1\), whereas \(\Delta x_{0}\) degrades for \(U\leq 10\). Further, gradient estimation performs well even for small \(\sigma_{0}\) (i.e., \(\sigma_{0}=10^{-2}\)), while \(\Delta x_{0}\) degrades for \(\sigma_{0}\leq 10\). All estimators perform well for the local setting of \(\sigma_{0},U=10^{2}\). ### Benchmarking Algorithm 1 on selected functions with many local minima This experiment compares Algorithm 1, i.e., the simplistic non-local quasi-Newton method proposed in this work; the _Covariance matrix adaptation evolution strategy_ (CMA-ES) [10], an optimizer iteratively modeling up-to-second-moment search distributions from non-local function evaluations; and the _Broyden-Fletcher-Goldfarb-Shanno method_ [20, Chapter 8.1], a local quasi-Newton method that is randomly and independently reinitialized uniformly on \([-\sigma_{0}+x_{0},\sigma_{0}+x_{0}]^{n}\) upon convergence (the algorithm shall be called rBFGS). The goal is to provide evidence for the utility of non-local gradient information for global optimization in a specific, yet non-trivial and practically relevant, setting. The selected methods CMA-ES and rBFGS are distinct state-of-the-art methods that both model second-order information without explicit access to it (as does Algorithm 1). The functions that are used as a benchmark are selected to fit the design domain of Algorithm 1, i.e., roughly modeled by a "disturbed" convex function, and can be considered challenging problems for most optimizers. In particular, consider the functions defined for all \(x\in\mathbb{R}^{n}\) by \[f_{\text{levy}}(x):=\sin^{2}\big{(}\pi w(x_{1})\big{)}+\big{(}w(x_{n})-1\big{)}^{2}\big{(}1+\sin^{2}\big{(}2\pi w(x_{n})\big{)}\big{)}\\ +\sum_{i=1}^{n-1}\big{(}w(x_{i})-1\big{)}^{2}\big{(}1+10\sin^{2}\big{(}\pi w(x_{i})+1\big{)}\big{)}\,,\] \[\text{where }w(x_{i}):=1+\tfrac{x_{i}-1}{4}\text{ for all }i\in\mathbb{N}_{\leq n},\] \[f_{\text{salomon}}(x):=1-\cos(12\pi\left\|x\right\|)+\frac{3}{5}\left\|x\right\|\,,\text{ and}\] \[f_{\text{cigar}}(x):=an+\langle x,\text{diag}(1,1+100/(n-1),\ldots,100)x\rangle-\sum_{i=1}^{n}a\cos(sx_{i})\,,\text{ where }a=10\text{ and }s=20\pi\,.\] The **inputs of Algorithm 1 are set as** * \(n\stackrel{{!}}{{=}}50\), \(f\stackrel{{!}}{{\in}}\{f_{\text{levy}},f_{\text{salomon}},f_{\text{cigar}}\}\), \(x_{0}\stackrel{{!}}{{\sim}}\text{Unif}([-10,10]^{n})\), \(\sigma_{0}\stackrel{{!}}{{=}}10\), and * \((z_{1},\ldots,z_{k})\sim\mathbb{P}_{k}\stackrel{{!}}{{=}}\mathcal{N}(0,I_{n})^{\otimes k}\), i.e., the product measure of the standard normal distribution of order \(k=3n\) with \((z_{1},\ldots,z_{k})\) being independent of \(x_{0}\). 
* Define \[\text{scaling}(\sigma_{0},\sigma_{t},x_{t+1}-x_{t}):=\begin{cases}\text{scaling}(\sigma_{0},\sigma_{0},x_{t+1}-x_{t})&\text{if }\sigma_{t}<10^{-4}\\ \sigma_{t}/2&\text{else if }\left\|x_{t+1}-x_{t}\right\|<10^{-4}\\ \left\|x_{t+1}-x_{t}\right\|/2&\text{else if }\left\|x_{t+1}-x_{t}\right\|>2\sigma_{t}\\ \sigma_{t}&\text{else,}\end{cases}\] and \[\text{linesearch}(f,\Delta x_{t},-b_{t},x_{t}):=\operatorname*{arg\,min}_{x\in\mathcal{A}(\Delta x_{t},-b_{t},x_{t})}f(x)\,,\] where \[\mathcal{A}(\Delta x_{t},-b_{t},x_{t}):=\{x_{t}+(6/5)^{i}\Delta x_{t},x_{t}+(6/5)^{i}(-b_{t})\mid i\in\mathbb{Z}_{[-10,10]}\}.\] The initial scaling of CMA-ES, i.e., the size of the non-local kernel, was chosen to be the same as for Algorithm 1. Virtually all other parameters of CMA-ES are heavily tuned for the considered function classes and adaptive to the functions at hand. The local search of rBFGS is terminated (and restarted) when the gradient norm is smaller than \(10^{-4}\). The other parameters of rBFGS do not affect its performance significantly, as local search is extremely efficient on the considered functions. The reinitialization will likely not be improved much by gridded sampling as the experiments are done in 50 dimensions, i.e., reinitializations are very sparse. The results of this experiment are shown in Figure 2. 

Figure 2: **Top:** Benchmark of the proposed Algorithm 1, the zero-order stochastic search method CMA-ES, and the uniformly randomly reinitialized local quasi-Newton method rBFGS. The benchmark functions \(f_{\text{levy}},f_{\text{salomon}}\) and \(f_{\text{cigar}}\) in dimension \(n=50\), defined in Section 3.2, are selected for their pathological structure with many local minima. The global minimum of all functions is 0. Function and gradient evaluations (including the linesearch) are counted equally. All algorithms are initialized independently for each run uniformly at random on \([-10,10]^{n}\). **Bottom:** Visualization of 2-dimensional analogs of the benchmark functions. Lighter/reddish colors correspond to relatively large function values and darker/bluish colors correspond to relatively small function values. 

An **interpretation of the results of the second experiment** is that for functions with benign second-order structure, such as \(f_{\text{levy}}\), Algorithm 1 and CMA-ES perform similarly well on large scales, as non-local approximations of up-to-second-order objective structure seem to yield good search directions and both methods estimate them. Algorithm 1 seems to have a slight advantage due to gradients being more informative than function values. In the particular case of \(f_{\text{levy}}\), the likelihood of reinitializing in a relatively good basin of attraction is small, such that rBFGS performs relatively poorly. It is plausible that due to its robust linesearch, Algorithm 1 enters a basin of attraction of \(f_{\text{levy}}\) close to the minimum, and fast local convergence using second-order information is possible. This is evidence for the hypothesis that non-local methods profit from linesearch. Similar behavior is observed for \(f_{\text{salomon}}\), while here it is sufficiently likely that rBFGS reinitializes in a good basin of attraction given the available budget. Therefore, rBFGS performs relatively well on \(f_{\text{salomon}}\). The reduction in the rate of convergence of Algorithm 1 on \(f_{\text{salomon}}\) is likely due to its coarse stepping in the scaling function, possibly paired with the non-smoothness of \(f_{\text{salomon}}\) at the origin. 
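For reference in the following discussion, the three benchmark functions defined above can be sketched in a few lines of numpy; the diagonal of \(f_{\text{cigar}}\) is assumed to interpolate the definition's endpoints 1 and 100 evenly:

```python
import numpy as np

def f_levy(x):
    w = 1 + (x - 1) / 4
    return (np.sin(np.pi * w[0]) ** 2
            + (w[-1] - 1) ** 2 * (1 + np.sin(2 * np.pi * w[-1]) ** 2)
            + np.sum((w[:-1] - 1) ** 2 * (1 + 10 * np.sin(np.pi * w[:-1] + 1) ** 2)))

def f_salomon(x):
    r = np.linalg.norm(x)
    return 1 - np.cos(12 * np.pi * r) + 0.6 * r

def f_cigar(x, a=10.0, s=20 * np.pi):
    n = x.shape[0]
    d = np.linspace(1.0, 100.0, n)   # diagonal 1, ..., 100 (assumed even spacing)
    return a * n + np.sum(d * x ** 2) - np.sum(a * np.cos(s * x))
```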
Despite the forgetfulness of Algorithm 1 (function evaluations are not retained past an iteration), Algorithm 1 seems to outperform CMA-ES in estimating second-order information in domains of benign signal-to-noise ratio of \(f_{\text{cigar}}\). CMA-ES, on the contrary, seems to perform relatively better in the domain of poor signal-to-noise ratio close to the optimum of \(f_{\text{cigar}}\). The many relatively bad local minima of \(f_{\text{cigar}}\) yield a relatively bad performance of rBFGS. ### Solving Problem 4 of _SIAM News: A Hundred-dollar, Hundred-digit Challenge_ The _Hundred-dollar, Hundred-digit Challenge_, posed on January 2, 2002 in SIAM News [10], asks to solve 10 numerical problems with a precision of 10 digits each. Problem 4 of the challenge is the minimization of \[x\in\mathbb{R}^{2}\longmapsto f_{\text{siam}}(x):=\exp\big{(}\sin(50x_{1})\big{)}+\sin\big{(}60\exp(x_{2})\big{)}+\sin\big{(}70\sin(x_{1})\big{)}\] \[+\sin\big{(}\sin(80x_{2})\big{)}-\sin\big{(}10(x_{1}+x_{2})\big{)}+(x_{1}^{2}+x_{2}^{2})/4\,.\] Although many fundamentally different solutions to Problem 4 exist [11], it was proposed as a particularly challenging example for many optimization methods. Therefore, it is interesting to verify whether the newly proposed Algorithm 1 can solve this familiar challenge. When initialized uniformly at random on \([-100,100]^{2}\), Algorithm 1 solves the problem in many cases within 30000 evaluations. The results of the experiment are visualized in Figure 3. For this experiment the **inputs of Algorithm 1 were set as** * \(n=2,f\stackrel{{!}}{{=}}f_{\text{siam}}\), \(x_{0}\stackrel{{!}}{{\sim}}\text{Unif}([-100,100]^{n})\), \(\sigma_{0}\stackrel{{!}}{{=}}1\), and * \((z_{1},\dots,z_{k})\sim\mathbb{P}_{k}\stackrel{{!}}{{=}}\mathcal{N}(0,I_{n})^{\otimes k}\), i.e., the product measure of the standard normal distribution of order \(k=3\) with \((z_{1},\dots,z_{k})\) being independent of \(x_{0}\). * Define \[\text{scaling}(\sigma_{0},\sigma_{t},x_{t+1}-x_{t}):=\begin{cases}\text{scaling}(\sigma_{0},\sigma_{0},x_{t+1}-x_{t})&\text{if }\sigma_{t}<10^{-4}\\ 10\sigma_{t}/11&\text{else if }\left\|x_{t+1}-x_{t}\right\|<10^{-4}\\ 10\left\|x_{t+1}-x_{t}\right\|/11&\text{else if }\left\|x_{t+1}-x_{t}\right\|>2\sigma_{t}\\ \sigma_{t}&\text{else.}\end{cases}\] All other elements of Algorithm 1 are defined as in Section 3.2. ## 4 Discussion and Future Work This work asked the question of whether practically realistic numbers of gradient evaluations can yield useful information for minimizing functions with many suboptimal local minima. The proposed simplistic Algorithm 1 generalizes the quasi-Newton method based on a non-local approximation of the objective function. When optimizing a "disturbed" quadratic function, Algorithm 1 is a consistent approximator of the underlying quadratic model in the sense of Theorem 2.2. Further, at least one of the two generated search directions outperforms gradient averaging in Section 3.1. Similarly, even with simple sampling, scaling, or linesearch methods, Algorithm 1 performs comparably to or outperforms state-of-the-art methods on the minimization of the differentiable functions with many suboptimal local minima selected in Section 3.2. An additional motivation for the extension of the presented method is that Algorithm 1 solves Problem 4 of the _SIAM News: A Hundred-dollar, Hundred-digit Challenge_ [15, 16]--likely as (one of) the first non-global, non-local iterative methods with search directions based on gradients. 
The results of this work motivate the design/analysis/integration of sophisticated sampling, scaling, and linesearch methods that adapt to the particular objective function at hand. More precisely, promising future work builds on the described results by * Regularizing and improving the approximation of the quadratic model based on objective evaluations; * Designing improved linesearch and scaling methods towards an objective-adaptive optimizer; * Integrating objective and objective gradient evaluations of multiple algorithm iterations to improve the sample efficiency. Figure 3: **Left:** Benchmark of the proposed Algorithm 1 on Problem 4 of the _Hundred-dollar, Hundred-digit Challenge_, posed on January 2, 2002 in SIAM News [15]. The algorithm is initialized independently for each run uniformly at random on \([-100,100]^{2}\). Function and gradient evaluations (including the linesearch) are counted equally. **Right:** Visualization of \(f_{\text{siam}}\) on \([-1,1]^{2}\), which contains the global minimum \(-3.306868647475\dots\) at \(x^{*}=(-0.0244\dots,0.2106\dots)\). Lighter/reddish colors correspond to relatively large function values and darker/bluish colors correspond to relatively small function values.
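For completeness, a minimal numpy sketch of the Problem 4 objective \(f_{\text{siam}}\) from Section 3.3; the minimizer coordinates below are the truncated reference values quoted in the caption of Figure 3, so the printed value is only close to the known global minimum:

```python
import numpy as np

def f_siam(x):
    x1, x2 = x
    return (np.exp(np.sin(50 * x1)) + np.sin(60 * np.exp(x2))
            + np.sin(70 * np.sin(x1)) + np.sin(np.sin(80 * x2))
            - np.sin(10 * (x1 + x2)) + (x1 ** 2 + x2 ** 2) / 4)

# Evaluating near the reference minimizer should give a value close to the
# known global minimum -3.306868647475...
print(f_siam(np.array([-0.0244, 0.2106])))
```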
2305.06841
Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models
While the Large Language Models (LLMs) dominate a majority of language understanding tasks, previous work shows that some of these results are supported by modelling spurious correlations of training datasets. Authors commonly assess model robustness by evaluating their models on out-of-distribution (OOD) datasets of the same task, but these datasets might share the bias of the training dataset. We propose a simple method for measuring a scale of models' reliance on any identified spurious feature and assess the robustness towards a large set of known and newly found prediction biases for various pre-trained models and debiasing methods in Question Answering (QA). We find that while existing debiasing methods can mitigate reliance on a chosen spurious feature, the OOD performance gains of these methods can not be explained by mitigated reliance on biased features, suggesting that biases are shared among different QA datasets. Finally, we evidence this to be the case by measuring that the performance of models trained on different QA datasets relies comparably on the same bias features. We hope these results will motivate future work to refine the reports of LMs' robustness to a level of adversarial samples addressing specific spurious features.
Lukáš Mikula, Michal Štefánik, Marek Petrovič, Petr Sojka
2023-05-11T14:35:00Z
http://arxiv.org/abs/2305.06841v2
# Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models ###### Abstract While the Large Language Models (LLMs) dominate a majority of language understanding tasks, previous work shows that some of these results are supported by modelling spurious correlations of training datasets. Authors commonly assess model robustness by evaluating their models on out-of-distribution (OOD) datasets of the same task, but these datasets might _share_ the bias of the training dataset. We propose a simple method for measuring the scale of models' reliance on any identified spurious feature and assess the robustness towards a large set of known and newly found prediction biases for various pre-trained models and debiasing methods in Question Answering (QA). We find that the reported OOD gains of debiasing methods cannot be explained by mitigated reliance on biased features, suggesting that biases are _shared_ among QA datasets. We further evidence this by measuring that the performance of OOD models depends on bias features _comparably_ to the ID model, motivating future work to refine the reports of LLMs' robustness to a level of known spurious features. ## 1 Introduction Unsupervised pre-training and vast parametrization (Devlin et al., 2018; Radford and Narasimhan, 2018) enable Large Language Models (LLMs) to reach close-to-human accuracy on complex downstream tasks such as Natural Language Inference, Sentiment Analysis, or Question Answering. However, previous work shows that these outstanding results can partially be attributed to models' reliance on non-representative patterns in training data shared with the test set, such as the high lexical intersection between the entailed hypothesis and the premise (Tu et al., 2020) in Natural Language Inference (NLI) or the intersection of the question and answer vocabulary (Shinoda et al., 2021) in extractive Question Answering (QA). A primary motivation for mitigating models' reliance on such features is to enhance their _robustness_ in practice, avoiding fragility to systematic errors when responding to open-ended user requests. Models' robustness is commonly assessed by measuring prediction quality on samples from other out-of-distribution (OOD) datasets (Clark et al., 2019; Karimi Mahabadi et al., 2020; Utama et al., 2020; Xiong et al., 2021). However, the OOD datasets might _share_ training biases introduced by shared features, such as data collection methodology or human annotators' background (Mehrabi et al., 2021). In such cases, conversely, a model reliant on biased correlations can reach a _higher_ OOD score despite being more fragile to adversarial inputs exploiting the biased correlation. With this motivation, we propose a framework to evaluate models' reliance on a biased feature in prediction by _splitting_ evaluation data into two groups based on a biased feature and _comparing_ the prediction quality on these two groups (Fig. 1). This way, we assess a reliance on bias of diverse QA models for several bias features, both previously reported and newly identified in this work. 

Figure 1: We quantify model reliance on a spurious feature using bootstrapped evaluation on segments of data separated by exploiting chosen bias (left) and subsequently, by measuring the difference in model’s performance over these two groups (right), that we refer to as _Prediction bias_ (§3). 

Finally, we assess the efficiency of the state-of-the-art debiasing methods in mitigating reliance on spurious features over a resampling baseline and compare the findings to the commonly-assessed OOD performance. 
We find that avoiding reliance on spurious features does not imply improvements in OOD performance; in many cases, debiasing methods mitigate the model's prediction bias, but the OOD performance drops, while, counterintuitively, a magnification of bias reliance can also bring large OOD gains. Aiming to explain this, we directly evaluate the prediction bias of models trained on different datasets and confirm that even models trained on OOD datasets often rely on the _same_ spurious correlations as the ID models. This finding motivates the presented assessment of model robustness towards known biases, in addition to OOD performance. This paper is structured as follows. Section 2 overviews data biases observed in NLP datasets, recent debiasing methods, and previous methods for measuring models' inclination to spurious correlations. Section 3 presents our method for measuring the significance of specific biases. We follow in Section 4 with details on our evaluation setup, including the tested debiasing methods, the addressed bias features, and the design of a set of heuristics that can exploit them. Subsequently, in Section 5, we measure and report models' robustness to biases and OOD datasets before and after applying the selected debiasing methods, and wrap up our observations in Sections 6 and 7. **Problem definition.** Given a set of inputs \(X=x_{1..i}\) with corresponding labels \(Y=y_{1..i}\) from a dataset \(\mathcal{D}_{\textit{ID}}\), a model \(M\) learns a _task_ \(\mathcal{T}\) by identifying _features_ \(\mathcal{F}_{1..n}\) that map each \(x_{j}\) to a corresponding \(y_{j}\), assuming that the learned features must be _consistent_ with \(\mathcal{D}_{\textit{ID}}\). Since the learned \(\mathcal{F}_{1..n}\) are distributed in \(M\) and cannot be directly evaluated, we assess whether the learned features are _robust_ for the task \(\mathcal{T}\) by evaluating \(M\) on samples \(X_{\textit{OOD}}\) of the same task, but drawn from \(\mathcal{D}_{\textit{OOD}}\not\approx\mathcal{D}_{\textit{ID}}\); we assume that if \(\mathcal{F}_{1..n}\in M\) are robust, the model will also perform well on \(X_{\textit{OOD}}\). However, the consistency of the learned \(\mathcal{F}_{k}\) with both \(X_{\textit{ID}}\) and \(X_{\textit{OOD}}\) is merely a necessary and not a sufficient condition for \(\mathcal{F}_{k}\) to be robust; if there exists a pair \((x,y)\) such that the pair is a _valid_ sample of the task \(\mathcal{T}\) but is not consistent with \(\mathcal{F}_{k}\), we denote \(\mathcal{F}_{k}\) as _spurious_ or _bias features_ for \(\mathcal{T}\) and refer to models' reliance on such features as _prediction bias_. ## 2 Background **Spurious correlations of NLP datasets.** Previous work analysing LLMs' error cases identified numerous false assumptions that LLMs use in prediction and that can be misused to notoriously draw wrong predictions from the model. In Natural Language Inference (NLI), where the task is to decide whether a pair of sentences entail one another, McCoy et al. (2019) identify LLMs' reliance on a lexical overlap and on specific shared syntactic units such as the constituents in the processed sentence pair. Asael et al. (2021) identify the model's sensitivity to meaning-invariant structure permutations. Similarly, Chaves and Richter (2021) identify BERT's reliance on the invariant morpho-syntactic composition of the input. 
In Question Answering, LLMs often rely on the positional relation of the question and possible answer words, such as falsely assuming their proximity (Jia and Liang, 2017). Bartolo et al. (2020) find that models tend to assume that questions and answers contain similar keywords, remaining vulnerable to samples with none or multiple occurrences of the keywords in the context. Ko et al. (2020) show models' preference for answers in the first two sentences of the context, which are statistically most likely to contain the answer to human-curated questions. A promising direction for circumventing the biases introduced in data collection is adversarial data collection (Jia and Liang, 2017; Bartolo et al., 2020), where the annotators collect the dataset with the intention of fooling a possibly-biased model, optionally refining the model-in-the-loop over several iterations. Still, some doubts remain; models trained on adversarial data may work better on adversarial datasets but underperform on other OOD datasets (Kaushik et al., 2021), or introduce their own set of biases (Kovatchev et al., 2022). **Debiasing methods.** A well-established line of work proposes to address the known dataset biases in the training process. Karimi Mahabadi et al. (2020) and He et al. (2019) obtain the debiased model by (i) training a _biased model_ that exploits the unwanted bias, and (ii) training the debiased model as a complement to the biased one in a Product-of-Experts (PoE) framework (Hinton, 2002). Clark et al. (2019) extend this framework in the LearnedMixin method, learning to weigh the contribution of the biased and debiased model in the complementary ensemble. Niu and Zhang (2021) simulate a model of a non-biased, out-of-distribution dataset through counterfactual reasoning (Niu et al., 2021) and use the resulting distribution as the distillation target (Hinton et al., 2015), similarly to LearnedMixin. Biased samples can also be identified in other ways, for instance, by the model's overconfidence (Wu et al., 2020). Complementary to PoE approaches, other works apply model confidence regularization on the samples denoted as biased. Feng et al. (2018) and Utama et al. (2020) downweigh the predicted probability of the examples marked as biased by humans or a model. Xiong et al. (2021) find that a more precise calibration of the biased model might bring further benefits to this framework, consistent with our observations. Distributionally Robust Optimization (DRO) methods are another group of reweighting algorithms, addressing the assumed imperfection of training datasets by (i) segmenting data into groups of diverse covariate shifts (Sagawa et al., 2020) and (ii) minimizing the worst-case risk over all groups (Zhou et al., 2021). We note that our bias measurement method closely relates to group DRO methods and can, for instance, also serve as a method for quantifying per-group risk. **Robustness measures.** Most of the work on enhancing models' robustness evaluates the acquired robustness on OOD datasets. In some cases, the evaluation utilizes datasets specially constructed to exploit the biases typical for a given task, such as HANS (McCoy et al., 2019) for NLI, PAWS (Zhang et al., 2019) for Paraphrase Identification, or AdversarialQA (Bartolo et al., 2020) for Question Answering, which we also use in our evaluations. Similar to us, some previous work quantified dataset biases by splitting data into two subsets, comparing model behaviour between these groups. McCoy et al.
(2019) perform such evaluation over MNLI, demonstrating large margins in accuracy over the two groups and superior robustness of BERT over previous models. Similarly, Utama et al. (2020) compare two groups based on prediction confidence. Our Prediction bias measure follows a similar approach in QA but provides a more reliable assessment thanks to bootstrapping. Compared to the previous work, we assess models' reliance on a range of 7 spurious features, making the overall conclusions more robust. An ability to measure a model's reliance on undesired features is well-applicable in quantifying socially problematic biases. Previous work also utilizes specialized domain knowledge in models' bias evaluation but might not scale to other bias features; Parrish et al. (2022) collect ambiguous contexts and assess the models' inclination to utilize stereotypes as prediction features. Bordia and Bowman (2019) quantify LM's gender bias by the co-occurrence of selected gender-associated words with gender-ambiguous words, such as _doctor_. ## 3 Measuring Prediction Bias We assess a model's sensitivity to a known spurious feature in the following sequence of steps. This methodology is visualized in Figure 1, described in Algorithm 1 and can be used to measure biases of any other QA model using the project repository1. Footnote 1: [https://github.com/MIR-MU/isbiased](https://github.com/MIR-MU/isbiased) We start by (i) implementing a _heuristic_, i.e. a method \(h:X\rightarrow\mathbb{R}\), that for _all_ samples of dataset \(X\) computes an _attribute_ \(A_{h}\in\mathbb{R}\) corresponding to the feature \(\mathcal{F}\) that we suspect to be non-representative, yet predictive for our training set, and (ii) evaluating \(h\) on the evaluation dataset \(X\). (iii) We choose a threshold \(T_{h}\) that we use to (iv) split the dataset into two segments by \(A_{h}\). Finally, (v) we evaluate the assessed model \(M\) on both of these segments, in our case using the Exact match measure, and (vi) measure model **prediction bias** as the _difference_ in performance between these two groups. Using bootstrapped evaluation, we mitigate the effect of randomness by only comparing selected quantiles of confidence intervals. We propose to perform a hyperparameter search for the heuristic's threshold \(T_{h}\) that maximizes the measured distance. **Interpretation.** Given the reliance on bootstrapping, we state that the model's _true_ performance polarisation is \(0.975\times 0.975=95.06\%\)-likely to be equal to or higher than the measured Prediction bias (with \(q^{\uparrow}=0.975,q^{\downarrow}=0.025\) as in Algorithm 1). Nevertheless, one should note that the proposed measure should not be used standalone but rather as a complement to an ID evaluation, as one can reduce the Prediction bias merely by _lowering_ the performance on the better-performing ID subset. Therefore, we report the values of Prediction bias together with the performance on the worse-performing, i.e., presumably non-biased, split. Another consideration concerns the "natural" polarisation of difficulty between samples; that is, the portion of Prediction bias that can be explained by features \(\mathcal{F}\) that are _representative_ for the evaluated task (§1). One should note that the reduction of Prediction bias is meaningful only up to the level of the natural sample difficulty. 
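Concretely, the split-and-bootstrap measurement of steps (iii)-(vi) can be sketched as follows; a minimal numpy sketch assuming per-sample attributes and Exact match indicators are precomputed, with the bootstrap sizes (100 repeats of 800 samples) and the clipping at zero being our assumptions:

```python
import numpy as np

def prediction_bias(attr, correct, threshold, n_boot=100, n_samples=800,
                    q_lo=0.025, q_hi=0.975, seed=0):
    """Sketch of the Prediction bias measure.

    attr:    per-sample heuristic attribute A_h (numpy array)
    correct: per-sample Exact match indicator, 0/1 (numpy array)
    Returns the bootstrapped gap between the better- and worse-performing split.
    """
    rng = np.random.default_rng(seed)
    groups = [correct[attr <= threshold], correct[attr > threshold]]

    def boot_means(g):  # bootstrapped Exact match of one split
        idx = rng.integers(0, len(g), size=(n_boot, n_samples))
        return g[idx].mean(axis=1)

    means = [boot_means(g) for g in groups]
    hi, lo = sorted(means, key=lambda m: m.mean(), reverse=True)
    # Compare conservative quantiles: the true polarisation exceeds the
    # returned value with probability >= 0.975 * 0.975.
    return max(0.0, np.quantile(hi, q_lo) - np.quantile(lo, q_hi))
```

A threshold search would simply call `prediction_bias` over a grid of candidate thresholds and keep the maximizing \(T_{h}\).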
The validation set of SQuAD contains the annotations by three annotators that we use to quantify the level of Prediction bias that can be explained by the questions' natural difficulty (further denoted as the _Human_ model); we report the minimum over the Prediction biases of the annotators among each other. Finally, even though we perform a hyperparameter search for optimal heuristics' thresholds \(T_{h}\) feasible for a given size of dataset splits, there are no guarantees on the maximality of the found \(T_{h}\). Hence, Prediction bias only provides the _lower bounds_ of the model's worst-case polarisation. ## 4 Experiments Our main objective is to assess the efficiency of different training decisions in mitigating the reliance of the model on spurious correlations that we assume to be present in a dataset. In QA, we identify the spurious covariates in the SQuAD dataset (Rajpurkar et al., 2016), with existing work documenting a variety of learnt spurious correlations. For each suspected bias feature, we first describe and implement the exploiting heuristics that we use to segment groups in the Prediction bias measure (§4.1). Subsequently, we observe the impact of the selected pre-training strategies (§4.2) and debiasing methods designed to address the over-reliance on biased features (§4.3-§4.4) on the Prediction bias and OOD performance of the resulting models. ### Biases and Exploiting Heuristics Our work extends the list of previously-reported QA biases with two novel bias features, based on our experience, that we later assess as significant. The spurious features newly identified in this work are preceded by +. Together with each bias, we also briefly describe its exploiting heuristic computing the non-representative attribute \(A_{h}\) (Algorithm 1). **Distance of Question words from Answer words (_word-dist_).** Jia and Liang (2017) propose that models are prone to return answers close to occurrences of the question's vocabulary in the context. Hence, _word-dist_ measures how close the closest question word is to the first answer in the context and computes the attribute (\(A_{h}\)) as the number of words between the closest question word and the answer span. **Similar words between Question and Context (_sim-word_).** Shinoda et al. (2021) report the common occurrence of a high lexical overlap between the question and the correct answer over QA datasets. In the _sim-word_ heuristic, we represent the lexical overlap by the number of shared words between the question and the context. Both are defined as sets, and the intersection size of these two sets is computed as the heuristic's evaluation (\(A_{h}\)). **Answer position in Context (_ans-pos_).** Ko et al. (2020) report that QA models may learn to falsely assume the answer's occurrence in the first two sentences. The exploiting heuristic first segments the context into sentences, then identifies the sentence containing the answer and yields as (\(A_{h}\)) the rank of that sentence within the context. **Cosine similarity of Question and Answer (_cossim_).** Clark et al. (2019) use the TF-IDF similarity as a biased model for QA, implicitly identifying a bias in the model's undesired reliance on the match of keywords between the question and the retrieved answer. 
We exploit this feature by (i) fitting the TF-IDF model on all SQuAD contexts, (ii) inferring the TF-IDF vectors of both questions and their corresponding answers, and (iii) returning the scalar (\(A_{h}\)) as the cosine similarity between the TF-IDF vectors of the question and the answer. **Answer length (_ans-len_).** Bartolo et al. (2020) show that QA models trained on SQuAD make errors much more often on questions asking for longer answers, implicitly identifying models' reliance on the assumption that the answer comprises at most a few words. We exploit this feature by simply computing \(A_{h}\) as the length of the answer. **+Number of Question's Named Entities in Context (_sim-ents_).** We suspect that the in-context presence of multiple named entities, such as multiple personal names or locations, might confuse the QA model's prediction. This might suggest that models tend to reduce the QA task to a simpler yet irrelevant problem of Named Entity Recognition. We utilize a pre-trained BERT NER model provided within the spaCy library (Honnibal and Montani, 2017) to identify named entities of the _question type_ (i.e., _personal names_ if the question starts with "Who"). Then, we count \(A_{h}\) as the number of matching named entities in the context. **+Position of Question's subject relative to the correct Answer in Context (_subj-pos_).** Our observations suggest that the position of the question's subject in the context impacts the predicted answer spans of QA models. In the corresponding heuristic, using the spaCy library, we (i) identify the question's subject expression and (ii) locate its occurrences in the context. We (iii) locate the answer span and compute \(A_{h}\) as the relative position of the answer: either before the subject, after the subject, or after multiple occurrences of the question's subject. ### Evaluated Models To estimate the impact of selected pre-training strategies on the robustness of the resulting model, we conventionally fine-tune a set of diverse pre-trained LLMs for extractive QA. We alternate between the following models: BERT-Base (Devlin et al., 2019), RoBERTa-Base and RoBERTa-Large (Liu et al., 2019), Electra-Base (Clark et al., 2020), and T5-Large (Raffel et al., 2020). This selection allows us to outline the impact of various features on the robustness of the final QA model: (i) pre-training data volume (BERT-Base vs RoBERTa-Base), (ii) model size (RoBERTa-Base vs RoBERTa-Large), (iii) pre-training objective (BERT-Base vs Electra-Base), or (iv) extractive vs. generative prediction mode (T5 vs. others). We also evaluate the prediction bias of recent multi-task in-context learners, without fine-tuning: T0 (Sanh et al., 2022), trained for zero-shot in-context learning excluding SQuAD, and Flan-T5 (Chung et al., 2022), trained on a mixture of more than 1,800 tasks, including SQuAD. ### Debiasing Baseline: Resampling (ReSam) Based on the heuristics and their tuned configuration, our baseline method performs simple super-sampling of the underrepresented group (\(X_{1}\) or \(X_{2}\) in Algorithm 1) until the two groups are represented equally. This approach shows the possibility of bias reduction by simply normalizing the distribution of the biased samples in the dataset, requiring only the identification of the members of the under-represented group. ReSam closely follows the routine of Algorithm 1 and splits the data by the optimal threshold of the attributes of the heuristics corresponding to each addressed bias. 
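For concreteness, two of the heuristics from §4.1 can be sketched in a few lines; a minimal sketch assuming a simplified regex tokenization and a hypothetical SQuAD-style sample:

```python
import re

def word_dist(question, context, answer_start):
    """word-dist: number of words between the answer span and the closest
    occurrence of any question word in the context (simplified tokenization)."""
    tokens = re.findall(r"\w+", context.lower())
    ans_tok = len(re.findall(r"\w+", context[:answer_start].lower()))
    q_words = set(re.findall(r"\w+", question.lower()))
    positions = [i for i, t in enumerate(tokens) if t in q_words]
    return min(abs(i - ans_tok) for i in positions) if positions else len(tokens)

def sim_word(question, context):
    """sim-word: size of the intersection of question and context vocabularies."""
    q = set(re.findall(r"\w+", question.lower()))
    c = set(re.findall(r"\w+", context.lower()))
    return len(q & c)

# Hypothetical example sample:
sample = {"question": "Who composed the symphony?",
          "context": "The symphony was composed by Gustav Mahler in 1888.",
          "answer_start": 29}
print(word_dist(sample["question"], sample["context"], sample["answer_start"]))
print(sim_word(sample["question"], sample["context"]))
```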
### Assessed Debiasing Methods

We assess the efficiency of eliminating Prediction bias for two representatives of diverse debiasing methods. In addition to Prediction bias, we also report the resulting performance on three OOD datasets. We follow the reference implementations as closely as possible while scaling the scope of the experiments from one to seven separately-addressed biases. A complete description of the training settings is in Appendix B.2.

LearnedMix (LMix) method Clark et al. (2019) is a popular adaptation of the Product-of-Experts framework Hinton (2002), with a set of refinements (§2), that uses a _biased model_ as a complement of the trained debiased model in a weighted composition. We follow the reference implementation with the following alterations. Instead of the BiDAF model, we use the stronger BERT-Base as the trained debiased model. Instead of a TF-IDF-based bias model custom-tailored for a single bias type, we opt for a universal approach to obtaining biased models (Appendix B.2.1). We rerun the parameter search and use a different entropy penalty (\(\text{H}=0.4\)) throughout all experiments.

Confidence Regularization (CReg) aims to reduce the model's confidence, i.e. the predicted score, over samples marked as biased. Utama et al. (2020) propose to reduce the confidence on the biased samples using distillation from a conventional QA teacher model, scaled down by the relative scores of a biased predictor. In our experiments, we consistently use BERT-Base for both the teacher and the bias model. To enable comparability with LMix, we use identical bias models for both methods (described in Appendix B.2.1).

## 5 Results

### Impact of Pre-training

Figure 2 compares the Prediction bias of the fine-tuned models of diverse pre-training data volumes and objectives, followed by the in-context learning models and the human reference. The results suggest that increased amounts of pre-training data of the base models (cf. BERT-Base and others) might mitigate the models' reliance on the bias. The results are less conclusive in a comparison of different pre-training objectives (cf. RoBERTa-Base and Electra-Base); while Electra is less polarised in 4 out of 7 cases, the differences are minimal. The largest reduction of Prediction bias (\(-1.2\) on average) is achieved by increasing the model size (RoBERTa-Large).

Figure 2: **Prediction bias per pre-trained model.** The worse-performing split performance (lower bars) and Prediction bias (upper bars, sorted by group average) of QA models trained from different pre-trained LLMs, trained and evaluated on SQuAD for Exact match. Per-group bootstrapping of 100 repeats with 800 samples.

Analogously, Figure 3 compares OOD performance on selected QA datasets: AdversarialQA Jia and Liang (2017), NaturalQuestions Kwiatkowski et al. (2019) and TriviaQA Joshi et al. (2017). The resulting robustness ranking is mainly consistent with the Prediction bias ranking, with the exception of generative fine-tuning (T5), which outperforms the others on OOD datasets but not in reducing the reliance on spurious features.

Figure 3: **OOD performance per pre-trained model.** Comparison of F1-score of different models fine-tuned on SQuAD and evaluated on the listed OOD datasets.

### Prediction bias of OOD models

Figure 4 compares Prediction bias over the least-biased RoBERTa-Large models trained on different datasets. All evaluations are split on the heuristics' thresholds \(T_{h}\) optimal for the SQuAD model, which allows comparability to the shared human reference but implies that a larger Prediction bias might exist for the OOD models. We see that all Prediction biases learnt on SQuAD are also learnt from at least one OOD dataset. For the TriviaQA model, _all_ types of biases identified in SQuAD are magnified.
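The per-group evaluations in Figures 2-5 use bootstrapping (100 repeats of 800 samples). A minimal sketch of such a bootstrapped group gap is below; the inputs are assumed to be precomputed per-example exact-match scores for the two splits, and the aggregation (mean gap over resamples) is our simplified stand-in for the paper's measure, not its exact definition.

```python
import random

def bootstrap_group_gap(em_group1, em_group2, repeats=100, n=800, seed=0):
    """Bootstrapped exact-match gap between the easier and harder split.

    `em_group1`, `em_group2`: lists of per-example exact-match scores (0/1)
    for the two groups induced by a heuristic's threshold T_h.
    """
    rng = random.Random(seed)
    gaps = []
    for _ in range(repeats):
        s1 = [rng.choice(em_group1) for _ in range(n)]
        s2 = [rng.choice(em_group2) for _ in range(n)]
        gaps.append(sum(s1) / n - sum(s2) / n)
    return sum(gaps) / repeats
```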
We specifically note the comparison of the Prediction bias of the SQuAD model to the model trained on AdversarialQA, collected adversarially to a SQuAD model; we find that the AdversarialQA model is the only OOD model lowering the reliance on all biased features that are above the level of natural bias, supporting the argued efficiency of adversarial data collection in addressing original dataset biases.

Figure 4: **Prediction bias per dataset.** The worse-performing split performance (lower bars) and Prediction bias (upper bars) of RoBERTa-Large trained on different QA datasets, evaluated on a validation split of SQuAD for Exact match. All evaluation splits are identical, identified as maximal for the SQuAD-trained model (Appx. C).

### Impact of Debiasing

Figure 5 compares the biases of Question Answering models obtained with the three debiasing methods (§4.3 - §4.4), applied to the most-biased BERT-Base model. We observe that the debiasing methods are not consistent in the efficiency of mitigating the reliance on the addressed bias feature. In fact, only the ReSam baseline lowers the bias of the original model consistently. We attribute this inconsistency to the methods' sensitivity to the _bias model_, further discussed in §6. While LMix is the most efficient in addressing Prediction bias on average, consistently with Clark et al. (2019) we see that this often comes at the price of ID performance.

Figure 5: **Prediction bias per debiasing method.** The worse-performing split performance (lower bars) and Prediction bias (upper bars) of BERT-Base trained using the selected debiasing methods, evaluated for Exact match on validation SQuAD. Per-group evaluations were measured using bootstrapping of 100 repeats with 800 samples.

Table 1 enumerates the OOD performance of the debiased models over three diverse QA datasets. By comparing these results to Prediction bias (Fig. 5), we see many cases where a reduction of Prediction bias cannot explain the improvements in OOD; for instance, addressing the _word-dist_ bias using CReg improves OOD performance by \(2.8\)% of exact match on average and by \(7.5\) on _NaturalQuestions_, but the Prediction bias of such a model increases by \(1.1\) points. Similarly, CReg addressing the _sim-word_ bias delivers a \(1.5\)-point average gain on OOD but raises Prediction bias by \(0.9\) points.

Table 1: **OOD performance of debiasing methods.** Differences of F1-scores of QA models trained on SQuAD using the specified debiasing methods (§4.4) to address selected bias features (§4.1), evaluated on three OOD datasets: _AdversarialQA / NaturalQuestions / TriviaQA_, respectively. Largest gains per dataset are bold. Original model: \(29.8\) / \(67.8\) / \(46.1\).

| Addressed bias | ReSam (AQA / NQ / Trivia) | LMix (AQA / NQ / Trivia) | CReg (AQA / NQ / Trivia) |
|---|---|---|---|
| _ans-len_ | \(-0.8\) / \(-5.6\) / \(-7.1\) | \(-0.9\) / \(-19.7\) / \(-3.3\) | \(-0.4\) / \(-1.5\) / **\(+2.1\)** |
| _word-dist_ | \(0.5\) / \(+1.3\) / \(-1.0\) | \(+0.9\) / \(-6.4\) / \(+1.5\) | **\(+1.4\)** / **\(+7.5\)** / \(-0.5\) |
| _cos-sim_ | \(-0.1\) / \(+0.3\) / \(-1.3\) | \(+0.4\) / \(-11.3\) / \(-4.1\) | \(-0.3\) / \(+7.4\) / \(+1.1\) |
| _sim-ents_ | \(+1.1\) / \(+1.5\) / \(+0.3\) | \(-0.1\) / \(-9.5\) / \(-1.2\) | \(-1.0\) / \(+5.9\) / \(+2.0\) |
| _sim-word_ | \(+0.3\) / \(+0.1\) / \(+0.4\) | \(-0.3\) / \(-1.2\) / \(-1.4\) / \(-2.9\) | \(-0.7\) / \(+3.9\) / \(+1.4\) |
| _sim-pos_ | \(-1.6\) / \(-0.7\) / \(-2.2\) | \(-1.3\) / \(-14.8\) / \(-1.3\) | \(+0.7\) / \(+1.6\) |
| _Average_ | \(-0.45\) | \(-5.31\) | \(+2.33\) |

Figure 6 further evaluates the impact of addressing one bias on the other known biases, in the cases where each method delivers its largest Prediction bias reduction. We see that addressing a specific bias also affects the scope of the model's reliance on other covariates. The results suggest that CReg might be more robust to enlarging other biases, increasing other Prediction biases by \(0.31\) on average, as compared to LMix (\(0.6\)) and ReSam (\(0.38\)).

Figure 6: **Cross-bias evaluation of debiased models.** A relative change of Prediction bias by all spurious correlations, caused by applying the inspected debiasing methods on the BERT-Base QA model, when addressing the specified spurious correlation. A full matrix is in Appx. A, Fig. 7.

## 6 Discussion

Pre-training and models' robustness The bias-level analyses of diverse pre-trained models (§5.1) suggest that the mere increase of pre-training data and model parameters guides the fine-tuned models to a lower reliance on biased features. However, we can find exceptions, such as in the case of RoBERTa-Large and Electra-Base on _ans-len_. We speculate that even larger volumes of data might make the model more attracted to taking a shortcut through easier problem formulations, such as through Named Entity Recognition (cf. BERT-Base and RoBERTa-Base on _sim-ents_). Comparing the prediction bias of the in-context learners with the fine-tuned models, we see that multi-task learning does not necessarily result in lower prediction bias or increased performance in the harder group; while Flan-T5 on average reduces bias almost to the human level, T0's quality is affected by spurious features even more than the models fine-tuned on biased SQuAD.
OOD performance and Prediction bias relation Our results suggest that the previously-reported improvements in OOD performance attributed to debiasing might not be explained by a mitigated reliance on a spurious correlation: (i) we measure that the Prediction bias of the models trained directly on the OOD datasets is still present above the level of human Prediction bias (§5.2); therefore, it is possible to maintain OOD gains while learning to rely on bias features. (ii) In practice, we find cases where applying a debiasing method _magnifies_ Prediction bias, but the resulting model still performs better in most OOD evaluations (§5.3).

Practical aspects of applying debiasing methods While we confirm that the debiasing methods enable improvements in OOD, we find that the significance of such improvements varies largely between the addressed biases, and that a configuration suitable for one bias-and-dataset pair is often suboptimal for others. The scope of this variance can be seen in Table 1 by comparing the average OOD performance of LMix and CReg on _word-dist_, which was used to pick the methods' hyperparameters and bias models (Appendix B.2), with the remaining biases; both methods perform best on the bias used in parameter tuning, and the differences are often large. Bias-specific parameter tuning is further convoluted by the speed of convergence of the debiasing methods, which we measure as approximately 4-times slower for CReg and 3.5-times slower for LMix, compared to the standard fine-tuning of QA models.

The bias model is an important parameter of both assessed debiasing methods. We find that the scores of trained bias models have to be rescaled to avoid perplexing the trained model on biased samples, and that the optimal scaling parameter is also bias-specific.
The selection of the bias model also affects the optimal entropy scaling \(H\) of LMix; we find that the value (\(H=2.0\)) reported as optimal by the LMix authors for AdversarialQA is far from the optimum (\(H=0.4\)) for our bias model.

## 7 Conclusion

Our work sets out to investigate the impact of various training decisions, including different pre-training and debiasing strategies, on models' reliance on specific spurious features in QA, complementing the commonly-used out-of-distribution evaluations. We use SQuAD to survey documented biased features and to identify some new ones, but we evaluate the reliance on these features for models trained on four different QA datasets. We find that (i) the OOD performance of different base models usually corresponds to the models' reliance on bias features. However, (ii) the state-of-the-art debiasing methods can improve OOD performance _without_ minimising the model's reliance on spurious features, suggesting that dataset biases might be _shared_ among QA datasets. (iii) We further evidence this by measuring the reliance on a spurious feature of models trained on other (OOD) datasets, and find the OOD models similarly or even more _reliant_ on the spurious features of SQuAD. These findings aim to motivate future work to assess models' robustness also on the level of specific bias features, evading false conclusions on models' robustness, and ultimately fostering progress toward reliable and socially unbiased language models.
2310.02891
Large-time behavior of two families of operators related to the fractional Laplacian on certain Riemannian manifolds
This note is concerned with two families of operators related to the fractional Laplacian, the first arising from the Caffarelli-Silvestre extension problem and the second from the fractional heat equation. They both include the Poisson semigroup. We show that on a complete, connected, and non-compact Riemannian manifold of non-negative Ricci curvature, in both cases, the solution with $L^1$ initial data behaves asymptotically as the mass times the fundamental solution. Similar long-time convergence results remain valid on more general manifolds satisfying the Li-Yau two-sided estimate of the heat kernel. The situation changes drastically on hyperbolic space, and more generally on rank one non-compact symmetric spaces: we show that for the Poisson semigroup, the convergence to the Poisson kernel fails -but remains true under the additional assumption of radial initial data.
Effie Papageorgiou
2023-10-04T15:28:01Z
http://arxiv.org/abs/2310.02891v1
Large-time behavior of two families of operators related to the fractional Laplacian on certain Riemannian manifolds

###### Abstract.

This note is concerned with two families of operators related to the fractional Laplacian, the first arising from the Caffarelli-Silvestre extension problem and the second from the fractional heat equation. They both include the Poisson semigroup. We show that on a complete, connected, and non-compact Riemannian manifold of non-negative Ricci curvature, in both cases, the solution with \(L^{1}\) initial data behaves asymptotically as the mass times the fundamental solution. Similar long-time convergence results remain valid on more general manifolds satisfying the Li-Yau two-sided estimate of the heat kernel. The situation changes drastically on hyperbolic space, and more generally on rank one non-compact symmetric spaces: we show that for the Poisson semigroup, the convergence to the Poisson kernel fails -but remains true under the additional assumption of radial initial data.

Key words and phrases: fractional Laplacian, extension problem, fractional heat equation, asymptotic behavior, long-time convergence, noncompact symmetric spaces

2020 Mathematics Subject Classification: 26A33, 35R11, 35B40, 35K05, 58J35, 58J65

###### Contents

* 1 Introduction
* 2 Preliminaries
* 2.1 Rank one non-compact symmetric spaces
* 3 Fractional Laplacian and semigroups subordinated to the heat semigroup
* 4 Asymptotics for two operators related to the fractional Laplacian: the case of non-negative Ricci curvature
* 4.1 A suitable class of kernels
* 4.2 Asymptotics for solutions to the Caffarelli-Silvestre extension problem
* 4.3 Asymptotics for solutions to the fractional heat equation
* 5 The Poisson semigroup on rank one non-compact symmetric spaces

## 1. Introduction

Let \(\mathcal{M}\) be a complete, non-compact Riemannian manifold and \(\Delta\) be its Laplace-Beltrami operator. It is well understood that the long time behavior of solutions to the heat equation \[\begin{cases}\partial_{t}u(t,x)&=\ \Delta u(t,x),\qquad t>0,\ x\in\mathcal{M},\\ u(0,x)&=\ f(x),\end{cases} \tag{1}\] is strongly related to the global geometry of \(\mathcal{M}\). This applies also to the heat kernel \(h_{t}(x,y)\), that is, the minimal positive fundamental solution of the heat equation or, equivalently, the integral kernel of the heat semigroup \(\exp\left(t\Delta\right)\) (see for instance [23]). The connection between the long time behavior of the solution \(u(t,x)\) of (1) for initial data \(f\in L^{1}(\mathcal{M})\) with respect to the Riemannian measure \(\mu\) on \(\mathcal{M}\) and that of the heat kernel \(h_{t}(x,y)\) has recently been the subject of extensive studies, see for example [5, 25, 39], and [1, 2, 5, 29] for other Laplacians or settings. Denote by \(M=\int_{\mathcal{M}}f(x)\,\mathrm{d}\mu(x)\) the mass of the initial data. In the case when \(\mathcal{M}=\mathbb{R}^{n}\) with the euclidean metric, the heat kernel is given by \[h_{t}(x,y)\,=\,(4\pi t)^{-\frac{n}{2}}e^{-\frac{|x-y|^{2}}{4t}}\] and the solution to (1) satisfies as \(t\to\infty\) \[\|u(t,\,.\,)\,-\,M\,h_{t}(\,.\,,x_{0})\|_{L^{1}(\mathbb{R}^{n})}\,\longrightarrow\,0 \tag{2}\] and \[t^{\frac{n}{2}}\,\|u(t,\,.\,)\,-\,M\,h_{t}(\,.\,,x_{0})\|_{L^{\infty}(\mathbb{R}^{n})}\,\longrightarrow\,0.
\tag{3}\] By interpolation, a similar convergence holds with respect to any \(L^{p}\) norm when \(1<p<\infty\): \[t^{\frac{n}{2p^{\prime}}}\,\|u(t,\,.\,)\,-\,M\,h_{t}(\,.\,,x_{0})\|_{L^{p}(\mathbb{R}^{n})}\,\longrightarrow\,0,\] where \(p^{\prime}\) is the Holder conjugate of \(p\). Note that (2) holds for _any_ choice of \(x_{0}\), which means that in the long run the solution \(u(t,x)\) and the heat kernel \(h_{t}(x,x_{0})\) "forget" about the initial function \(f\), resp. the initial point \(x_{0}\). We refer to a recent survey [37] for more details about this property in the euclidean setting. It is worth mentioning that this long-time asymptotic convergence result corresponds to the Central Limit Theorem of probability in the PDE setting.

On manifolds of non-negative Ricci curvature, the results (2) and (3) were generalized in [25]. The situation is drastically different in hyperbolic spaces. It was shown by Vazquez [39] that (2) fails for general absolutely integrable initial data \(f\) but is still true if \(f\) is spherically symmetric around \(x_{0}\). Similar results were obtained in [5] in the more general setting of symmetric spaces of non-compact type by using tools of harmonic analysis. Note that these spaces have nonpositive sectional curvature. Recall that in hyperbolic spaces Brownian motion \(X_{t}\) tends to escape to \(\infty\) along geodesics, which means that it "remembers" at least the direction of the starting point \(x_{0}\). In [25], it was also shown that (3) fails on connected sums \(\mathbb{R}^{n}\#\mathbb{R}^{n}\), \(n\geq 3\).

The _fractional_ Laplacian is the operator \((-\Delta)^{\sigma}\), \(\sigma\in(0,1)\), defined as the spectral \(\sigma\)-th power of the Laplace-Beltrami operator, with \(\mathrm{Dom}(-\Delta)\subset\mathrm{Dom}((-\Delta)^{\sigma})\). It is connected to _anomalous_ diffusion, which accounts for much of the interest in modeling with fractional equations (quasi-geostrophic flows, turbulence and water waves, molecular dynamics, and relativistic quantum mechanics of stars). It also has various applications in probability and finance. On certain "good" non-compact Riemannian manifolds \(\mathcal{M}\) (e.g. Cartan-Hadamard manifolds or manifolds with non-negative Ricci curvature, see [7, Proposition 3.3]) one can obtain the fractional Laplacian through a Dirichlet-to-Neumann map for the extension problem introduced by Caffarelli and Silvestre [12], as well as a Poisson formula and a fundamental solution, see the work of Stinga and Torrea [35]. More precisely, let \(H^{\sigma}(\mathcal{M})\) denote the usual Sobolev space on \(\mathcal{M}\). Then for any given \(f\in H^{\sigma}(\mathcal{M})\) there exists a unique solution of the extension problem \[\Delta v+\frac{(1-2\sigma)}{t}\frac{\partial v}{\partial t}+\frac{\partial^{2}v}{\partial t^{2}}=0,\quad 0<\sigma<1,\quad t>0,\ x\in\mathcal{M}, \tag{4}\] with \(v(0,x)=f(x)\), and the fractional Laplacian can be recovered through \[(-\Delta)^{\sigma}f(x)=-2^{2\sigma-1}\frac{\Gamma(\sigma)}{\Gamma(1-\sigma)}\lim_{t\to 0^{+}}t^{1-2\sigma}\,\frac{\partial v}{\partial t}(x,t).\] Notice that equation (4) gives rise to the first family of operators this note is concerned with. The second family of operators we consider arises from the fractional heat equation \[\partial_{t}u+(-\Delta)^{\alpha/2}u=0,\quad 0<\alpha<2,\quad t>0,\ x\in\mathcal{M}. \tag{5}\] These two families of operators have drawn much attention, see for instance [2, 7, 9, 10, 11, 14, 15, 33, 38] and the references therein.
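For orientation, the subordination formula behind the extension problem (see (19) in Section 3) can be sanity-checked numerically in the euclidean case: for \(\sigma=1/2\) on \(\mathbb{R}\) it must reproduce the classical Poisson kernel \(t/\big(\pi(t^{2}+|x-y|^{2})\big)\). The following minimal sketch is our own illustration using SciPy quadrature, not part of the paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def heat_kernel(u, d):
    """1D euclidean heat kernel h_u(x, y), as a function of d = |x - y|."""
    return np.exp(-d**2 / (4 * u)) / np.sqrt(4 * np.pi * u)

def extension_kernel(t, d, sigma=0.5):
    """Kernel Q_t^sigma on R via the subordination formula (cf. (19)/(21))."""
    integrand = lambda u: heat_kernel(u, d) * np.exp(-t**2 / (4 * u)) / u**(1 + sigma)
    val, _ = quad(integrand, 0, np.inf)
    return t**(2 * sigma) / (2**(2 * sigma) * gamma(sigma)) * val

# For sigma = 1/2 this must agree with the Poisson kernel t / (pi (t^2 + d^2)).
for t, d in [(1.0, 0.0), (2.0, 1.5), (5.0, 3.0)]:
    print(extension_kernel(t, d), t / (np.pi * (t**2 + d**2)))
```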
It is worth mentioning that both families of operators include the Poisson semigroup (for \(\sigma=1/2\) and \(\alpha=1\), respectively). The aim of this paper is to study the long time behavior of these two families of operators on certain Riemannian manifolds, for absolutely integrable initial data. More precisely, we treat the case of non-negative Ricci curvature and generalizations thereof, that is, complete, non-compact manifolds of doubling volume with double-sided heat kernel estimates of Li-Yau type, and show that the results are essentially euclidean, in the sense of the convergence proved in [38]. This is no longer the case on negatively curved manifolds. More precisely, we consider the Poisson semigroup on real hyperbolic space, and more generally on rank one symmetric spaces of non-compact type, and show that in this case the long time results are vastly different: the aforementioned euclidean-type results fail even for compactly supported initial data. Notice that for all manifolds considered here, these two families of operators admit integral kernels which are actually probability measures. Our main result is the following.

**Theorem 1**.: _Let \(\mathcal{M}\) be a complete, connected and non-compact Riemannian manifold of non-negative Ricci curvature. Let \(\psi_{t}^{\gamma}\), \(\gamma\in\{1,\alpha\}\), be either the fundamental solution to (4) for \(\gamma=1\) and all \(\sigma\in(0,1)\), or the fundamental solution to (5) for \(\gamma=\alpha\), \(\alpha\in(0,2)\), and consider the corresponding integral operators_ \[\mathcal{K}_{t}^{\gamma}(f)(x)=\int_{\mathcal{M}}\psi_{t}^{\gamma}(x,y)\,f(y)\,d\mu(y),\quad f\in L^{1}(\mathcal{M}).\] _Set \(M=\int_{\mathcal{M}}f\,d\mu\) and fix a base point \(x_{0}\in\mathcal{M}\). Then, as \(t\to+\infty\),_ \[\|\mathcal{K}_{t}^{\gamma}(f)-\,M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\|_{L^{1}(\mathcal{M})}\,\longrightarrow\,0 \tag{6}\] _and_ \[\|\,\big{|}\mathcal{K}_{t}^{\gamma}(f)-\,M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\big{|}\,V(\,.\,,t^{1/\gamma})\|_{L^{\infty}(\mathcal{M})}\,\longrightarrow\,0. \tag{7}\]

**Remark**.: _By interpolation between (6) and (7), we obtain for any \(p\in(1,\infty)\)_ \[\|\,\big{|}\mathcal{K}_{t}^{\gamma}(f)-\,M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\big{|}\,V(\,.\,,t^{1/\gamma})^{1/p^{\prime}}\|_{L^{p}(\mathcal{M})}\,\longrightarrow\,0.\]

The situation changes drastically on real hyperbolic space, and generally, on rank one non-compact symmetric spaces. More precisely, the Poisson semigroup fails to satisfy these convergences.

**Theorem 2**.: _Let \(\mathbb{X}\) be a rank one non-compact symmetric space. Assume that \(f\in L^{1}(\mathbb{X})\) and let \(M=\int_{\mathbb{X}}f\) denote its mass. If \(e^{-t\sqrt{-\Delta}}\) is the Poisson semigroup and \(p_{t}\) the Poisson kernel, then, in general,_ \[\|e^{-t\sqrt{-\Delta}}f-M\,p_{t}\|_{L^{1}(\mathbb{X})}\,\not\longrightarrow\,0\quad\text{as}\quad t\to+\infty.\] _However, the convergence holds if \(f\) is radial._

To the best of our knowledge, even though the fractional Laplacian and related diffusion equations have been at the center of many studies, the above asymptotics have been established only on euclidean space, [38]. We use different methods to prove our results. For the case of manifolds with non-negative Ricci curvature, we use subordination formulas in order to exploit information about the heat kernel, such as double-sided bounds and a quantitative Holder continuity estimate, which is a key component of our proof.
Notice that one could have pursued gradient estimates, but the approach used here is more general. For the case of rank one non-compact symmetric spaces, we rely on tools of harmonic analysis available in this setting and on large-time asymptotics of the Poisson kernel. An essential idea of the proof in the rank one case is to describe the critical region of the Poisson kernel, since this kernel is a probability measure.

## 2. Preliminaries

From now on, \(\mathcal{M}\) denotes a complete, connected, non-compact Riemannian manifold of dimension \(n\geq 2\). Let \(\mu\) be the Riemannian measure on \(\mathcal{M}\). Let \(d(x,y)\) be the geodesic distance between two points \(x,y\in\mathcal{M}\), and \(V(x,r)=\mu\left(B(x,r)\right)\) be the Riemannian volume of the geodesic ball \(B(x,r)\) of radius \(r\) centered at \(x\in\mathcal{M}\). Throughout the paper we follow the convention that \(C,C_{1},c,c_{1},...\) denote positive constants. These constants may depend on \(\mathcal{M}\) but do not depend on the variables \(x,y,t\). Moreover, the notation \(A\lesssim B\) between two positive expressions means that \(A\leq CB\), and \(A\asymp B\) means \(cB\leq A\leq CB\). Also, \(A(t)\sim B(t)\) means that \(A(t)/B(t)\to 1\) as \(t\to+\infty\).

We say that \(\mathcal{M}\) satisfies the volume doubling property if, for all \(x\in\mathcal{M}\) and \(r>0\), we have \[V(x,2r)\,\leq\,C\,V(x,r). \tag{8}\] It follows from (8) that there exist some positive constants \(\nu,\nu^{\prime}>0\) such that \[c\,\left(\frac{R}{r}\right)^{\nu^{\prime}}\,\leq\,\frac{V(x,R)}{V(x,r)}\,\leq\,C\,\left(\frac{R}{r}\right)^{\nu} \tag{9}\] for all \(x\in\mathcal{M}\) and \(0<r\leq R\) (see for instance [23, Section 15.6]). Moreover, (9) implies that, for all \(x,y\in\mathcal{M}\) and \(r>0\), \[\frac{V(x,r)}{V(y,r)}\leq C\left(1+\frac{d\left(x,y\right)}{r}\right)^{\nu}.\]

Notice that hyperbolic spaces, as well as all non-compact symmetric spaces, fail to be doubling (they are locally doubling, though). More precisely, in the rank one case, if \(n=\dim\mathbb{X}\) and \(\rho^{2}>0\) is the bottom of the spectrum, it holds \[V(x,r)\asymp\begin{cases}r^{n},&\text{if }0<r<1\,,\\ e^{2\rho r},&\text{if }r\geq 1.\end{cases} \tag{10}\]

The integral kernel \(h_{t}(x,y)\) of the heat semigroup \(\exp(t\Delta)\) is the smallest positive fundamental solution to the heat equation (1). It is known that \(h_{t}(x,y)\) is smooth in \((t,x,y)\), symmetric in \(x\), \(y\), and satisfies the semigroup identity (see for instance [23], [36]). Besides, for all \(y\in\mathcal{M}\) and \(t>0\), \[\int_{\mathcal{M}}h_{t}\left(x,y\right)\mathrm{d}\mu(x)\leq 1.\] The manifold \(\mathcal{M}\) is called stochastically complete if for all \(y\in\mathcal{M}\) and \(t>0\), \[\int_{\mathcal{M}}h_{t}(x,y)\,\mathrm{d}\mu(x)=\,1.\] It is known that if \(\mathcal{M}\) is geodesically complete and, for some \(x_{0}\in\mathcal{M}\) and all large enough \(r\), \[V\left(x_{0},r\right)\leq e^{Cr^{2}},\] then \(\mathcal{M}\) is stochastically complete. In particular, the volume doubling property (9) and the volume bounds (10) for rank one symmetric spaces imply that all manifolds considered in this paper are stochastically complete.
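For completeness, let us record the standard iteration behind the upper half of (9), which the text uses without further comment and which exhibits one admissible exponent, \(\nu=\log_{2}C\). Given \(0<r\leq R\), choose the integer \(k\geq 0\) with \(2^{k-1}r<R\leq 2^{k}r\), so that \(k\leq 1+\log_{2}(R/r)\); iterating (8) \(k\) times gives \[V(x,R)\leq V(x,2^{k}r)\leq C^{k}\,V(x,r)\leq C\left(\frac{R}{r}\right)^{\log_{2}C}V(x,r).\] Similarly, the stochastic completeness criterion above is immediate in both settings of this paper: under (9), \(V(x_{0},r)\lesssim r^{\nu}\,V(x_{0},1)\leq e^{Cr^{2}}\) for \(r\) large, while on rank one symmetric spaces (10) gives \(V(x_{0},r)\asymp e^{2\rho r}\leq e^{r^{2}}\) once \(r\geq 2\rho\).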
When the Ricci curvature of \(\mathcal{M}\) is non-negative, the following two-sided estimates of the heat kernel were proved by Li and Yau [27]: \[\frac{c_{1}}{V(y,\sqrt{t})}\,\exp\Big{(}-C_{1}\frac{d^{2}(x,y)}{t}\Big{)}\,\leq\,h_{t}(x,y)\,\leq\,\frac{C_{2}}{V(y,\,\sqrt{t})}\,\exp\Big{(}-c_{2}\,\frac{d^{2}(x,y)}{t}\Big{)}. \tag{11}\] Apart from manifolds with non-negative Ricci curvature, the manifolds described above cover many other examples. Let us recall that, on a complete Riemannian manifold, the following three properties are equivalent:

* The two-sided estimate (11) of the heat kernel;
* The uniform parabolic Harnack inequality: (12) \[\sup_{(T/4,\,T/2)\times B(x,r/2)}u(t,x)\,\leq\,C\,\inf_{(3T/4,\,T)\times B(x,r/2)}u(t,x),\] where \(u(t,x)\) is a non-negative solution of the heat equation \(\partial_{t}u=\Delta u\) in a cylinder \((0,T)\times B(x,r)\) with \(x\in\mathcal{M}\), \(r>0\) and \(T=r^{2}\).
* The conjunction of the volume doubling property (8) and the Poincare inequality: (13) \[\int_{B(x,r)}|f-f_{B}|^{2}\,\mathrm{d}\mu\,\leq\,C\,r^{2}\,\int_{B(x,r)}|\nabla f|^{2}\,\mathrm{d}\mu,\] for all \(x\in\mathcal{M}\), \(r>0\), and bounded Lipschitz functions \(f\) in \(B(x,r)\). Here, \(f_{B}\) is the mean of \(f\) over \(B(x,r)\).

See, for instance, [19, 22, 30, 31] for more details. Manifolds satisfying these equivalent conditions include complete manifolds with non-negative Ricci curvature, connected Lie groups with polynomial volume growth, co-compact covering manifolds whose deck transformation group has polynomial growth, and many others. We refer to [32, pp.417-418] for a list of examples.

**Corollary 3**.: _Let \(\mathcal{M}\) be a geodesically complete non-compact manifold that satisfies one of the following equivalent conditions:_
* _the two-sided estimate (11) of the heat kernel;_
* _the uniform parabolic Harnack inequality (12);_
* _the conjunction of the volume doubling property (8) and the Poincare inequality (13)._
_Then the conclusions of Theorem 1 are true._

Let us now recall a consequence of the two-sided estimate (11), which will be essential for this note, see for instance [31, Theorem 5.4.12]: there exists \(0<\theta\leq 1\) such that for all \(t>0\), \(x,y,z\in\mathcal{M}\), and \(d(y,z)\leq\sqrt{t}\), \[|h_{t}(x,y)\,-\,h_{t}(x,z)|\,\leq\,\Big{(}\frac{d(y,z)}{\sqrt{t}}\Big{)}^{\theta}\,\frac{C}{V(x,\,\sqrt{t})}\,\exp\Big{(}-c\frac{d^{2}(x,y)}{t}\Big{)}. \tag{14}\] Notice that a pointwise gradient estimate of the form \[|\nabla_{y}\,h_{t}(x,y)|\,\leq\,\frac{C}{\sqrt{t}\,V(x,\,\sqrt{t})}\,\exp\Big{(}-c\,\frac{d^{2}(x,y)}{t}\Big{)}\] is the limit case \(\theta=1\) of (14), but requires more structure on the manifold, such as non-negative Ricci curvature, see for instance [27]. In this sense, the Holder continuity estimate (14) is more general. Last, recall that on spaces of essentially negative curvature the above estimates of the heat kernel typically fail: examples include hyperbolic spaces [17], non-compact symmetric spaces [3, 4], asymptotically hyperbolic manifolds [13], and fractal-like manifolds [6]. We finally recall the definition of real hyperbolic space and, more generally, of rank one symmetric spaces, as well as some indispensable tools from Fourier analysis on these spaces.

### Rank one non-compact symmetric spaces

Our main reference for this subsection is [26]. Let \(G\) be a connected, non-compact semisimple Lie group with finite center.
Let \(K\) be a maximal compact subgroup of \(G\) and \(X=G/K\) be the corresponding symmetric space. We consider a Cartan decomposition \(\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}\) of the Lie algebra of \(G\). Fix a maximal abelian subspace \(\mathfrak{a}\) of \(\mathfrak{p}\) and consider the Iwasawa decomposition \(\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}\). If \(\mathfrak{a}\cong\mathbb{R}\), then we say that the symmetric space \(X\) has rank one. From now on, we assume that \(\operatorname{rank}X=1\). In this case, after fixing some order on the non-zero restricted roots, there are at most two roots which are positive with respect to this order, which we denote by \(\alpha\) and \(2\alpha\). Let \(m_{\alpha}\) and \(m_{2\alpha}\) be the multiplicities of these roots, and define the number \(\rho\) by \(\rho:=(m_{\alpha}+2m_{2\alpha})/2\). A rank one non-compact symmetric space is one of the following: the real, the complex, or the quaternionic hyperbolic space, or the octonionic hyperbolic plane. We have \(n=\dim X=m_{\alpha}+m_{2\alpha}+1\) and \(\rho\) equal to \((n-1)/2\), \(n/2\), \(n/2+1\) and \(11\), respectively. The group \(G\) admits the following decompositions, \[\begin{cases}G\,=\,N\,A\,K&\text{(Iwasawa)},\\ G\,=\,K\,\overline{A^{+}}\,K&\text{(Cartan)}.\end{cases}\] Let \(H_{0}\) be the unique element of \(\mathfrak{a}\) with the property that \(\langle\alpha,H_{0}\rangle=1\) and normalize the Killing form on \(\mathfrak{g}\) so that \(|H_{0}|=1\). Denote by \(\tau(g)\) the real number such that \[g=n\exp(\tau(g)\,H_{0})k.\] To simplify the notation, we often identify the Lie subgroup \(A=\exp\mathfrak{a}\) with the real line \(\mathbb{R}\) using the map \(\tau\mapsto\exp(\tau\,H_{0})\). Notice that we may also identify \(\overline{A^{+}}\) with \([0,+\infty)\). In the Cartan decomposition, the Haar measure on \(G\) writes \[\int_{G}f(g)\,\mathrm{d}g=\,\operatorname{const.}\,\int_{K}\mathrm{d}k_{1}\,\int_{0}^{\infty}\delta(r)\,\mathrm{d}r\int_{K}f(k_{1}(\exp r\,H_{0})k_{2})\,\mathrm{d}k_{2}, \tag{15}\] with density \[\delta(r)=(\sinh r)^{m_{\alpha}}(\sinh 2r)^{m_{2\alpha}}\lesssim e^{2\rho r}. \tag{16}\] Here \(K\) is equipped with its normalized Haar measure and "const." is a positive normalizing constant, so that for right-\(K\)-invariant functions we have \[\int_{\mathbb{X}}f(x)\,\mathrm{d}\mu(x)=\int_{G}f(g)\,\mathrm{d}g.\] Notice that, viewed on \(G/K\), \(r\) is the distance of \(gK\) to the origin \(o=\{K\}\). Finally, we describe Fourier analysis on rank one non-compact symmetric spaces. For continuous compactly supported functions, the Helgason-Fourier transform is defined by \[\widehat{f}(\lambda,k\mathbb{M})=\,\int_{G}f(gK)\,e^{(-i\lambda+\rho)\,\tau(k^{-1}g)}\,\mathrm{d}g,\quad\lambda\in\mathbb{C},\ k\in K. \tag{17}\] Here, \(\mathbb{M}\) denotes the centralizer of \(\exp\mathfrak{a}\) in \(K\). Let us also define the spherical transform of continuous, compactly supported, radial functions by \[\mathcal{H}f(\lambda)=\int_{G}f(gK)\,\varphi_{-\lambda}(g)\,\mathrm{d}g,\] where \(\varphi_{\lambda}\) is the elementary spherical function of index \(\lambda\in\mathbb{C}\). The functions \(\varphi_{\lambda}\) are normalized eigenfunctions of \(\Delta\), that is, \(\Delta\varphi_{\lambda}=-(\lambda^{2}+\rho^{2})\varphi_{\lambda}\) with \(\varphi_{\lambda}(o)=1\). They are also radial and have the property that \(\varphi_{\lambda}=\varphi_{-\lambda}\).
Finally, let us recall that in the case of radial Schwartz functions, we have \[\widehat{f}(\lambda,k\mathbb{M})=\mathcal{H}f(\lambda).\]

## 3. Fractional Laplacian and semigroups subordinated to the heat semigroup

This section deals with two families of operators related to the fractional Laplacian, both subordinated via integral representations to the heat semigroup. Interestingly, the Poisson semigroup belongs to both, for special values of their parameters. In recent years there has been intensive research on various kinds of fractional order operators. Being nonlocal objects, local PDE techniques to treat nonlinear problems for the fractional operators do not apply. To overcome this difficulty, in the euclidean case, Caffarelli and Silvestre [12] studied the extension problem associated with the Laplacian and realized the fractional power as the map taking Dirichlet data to Neumann data. In [35] Stinga and Torrea related the extension problem for the fractional Laplacian to the heat semigroup, providing a subordination formula and conditions for the existence of an integral kernel. On certain classes of non-compact manifolds, which include symmetric spaces of non-compact type, the extension problem has been studied by Banica, Gonzalez and Saez [7]. Interestingly, in the non-compact setting one needs precise control of the behavior of the metric at infinity, and geometry plays a crucial role. To begin with, using the spectral theorem, one can define fractional powers of the Laplacian via the heat semigroup, \[(-\Delta)^{\sigma}f(x)=\frac{1}{\Gamma(-\sigma)}\int_{0}^{\infty}\big{(}e^{u\Delta}f(x)-f(x)\big{)}\frac{du}{u^{1+\sigma}}\quad\text{ in }L^{2}(\mathcal{M}),\ f\in\text{Dom}(-\Delta),\] see [40, (5), p.260]. Then, the relation between the fractional Laplacian and the extension problem (4) is the following.

**Theorem 4**.: _[_35_]__. Let \(\sigma\in(0,1)\). Then for \(f\in Dom((-\Delta)^{\sigma})\), a solution to the extension problem_ \[\Delta v+\frac{(1-2\sigma)}{t}\frac{\partial v}{\partial t}+\frac{\partial^{2}v}{\partial t^{2}}=0,\quad v(0,x)\,=\,f(x),\quad t>0,\ x\in\mathcal{M}, \tag{18}\] _is given by_ \[T^{\sigma}_{t}f(x):=v(t,x)=\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{0}^{+\infty}e^{u\Delta}f(x)\,e^{-\frac{t^{2}}{4u}}\frac{du}{u^{1+\sigma}}. \tag{19}\] _Moreover, the fractional Laplacian on \(\mathcal{M}\) can be recovered through_ \[(-\Delta)^{\sigma}f(x)=-2^{2\sigma-1}\frac{\Gamma(\sigma)}{\Gamma(1-\sigma)}\lim_{t\to 0^{+}}t^{1-2\sigma}\frac{\partial v}{\partial t}(x,t).\]

From a probabilistic point of view, the extension problem corresponds to the property that all symmetric stable processes can be obtained as traces of degenerate Bessel diffusion processes, see [34]. However, despite the subordination of \(\{T^{\sigma}_{t}\}_{t>0}\) to the heat semigroup, passing from the heat kernel to a Poisson kernel is a non-trivial issue in the case of non-compact manifolds, since one needs to control the behavior at infinity. By [7, 35] and under the description of the heat semigroup given in the present paper, one needs to check whether, given \(x_{0}\), there exist constants \(C_{x_{0}}\) and \(\varepsilon>0\) such that the heat kernel on the manifold \(\mathcal{M}\) satisfies \[\|h_{t}(\,.\,,x_{0})\|_{L^{2}(\mathcal{M})}+\|\partial_{t}h_{t}(\,.\,,x_{0})\|_{L^{2}(\mathcal{M})}\leq C_{x_{0}}(1+t^{\sigma})t^{-\varepsilon}.
\tag{20}\] Thus the problem of an integral kernel for the operator \(T^{\sigma}_{t}\) reduces to obtaining suitable upper bounds for the heat kernel (from which one may derive information for its time derivatives as well, see for instance [24]). Inequality (20) is true on real hyperbolic space as well as on non-compact symmetric spaces of arbitrary rank. It is also true on manifolds satisfying a volume doubling condition and the local Poincare inequality [7, Proposition 3.3], such as manifolds of non-negative Ricci curvature, or more generally the manifolds considered in Corollary 3. Then, the function \(T^{\sigma}_{t}f\) in (19) is given by \[T^{\sigma}_{t}f(x)=v(t,x)=\int_{\mathcal{M}}Q^{\sigma}_{t}(x,y)f(y)\,\mathrm{d}\mu(y),\] where the integral kernels are given by \[Q^{\sigma}_{t}(x,y)=\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{0}^{+\infty}h_{u}(x,y)\,e^{-\frac{t^{2}}{4u}}\frac{\mathrm{d}u}{u^{1+\sigma}}. \tag{21}\]

We now pass to the fractional heat equation, again for the manifolds mentioned above. Let \(\alpha\in(0,2)\) and take \(\eta^{\alpha}_{t}\) to be the inverse Laplace transform of the function \(\exp\{-t\,(\cdot)^{\alpha/2}\}\). The operator \(-(-\Delta)^{\alpha/2}\) is the infinitesimal generator of a standard isotropic \(\alpha\)-stable Levy motion \(X^{\alpha}_{t}\). This is a Levy process, which can be viewed as the long-time scaling limit of a random walk with power-law jumps ([28, Theorem 6.17]). Via subordination to the heat semigroup, we may write \[e^{-t(-\Delta)^{\alpha/2}}=\int_{0}^{\infty}e^{u\Delta}\,\eta^{\alpha}_{t}(u)\,\mathrm{d}u,\] [40, (7), p.260]. Then, for the manifolds considered in this note (those of Corollary 3 and rank one non-compact symmetric spaces), for any reasonable \(f\) we have that \[W^{\alpha}_{t}f(x):=e^{-t(-\Delta)^{\alpha/2}}f(x)=w(t,x)=\int_{\mathcal{M}}P^{\alpha}_{t}(x,y)f(y)\,\mathrm{d}\mu(y),\] where the kernels are given by \[P^{\alpha}_{t}(x,y)=\int_{0}^{\infty}h_{u}(x,y)\,\eta^{\alpha}_{t}(u)\,\mathrm{d}u. \tag{22}\] The family \(\{W^{\alpha}_{t}\}_{t>0}\) is a \(C_{0}\)-semigroup, [40]. For every \(f\in L^{1}(\mathcal{M})\) (for an optimal class of initial data on euclidean space, see [8]), the function \(w(t,\,.\,)=W^{\alpha}_{t}f\) solves the initial value problem \[\partial_{t}w(t,x)+(-\Delta)^{\alpha/2}w(t,x)=0,\quad w(0,x)\,=\,f(x),\quad t>0,\ x\in\mathcal{M}. \tag{23}\] In the case of hyperbolic spaces, the \(\alpha\)-stable process with transition densities \(P^{\alpha}_{t}(x,y)\) was first defined by Getoor [20] (see also [21] for general non-compact symmetric spaces).

Finally, observe that owing to the subordination formulas and the properties of the heat kernel, both kernels \(Q^{\sigma}_{t}\), \(P^{\alpha}_{t}\) are non-negative and symmetric. They are also probability measures: to see this, recall first that the manifolds considered here are stochastically complete. Then, by a Fubini argument, the claim follows for \(Q^{\sigma}_{t}\) from the definition of the Gamma function, while for \(P^{\alpha}_{t}\) from the fact that \(\int_{0}^{\infty}\eta^{\alpha}_{t}(u)\,\mathrm{d}u=1\), [40, Eq (14), p.262].

## 4. Asymptotics for two operators related to the fractional Laplacian: the case of non-negative Ricci curvature

Throughout this section, we consider \(\mathcal{M}\) to be a manifold of non-negative Ricci curvature -or more generally, we consider the Riemannian manifolds of Corollary 3- and study the long-time properties of the families of operators considered in Section 3.
An essential idea of the proof is the use of the Holder continuity of the heat kernel, which both families of operators inherit through the subordination formulas of their kernels.

### A suitable class of kernels

We begin by introducing a suitable class of kernels to unify our approach for the families of operators \(\{T^{\sigma}_{t}\}_{t>0}\), \(\sigma\in(0,1)\), and \(\{W^{\alpha}_{t}\}_{t>0}\), \(\alpha\in(0,2)\).

**Definition 4.1**.: _Suppose \(\gamma>0\). We say that a family of measurable functions \((\psi^{\gamma}_{t})_{t>0}\) on \(\mathcal{M}\) belongs to the class \(\mathcal{P}_{\gamma}\), and write \((\psi^{\gamma}_{t})\in\mathcal{P}_{\gamma}\), if_

1. _For all_ \(t>0\)_,_ \(\psi^{\gamma}_{t}\) _is positive and symmetric on_ \(\mathcal{M}\)_, i.e._ \(0<\psi^{\gamma}_{t}(x,y)=\psi^{\gamma}_{t}(y,x)\) _for all_ \(x,y\in\mathcal{M}\)_;_
2. _For all_ \(t>0\) _and all_ \(x\in\mathcal{M}\)_, it holds_ \[\int_{\mathcal{M}}\psi^{\gamma}_{t}(x,y)\,d\mu(y)=1\quad\text{and}\quad\|\psi^{\gamma}_{t}(x,\,.\,)\|_{L^{\infty}(\mathcal{M})}\asymp V(x,t^{1/\gamma})^{-1};\]
3. _There is a constant_ \(C>1\) _such that for all_ \(x,y,x_{0}\in\mathcal{M}\) _such that_ \(d(x_{0},y)\leq t^{1/\gamma}\)_, we have_ \[C^{-1}\leq\frac{\psi_{t}^{\gamma}(x,y)}{\psi_{t}^{\gamma}(x,x_{0})}\leq C;\]
4. _There is a constant_ \(\theta_{\gamma}>0\) _such that_ \(d(x_{0},y)\leq\xi\) _implies there is_ \(t_{0}(\xi,\gamma)>0\) _such that_ \[|\psi_{t}^{\gamma}(x,x_{0})-\psi_{t}^{\gamma}(x,y)|\leq C(\xi,\gamma,\mathcal{M})\,t^{-\theta_{\gamma}}\,\psi_{t}^{\gamma}(x,x_{0})\] _for all_ \(x\in\mathcal{M}\) _and all_ \(t>t_{0}(\xi,\gamma)\)_._

**Theorem 5**.: _Fix \(\gamma>0\). Let \((\psi_{t}^{\gamma})\in\mathcal{P}_{\gamma}\), and consider the integral operator_ \[\mathcal{K}_{t}^{\gamma}(f)(x)=\int_{\mathcal{M}}\psi_{t}^{\gamma}(x,y)\,f(y)\,d\mu(y),\quad x\in\mathcal{M},\] _acting on functions \(f\in L^{1}(\mathcal{M})\). Set \(M=\int_{\mathcal{M}}f\,d\mu\) and fix a basepoint \(x_{0}\in\mathcal{M}\). Then, as \(t\to+\infty\),_ \[\|\mathcal{K}_{t}^{\gamma}(f)-M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\|_{L^{1}(\mathcal{M})}\,\longrightarrow\,0 \tag{24}\] _and_ \[\|\,\big{|}\mathcal{K}_{t}^{\gamma}(f)\,-M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\,\big{|}\,V(\,.\,,t^{1/\gamma})\|_{L^{\infty}(\mathcal{M})}\,\longrightarrow\,0. \tag{25}\]

**Remark**.: _By convexity between (24) and (25), we obtain for any \(p\in(1,\infty)\)_ \[\|\,\big{|}\mathcal{K}_{t}^{\gamma}(f)(\,.\,)-M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\,\big{|}\,V(\,.\,,t^{1/\gamma})^{1/p^{\prime}}\|_{L^{p}(\mathcal{M})}\,\longrightarrow\,0.\]

Let us stress that the conditions (P1)-(P4) are not necessarily optimal. The class \(\mathcal{P}_{\gamma}\) and the above conditions will simply allow us to avoid repeating several steps when proving convergence to the fundamental solution for the two families of operators \(\{T_{t}^{\sigma}\}_{t>0}\), \(\sigma\in(0,1)\), and \(\{W_{t}^{\alpha}\}_{t>0}\), \(\alpha\in(0,2)\). To prove Theorem 5, it is sufficient to consider the action of the operator \(\mathcal{K}_{t}^{\gamma}\) on continuous compactly supported functions \(f\). Then, owing to the properties (P1)-(P4), one can show that the desired convergence remains valid for the whole class of \(L^{1}(\mathcal{M})\) initial data by using a density argument, see for instance the arguments in [5, pp.17-18] or [25, pp.11-12]. Therefore, it suffices to prove the following result, which also gives a rate of convergence for continuous compactly supported initial data.

**Proposition 6**.: _Fix \(\gamma>0\) and a basepoint \(x_{0}\in\mathcal{M}\).
Let \(f\in C_{c}(B(x_{0},\xi))\) for some \(\xi>0\) and set \(M=\int_{\mathcal{M}}f\,d\mu\). Then, for all \(t>t_{0}(\xi,\gamma)\), it holds_ \[\|\mathcal{K}_{t}^{\gamma}(f)-M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\|_{L^{1}(\mathcal{M})}\,\leq C(\xi,\gamma,\mathcal{M})\,t^{-\theta_{\gamma}}\] _and_ \[\|\,\big{|}\mathcal{K}_{t}^{\gamma}(f)\,-M\,\psi_{t}^{\gamma}(\,.\,,x_{0})\big{|}\,V(\,.\,,t^{1/\gamma})\|_{L^{\infty}(\mathcal{M})}\,\leq C(\xi,\gamma,\mathcal{M})\,t^{-\theta_{\gamma}}.\]

Proof.: First of all, observe that the operator \(\mathcal{K}_{t}^{\gamma}\) is bounded on \(L^{1}(\mathcal{M})\), due to (P1) and (P2). Write \[\mathcal{K}_{t}^{\gamma}(f)(x)\,-M\,\psi_{t}^{\gamma}(x,x_{0})\,=\,\int_{\mathcal{M}}f(y)\,\left(\psi_{t}^{\gamma}(x,y)\,-\psi_{t}^{\gamma}(x,x_{0})\right)\,\mathrm{d}\mu(y)\,=\,\int_{B(x_{0},\xi)}\,f(y)\,\left(\psi_{t}^{\gamma}(x,y)\,-\psi_{t}^{\gamma}(x,x_{0})\right)\,\mathrm{d}\mu(y). \tag{26}\] Therefore, by (26), (P1), (P4) and by integrating in \(x\) over \(\mathcal{M}\), we obtain \[\int_{\mathcal{M}}|\mathcal{K}_{t}^{\gamma}(f)(x)\,-\,M\,\psi_{t}^{\gamma}(x,x_{0})|\,\mathrm{d}\mu(x)\leq C(\xi,\gamma,\mathcal{M})\,t^{-\theta_{\gamma}}\int_{\mathcal{M}}\psi_{t}^{\gamma}(x,x_{0})\,\mathrm{d}\mu(x)\int_{B(x_{0},\xi)}|f(y)|\,\mathrm{d}\mu(y)\lesssim t^{-\theta_{\gamma}}\int_{\mathcal{M}}\psi_{t}^{\gamma}(x,x_{0})\,\mathrm{d}\mu(x)\,\|f\|_{L^{1}(\mathcal{M})}\lesssim t^{-\theta_{\gamma}},\] for \(t\) large enough such that \(d(x_{0},y)\leq\xi\leq t^{1/\gamma}\), where in the last step we used (P2). This proves the desired \(L^{1}(\mathcal{M})\) result.

We now turn to the proof of the sup norm asymptotics. For \(t\) large enough so that \(d(x_{0},y)\leq\xi\leq t^{1/\gamma}\), by (P3) we get \(\psi_{t}^{\gamma}(x,x_{0})\asymp\psi_{t}^{\gamma}(x,y)\) for all \(x\in\mathcal{M}\). Therefore, by (26), (P1) and (P4) we get \[|\mathcal{K}_{t}^{\gamma}(f)(x)\,-\,M\,\psi_{t}^{\gamma}(x,x_{0})|\leq C(\xi,\gamma,\mathcal{M})\,t^{-\theta_{\gamma}}\int_{B(x_{0},\xi)}\psi_{t}^{\gamma}(x,y)\,|f(y)|\,\mathrm{d}\mu(y)\lesssim t^{-\theta_{\gamma}}\,\sup_{y\in\mathcal{M}}\psi_{t}^{\gamma}(x,y)\,\|f\|_{L^{1}(\mathcal{M})}\lesssim t^{-\theta_{\gamma}}\,V(x,t^{1/\gamma})^{-1},\] where in the last step we used (P2). The claim follows.

### Asymptotics for solutions to the Caffarelli-Silvestre extension problem

In this subsection, we study the large-time asymptotic behavior of the family of operators \(\{T_{t}^{\sigma}\}_{t>0}\). To this end, we first give some indispensable estimates concerning the kernels \(Q_{t}^{\sigma}\) and use them to prove that they belong to the class \(\mathcal{P}_{1}\) for all \(\sigma\in(0,1)\).
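Before the formal estimates, the euclidean case offers a quick sanity check of the two-sided bound that Lemma 7 below yields for \(Q_{t}^{\sigma}\) (cf. (28)): on \(\mathbb{R}^{n}\), where \(V(x,r)\asymp r^{n}\), it predicts \(Q_{t}^{\sigma}(x,y)\asymp t^{2\sigma}(t+d(x,y))^{-n-2\sigma}\). A minimal numerical sketch of this comparability (our own illustration, not part of the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def Q(t, d, sigma, n=1):
    """Q_t^sigma on R^n via (21), with the Gauss-Weierstrass heat kernel."""
    integrand = lambda u: ((4 * np.pi * u) ** (-n / 2) * np.exp(-d**2 / (4 * u))
                           * np.exp(-t**2 / (4 * u)) / u ** (1 + sigma))
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return t ** (2 * sigma) / (2 ** (2 * sigma) * gamma(sigma)) * val

# The ratio Q / [t^{2 sigma} (t + d)^{-n - 2 sigma}] should stay within fixed bounds.
sigma, n = 0.3, 1
ratios = [Q(t, d, sigma, n) / (t ** (2 * sigma) * (t + d) ** (-n - 2 * sigma))
          for t in (0.5, 1.0, 5.0, 20.0) for d in (0.0, 1.0, 10.0, 50.0)]
print(min(ratios), max(ratios))  # bounded above and below, as (28) predicts
```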
**Lemma 7**.: _For all \(x,y\in\mathcal{M}\), all \(t>0\), and all constants \(c>0\) and \(\kappa>1\), it holds_ \[\int_{0}^{\infty}V(x,\sqrt{u})^{-1}\,e^{-\frac{t^{2}}{4u}}\,e^{-c\,\frac{d^{2}(x,y)}{u}}\,\frac{du}{u^{\kappa}}\asymp\frac{1}{V(x,t+d(x,y))}\,\frac{1}{(t+d(x,y))^{2(\kappa-1)}},\] _where the implied constants depend only on \(c,\kappa\) and \(\mathcal{M}\)._

_In addition, for fixed \(r>0\), we have_ \[V(x,t+d(x,y))\,(t+d(x,y))^{2(\kappa-1)}\int_{0}^{r}V(x,\sqrt{u})^{-1}\,e^{-\frac{t^{2}}{4u}}\,e^{-c\,\frac{d^{2}(x,y)}{u}}\,\frac{du}{u^{\kappa}}=\mathcal{O}(t^{-N}),\quad\forall N>0\] _as \(t\to+\infty\), where the implied constant depends in addition on \(r\), \(N\)._

Proof.: Set \[I:=\int_{0}^{\infty}V(x,\sqrt{u})^{-1}\,e^{-\frac{t^{2}}{4u}}\,e^{-c\,\frac{d^{2}(x,y)}{u}}\,\frac{du}{u^{\kappa}}.\] Then, for \(C:=\min\{c,1/4\}\), we have \[I\leq\int_{0}^{\infty}\,V(x,\,\sqrt{u})^{-1}e^{-C\,\frac{t^{2}+d^{2}(x,y)}{u}}\,\frac{\mathrm{d}u}{u^{\kappa}}\lesssim\frac{1}{(t^{2}+d^{2}(x,y))^{\kappa-1}}\int_{0}^{\infty}\,V\Bigg{(}x,\,\sqrt{\frac{t^{2}+d^{2}(x,y)}{u}}\Bigg{)}^{-1}\,e^{-Cu}\,u^{\kappa-2}\,\mathrm{d}u\] \[\lesssim\frac{1}{(t^{2}+d^{2}(x,y))^{\kappa-1}}\frac{1}{V\Big{(}x,\,\sqrt{t^{2}+d^{2}(x,y)}\Big{)}}\Bigg{(}\int_{0}^{1}e^{-Cu}\,u^{\frac{\nu^{\prime}}{2}}\,u^{\kappa-2}\,\mathrm{d}u+\int_{1}^{\infty}e^{-Cu}\,u^{\frac{\nu}{2}}\,u^{\kappa-2}\,\mathrm{d}u\Bigg{)}\lesssim\frac{1}{V(y,t+d(x,y))}\,\frac{1}{(t+d(x,y))^{2(\kappa-1)}},\] where the second inequality follows from the change of variables \(u\mapsto(t^{2}+d^{2}(x,y))/u\), and for the last two inequalities we used the volume doubling property (9). The lower bound follows similarly. Last, as far as asymptotics are concerned, observe that for all \(C>0\), we have \[\int_{0}^{r}\,V(y,\,\sqrt{u})^{-1}e^{-C\,\frac{t^{2}+d^{2}(x,y)}{u}}\,\frac{\mathrm{d}u}{u^{\kappa}}\lesssim e^{-\frac{C}{2}\,\frac{t^{2}+d^{2}(x,y)}{r}}\int_{0}^{r}\,V(y,\,\sqrt{u})^{-1}e^{-\frac{C}{2}\,\frac{t^{2}+d^{2}(x,y)}{u}}\,\frac{\mathrm{d}u}{u^{\kappa}}.\] The additional exponential term on the right-hand side allows for the fast decay in \(t\) as \(t\to+\infty\), while for the remaining integral we may conclude as before.

**Corollary 8**.: _There is a constant \(C\geq 1\) such that if \(d(y,z)\leq t\), then_ \[C^{-1}\leq\frac{Q_{t}^{\sigma}(x,y)}{Q_{t}^{\sigma}(x,z)}\leq C.\]

Proof.: Observe first that for any \(\gamma>0\), if \(d(y,z)\leq t^{1/\gamma}\), then the triangle inequality implies \[\frac{1}{2}\leq\frac{t^{1/\gamma}+d(x,y)}{2\,t^{1/\gamma}+d(x,y)}\leq\frac{t^{1/\gamma}+d(x,y)}{t^{1/\gamma}+d(x,z)}\leq\frac{2\,t^{1/\gamma}+d(x,z)}{t^{1/\gamma}+d(x,z)}\leq 2. \tag{27}\] Recall next the subordination formula (21) for \(Q_{t}^{\sigma}\) and the double-sided heat kernel estimates (11). Then, the claim is a simple consequence of Lemma 7 for \(\kappa=\sigma+1\), which yields \[Q_{t}^{\sigma}(x,y)\asymp\frac{1}{V(x,t+d(x,y))}\,\frac{t^{2\sigma}}{(t+d(x,y))^{2\sigma}}, \tag{28}\] the volume doubling condition (9) and the inequality (27) for \(\gamma=1\).

We are now in a position to prove that \[(Q_{t}^{\sigma})\in\mathcal{P}_{1}\quad\text{for all}\quad\sigma\in(0,1).\] Indeed, as already mentioned, due to the subordination formula (21), the kernel \(Q_{t}^{\sigma}\) satisfies (P1) as well as the first assertion of (P2). The second assertion of (P2) follows immediately from (28). (P3) follows by Corollary 8. Finally, we prove (P4). Fix a basepoint \(x_{0}\in\mathcal{M}\) and consider \(y\in\mathcal{M}\) such that \(d(x_{0},y)<\xi\).
Write \[|Q_{t}^{\sigma}(x,y)-Q_{t}^{\sigma}(x,x_{0})|\leq\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{0}^{+\infty}|h_{u}(x,y)-h_{u}(x,x_{0})|\,e^{-\frac{t^{2}}{4u}}\frac{\mathrm{d}u}{u^{1+\sigma}}:=I_{1}+I_{2},\] where \[I_{1}=\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{0}^{\xi^{2}}|h_{u}(x,y)-h_{u}(x,x_{0})|\,e^{-\frac{t^{2}}{4u}}\frac{\mathrm{d}u}{u^{1+\sigma}},\qquad I_{2}=\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{\xi^{2}}^{+\infty}|h_{u}(x,y)-h_{u}(x,x_{0})|\,e^{-\frac{t^{2}}{4u}}\frac{\mathrm{d}u}{u^{1+\sigma}}.\]

Let us first start with \(I_{2}\). Since \(u\geq\xi^{2}>d^{2}(x_{0},y)\), we can use the Holder estimate (14) for the heat kernel. Therefore, applying Lemma 7 for \(\kappa=1+\sigma+\frac{\theta}{2}\), we get \[I_{2}\leq C(\sigma,\mathcal{M})\,\xi^{\theta}\,t^{2\sigma}\int_{\xi^{2}}^{+\infty}V(x,\,\sqrt{u})^{-1}\,e^{-\frac{t^{2}}{4u}}\,e^{-c\,\frac{d^{2}(x,x_{0})}{u}}\frac{\mathrm{d}u}{u^{1+\sigma+\frac{\theta}{2}}}\leq C(\xi,\sigma,\mathcal{M})\,\frac{1}{V(x,t+d(x,x_{0}))}\,\frac{t^{2\sigma}}{(t+d(x,x_{0}))^{2\sigma+\theta}}\leq C(\xi,\sigma,\mathcal{M})\,t^{-\theta}\,Q_{t}^{\sigma}(x,x_{0}),\] where in the last step we used the bounds (28).

It remains to treat \(I_{1}\). For this, observe that by the second part of Lemma 7 for \(\kappa=1+\sigma\) and by (28), we have for all \(x,y\in\mathcal{M}\) and for all \(t\) large enough \[\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{0}^{\xi^{2}}h_{u}(x,y)\,e^{-\frac{t^{2}}{4u}}\frac{\mathrm{d}u}{u^{1+\sigma}}\lesssim t^{-N}\,Q_{t}^{\sigma}(x,y),\quad\text{for all }N>0.\] Therefore, \[I_{1}\lesssim t^{-N}\,(Q_{t}^{\sigma}(x,y)+Q_{t}^{\sigma}(x,x_{0})).\] Taking \(t\) large enough so that \(d(x_{0},y)<\xi<t\) and making use of Corollary 8 we finally estimate \[I_{1}\lesssim t^{-N}\,Q_{t}^{\sigma}(x,x_{0})\qquad\forall N>0.\] Altogether, we get \[|Q_{t}^{\sigma}(x,y)-Q_{t}^{\sigma}(x,x_{0})|\lesssim t^{-\theta}\,Q_{t}^{\sigma}(x,x_{0}) \tag{29}\] for all \(x,y,x_{0}\in\mathcal{M}\) such that \(d(x_{0},y)<\xi\) and \(t\) large enough. This proves (P4) for \(\theta_{1}=\theta\), where \(\theta\) is the constant from the heat kernel Holder inequality (14).

### Asymptotics for solutions to the fractional heat equation

As in the previous subsection, we start by proving upper and lower bounds for the kernel \(P_{t}^{\alpha}\), by using the subordination formula (22) and the double-sided estimates of the heat kernel (11). It is well-known that the subordinator \(\eta_{t}^{\alpha}\) cannot be written explicitly, except for the case \(\alpha=1\). Let us recall, however, that \[\eta_{t}^{\alpha}(u)\asymp\begin{cases}t^{\frac{1}{2-\alpha}}\,u^{-\frac{4-\alpha}{4-2\alpha}}\,e^{-c_{\alpha}t^{\frac{2}{2-\alpha}}u^{-\frac{\alpha}{2-\alpha}}},&u\leq t^{2/\alpha},\\ t\,u^{-1-\alpha/2},&u>t^{2/\alpha},\end{cases} \tag{30}\] where \(c_{\alpha}=\frac{2-\alpha}{2}\left(\frac{\alpha}{2}\right)^{\frac{\alpha}{2-\alpha}}\), [21]. From these, we get that \[\eta_{t}^{\alpha}(u)\lesssim t\,u^{-1-\alpha/2},\quad t,u>0.
\tag{31}\]

**Lemma 9**.: _For all \(x,y\in\mathcal{M}\), all \(t>0\), and all constants \(C>0\), \(\kappa\geq 0\), it holds_ \[\int_{0}^{\infty}V(y,\sqrt{u})^{-1}\ e^{-C\frac{d^{2}(x,y)}{u}}\eta_{t}^{\alpha}(u)\,\frac{du}{u^{\kappa}}\asymp\frac{t}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\,\frac{1}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha+2\kappa}},\] _where the implied constants depend only on \(C,\kappa,\alpha\) and \(\mathcal{M}\)._

_In addition, for fixed \(r>0\), we have_ \[V(x,t^{\frac{1}{\alpha}}+d(x,y))\,(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha+2\kappa}\int_{0}^{r}V(y,\sqrt{u})^{-1}\ e^{-C\frac{d^{2}(x,y)}{u}}\,\eta_{t}^{\alpha}(u)\,\frac{du}{u^{\kappa}}=O(t^{-N})\quad\forall N>0\] _as \(t\to+\infty\), where the implied constant depends in addition on \(r\), \(N\)._

Proof.: Write \[K=\int_{0}^{\infty}V(y,\sqrt{u})^{-1}\ e^{-C\frac{d^{2}(x,y)}{u}}\eta_{t}^{\alpha}(u)\,\frac{\mathrm{d}u}{u^{\kappa}}.\] To estimate the integral \(K\), we distinguish cases.

_Case I: \(d(x,y)\geq t^{1/\alpha}\)._ Then, \[\frac{1}{4}(t^{\frac{1}{\alpha}}+d(x,y))^{2}\leq d^{2}(x,y)\leq(t^{\frac{1}{\alpha}}+d(x,y))^{2}.\] Therefore, by (31), we get that \[K\lesssim t\int_{0}^{\infty}u^{-1-\frac{\alpha}{2}-\kappa}\,V(x,\sqrt{u})^{-1}\exp\left\{-\frac{C}{4}\left(\frac{t^{\frac{1}{\alpha}}+d(x,y)}{\sqrt{u}}\right)^{2}\right\}\,\mathrm{d}u\lesssim\frac{t}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\int_{0}^{\infty}u^{-1-\frac{\alpha}{2}-\kappa}\,\exp\left\{-\frac{C}{8}\left(\frac{t^{\frac{1}{\alpha}}+d(x,y)}{\sqrt{u}}\right)^{2}\right\}\,\mathrm{d}u\lesssim\frac{t}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\,\frac{1}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha+2\kappa}}.\] Here, for the second inequality we used the volume doubling property (9) and the fact that \(x^{\mu}\,e^{-cx^{2}}\leq\mathrm{const.}(c,\mu)\,e^{-\frac{c}{2}x^{2}}\) for all \(c,\mu>0\) and \(x\in(0,+\infty)\). For the last inequality we applied a change of variables. For a lower bound, by (30) we get for all \(C>0\) that \[K\geq\int_{t^{2/\alpha}}^{\infty}V(y,\sqrt{u})^{-1}\ e^{-C\frac{d^{2}(x,y)}{u}}\eta_{t}^{\alpha}(u)\,\frac{\mathrm{d}u}{u^{\kappa}}\gtrsim t\int_{(t^{\frac{1}{\alpha}}+d(x,y))^{2}}^{2(t^{\frac{1}{\alpha}}+d(x,y))^{2}}u^{-1-\frac{\alpha}{2}-\kappa}\,V(x,\sqrt{u})^{-1}\,e^{-C\frac{d^{2}(x,y)}{u}}\mathrm{d}u.\] Taking into account that \(u\asymp(t^{\frac{1}{\alpha}}+d(x,y))^{2}\asymp d^{2}(x,y)\), the claim follows immediately by the volume doubling property.

_Case II: \(d(x,y)<t^{1/\alpha}\)._ Then, \(t^{\frac{2}{\alpha}}\asymp t^{\frac{2}{\alpha}}+d^{2}(x,y)\asymp(t^{\frac{1}{\alpha}}+d(x,y))^{2}\), so \[\frac{t}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha}}\asymp 1. \tag{32}\] Write \[I+J:=\int_{0}^{t^{2/\alpha}}V(y,\sqrt{u})^{-1}\ e^{-C\frac{d^{2}(x,y)}{u}}\eta_{t}^{\alpha}(u)\,\frac{\mathrm{d}u}{u^{\kappa}}+\int_{t^{2/\alpha}}^{\infty}V(y,\sqrt{u})^{-1}\ e^{-C\frac{d^{2}(x,y)}{u}}\eta_{t}^{\alpha}(u)\,\frac{\mathrm{d}u}{u^{\kappa}}.\] We first deal with \(I\).
By (30), we have \[I\leq t^{\frac{1}{2-\alpha}}\int_{0}^{t^{2/\alpha}}u^{-\frac{4-\alpha}{4-2\alpha}-\kappa}\,e^{-c_{\alpha}\left(\frac{t^{2/\alpha}}{u}\right)^{\frac{\alpha}{2-\alpha}}}\,V(x,\,\sqrt{u})^{-1}\,\,e^{-C\frac{d^{2}(x,y)}{u}}\,\mathrm{d}u\leq t^{-2\kappa/\alpha}\int_{1}^{\infty}u^{\frac{3\alpha-4}{4-2\alpha}+\kappa}\,e^{-c_{\alpha}u^{\frac{\alpha}{2-\alpha}}}\,V\left(x,\,\frac{t^{1/\alpha}}{\sqrt{u}}\right)^{-1}\,\,e^{-C\frac{d^{2}(x,y)}{t^{2/\alpha}}u}\,\mathrm{d}u\] \[\lesssim t^{-2\kappa/\alpha}\,V(x,t^{1/\alpha})^{-1}\int_{1}^{\infty}u^{\frac{3\alpha-4}{4-2\alpha}+\kappa}\,e^{-c_{\alpha}u^{\frac{\alpha}{2-\alpha}}}\,u^{\frac{\nu}{2}}\,\mathrm{d}u\lesssim\frac{t}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\,\frac{1}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha+2\kappa}},\] where we first performed the change of variables \(u\mapsto t^{2/\alpha}/u\), while for the last two inequalities we used (9) and (32). The lower bound for \(I\) is proved similarly, using that \(\exp\left\{-C\,\frac{d^{2}(x,y)}{t^{2/\alpha}}u\right\}>\exp\left\{-C\,u\right\}\).

We now turn to \(J\). We restrict to proving upper bounds, since the proof for lower bounds runs similarly. Observe first that for any \(C>0\), we have \[\exp\left\{-C\,\frac{d^{2}(x,y)}{t^{2/\alpha}}u\right\}\asymp 1,\quad\text{ for }\,d(x,y)<t^{1/\alpha},\,\,u\in(0,1). \tag{33}\] Then, by (30), the change of variables \(u\mapsto t^{2/\alpha}/u\), and (33) we get \[J\asymp t\int_{t^{2/\alpha}}^{\infty}u^{-1-\frac{\alpha}{2}-\kappa}\,V(x,\,\sqrt{u})^{-1}\,\,e^{-C\frac{d^{2}(x,y)}{u}}\,\,\mathrm{d}u\asymp t^{-2\kappa/\alpha}\int_{0}^{1}u^{-1+\frac{\alpha}{2}+\kappa}\,V\left(x,\frac{t^{1/\alpha}}{\sqrt{u}}\right)^{-1}\,\,\mathrm{d}u\] \[\lesssim t^{-2\kappa/\alpha}\,V(x,t^{1/\alpha})^{-1}\int_{0}^{1}u^{-1+\frac{\alpha}{2}+\kappa}\,u^{\frac{\nu^{\prime}}{2}}\,\,\mathrm{d}u\lesssim\frac{t}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\,\frac{1}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha+2\kappa}},\] using the volume doubling property (9). This concludes the proof as far as bounds are concerned.

Last, for asymptotics, we distinguish cases once again. If \(d(x,y)\geq t^{1/\alpha}\), the claim follows by using (31), by observing that for all \(C>0\), \[\int_{0}^{r}V(y,\,\sqrt{u})^{-1}e^{-C\frac{d^{2}(x,y)}{u}}\eta_{t}^{\alpha}(u)\frac{\mathrm{d}u}{u^{\kappa}}\lesssim\exp\left\{-\frac{C}{8}\left(\frac{t^{\frac{1}{\alpha}}+d(x,y)}{\sqrt{r}}\right)^{2}\right\}\,t\int_{0}^{\infty}u^{-1-\frac{\alpha}{2}-\kappa}V(x,\,\sqrt{u})^{-1}\exp\left\{-\frac{C}{8}\left(\frac{t^{\frac{1}{\alpha}}+d(x,y)}{\sqrt{u}}\right)^{2}\right\}\,\mathrm{d}u\] and by resuming the computations for \(K\). Therefore, the exponential term allows for the claimed fast decay in time. If \(d(x,y)<t^{1/\alpha}\), take \(t\) large enough so that \(r<t^{2/\alpha}\), whence the claim follows by (30), by resuming the computations for \(I\) and by observing that for all \(C>0\), \[\int_{0}^{r}V(y,\,\sqrt{u})^{-1}\,\,e^{-C\frac{d^{2}(x,y)}{u}}\,\eta_{t}^{\alpha}(u)\,\frac{\mathrm{d}u}{u^{\kappa}}\lesssim\exp\left\{-\frac{c_{\alpha}}{2}\left(\frac{t^{2/\alpha}}{r}\right)^{\frac{\alpha}{2-\alpha}}\right\}\,t^{\frac{1}{2-\alpha}}\int_{0}^{r}u^{-\frac{4-\alpha}{4-2\alpha}-\kappa}\,e^{-\frac{c_{\alpha}}{2}\left(\frac{t^{2/\alpha}}{u}\right)^{\frac{\alpha}{2-\alpha}}}V(x,\,\sqrt{u})^{-1}e^{-C\frac{d^{2}(x,y)}{u}}\,\mathrm{d}u.\] The proof is now complete.
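For \(\alpha=1\) the subordinator is explicit, \(\eta_{t}^{1}(u)=\frac{t}{2\sqrt{\pi}}\,u^{-3/2}\,e^{-t^{2}/4u}\), so (30) can be checked directly: there \(c_{1}=1/4\), and the two branches reduce to \(t\,u^{-3/2}e^{-t^{2}/4u}\) and \(t\,u^{-3/2}\) up to constants. A minimal numerical sketch of this consistency (our illustration, not part of the paper):

```python
import numpy as np
from scipy.integrate import quad

def eta1(t, u):
    """Explicit 1/2-stable subordinator density: inverse Laplace transform of exp(-t sqrt(s))."""
    return t / (2 * np.sqrt(np.pi)) * u ** (-1.5) * np.exp(-t**2 / (4 * u))

t = 2.0
mass, _ = quad(lambda u: eta1(t, u), 0, np.inf)
print(mass)  # = 1: eta_t^1 is a probability density

# Two-sided comparison with (30) at c_1 = 1/4: the ratio stays bounded on both branches.
for u in (0.1, 1.0, t**2, 10 * t**2, 100 * t**2):
    branch = (t * u**-1.5 * np.exp(-0.25 * t**2 / u)) if u <= t**2 else (t * u**-1.5)
    print(u, eta1(t, u) / branch)
```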
**Remark.** For \(\kappa=0\), the estimates of Lemma 9 amount to bounds for the kernel of the fractional heat semigroup (see the next corollary), which were already known in the literature: we refer to [33, Eq (3.5)] where such bounds are actually established on a more general setting. It is mostly the case \(\kappa>0\) that will be of interest to us. **Corollary 10**.: _There is a constant \(C\geq 1\) such that if \(d(y,z)\leq t^{1/\alpha}\), then_ \[C^{-1}\leq\frac{P_{t}^{\alpha}(x,y)}{P_{t}^{\alpha}(x,z)}\leq C.\] Proof.: Recall first the subordination formula (22) and the double-sided heat kernel estimates (11). Then, by Lemma 9 for \(\kappa=0\) we get for all \(x,y\in\mathcal{M}\) and all \(t>0\) that \[P_{t}^{\alpha}(x,y)\asymp\frac{1}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\frac{t}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha}} \tag{34}\] Then the claim follows as in Corollary 8. We are now in a position to prove that \[(P_{t}^{\alpha})\in\mathcal{P}_{\alpha},\quad\alpha\in(0,2).\] Indeed, as already mentioned, due to the subordination formula (22), the kernel \(P_{t}^{\alpha}\) satisfies (P1) as well as the first assertion of (P2). The second assertion of (P2) follows immediately from (34). (P3) follows by Corollary 10. Finally, we prove (P4). Fix a basepoint \(x_{0}\in\mathcal{M}\) and consider \(y\in\mathcal{M}\) such that \(d(x_{0},y)<\xi\). Write \[|P_{t}^{\alpha}(x,y)-P_{t}^{\alpha}(x,x_{0})|\leq\int_{0}^{+\infty}|h_{u}(x,y)-h_{u}(x,x_{0})|\,\eta_{t}^{\alpha}(u)\,\mathrm{d}u:=I_{1}+I_{2},\] where \[I_{1}=\int_{0}^{\xi^{2}}|h_{u}(x,y)-h_{u}(x,x_{0})|\,\eta_{t}^{\alpha}(u)\,\mathrm{d}u,\qquad I_{2}=\int_{\xi^{2}}^{+\infty}|h_{u}(x,y)-h_{u}(x,x_{0})|\,\eta_{t}^{\alpha}(u)\,\mathrm{d}u.\] Let us first start with \(I_{2}\). Since \(u\geq\xi^{2}>d^{2}(x_{0},y)\), we can use the Hölder estimate (14) for the heat kernel. Therefore, applying Lemma 9 for \(\kappa=\theta/2\), we get \[I_{2}\leq C(\alpha,\mathcal{M})\,\xi^{\theta}\,\int_{0}^{\infty}V(y,\sqrt{u})^{-1}\ e^{-c\frac{d^{2}(x,y)}{u}}\,\eta_{t}^{\alpha}(u)\,\frac{\mathrm{d}u}{u^{\theta/2}}\leq C(\xi,\alpha,\mathcal{M})\,\frac{t}{V(x,t^{\frac{1}{\alpha}}+d(x,y))}\,\frac{1}{(t^{\frac{1}{\alpha}}+d(x,y))^{\alpha+\theta}}\leq C(\xi,\alpha,\mathcal{M})\,t^{-\theta/\alpha}P_{t}^{\alpha}(x,x_{0}),\] where in the last step we used the bounds (34). It remains to treat \(I_{1}\). For this, observe that by the second part of Lemma 9 for \(\kappa=0\) and by (34), we have for all \(x,y\in\mathcal{M}\) and for all \(t\) large enough, \[\int_{0}^{\xi^{2}}h_{u}(x,y)\,\eta_{t}^{\alpha}(u)\,\mathrm{d}u\lesssim t^{-N}\,P_{t}^{\alpha}(x,y),\quad\text{for all}\quad N>0.\] Therefore, \[I_{1}\lesssim t^{-N}\,(P_{t}^{\alpha}(x,y)+P_{t}^{\alpha}(x,x_{0})).\] Taking \(t\) large enough so that \(d(x_{0},y)<\xi<t^{1/\alpha}\) and making use of Corollary 10 we finally estimate \[I_{1}\lesssim t^{-N}\,P_{t}^{\alpha}(x,x_{0})\qquad\forall N>0,\,\forall t>\xi^{\alpha}.\] Altogether, we get \[|P_{t}^{\alpha}(x,y)-P_{t}^{\alpha}(x,x_{0})|\lesssim t^{-\theta/\alpha}\,P_{t}^{\alpha}(x,x_{0}) \tag{35}\] for all \(t\) large enough and all \(x,y,x_{0}\in\mathcal{M}\) such that \(d(x_{0},y)<\xi\). This proves (P4) for \(\theta_{\alpha}=\theta/\alpha\), where \(\theta\) is the constant from the heat kernel Hölder inequality (14).
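As a quick sanity check of the two-sided bound (34), consider the simplest Euclidean case \(\mathcal{M}=\mathbb{R}\) with \(\alpha=1\), where the subordination density is explicit, \(\eta_{t}^{1}(u)=\frac{t}{2\sqrt{\pi}}\,u^{-3/2}e^{-t^{2}/(4u)}\), and subordinating the Gaussian heat kernel produces the classical Poisson (Cauchy) kernel \(t/\big(\pi(t^{2}+d^{2})\big)\); since \(V(x,r)=2r\) on \(\mathbb{R}\), this is indeed comparable to \(t\,V(x,t+d)^{-1}(t+d)^{-1}\). The following minimal numerical sketch (ours, not part of the paper; Python with scipy assumed) verifies the identity:

```python
import numpy as np
from scipy.integrate import quad

def heat_kernel(u, d):
    # Gaussian heat kernel h_u(d) on the real line
    return np.exp(-d**2 / (4 * u)) / np.sqrt(4 * np.pi * u)

def eta(u, t):
    # 1/2-stable subordination density eta_t^1(u) (case alpha = 1)
    return t * u**-1.5 * np.exp(-t**2 / (4 * u)) / (2 * np.sqrt(np.pi))

for t, d in [(1.0, 0.0), (1.0, 3.0), (5.0, 2.0), (10.0, 40.0)]:
    P, _ = quad(lambda u: heat_kernel(u, d) * eta(u, t), 0, np.inf)
    cauchy = t / (np.pi * (t**2 + d**2))  # exact Poisson kernel on R
    print(f"t={t}, d={d}:  subordinated={P:.6g}  Cauchy={cauchy:.6g}")
```

The two printed columns agree to quadrature accuracy, as they must.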
#### 4.3.1. Final remarks on the rate of convergence The rate of convergence for continuous and compactly supported initial data is optimal, both for the extension problem (O(\(t^{-\theta}\)) from (29)) and for the fractional heat equation (O(\(t^{-\theta/\alpha}\)) from (35)), in the following sense: on Euclidean space, the heat kernel Hölder inequality (14) holds for \(\theta=1\), and it is known that, concerning the fractional heat equation, the optimal rate for convergence in the \(L^{1}\) norm for compactly supported initial data is O(\(t^{-1/\alpha}\)), [38, Theorem 3.2]. Moreover, instead of using (P4) for \(P_{t}^{\alpha}\), one could pursue using a Hölder continuity estimate for \(P_{t}^{\alpha}\), that is, that there is a constant \(\Theta>0\) such that \[|P_{t}^{\alpha}(x,y)-P_{t}^{\alpha}(x,z)|\lesssim\left(\frac{d(y,z)}{t^{1/\alpha}}\right)^{\Theta}P_{t}^{\alpha}(x,y),\] when \(d(y,z)\leq t^{1/\alpha}\). For a proof, see [14, Theorem 4.14] for stable-like processes on a rather general setting (alternatively, one can modify for all \(\alpha\in(0,2)\) the result of [18, Theorem 4] for the Poisson operator, using the semigroup property, the fact that the fractional heat kernel is a probability measure and Corollary 10). However, for compactly supported initial data, this would imply \(L^{1}(\mathcal{M})\) convergence at speed \(\text{O}(t^{-\Theta/\alpha})\). In this sense, our approach gives more information on the rate of convergence for this class of data. Notice moreover that as far as the extension problem is concerned, for \(\sigma\neq 1/2\), the operators \(\{T_{t}^{\alpha}\}_{t>0}\) do not form a semigroup. Indeed, observe that \[\frac{t^{2\sigma}}{2^{2\sigma}\Gamma(\sigma)}\int_{0}^{+\infty}e^{-u\lambda^{2}}\,e^{-\frac{t^{2}}{4u}}\,\frac{\mathrm{d}u}{u^{1+\sigma}}=\frac{t^{\sigma}}{2^{\sigma-1}\Gamma(\sigma)}\,K_{\sigma}(t\lambda)\,\lambda^{\sigma},\quad\lambda>0,\quad\sigma\in(0,1),\] where \(K_{\sigma}(\cdot)\) is the modified Bessel function of the second kind and index \(\sigma\) (recall that for \(\sigma=1/2\), we have \(K_{1/2}(x)=\sqrt{\frac{\pi}{2x}}\,e^{-x}\), so the right hand side above becomes \(e^{-t\lambda}\)). Finally, following some ideas from the Euclidean setting in [37, 38], let us show how one can prescribe any rate of convergence to solutions of the fractional heat equation by choosing appropriate initial data (the proof works also for \(T_{t}^{\alpha}\), with obvious modifications). More precisely, we shall show that given any decreasing and positive function \(\phi(t)\) such that \(\phi(t)\to 0\) as \(t\to+\infty\), there is a solution \(w\) with mass \(M=1\) satisfying \[\big\|\,|w(t_{k},\cdot)-P_{t_{k}}^{\alpha}(\cdot,x_{0})|\,V(\cdot,t_{k}^{1/\alpha})\big\|_{L^{\infty}(\mathcal{M})}\gtrsim k\,\phi(t_{k}), \tag{36}\] for a sequence of times \(t_{k}\to+\infty\) that can be chosen. To prove (36), fix a basepoint \(x_{0}\in\mathcal{M}\). Let \((m_{k})_{k\geq 1}\) be a nonnegative summable sequence with \(\sum_{k=1}^{\infty}m_{k}=\epsilon<1\), and consider initial data \((1-\epsilon)\,\delta_{x_{0}}(x)+\sum_{k=1}^{\infty}m_{k}\,\delta_{x_{k}}(x)\). Observe that the total mass is \(1\). Also, the points \(x_{k}\in\mathcal{M}\), \(k\geq 1\), where the weighted Dirac measures are located are such that \(r_{k}:=d(x_{0},x_{k})\to+\infty\).
In this case, the action of \(W_{t}^{\alpha}\) yields the following solution of the fractional heat equation, \[w(t,x)=(1-\epsilon)\,P_{t}^{\alpha}(x,x_{0})+\sum_{k=1}^{\infty}m_{k}\,P_{t}^{\alpha}(x,x_{k}).\] Therefore at \(x=x_{0}\) we have \[|w(t,x_{0})-P_{t}^{\alpha}(x_{0},x_{0})|=\left|\sum_{k=1}^{\infty}m_{k}\,(P_{t}^{\alpha}(x_{0},x_{k})-P_{t}^{\alpha}(x_{0},x_{0}))\right|=P_{t}^{\alpha}(x_{0},x_{0})\left|\sum_{k=1}^{\infty}m_{k}\,\left(\frac{P_{t}^{\alpha}(x_{0},x_{k})}{P_{t}^{\alpha}(x_{0},x_{0})}-1\right)\right|\geq c_{1}\,V(x_{0},t^{1/\alpha})^{-1}\,\left|\sum_{k=1}^{\infty}m_{k}\,\left(\frac{P_{t}^{\alpha}(x_{0},x_{k})}{P_{t}^{\alpha}(x_{0},x_{0})}-1\right)\right|\] for some constant \(c_{1}>0\) due to (34). Now, again due to (34), there is a constant \(C_{2}\geq 1\) such that \[C_{2}^{-1}\,\frac{V(x_{0},t^{1/\alpha})}{V(x_{0},t^{1/\alpha}+r_{k})}\,\frac{t}{(t^{1/\alpha}+r_{k})^{\alpha}}\leq\frac{P_{t}^{\alpha}(x_{0},x_{k})}{P_{t}^{\alpha}(x_{0},x_{0})}\leq C_{2}\,\frac{V(x_{0},t^{1/\alpha})}{V(x_{0},t^{1/\alpha}+r_{k})}\,\frac{t}{(t^{1/\alpha}+r_{k})^{\alpha}},\] where \[\frac{V(x_{0},t^{1/\alpha})}{V(x_{0},t^{1/\alpha}+r_{k})}\leq C\,\left(1+\frac{r_{k}}{t^{1/\alpha}}\right)^{-\nu^{\prime}},\quad\frac{t}{(t^{1/\alpha}+r_{k})^{\alpha}}=\left(1+\frac{r_{k}}{t^{1/\alpha}}\right)^{-\alpha}\] for some \(C>1\) owing to (9). Consider now \(\phi(t)\searrow 0\) as \(t\to+\infty\), and choose iteratively \(t_{k}\) and \(x_{k}\) as follows: given choices for the steps \(1,2,...,k-1\), pick \(t_{k}\) to be much larger than \(t_{k-1}\) and such that \(\phi(t_{k})\leq m_{k}/(2k)\). This is possible since \(\phi(t)\) decreases to zero. Choose now \(d(x_{k},x_{0})=r_{k}\) so large that \(\left(1+\frac{r_{k}}{t_{k}^{1/\alpha}}\right)^{-\nu^{\prime}-\alpha}<1/(2\,C\,C_{2})\). Therefore \[|w(t_{k},x_{0})-P_{t_{k}}^{\alpha}(x_{0},x_{0})|\,V(x_{0},t_{k}^{1/\alpha})\geq c_{1}\,k\,\phi(t_{k}),\] which proves the claim. ## 5. The Poisson semigroup on rank one non-compact symmetric spaces This section deals with the Poisson semigroup convergence on rank one non-compact symmetric spaces. Our aim is to show that in this case, non-Euclidean phenomena occur, namely, the convergence results of the previous sections fail. More precisely, our aim is to prove Theorem 2. The Poisson semigroup \(e^{-t\sqrt{-\Delta}}\) has been studied in various settings, including hyperbolic space (and more generally, non-compact symmetric spaces), see for instance [3, 16] and the references therein. Information for its kernel \(p_{t}\) can be deduced from its subordination to the heat kernel. Therefore, in the rank one case, by well-known properties of the heat kernel, the Poisson kernel \(p_{t}\) is a radial positive function, and for \(f\in L^{1}(\mathbb{X})\), we may write \[e^{-t\sqrt{-\Delta}}f(x)=e^{-t\sqrt{-\Delta}}f(gK)=\int_{\mathbb{X}}p_{t}(d(x,y))f(y)\,\mathrm{d}\mu(y)=\int_{\mathbb{G}}p_{t}(h^{-1}g)f(h)\,\mathrm{d}h.\] We next recall some results about its large time behavior, [3, Theorems 4.3.1 and 5.3.1]. Notice the exponential decay in time and space, which demonstrates the effect of geometry. **Theorem 11**.: _[_3_]_ _The Poisson kernel on \(\mathbb{X}\) satisfies_ \[p_{t}(r)\asymp t\,(1+r)\,(t^{2}+r^{2})^{-\frac{5}{4}}\,e^{-\rho r-\rho\sqrt{t^{2}+r^{2}}},\quad\text{ for }t\text{ large}.
\tag{37}\] _In addition, we have_ \[p_{t}(r)\sim 2^{m_{2\alpha}}\,\pi^{-\frac{n}{2}}\,\rho^{\frac{3}{2}}\,\gamma\left(\rho\frac{r}{\sqrt{t^{2}+r^{2}}}\right)t\,r\,(t^{2}+r^{2})^{-\frac{5}{4}}\,e^{-\rho r-\rho\sqrt{t^{2}+r^{2}}},\quad\text{ as }t\to+\infty, \tag{38}\] _where_ \[\gamma(s)=\frac{\Gamma(s+\frac{m_{\alpha}}{2})\Gamma(\frac{s}{2}+\frac{\rho}{2})}{\Gamma(s+1)\Gamma(\frac{s}{2}+\frac{m_{\alpha}}{4})},\quad s\geq 0.\] Recall that the Poisson kernel has total mass \(1\). We next determine its _critical region_, that is, a region \(\Omega_{t}\subseteq\mathbb{X}\) such that \[\int_{\mathbb{X}\smallsetminus\Omega_{t}}p_{t}(x)\,\mathrm{d}\mu(x)\longrightarrow 0\quad\text{as}\quad t\to+\infty\] or equivalently, \[\int_{\Omega_{t}}p_{t}(x)\,\mathrm{d}\mu(x)\longrightarrow 1\quad\text{as}\quad t\to+\infty.\] This notion will be substantial for our purposes. **Proposition 12**.: _Let \(0<\epsilon<2\). Then the critical region for the Poisson kernel on \(\mathbb{X}\) is_ \[\Omega_{t}=\{x\in\mathbb{X}:\ t^{2-\epsilon}\leq d(x,o)\leq t^{2+\epsilon}\},\] _for \(t\) large. More precisely,_ \[\int_{\mathbb{X}\smallsetminus\Omega_{t}}p_{t}(x)\,d\mu(x)=O(t^{-\frac{\epsilon}{2}}).\] Proof.: Using the bounds (37), the radiality of the Poisson kernel and the integration formula (15) along with (16), we have, for \(b>a\geq 0\), \[\int_{a\leq d(x,o)\leq b}p_{t}(x)\,\mathrm{d}\mu(x)\lesssim\int_{a\leq r\leq b}t\,(1+r)\,(t^{2}+r^{2})^{-\frac{5}{4}}\,e^{-\rho(\sqrt{t^{2}+r^{2}}-r)}\,dr.\] On the one hand, we compute \[\int_{d(x,o)<t^{2-\epsilon}}p_{t}(x)\,\mathrm{d}\mu(x)\leq C(N,\epsilon)\,t^{-N}\quad\forall N>0,\] due to the fact that in this case, for \(t\) large enough we have \(\sqrt{t^{2}+r^{2}}\leq 2r\), so \[\exp\left(-\rho(\sqrt{t^{2}+r^{2}}-r)\right)=\exp\left(-\rho\frac{t^{2}}{\sqrt{t^{2}+r^{2}}+r}\right)\leq\exp\left(-\frac{\rho}{3}t^{\epsilon}\right).\] On the other hand, we compute \[\int_{d(x,o)>t^{2+\epsilon}}p_{t}(x)\,\mathrm{d}\mu(x)\lesssim\int_{r\geq t^{2+\epsilon}}t\,(1+r)\,(t^{2}+r^{2})^{-\frac{5}{4}}\,\mathrm{d}r\lesssim\int_{r\geq t^{2+\epsilon}}t\,r^{-\frac{3}{2}}\,\mathrm{d}r\lesssim t^{-\frac{\epsilon}{2}},\] which completes the proof. We next give a lemma related to the Busemann function on \(\mathbb{X}\). **Lemma 13**.: _Let \(y=y_{0}K\) be in a bounded region of \(\mathbb{X}\). Then, for every \(x=gK\) in the critical region \(\Omega_{t}\),_ \[d(x,o)-d(x,y)=\tau(k^{-1}y_{0})+\operatorname{O}\!\left(t^{-2+\epsilon}\right).\] _Here, \(k\) is the left component of \(g\) in the Cartan decomposition and \(\exp(\tau(k^{-1}y_{0})H_{0})\) is the middle component of \(k^{-1}y_{0}\) in the Iwasawa decomposition._ Proof.: The arguments follow closely those of [5, Lemma 3.8], but we include the proof for the sake of completeness. Write \(x=gK\), where \(g=k\left(\exp rH_{0}\right)k^{\prime}\) in the Cartan decomposition. Then \(d(gK,o)=r\). Consider the Iwasawa decomposition \(k^{-1}y_{0}=n(k^{-1}y_{0})\left(\exp\tau(k^{-1}y_{0})H_{0}\right)k^{\prime\prime}\) for some \(k^{\prime\prime}\in K\).
Then, \[d(x,y)=d(gK,y_{0}K)=d\Big(k\left(\exp rH_{0}\right)K,\ k\,n(k^{-1}y_{0})\left(\exp\tau(k^{-1}y_{0})H_{0}\right)K\Big)=d\Big(\exp(-rH_{0})\left[n(k^{-1}y_{0})\right]^{-1}(\exp rH_{0})\,K,\ \exp((\tau(k^{-1}y_{0})-r)H_{0})\,K\Big),\] therefore we write \[d(x,o)-d(x,y)=d(gK,o)-d(gK,y_{0}K)=\overbrace{d(gK,o)-d\Big(\exp((\tau(k^{-1}y_{0})-r)H_{0})\,K,o\Big)}^{l}+\underbrace{d\Big(\exp((\tau(k^{-1}y_{0})-r)H_{0})\,K,o\Big)-d(gK,y_{0}K)}_{ll}.\] On the one hand, we have \[l\,=\,r-|\tau(k^{-1}y_{0})-r|=\frac{2\,r\,\tau(k^{-1}y_{0})-\tau(k^{-1}y_{0})^{2}}{r+|\tau(k^{-1}y_{0})-r|}=\tau(k^{-1}y_{0})+\operatorname{O}\!\left(\tfrac{1}{r}\right)=\tau(k^{-1}y_{0})+\operatorname{O}\!\left(t^{-2+\epsilon}\right)\] by using that \(r\geq t^{2-\epsilon}\) and the well-known fact that \(|\tau(k^{-1}y_{0})|\leq d(k^{-1}y_{0}K,o)=d(y,o)\), thus bounded. On the other hand, the term \(ll\) tends exponentially fast to \(0\), see for instance [5, Lemma 3.8], thus we are done. The next lemma is crucial for our proof. **Lemma 14**.: _Let \(x=k(\exp rH_{0})K\in\Omega_{t},\ y=y_{0}K\) be bounded and let \(0<\epsilon<1/2\). Then,_ \[\frac{p_{t}(x,y)}{p_{t}(x,o)}=e^{2\rho\,\tau(k^{-1}y_{0})}+\operatorname{O}\!\left(t^{-2+4\epsilon}\right).\] Proof.: Write \(r=d(x,o)\), \(s=d(x,y)\) and let \(d(y,o)<\xi\), for some \(\xi>0\). By the triangle inequality, we have \(|r-s|\leq\xi\) and since \(x\) is in the critical region, we have \(t^{2-\epsilon}\leq r\leq t^{2+\epsilon}\). In addition, for \(t\) large enough, we have \[\frac{1}{2}\,t^{2-\epsilon}\leq t^{2-\epsilon}-\xi\leq s=d(x,y)\leq t^{2+\epsilon}+\xi\leq 2\,t^{2+\epsilon},\] and \(t^{2}+r^{2}\asymp r^{2}\), \(t^{2}+s^{2}\asymp s^{2}\). By (38), we get \[\frac{p_{t}(d(x,y))}{p_{t}(d(x,o))}\sim\frac{\gamma\left(\rho\frac{s}{\sqrt{t^{2}+s^{2}}}\right)}{\gamma\left(\rho\frac{r}{\sqrt{t^{2}+r^{2}}}\right)}\,\frac{s}{r}\,\frac{(t^{2}+r^{2})^{\frac{5}{4}}}{(t^{2}+s^{2})^{\frac{5}{4}}}\exp\left\{\rho(r-s)\left(1+\frac{r+s}{\sqrt{t^{2}+r^{2}}+\sqrt{t^{2}+s^{2}}}\right)\right\}.\] We next give asymptotics for the terms of the quotient on the right hand side. First, recall by [3, p.1042] that the function \(\gamma\) satisfies \[\gamma(u)\asymp 1,\quad\frac{d}{du}\gamma(u)=O(1), \tag{39}\] if \(u>0\) is bounded above, and below away from zero. Therefore, by the mean value theorem and (39), we have, for some \(r_{0}\) between \(r,s>0\), \[\left|\gamma\left(\rho\frac{r}{\sqrt{t^{2}+r^{2}}}\right)-\gamma\left(\rho\frac{s}{\sqrt{t^{2}+s^{2}}}\right)\right|\lesssim|r-s|\left|\gamma^{\prime}\left(\rho\frac{r_{0}}{\sqrt{t^{2}+r_{0}^{2}}}\right)\right|\frac{t^{2}}{(t^{2}+r_{0}^{2})^{3/2}}\lesssim t^{2}/r_{0}^{3}.\] Therefore, again by (39) and by the fact that \(r_{0}\gtrsim t^{2-\epsilon}\), we have \[\gamma\left(\rho\frac{s}{\sqrt{t^{2}+s^{2}}}\right)/\gamma\left(\rho\frac{r}{\sqrt{t^{2}+r^{2}}}\right)=1+O\left(t^{-4+3\epsilon}\right).\] Next, by a similar mean value argument applied to \((t^{2}+(\cdot)^{2})^{\frac{5}{4}}\), we have \[\frac{(t^{2}+r^{2})^{\frac{5}{4}}}{(t^{2}+s^{2})^{\frac{5}{4}}}=1+O\left(t^{-2+4\epsilon}\right).\] Also, since \(|r-s|\leq\xi\), we have \[\frac{s}{r}=1+O\left(t^{-2+\epsilon}\right).\] It remains to deal with the exponential terms, which are the main ones. We first claim that \[\frac{r+s}{\sqrt{t^{2}+r^{2}}+\sqrt{t^{2}+s^{2}}}=1+O\left(t^{-2+2\epsilon}\right).
\tag{40}\] Indeed, consider the function \(f(u)=\sqrt{u^{2}+r^{2}}+\sqrt{u^{2}+s^{2}}\), \(u\geq 0\), and observe that the left hand side of (40) is equal to \(f(0)/f(t)\). Then, the mean value theorem for \(f\) in \([0,t]\) together with the fact that \[f^{\prime}(u)\lesssim\frac{u}{r}+\frac{u}{s}\lesssim t^{-1+\epsilon},\qquad f(u)\gtrsim t^{2-\epsilon},\qquad\forall u\in[0,t],\] yield the claimed asymptotics (40). Finally, in Lemma 13 it was shown that \[r-s=d(x,o)-d(x,y)=d(gK,o)-d(gK,y_{0}K)=\tau(k^{-1}y_{0})+\text{O}\!\left(t^{-2+\epsilon}\right).\] Therefore, \[\exp\left\{\rho(r-s)\left(1+\frac{r+s}{\sqrt{t^{2}+r^{2}}+\sqrt{t^{2}+s^{2}}}\right)\right\}=e^{2\rho\,\tau(k^{-1}y_{0})+\text{O}\!\left(t^{-2+2\epsilon}\right)}=e^{2\rho\,\tau(k^{-1}y_{0})}+\text{O}\!\left(t^{-2+2\epsilon}\right).\] Altogether, we have \[\frac{p_{t}(x,y)}{p_{t}(x,o)}=e^{2\rho\,\tau(k^{-1}y_{0})}+\text{O}\!\left(t^{-2+4\epsilon}\right).\] Proof of Theorem 2.: We consider the case of continuous compactly supported initial data. Next, we work separately outside and inside the critical region: we show first that \(\|e^{-t\sqrt{-\Delta}}f\|_{L^{1}(\mathbb{X}\smallsetminus\Omega_{t})}\to 0\) for all \(f\in\mathcal{C}_{c}(\mathbb{X})\), without any further symmetry assumptions on \(f\). However, the convergence to the Poisson kernel inside \(\Omega_{t}\), unless \(f\) is radial, may break down. Finally, let us point out that having proven the desired convergence in the \(L^{1}\) norm for radial \(\mathcal{C}_{c}(\mathbb{X})\) functions, one may conclude for the whole class of radial \(L^{1}(\mathbb{X})\) initial data by a density argument, see [5]. To begin with, let \(x\notin\Omega_{t}\). Let also \(\xi>0\) be a constant such that the compact support of \(f\) is contained in \(B(0,\xi)\). Then we have \[\int_{\mathbb{X}\setminus\Omega_{t}}|e^{-t\sqrt{-\Delta}}f(x)|\,\mathrm{d}\mu(x)\leq\int_{B(0,\xi)}|f(y)|\int_{\mathbb{X}\setminus\Omega_{t}}p_{t}(d(x,y))\,\mathrm{d}\mu(x)\,\mathrm{d}\mu(y).\] Notice that \(x\in\mathbb{X}\setminus\Omega_{t}\) and \(y\in B(0,\xi)\) imply \(x\in\mathbb{X}\setminus\widetilde{\Omega}_{t,y}\), where \[\widetilde{\Omega}_{t,y}\,=\,\left\{x\in\mathbb{X}\left|\,2\,t^{2-\epsilon}\,\leq\,d(x,y)\,\leq\,\frac{1}{2}\,t^{2+\epsilon}\right.\right\}\] provided \(t\) is large enough. Indeed, for \(t\) large enough, when \(d(x,o)\geq t^{2+\epsilon}\) we have \(d(x,y)\geq d(x,o)-d(y,o)>t^{2+\epsilon}-\xi>\frac{1}{2}\,t^{2+\epsilon}\), while when \(d(x,o)\leq t^{2-\epsilon}\) we have \(d(x,y)\leq d(x,o)+d(y,o)<t^{2-\epsilon}+\xi<2\,t^{2-\epsilon}\). Therefore, we have \[\int_{\mathbb{X}\setminus\Omega_{t}}|e^{-t\sqrt{-\Delta}}f(x)|\,\mathrm{d}\mu(x)\leq\int_{B(0,\xi)}|f(y)|\int_{\mathbb{X}\setminus\widetilde{\Omega}_{t,y}}p_{t}(d(x,y))\,\mathrm{d}\mu(x)\,\mathrm{d}\mu(y)\lesssim t^{-\frac{\epsilon}{2}}\,\|f\|_{L^{1}(\mathbb{X})}\lesssim t^{-\frac{\epsilon}{2}},\] working as in Proposition 12 for \(\int_{\mathbb{X}\setminus\widetilde{\Omega}_{t,y}}p_{t}(d(x,y))\,\mathrm{d}\mu(x)\). Thus, \[\int_{\mathbb{X}\setminus\Omega_{t}}|e^{-t\sqrt{-\Delta}}f(x)-Mp_{t}(x)|\,\mathrm{d}\mu(x)\leq\int_{\mathbb{X}\setminus\Omega_{t}}|e^{-t\sqrt{-\Delta}}f(x)|\,\mathrm{d}\mu(x)+M\,\int_{\mathbb{X}\setminus\Omega_{t}}p_{t}(x)\,\mathrm{d}\mu(x)\lesssim t^{-\frac{\epsilon}{2}}.\] This proves the desired convergence outside the critical region for all \(f\in\mathcal{C}_{c}(\mathbb{X})\). We now turn to \(x\in\Omega_{t}\).
By Lemma 14, the right-\(K\)-invariance of \(\tau(k^{-1}\cdot)\) and \(f\), and the definition (17) of the Helgason-Fourier transform, we get that \[e^{-t\sqrt{-\Delta}}f(x)-Mp_{t}(x)=\int_{\mathbb{X}}\left(p_{t}(x,y)-p_{t}(x,o)\right)f(y)\,\mathrm{d}\mu(y)=p_{t}(x,o)\int_{\mathbb{X}}\left(\frac{p_{t}(x,y)}{p_{t}(x,o)}-1\right)f(y)\,\mathrm{d}\mu(y)=p_{t}(x,o)\left\{\int_{\mathbb{G}}\left(e^{2\rho\,\tau(k^{-1}y_{0})}-1+\mathrm{O}(t^{-2+4\epsilon})\right)f(y_{0})\,\mathrm{d}y_{0}\right\} \tag{41}\] \[=p_{t}(x,o)\left(\widehat{f}(i\rho,k\mathbb{M})\,-\,\widehat{f}(-i\rho,k\mathbb{M})\,+\,\mathrm{O}\!\left(t^{-2+4\epsilon}\,\|f\|_{L^{1}(\mathbb{X})}\right)\right).\] Notice that \(\widehat{f}(\pm i\rho,k\mathbb{M})=\mathcal{H}f(\pm i\rho)=M\) when \(f\) is radial, see Subsection 2.1. Therefore in this case, we deduce the desired convergence by integrating (41) over the critical region. On the other hand, using the Cartan decomposition (15) we have \[\int_{\Omega_{t}}|e^{-t\sqrt{-\Delta}}f(x)\,-\,M\,p_{t}(x)|\,\mathrm{d}\mu(x)\,\longrightarrow\,\int_{K}\Big|\int_{\mathbb{G}}f(y_{0})\left(e^{2\rho\,\tau(k^{-1}y_{0})}\,-\,1\right)\mathrm{d}y_{0}\Big|\,\mathrm{d}k\] as \(t\to+\infty\). The last integral is not constantly zero when \(f\) is not radial. For example, consider \(f\) to be a Dirac measure supported on some point \(y=y_{0}K\) other than the origin, thus for \(y_{0}\notin K\). In other words, the solution now coincides with \(p_{t}(\,\cdot\,,y)\) and the mass is equal to \(1\). In this case, however, the last integral is equal to \(\int_{K}\left|e^{2\rho\,\tau(k^{-1}y_{0})}\,-\,1\right|\mathrm{d}k\), thus does not vanish identically.
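As a final aside, the two-sided estimate (37) can be sanity-checked numerically in the special case \(\mathbb{X}=\mathbb{H}^{3}\) (real hyperbolic 3-space, where \(\rho=1\)), where the heat kernel is explicit, \(h_{u}(r)=(4\pi u)^{-3/2}\,\frac{r}{\sinh r}\,e^{-u-r^{2}/(4u)}\), and \(p_{t}\) is obtained from it by subordination. A small sketch (ours, not part of the paper; Python with scipy assumed, and the grid of \((t,r)\) values is arbitrary); the large exponential factors are pulled out analytically to avoid underflow:

```python
import numpy as np
from scipy.integrate import quad

def p_scaled(t, r):
    # e^{r + sqrt(t^2+r^2)} * p_t(r) on H^3 by subordination;
    # h_u(r) = (4 pi u)^{-3/2} (r/sinh r) e^{-u - r^2/(4u)}, rho = 1
    b = np.sqrt(t**2 + r**2)
    pref = (2 * r / (1 - np.exp(-2 * r))) * t / (2 * np.sqrt(np.pi))
    f = lambda u: (4*np.pi*u)**-1.5 * u**-1.5 * np.exp(-(u + b**2/(4*u) - b))
    val, _ = quad(f, 0, np.inf)
    return pref * val

def bound_scaled(t, r):
    # e^{r + sqrt(t^2+r^2)} times the right-hand side of (37), rho = 1
    return t * (1 + r) * (t**2 + r**2)**-1.25

for t in (5.0, 20.0):
    for r in (1.0, 10.0, 50.0):
        print(f"t={t:5.1f}, r={r:5.1f}:  ratio = {p_scaled(t, r)/bound_scaled(t, r):.4f}")
```

The printed ratios stay within a fixed window; they need not tend to \(1\), since (37) is only an equivalence up to constants.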
2302.04029
Geometrical optics of first-passage functionals of random acceleration
Random acceleration is a fundamental stochastic process encountered in many applications. In the one-dimensional version of the process a particle is randomly accelerated according to the Langevin equation $\ddot{x}(t) = \sqrt{2D} \xi(t)$, where $x(t)$ is the particle's coordinate, $\xi(t)$ is Gaussian white noise with zero mean, and $D$ is the particle velocity diffusion constant. Here we evaluate the $A\to 0$ tail of the distribution $P_n(A|L)$ of the functional $I[x(t)]=\int_0^{T} x^n(t) dt=A$, where $T$ is the first-passage time of the particle from a specified point $x=L$ to the origin, and $n\geq 0$. We employ the optimal fluctuation method akin to geometrical optics. Its crucial element is determination of the optimal path -- the most probable realization of the random acceleration process $x(t)$, conditioned on specified $A\to 0$, $n$ and $L$. This realization dominates the probability distribution $P_n(A|L)$. We show that the $A\to 0$ tail of this distribution has a universal essential singularity, $P_n(A\to 0|L) \sim \exp\left(-\frac{\alpha_n L^{3n+2}}{DA^3}\right)$, where $\alpha_n$ is an $n$-dependent number which we calculate analytically for $n=0,1$ and $2$ and numerically for other $n$. For $n=0$ our result agrees with the asymptotic of the previously found first-passage time distribution.
Baruch Meerson
2023-02-08T13:04:19Z
http://arxiv.org/abs/2302.04029v3
# Geometrical optics of first-passage functionals of random acceleration ###### Abstract Random acceleration is a fundamental stochastic process encountered in many applications. In the one-dimensional version of the process a particle is randomly accelerated according to the Langevin equation \(\ddot{x}(t)=\sqrt{2D}\,\xi(t)\), where \(x(t)\) is the particle's coordinate, \(\xi(t)\) is Gaussian white noise with zero mean, and \(D\) is the particle velocity diffusion constant. Here we evaluate the \(A\to 0\) tail of the distribution \(P_{n}(A|L)\) of the functional \(I[x(t)]=\int_{0}^{T}x^{n}(t)dt=A\), where \(T\) is the first-passage time of the particle from a specified point \(x=L\) to the origin, and \(n\geq 0\). We employ the optimal fluctuation method akin to geometrical optics. Its crucial element is determination of the optimal path - the most probable realization of the random acceleration process \(x(t)\), conditioned on specified \(A\to 0\), \(n\) and \(L\). This realization dominates the probability distribution \(P_{n}(A|L)\). We show that the \(A\to 0\) tail of this distribution has a universal essential singularity, \(P_{n}(A\to 0|L)\sim\exp\left(-\frac{\alpha_{n}L^{3n+2}}{DA^{3}}\right)\), where \(\alpha_{n}\) is an \(n\)-dependent number which we calculate analytically for \(n=0,1\) and \(2\) and numerically for other \(n\). For \(n=0\) our result agrees with the asymptotic of the previously found first-passage time distribution. ## I Introduction The random acceleration process is governed by the Langevin equation \[\ddot{x}(t)=\sqrt{2D}\xi(t)\,. \tag{1}\] This equation describes the position of a particle moving along the \(x\)-axis and subject to a random force which is modeled as a Gaussian white noise with zero mean, \(\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime})\). Alternatively, \(x(t)\) can be considered as the integral of a Brownian motion over time. The random acceleration is a fundamental stochastic process in its own right. On the one hand, it serves as a simple example of a non-Markovian process (which becomes Markovian when considered in two dimensions \(x\) and \(\dot{x}\), see _e.g._ Ref. [1]). On the other hand, its mathematical equivalents have found a variety of applications in physics: from a simplified description of free semiflexible polymer chains in narrow channels [2; 3; 4; 5; 6; 7] to interface growth in 1+1 dimensions [8; 9; 10] and to decaying turbulence in the Burgers equation [11; 12]. In all these systems it is a spatial coordinate which plays the role of time \(t\) in Eq. (1), while the polymer shape, or the interface shape, _etc._ plays the role of \(x\). Here we are interested in the statistics of first-passage functionals \(I[x(t)]=\int_{0}^{T}x^{n}(t)\,dt\) of the random acceleration \(x(t)\), defined up to the time of first passage \(T\) of the process, starting, say, at \(x=L>0\), to a specified point in space, for example to the origin. The case \(n=0\) corresponds to the statistics of the first-passage time itself. The case \(n=1\) corresponds to the first-passage area under the graph of \(x(t)\). In the context of interface growth, governed by the noisy Mullins-Herring equation [8; 9], it describes the area under the stochastic interface until it crosses a zero level in space for the first time. The case \(n=2\) corresponds to the statistics of the moment of inertia of a semiflexible polymer chain of a given length in narrow channels.
It is natural then to attempt to calculate the distribution of the values of the first-passage functional \(I[x(t)]=\int_{0}^{T}x^{n}(t)\,dt\) for arbitrary \(n\). For comparison, the statistics of first-passage Brownian functionals [13; 14] - where \(x(t)\) is a Brownian motion - is well studied, see Ref. [15] and references therein. For the random acceleration process, however, the problem has been solved only for \(n=0\), that is only for the statistics of the first-passage time itself [16; 6; 17]. In this work we focus on _large-deviation statistics_ of the first-passage functionals of random acceleration for any \(n\geq 0\). Specifically, we evaluate the \(A\to 0\) tail of the probability distribution \(P_{n}(A\to 0|L)\) of the values \(I[x(t)]=A\) and show that this tail exhibits an essential singularity, see Eq. (18) below. To achieve these goals, we employ the optimal fluctuation method akin to geometrical optics. The method relies on the determination of the optimal path, that is the most likely realization of the process \(x(t)\), conditioned on the specified value of \(A\to 0\) at given \(n\) and \(L\). It is this optimal path that dominates the \(A\to 0\) tail of \(P_{n}(A\to 0|L)\). Previously, geometrical optics was applied to a plethora of problems related to statistics of Brownian motion [18; 19; 20; 21; 22; 23; 24; 25; 26]. An extension of the method to the random acceleration process is a natural next step. Here is a plan of the remainder of the paper. We complete the formulation of the problem, establish the scaling properties of \(P_{n}(A|L)\) and derive the governing equation of the optimal fluctuation method in Sec. II. Some analytical and numerical solutions for different \(n\) are presented in Sec. III. Section IV includes a brief summary and an extension of our results. A technical derivation is delegated to the Appendix. ## II Formulation of the problem and governing equations We start by completing the formulation of the problem. The initial and final positions of the particle are \[x(t=0)=L\,,\quad x(T)=0\,, \tag{2}\] where \(T\) is the first passage time to the origin, and \(L\) can be assumed positive without loss of generality. We assume for simplicity that the particle starts with zero velocity: \[\dot{x}(t=0)=0\,. \tag{3}\] We consider first-passage functionals of the form \(I[x(t)]=\int_{0}^{T}x^{n}(t)\,dt\) and study the probability distribution \(P_{n}(A|L)\) of their values \(A\): \[\int_{0}^{T}x^{n}(t)\,dt=A\,. \tag{4}\] Equations (1)-(4) define the stochastic problem completely. Their dimensional analysis yields the following _exact_ scaling behavior of \(P_{n}(A|L)\): \[P_{n}(A|L)=\frac{D^{1/3}}{L^{n+\frac{2}{3}}}\,F_{n}\left(\frac{D^{1/3}A}{L^{n+\frac{2}{3}}}\right) \tag{5}\] with an unknown scaling function \(F_{n}(z)\). Rather than attempting to determine the entire scaling function \(F_{n}(z)\), here we find its leading-order \(z\to 0\) asymptotic. This asymptotic describes the large-deviation tail \(A\to 0\) of the distribution \(P_{n}(A|L)\), and it can be obtained by the optimal fluctuation method, akin to geometrical optics. We identify the action functional, corresponding to the Langevin equation (1): \[S[x(t)]=\frac{1}{4D}\int_{0}^{T}\ddot{x}^{2}(t)\,dt\,, \tag{6}\] and seek the optimal path \(x_{*}(t)\) which minimizes this functional subject to the boundary conditions (2) and (3), to the positivity condition \(x(t)>0\) for \(0<t<T\), and to the integral constraint \[I[x(t)]=\int_{0}^{T}x^{n}(t)\,dt=A\,.
\tag{7}\] The minimization must be performed not only with respect to different paths \(x(t)\), but also with respect to the first-passage time \(T\). Let us rescale the coordinate, \(\tilde{x}=x/L\). The action functional (6) takes the form \[S[x(t)]=\frac{L^{2}}{2D}\,s(\tilde{x}),\quad\text{where}\quad s(\tilde{x})=\frac{1}{2}\int_{0}^{T}\ddot{\tilde{x}}^{2}(t)\,dt\,. \tag{8}\] The constraint (7) becomes \[I[\tilde{x}(t)]=\int_{0}^{T}\tilde{x}^{n}(t)dt=\frac{A}{L^{n}}\,. \tag{9}\] The minimization of the rescaled functional \(s(\tilde{x})\) subject to the constraint (9) can be achieved by minimizing the modified functional \[s_{\lambda}[\tilde{x}(t)]=s[\tilde{x}(t)]-\lambda I[\tilde{x}(t)]\,. \tag{10}\] The Lagrange multiplier \(\lambda\) turns out to be negative, so we can set \(\lambda=-\Lambda^{4}\), where \(\Lambda>0\). Now we also rescale time, \(\tilde{t}=\Lambda t\). The first-passage time \(T\) also gets rescaled, \(\tilde{T}=\Lambda T\). The functional (10) becomes \[s_{\lambda}[\tilde{x}(\tilde{t})]=\Lambda^{3}\int_{0}^{\tilde{T}}\left[\frac{\ddot{\tilde{x}}^{2}(\tilde{t})}{2}+\tilde{x}^{n}(\tilde{t})\right]d\tilde{t}\,, \tag{11}\] and we will drop the tildes everywhere in the following. Since the rescaled functional \(s_{0}[x(t)]\) (recall that the tildes are dropped) involves the particle acceleration \(\ddot{x}(t)\), the Euler-Lagrange equation is of the fourth order (see the Appendix): \[x^{(4)}(t)+nx^{n-1}(t)=0, \tag{12}\] where the superscript (4) denotes the fourth derivative with respect to time. Three boundary conditions for Eq. (12) come with the formulation of the original stochastic problem, see Eqs. (2) and (3): \[x(0)=1\,,\ \dot{x}(0)=0\,,\ \text{and}\ x(T)=0\,. \tag{13}\] The fourth boundary condition, \[\ddot{x}(T)=0\,, \tag{14}\] follows from minimization of the action with respect to all possible variations of the particle velocity \(\dot{x}\) at \(t=T\) (see the Appendix). The general solution of the rescaled Euler-Lagrange equation (12) has four arbitrary constants. When this equation is supplemented by the four boundary conditions (13) and (14) [and the inequality \(x(0<t<T)>0\)], the problem of finding the \(A\to 0\) asymptotic of \(P_{n}(A|L)\) is determined completely only for \(n=0\) where \(A=T\), and one is looking for the distribution \(P_{n}(T|L)\) of first passage times. For all other \(n>0\) one should, in addition, minimize the action \(S(A,T)\) with respect to \(T\). The minimization yields the _optimal value_ of the first-passage time \(T=T_{*}(A)\) which dominates the probability \(P_{n}(A|L)\) that we are after. As we show in the Appendix, this additional minimization brings about a fifth boundary condition \[\dddot{x}(T)=0\,. \tag{15}\] Once the optimal path \(x(t)\) and, for \(n\neq 0\), the optimal value \(T=T_{*}(A)\), are found, we can determine \(\Lambda\) from the relation \[\Lambda=\frac{L^{n}}{A}\int_{0}^{T_{*}}x^{n}(t)\,dt\,, \tag{16}\] which follows from the constraint (7) or, equivalently, (9). The original action (6) can now be written as follows: \[S[x(t)] = \frac{L^{2}\Lambda^{3}}{2D}s_{0}[x(t)]\,,\,\,\,\text{where}\] \[s_{0}[x(t)] = \frac{1}{2}\int_{0}^{T}\ddot{x}^{2}(t)dt\,. \tag{17}\] Plugging Eq. (16) into the first line of Eq. (17) we obtain, up to a pre-exponential factor, the \(A\to 0\) tail of \(P_{n}(A|L)\). It scales as \[-\ln P_{n}(A\to 0|L)\simeq S=\frac{\alpha_{n}L^{3n+2}}{DA^{3}}\,, \tag{18}\] where \[\alpha_{n}=\frac{1}{4}\left[\int_{0}^{T_{*}}x^{n}(t)dt\right]^{3}\int_{0}^{T_{*}}\ddot{x}^{2}(t)dt\,.
\tag{19}\] Equation (18) describes a universal essential singularity \(\sim\exp(-A^{-3})\) of the \(A\to 0\) tail of the distribution. It is much steeper than the essential singularity \(\sim\exp(-A^{-1})\) of the first-passage Brownian functionals [15]. In fact, the large-deviation scaling (18) (with an unknown \(\alpha_{n}\)) immediately follows from the exact scaling (5) once we realize that the \(A\to 0\) asymptotic of the function \(F_{n}(\dots)\) in Eq. (5) must exhibit, up to a pre-exponent, the characteristic weak-noise scaling \(F_{n}\sim\exp(-\Phi/D)\), where \(\Phi\) depends on \(A\) and \(L\) but is independent of \(D\). Now let us concentrate on finding the optimal path, that is on solving Eq. (12) subject to the boundary conditions (13)-(15). ## III Solution ### General Equation (12) is easily solvable for \(n=0\), \(1\) and \(2\), when the equation is linear. We will present these solutions shortly. In the general case, there is a conservation law \[\dot{x}(t)\,\dddot{x}(t)-\frac{1}{2}\ddot{x}^{2}(t)+x^{n}(t)=C=\text{const}, \tag{20}\] which is a higher-order analog of energy conservation in classical mechanics. The conservation law (20) reduces the order of Eq. (12) by one. Using the boundary conditions (13)-(15) at \(t=T\), we find that \(C=0\) for all \(n>0\)[27]. Evaluating the left hand side of the conservation law (20) (where \(C=0\)) at \(t=0\), we uncover one more universal property of the optimal path: \[\ddot{x}(t=0)=-\sqrt{2}\quad\text{for all}\quad n>0\,. \tag{21}\] Finally, using the conservation law (20) with \(C=0\), integration by parts and Eqs. (13) and (14), we can rewrite the expression (19) for \(\alpha_{n}\) in two equivalent alternative forms: \[\alpha_{n}=\frac{1}{6}\left[\int_{0}^{T_{*}}x^{n}(t)dt\right]^{4}=\frac{27}{32}\left[\int_{0}^{T_{*}}\ddot{x}^{2}(t)dt\right]^{4}\,. \tag{22}\] ### \(n=0\): First-passage time The first-passage time distribution \(P(T|L)\) of the random acceleration process was determined quite some time ago [16; 6; 17]. Its short-time asymptotic coincides, in the leading order, with the short-time asymptotic of the propagator of the random acceleration. For the zero initial particle velocity, the exact propagator (see _e.g._ Ref. [6]) simplifies to \[\rho(T,v)=\frac{\sqrt{3}}{2\pi DT^{2}}\,e^{-\frac{3L^{2}+3LTv+T^{2}v^{2}}{DT^{3}}}\,, \tag{23}\] where \(v=\dot{x}(t=T)\) is the particle velocity (in the original units) at \(t=T\). We identify the action, corresponding to this distribution, \[S_{\rho}(T,v)=\frac{3L^{2}+3LTv+T^{2}v^{2}}{DT^{3}}\,, \tag{24}\] and focus on the large-deviation regime \(T\to 0\), where this action is much larger than unity. Minimizing \(S_{\rho}(T,v)\) with respect to \(v\), we obtain the optimal value \(v_{*}=-3L/(2T)\). The corresponding minimum of the action, \[S_{\rho}(T,v_{*})=\frac{3L^{2}}{4DT^{3}}\,, \tag{25}\] determines the small-\(T\) asymptotic of \(P(T|L)\): \[-\ln P(T|L)\simeq\frac{3L^{2}}{4DT^{3}}\,, \tag{26}\] which obeys our asymptotic scaling relation (18) with \(\alpha_{0}=3/4\). Now we will rederive the asymptotic (26) by using the optimal fluctuation formalism. For \(n=0\) the Euler-Lagrange equation (12) becomes trivial: \(x^{(4)}=0\). Its solution, satisfying the boundary conditions (13) and (14), \[x(t)=1-\frac{3t^{2}}{2T^{2}}+\frac{t^{3}}{2T^{3}}\,, \tag{27}\] is a cubic parabola. Equation (16) yields \(\Lambda=1\). Then, using Eq. (17), we arrive at Eqs. (25) and (26), as to be expected.
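As a quick independent check of the \(n=0\) computation (ours, not part of the original paper; Python with sympy assumed), one can verify symbolically that the cubic (27) obeys the boundary conditions (13) and (14) and reproduces the action (25):

```python
import sympy as sp

t, T, L, D = sp.symbols('t T L D', positive=True)
x = 1 - 3*t**2/(2*T**2) + t**3/(2*T**3)               # optimal path, Eq. (27)

assert x.subs(t, 0) == 1                              # x(0) = 1, Eq. (13)
assert sp.diff(x, t).subs(t, 0) == 0                  # xdot(0) = 0, Eq. (13)
assert sp.simplify(x.subs(t, T)) == 0                 # x(T) = 0, Eq. (13)
assert sp.simplify(sp.diff(x, t, 2).subs(t, T)) == 0  # xddot(T) = 0, Eq. (14)

s0 = sp.integrate(sp.diff(x, t, 2)**2, (t, 0, T)) / 2  # s_0 from Eq. (17)
S = sp.simplify(L**2/(2*D) * s0)                       # Lambda = 1 for n = 0
print(S)  # 3*L**2/(4*D*T**3), i.e. Eq. (25)
```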
### \(n=1\): First-passage area For \(n=1\) the Euler-Lagrange equation (12) is still very simple: \(x^{(4)}=-1\). Its solution is a quartic parabola. Here we have to demand all five boundary conditions (13)-(15) which determine the four arbitrary constants and the optimal value of the first-passage time \(T_{*}=2^{3/4}\). The resulting rescaled optimal path, \[x(t)=1-\frac{t^{2}}{\sqrt{2}}+\frac{t^{3}}{3\sqrt[4]{2}}-\frac{t^{4}}{24}\,, \tag{28}\] is depicted, alongside the optimal acceleration \(\ddot{x}(t)\), in Fig. 1. The optimal acceleration is nothing but the (rescaled) optimal realization of the white Gaussian noise \(\xi(t)\), see Eq. (1). Needless to say, the optimal realization of the noise looks very different from a _typical_ realization of the noise. Now using Eqs. (18) and (19) for \(n=1\), we obtain \[-\ln P(A|L)\simeq\frac{108L^{5}}{625DA^{3}}\,, \tag{29}\] with \(\alpha_{1}=108/625\). ### \(n=2\) Here the Euler-Lagrange equation (12) is still linear and elementary: \[x^{(4)}(t)+2x(t)=0\,. \tag{30}\] The solution, obeying the boundary conditions (13)-(15), yields the rescaled optimal path: \[x(t)=\frac{\left(1-e^{\pi\left(\frac{t}{T_{*}}-1\right)}\right)\sin\left(\frac{\pi t}{2T_{*}}\right)+\left(1+e^{\pi\left(\frac{t}{T_{*}}-1\right)}\right)\cos\left(\frac{\pi t}{2T_{*}}\right)}{\left(1+e^{-\pi}\right)e^{\frac{\pi t}{2T_{*}}}}\,, \tag{31}\] where \(T_{*}=2^{-3/4}\pi\) is the optimal first passage time. Figure 1 shows this optimal path alongside the optimal acceleration \(\ddot{x}(t)\). Using Eqs. (18) and (19) for \(n=2\), we obtain \[-\ln P(A|L)\simeq\frac{27\tanh^{4}\left(\frac{\pi}{2}\right)\,L^{8}}{256DA^{3}}\,, \tag{32}\] Here \(\alpha_{2}=(27/256)\tanh^{4}\left(\pi/2\right)=0.074625\dots\). ### Numerics For arbitrary \(n\) the optimal path can be found numerically. We used artificial relaxation in conjunction with "shooting". Artificial relaxation was implemented as follows. We introduced artificial time \(\tau\) and replaced the Euler-Lagrange equation (12) by the fourth-order partial differential equation \[\partial_{\tau}X(t,\tau)=-\partial_{t}^{4}X(t,\tau)-nX^{n-1}(t,\tau)\,, \tag{33}\] where the physical time \(t\) plays the role of a coordinate. The sign of the right-hand side of Eq. (33) is chosen so as to enforce relaxation to a steady state \(x(t)=X(t,\tau\rightarrow\infty)\), which satisfies our Eq. (12). The initial condition \(X(t,\tau=0)\) is chosen qualitatively similar to the expected steady-state solution. Since we do not know the optimal first-passage time \(T\)_a priori_, we use the "shooting" method, see e.g. Ref. [28]. We first solve Eq. (33) with boundary conditions (13) and (14) for a fixed \(T\) (the first guess of \(T_{*}\), or first "shot") until the steady-state solution \(x(t)\) is reached. Then we evaluate the third derivative \(\partial_{t}^{3}X(t,\tau\gg 1)\) at \(t=T\), and iterate \(T\) until the third derivative vanishes [as Eq. (15) demands] with desired accuracy. Alternatively, one can iterate until \(\partial_{t}^{2}X(t,\tau\gg 1)\) at \(t=0\) approaches \(-\sqrt{2}\), see Eq. (21). We validated the method by comparing the numerically found \(x(t)\) with the analytical solutions for \(n=1\) and \(2\). The accuracy was monitored by checking the conservation law (20) with \(C=0\). Once \(T_{*}\) and \(x(t)\) are found, we can evaluate \(\alpha_{n}\) from any of the equations (19) or (22). We used a standard PDE solver of "Mathematica" [29].
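For readers who prefer open-source tools, the same optimal paths and coefficients \(\alpha_{n}\) can be reproduced with an off-the-shelf collocation boundary-value solver instead of the relaxation scheme (33). The sketch below (ours, not the authors' implementation; Python with scipy assumed) rescales time to \(s=t/T\in[0,1]\), treats the unknown first-passage time \(T\) as a free parameter, and imposes the five boundary conditions (13)-(15) directly:

```python
import numpy as np
from scipy.integrate import solve_bvp, trapezoid

def optimal_path(n, T_guess=1.8):
    # State y = (x, x', x'', x''') on s = t/T in [0, 1]; T is a free parameter.
    def rhs(s, y, p):
        T = p[0]
        return np.vstack([T*y[1], T*y[2], T*y[3], -T*n*y[0]**(n - 1)])

    def bc(ya, yb, p):
        # x(0)=1, x'(0)=0, x(T)=0, x''(T)=0, x'''(T)=0 -- Eqs. (13)-(15)
        return np.array([ya[0] - 1, ya[1], yb[0], yb[2], yb[3]])

    s = np.linspace(0, 1, 101)
    guess = np.zeros((4, s.size))
    guess[0] = 1 - 1.5*s**2 + 0.5*s**3    # crude initial guess: the n = 0 cubic
    guess[1] = -3*s + 1.5*s**2
    guess[2] = -3 + 3*s
    sol = solve_bvp(rhs, bc, s, guess, p=[T_guess], tol=1e-8)
    T = sol.p[0]
    x = sol.sol(s)[0]
    alpha = (T * trapezoid(x**n, s))**4 / 6   # Eq. (22), first form
    return T, alpha

for n in (1, 2, 3, 4):
    T, a = optimal_path(n)
    print(f"n={n}:  T* = {T:.4f}   alpha_n = {a:.5f}")
```

With these settings the solver reproduces \(\alpha_{1}=108/625=0.1728\) and \(\alpha_{2}\simeq 0.0746\) from Eqs. (29) and (32), together with \(T_{*}=2^{3/4}\simeq 1.682\) and \(T_{*}=\pi/2^{3/4}\simeq 1.868\); larger \(n\) may require a more careful initial guess.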
Figure 2 shows the numerically found optimal paths \(x(t)\) and the optimal accelerations \(\ddot{x}(t)\) for \(n=3\) and \(4\). In these cases \(T_{*}\simeq 2.036\) and \(2.185\), respectively, whereas the \(A\to 0\) asymptotics of \(P(A|L)\) are described by Eq. (18) with \(\alpha_{3}\simeq 0.041\) and \(\alpha_{4}\simeq 0.026\). Overall, we solved the problem numerically and found the optimal first passage time \(T_{*}\) and the factor \(\alpha_{n}\) for a range of \(n\), see Fig. 3. As one can see, \(T_{*}\) increases with \(n\), while \(\alpha_{n}\) decreases. Figure 1: The rescaled optimal path \(x(t)\) (a) and optimal acceleration \(\ddot{x}(t)\) (b), dominating the \(A\to 0\) asymptotics of \(P(A|L)\) for \(n=1\) (blue) and \(n=2\) (magenta). The optimal first-passage time is \(T_{*}=2^{3/4}\) for \(n=1\) and \(T_{*}=\pi/2^{3/4}\) for \(n=2\). ## IV Summary and discussion Statistics of first-passage functionals provide a useful characterization of random processes. Here we evaluated the \(A\to 0\) tail of these statistics for the random acceleration process. We also used this problem to extend the optimal fluctuation (or geometrical optics) method to a stochastic process of a higher order. In addition to the \(A\to 0\) asymptotic of the probability distribution \(P_{n}(A|L)\), we calculated analytically and numerically the optimal paths of the conditioned processes at different \(n\). These provide an interesting insight into the nature of large deviations in this system. The problem of statistics of the first-passage functionals \(I[x(t)]=\int_{0}^{T}x^{n}(t)dt\) can be extended to a whole family of processes, described by the Langevin equation \(d^{k}x(t)/dt^{k}=\sqrt{2D}\xi(t)\), where \(k\) is any positive integer. The cases of \(k=1\) and \(k=2\) correspond to the Brownian motion and random acceleration, respectively. Again, let \(x(0)=L\), and suppose for simplicity that all the derivatives of \(x(t)\) with order less than \(k\) vanish at \(t=0\). Then the exact scaling behavior of the probability distribution \(P_{n}^{(k)}(A|L)\) of the values \(I[x(t)]=A\) follows from dimensional analysis: \[P_{n}^{(k)}(A|L)=\frac{D^{\nu}}{L^{n+2\nu}}\,F_{n}^{(k)}\left(\frac{D^{\nu}A}{L^{n+2\nu}}\right)\,, \tag{34}\] where \(F_{n}^{(k)}(z)\) is an unknown scaling function, and \(\nu=1/(2k-1)\). In its turn, the leading-order \(A\to 0\) asymptotic of \(P_{n}^{(k)}(A|L)\) must have the characteristic weak-noise form \[-\ln P_{n}^{(k)}(A\to 0|L)\simeq\frac{\alpha_{n}^{(k)}L^{\frac{n}{\nu}+2}}{DA^{1/\nu}}\,, \tag{35}\] where \(\alpha_{n}^{(k)}\) is a numerical factor which depends on \(k\) and \(n\). As one can see from Eq. (35), for all these models theory predicts an essential singularity at \(A\to 0\), and the singularity becomes stronger as \(k\) is increased. ###### Acknowledgements. The author is very grateful to Satya N. Majumdar for a useful discussion. This research was supported by the Israel Science Foundation (Grant No. 1499/20). ## Appendix A Derivation of Eq. (12) and boundary condition (15). Here we temporarily switch back to the original variables and consider a linear variation of the constrained action functional \[s_{\lambda}[x(t),T]=\int_{0}^{T}\left(\frac{\ddot{x}^{2}}{2}-\lambda x^{n}\right)dt \tag{36}\] with respect to small variations of both \(x(t)\) and \(T\): \(x(t)\to x(t)+\delta x(t)\) and \(T\to T+\delta T\). We need to linearize the variation \[\delta s_{\lambda}=s[x(t)+\delta x(t),T+\delta T]-s[x(t),T] \tag{37}\] with respect to \(\delta x\) and \(\delta T\).
The linearization yields, after simple algebra, \[\delta s_{\lambda} = \int_{0}^{T}\left(\ddot{x}\delta\ddot{x}-\lambda nx^{n-1}\delta x\right)dt \tag{38}\] \[+ \int_{T}^{T+\delta T}\left(\frac{\ddot{x}^{2}}{2}-\lambda x^{n}\right)dt\,.\] Performing two integrations by parts in the first integral, evaluating the second integral in the limit of \(\delta T\to 0\), using the endpoint relation \(\delta x(T)=-\dot{x}(T)\,\delta T\) (which expresses the condition that the varied path still vanishes at the shifted first-passage time), and taking into account the boundary conditions (13), we obtain \[\delta s_{\lambda} = \int_{0}^{T}\left(x^{(4)}-\lambda nx^{n-1}\right)\delta x\,dt+\ddot{x}(T)\delta\dot{x}(T) \tag{39}\] \[+ \left[\frac{\ddot{x}^{2}(T)}{2}-\lambda x^{n}(T)+\dddot{x}(T)\,\dot{x}(T)\right]\delta T\,.\] Each of the three terms in the variation must vanish independently for arbitrary \(\delta x\) and \(\delta T\). The first term yields the Euler-Lagrange equation \(x^{(4)}-\lambda nx^{n-1}=0\) which, upon the rescaling \(\Lambda t\to t\) (we recall that \(\lambda\equiv-\Lambda^{4}\)), coincides with Eq. (12). The second term yields the boundary condition (14). Using the latter condition and the condition \(x(T)=0\) in the third term, we arrive at the boundary condition (15), since generically \(\dot{x}(T)\neq 0\).
2302.12893
Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation
Feature attribution methods identify which features of an input most influence a model's output. Most widely-used feature attribution methods (such as SHAP, LIME, and Grad-CAM) are "class-dependent" methods in that they generate a feature attribution vector as a function of class. In this work, we demonstrate that class-dependent methods can "leak" information about the selected class, making that class appear more likely than it is. Thus, an end user runs the risk of drawing false conclusions when interpreting an explanation generated by a class-dependent method. In contrast, we introduce "distribution-aware" methods, which favor explanations that keep the label's distribution close to its distribution given all features of the input. We introduce SHAP-KL and FastSHAP-KL, two baseline distribution-aware methods that compute Shapley values. Finally, we perform a comprehensive evaluation of seven class-dependent and three distribution-aware methods on three clinical datasets of different high-dimensional data types: images, biosignals, and text.
Neil Jethani, Adriel Saporta, Rajesh Ranganath
2023-02-24T21:02:58Z
http://arxiv.org/abs/2302.12893v1
# Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation ###### Abstract Feature attribution methods identify which features of an input most influence a model's output. Most widely-used feature attribution methods (such as SHAP (Lundberg and Lee, 2017), LIME (Ribeiro et al., 2016), and Grad-CAM (Selvaraju et al., 2016)) are "class-dependent" methods in that they generate a feature attribution vector as a function of class. In this work, we demonstrate that class-dependent methods can "leak" information about the selected class, making that class appear more likely than it is. Thus, an end user runs the risk of drawing false conclusions when interpreting an explanation generated by a class-dependent method. In contrast, we introduce "distribution-aware" methods, which favor explanations that keep the label's distribution close to its distribution given all features of the input. We introduce SHAP-KL and FastSHAP-KL, two baseline distribution-aware methods that compute Shapley values. Finally, we perform a comprehensive evaluation of seven class-dependent and three distribution-aware methods on three clinical datasets of different high-dimensional data types: images, biosignals, and text. ## 1 Introduction Post-hoc feature attribution methods, which identify the features of an input that most influence predictions, are critical in high-stakes contexts such as healthcare. Feature attribution methods are used not only to interpret individual predictions, but also to better understand a model's global behavior for model development, knowledge discovery, and quality improvement and assurance. For example, such methods have been used to detect spurious signals in hip fracture radiographs (Badgeley et al., 2019), to discover novel gene expression signatures (Janizek et al., 2021), and to identify brain regions that help distinguish between possible sources of dementia (Iizuka et al., 2019). Most widely-used feature attribution methods (such as SHAP (Lundberg and Lee, 2017), LIME (Ribeiro et al., 2016), and Grad-CAM (Selvaraju et al., 2016)) are "class-dependent" methods, which we define to be any approach that generates a feature attribution vector as a function of class. However, we theoretically and empirically show that class-dependent methods can "leak" information about the selected class, making that class appear more likely than it is. Thus, an end user runs the risk of drawing false conclusions when interpreting an explanation generated by a class-dependent method. As an alternative, we define a "distribution-aware" method (such as REAL-X (Jethani et al., 2021)) to be a class-independent method that creates explanations based on the change in the label's distribution when the features are perturbed, with a preference for explanations with a small change in distribution. Further, we consider the evaluation strategy that progressively includes only the top \(n\)% of features for each data point and then plots the resulting model performances on an inclusion curve (Arras et al., 2017; Petsiuk et al., 2018; Jethani et al., 2022). For this evaluation strategy, we demonstrate that the optimal feature attribution method is distribution-aware. Finally, we propose a strategy for evaluating a feature attribution method given a fixed model. In summary, our six primary contributions are the following.
(1) We introduce and define the difference between class-dependent and distribution-aware feature attribution methods. (2) We demonstrate that explanations generated by class-dependent methods using the true label can leak information about the true label, leading to inflated performance metrics for class-dependent methods, whereas this cannot occur with class-independent methods. (3) We show that explanations generated by class-dependent methods using the predicted label can leak information about the predicted class, making the predicted class appear more likely than it is. (4) We establish that the optimal feature attribution vector, as measured by the above evaluation metric, is distribution-aware. (5) We present two distribution-aware feature attribution methods, SHAP-KL and FastSHAP-KL, that estimate Shapley values, are easy to optimize, and can serve as baselines to facilitate the development of additional distribution-aware methods. (6) We perform a comprehensive evaluation of seven class-dependent and three distribution-aware feature attribution methods on three clinical datasets of different high-dimensional data types: images, biosignals, and text. ## 2 Related Work Feature attribution methods generally fall into one of two categories, which we review below: removal-based methods and gradient-based methods. See Appendix A for relevant feature attribution methods grouped by type. **Removal-based feature attribution methods.** Removal-based methods remove subsets of the input features to determine their influence Covert et al. (2021). Many removal-based methods, such as LIME Ribeiro et al. (2016) and SHAP Lundberg and Lee (2017), perform the removal operation for each sample of data, which can be computationally intensive. Amortized approaches--such as L2X Chen et al. (2018), INVASE Yoon et al. (2018), REAL-X Jethani et al. (2021), and FastSHAP Jethani et al. (2022)--represent a new form of removal-based explainability that performs the removal operation across multiple samples of data at a time in order to learn models that produce explanations for a sample of data with a single forward pass Fong and Vedaldi (2017); Schwab and Karlen (2019). Recent work has shown that when using removal-based methods, replacing the removed features with reference values shifts the input out-of-distribution or off-manifold, which can affect explanation quality and make it easier to mount adversarial attacks Frye et al. (2021); Slack et al. (2020); Jethani et al. (2022). In addition, some amortized explanation methods, such as L2X and INVASE, can produce explanations that encode the label directly in the shape of the explanation rather than with the feature values the explanation highlights Jethani et al. (2021). **Gradient-based feature attribution methods.** Gradient-based methods determine feature importance using gradients with respect to either the input or intermediate representations of the input Ancona et al. (2019). SmoothGrad Smilkov et al. (2017), for example, measures how sensitive the model output is to small changes in a given feature. Integrated Gradients (IntGrad) Sundararajan et al. (2017), on the other hand, computes the average gradient to measure the salience of input features relative to a user-selected reference input. Another popular method, Grad-CAM Selvaraju et al. (2016), computes the gradient of a class with respect to an intermediate layer of a convolutional neural network (CNN).
Gradient-based methods have been shown to be sensitive to small changes or distributional shifts in the input. For example, adding a constant shift to the input can dramatically change the explanations produced by gradient-based methods Kindermans et al. (2019); Ghorbani et al. (2019). Gradient-based methods can also produce explanations that appear invariant to model parameter and training label randomizations Adebayo et al. (2018). ## 3 Evaluation of Feature Attribution Methods A feature attribution method generally produces a single attribution vector that assigns a score to each input feature, where a higher score implies a larger relationship to an output. For a given data point, a single attribution vector could produce many possible _explanations_, where an explanation is some subset of the features based on the scores assigned by the feature attribution method. For example, one could choose the features with the top one, five, or ten percent of scores. In order to evaluate a feature attribution method, one could compare its explanations to human benchmark explanations. However, human explanations can be time-consuming and expensive to obtain, or may not be available at all. For example, while a neural network is able to predict diabetes from an electrocardiogram (ECG), it is not yet clear to practitioners what information in the signal is predictive of the disease Jethani et al. (2022). Multiple strategies have been proposed for evaluating feature attribution vectors without human benchmark explanations. One standard evaluation strategy is to progressively include only the top \(n\)% of features for each data point and measure the resulting effect on model performance Bach et al. (2015); Samek et al. (2017); Hooker et al. (2019); Sturmfels et al. (2020). The expectation is that the better a feature attribution method is, the more model performance will improve upon inclusion of only the top-scoring features. Model performance using each top \(n\)% subset of features is then plotted as an inclusion curve Arras et al. (2017); Petsiuk et al. (2018); Jethani et al. (2022). We follow this evaluation strategy, as described below. **Defining the evaluation.** Let \(\mathbf{x}\in\mathcal{X}\) be a random vector consisting of \(d\) features, or \(\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{d})\). Let \(\mathbf{y}\in\mathcal{Y}=\{1,\ldots,K\}\) be the target variable for a multi-class classification problem. We use \(\mathbf{s}\in\{0,1\}^{d}\) to denote subsets of the indices \(\{1,\ldots,d\}\). The symbols \(\mathbf{x},\mathbf{y},\mathbf{s}\) are random variables, and the symbols \(x,y,s\) are possible values for those random variables. Hooker et al. (2019) noted that when relevant features are removed, the new altered input comes from a distribution that is different from that of the original unaltered input, thereby making it difficult to know whether any degradation in model performance is caused by the removal of relevant features or by the shift in distribution. The authors solve this problem by training new _surrogate_ models on the altered inputs, but it has been shown that this retraining procedure not only is computationally expensive because it requires re-training for each type of explanation, but also allows the surrogate models to incorrectly assign high scores to feature attribution methods that encode the label in the locations of the removed features as opposed to their actual values (Jethani et al., 2021; Rong et al., 2022). 
In order to prevent the surrogate model from co-adapting to the explanations, recent work has proposed a computationally efficient strategy that trains a single surrogate model with randomly masked inputs (Jethani et al., 2021, 2022; Covert et al., 2021). We follow this strategy as described below.

Let \(F(\mathbf{x},\mathbf{y})\) be the data distribution from which data is drawn, and let \(p(\mathbf{s})\) be the distribution over \(\mathbf{s}\) where all subsets occur with non-zero probability. The surrogate evaluation model \(p_{\text{surr}}\) is trained to predict the label \(\mathbf{y}\) given a vector of masked features. Masking is accomplished with a function \(m(x,s)\), where the masking function \(m\) replaces features \(x_{i}\) where \(s_{i}=0\) with a [mask] value that is not in the support of \(\mathbf{x}_{i}\). The _Surrogate Objective_ is

\[\mathcal{L}(\beta)=\underset{F(\mathbf{x})}{\mathbb{E}}\underset{p(\mathbf{s})}{\mathbb{E}}\Big{[}D_{\mathrm{KL}}\big{(}F\left(\mathbf{y}\,|\,x\right)\,||\,p_{\text{surr}}(\mathbf{y}\,|\,m(x,s);\beta)\big{)}\Big{]}, \tag{1}\]

where \(D_{\mathrm{KL}}\) is the Kullback-Leibler (KL) divergence. The surrogate model at optimality matches the conditional probability distribution of the target variable given some subset of features. More formally, if \(x_{s}\) is the set \(\{x_{i}:s_{i}=1\}\), then \(p_{\text{surr}}(y\,|\,m(x,s);\beta)=F(y\,|\,x_{s})\) (Jethani et al., 2021; Covert et al., 2021).

After training, the surrogate evaluation model can evaluate any feature attribution method. Let \(e(x,y)\in\mathbb{R}^{d}\) be a feature attribution vector generated by a feature attribution method for each paired sample of data \(x,y\) where \(e_{i}(x,y)\in\mathbb{R}\) is a score for the feature \(x_{i}\). Let \(\texttt{top}_{n}(e)=\operatorname*{arg\,max}_{s}s^{T}e\), such that \(s\in\{0,1\}^{d},\ \|s\|=\lceil\frac{nd}{100}\rceil,\ \text{and }n\in[0,100]\), define an operation that returns an explanation that denotes the top _n_% of features with the highest attributions \(e_{i}\in\mathbb{R}\). An inclusion curve is constructed by progressively increasing \(n\) from 0 to 100, selecting the top _n_% of features for each data point in a held-out test set using the corresponding feature attribution vector \(e(x,y)\), and then measuring performance of the surrogate evaluation model \(p_{\text{surr}}\Big{(}\mathbf{y}\,|\,m\big{(}x,\texttt{top}_{n}(e(x,y))\big{)};\beta\Big{)}\) across the entire held-out test set using the log-likelihood. The area under the inclusion curve (iAUC) is

\[\mathrm{iAUC}=\underset{n\sim\texttt{Unif}(0,100)}{\mathbb{E}}\underset{F(\mathbf{x},\mathbf{y})}{\mathbb{E}}\Bigg{[}\log p_{\text{surr}}\Big{(}y\,|\,m\big{(}x,\texttt{top}_{n}\left(e(x,y)\right)\big{)};\beta\Big{)}\Bigg{]}. \tag{2}\]

A higher iAUC indicates a higher likelihood of the labels averaged across different feature subset sizes. See Figure 1 for a diagram of the evaluation procedure.

Figure 1: Illustration of the evaluation framework. An inclusion curve is constructed by progressively increasing \(n\) from 0 to 100, selecting the top \(n\%\) of features for each data point in a held-out test set using the corresponding feature attribution vector, and then measuring performance of the surrogate evaluation model across the entire test set using the log-likelihood.

## 4 Class-Dependent vs. Distribution-Aware Methods

In this section, we draw a distinction between class-dependent and distribution-aware feature attribution methods. This new categorization of feature attribution methods exposes an important limitation of class-dependent methods, which are more commonly used than distribution-aware methods.
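To make the evaluation pipeline concrete, here is a minimal Monte Carlo sketch of Equation (2). The surrogate interface `surrogate_log_prob`, the `MASK` value, and the uniform grid over \(n\) are our own simplifying assumptions, not the paper's exact implementation.

```python
import numpy as np

MASK = 0.0  # assumed [mask] value; in practice it must lie outside the feature support

def top_n_subset(e, n):
    """Return s in {0,1}^d keeping the ceil(n*d/100) highest-scoring features."""
    d = e.shape[0]
    k = int(np.ceil(n * d / 100.0))
    s = np.zeros(d)
    if k > 0:
        s[np.argsort(e)[::-1][:k]] = 1.0  # ties broken arbitrarily
    return s

def mask(x, s):
    """m(x, s): replace features with s_i = 0 by the [mask] value."""
    return np.where(s == 1.0, x, MASK)

def iauc(surrogate_log_prob, X, Y, E, n_grid=np.linspace(0, 100, 11)):
    """Monte Carlo estimate of Eq. (2): the surrogate's average test
    log-likelihood over inclusion levels n; E[i] is the vector e(x_i, y_i)."""
    total, count = 0.0, 0
    for x, y, e in zip(X, Y, E):
        for n in n_grid:
            total += surrogate_log_prob(mask(x, top_n_subset(e, n)), y)
            count += 1
    return total / count
```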
First, we define class-dependent methods and show how they can leak information about the selected class. Then, we define distribution-aware methods and show that the maximizer of iAUC is a distribution-aware method. Finally, we introduce two baseline distribution-aware methods that compute Shapley values and are easy to optimize.

### Class-dependent methods

Feature attribution methods can be divided into two categories: _class-dependent_ and _class-independent_. We define a class-dependent feature attribution method to be any approach that generates a feature attribution vector as a function of class. Formally, for each sample of data \(x\) and class \(c\), a class-dependent feature attribution method \(e(x,c):\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}^{d}\) generates an attribution vector such that \(e(x,c)\neq e(x,c^{\prime})\) for some \(c\neq c^{\prime}\). LIME, SHAP, Grad-CAM, IntGrad, SmoothGrad, and FastSHAP are all examples of class-dependent methods. Appendix B shows how the computation performed by each of these methods is class-dependent. A class-independent method generates an attribution vector that does not depend on any one class. Formally, for each sample of data \(x\), a class-independent feature attribution method \(e(x):\mathcal{X}\rightarrow\mathbb{R}^{d}\) generates an attribution vector as a function of the input \(x\). See Appendix C for a glossary of terms defined in this paper.

**Label leakage.** In the specific case where a class-dependent method generates an attribution vector using the true label, the predictive performance with only a fixed fraction of features can _exceed_ the predictive performance with the entire set of features. In other words, the class-dependent method is able to leak information about the true label through the feature attributions in a way that is not captured by the full set of features. This leakage would cause the evaluation metric iAUC (Equation (2)) to overestimate the utility of the explanation. Formally,

**Lemma 1**.: _There exists a class-dependent feature attribution method \(e(\mathbf{x},\mathbf{y})\) and data-generating distribution \(x,y\sim F(\mathbf{x},\mathbf{y})\) such that_

\[\operatorname*{\mathbb{E}}_{F(\mathbf{x},\mathbf{y})}\left[\log F(y\,|\,x_{\texttt{top}_{n}(e(x,y))})\right]>\operatorname*{\mathbb{E}}_{F(\mathbf{x},\mathbf{y})}\left[\log F\left(y\,|\,x\right)\right] \tag{3}\]

_for some \(n\in[0,100]\%\)._

The proof can be found in Appendix D. Lemma 1 shows that the explanation can predict the label better than the full feature set, indicating that the explanations are leaking the label. While Lemma 1 introduces label leakage as a theoretical possibility for class-dependent methods using the true label, we show empirically in Section 6.3 that this phenomenon occurs with popular class-dependent methods on clinical datasets, up to the estimation error of a model trained to approximate \(F\left(y\,|\,x\right)\). Lemma 1 works by having the feature attribution provide low scores to features that reduce the probability of the observed label. Thus, when only considering the top \(n\%\) of features, features that reduce the probability of the observed label are obfuscated.
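The mechanism just described can be reproduced on a toy distribution of our own construction: two noisy copies of a binary label are observed, a deliberately leaky class-dependent attribution ranks the feature that agrees with the true label first, and keeping only the top-1 feature then yields a higher expected log-likelihood than conditioning on both features. The distribution and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.8  # each feature copies the label with probability q (assumed)

def cond_prob_y1(feats):
    """Exact F(y=1 | observed binary features) for this toy distribution."""
    lik1 = np.prod([q if f == 1 else 1 - q for f in feats])
    lik0 = np.prod([q if f == 0 else 1 - q for f in feats])
    return lik1 / (lik1 + lik0)

full_ll, top1_ll = [], []
for _ in range(100_000):
    y = rng.integers(2)
    x = np.where(rng.random(2) < q, y, 1 - y)  # two noisy copies of y
    e = (x == y).astype(float)                 # leaky: score = agreement with y
    top1 = [x[np.argmax(e)]]                   # keep only the top-1 feature
    p_full = cond_prob_y1(x) if y == 1 else 1 - cond_prob_y1(x)
    p_top1 = cond_prob_y1(top1) if y == 1 else 1 - cond_prob_y1(top1)
    full_ll.append(np.log(p_full))
    top1_ll.append(np.log(p_top1))

print(np.mean(top1_ll), ">", np.mean(full_ll))  # ~ -0.28 > ~ -0.37: leakage
```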
By obfuscating features that support other classes, feature attributions generated by class-dependent methods fail to track the uncertainty of the true label, making the label appear more likely than it should. This susceptibility could have important implications when interpreting the explanations generated using the true label. For example, a patient's likelihood of hospital readmission given their discharge summary may only be 55%, but by omitting the word "denies" from a note that reads, "... pt denies chest pain" in the discharge summary, the patient may appear to have an 80% chance of readmission.

**Overconfidence using the predicted class.** As shown in Lemma 1, a feature attribution method should not have access to the true labels when generating feature attributions in order to avoid label leakage. An alternative to using the true label is using a model's prediction of the label. Let \(\hat{y}=\operatorname*{arg\,max}_{y}p_{\text{model}}(y\,|\,x;\theta)\) and let \(e^{\prime}(x,\hat{y})\) be a class-dependent method that uses the model's predicted class. Because \(\hat{y}\) is a function of \(x\), \(e^{\prime}(x,\hat{y})=e(x)\), a class-independent method. Therefore, we see that a class-dependent method that uses the predicted class becomes a class-independent method. We call class-dependent methods that use the predicted label _predicted-label-dependent_ methods. Class-independent methods do not leak the label on average:

**Lemma 2**.: _There does not exist any class-independent feature attribution method \(e(\mathbf{x})\) where Equation (3) holds for any \(F(\mathbf{x},\mathbf{y})\)._

The proof can be found in Appendix E. Predicted-label-dependent methods need not consider the full distribution across all classes. They could, for example, focus only on the probability of the predicted class. The implication is that explanations generated using the predicted class may instead leak the predicted class and omit predictive features that do not support the predicted class. In other words, an explanation could make the predicted class appear more likely than it is for some subset of feature values. Formally,

**Lemma 3**.: _There exists a predicted-label-dependent feature attribution method \(e(\mathbf{x},\hat{y})\) where, for some \(x\) where \(F(\mathbf{x}=x)>0\) and for some \(n\in[0,100]\%\),_

\[F(\mathbf{y}=\hat{y}\,|\,x_{\texttt{top}_{n}(e(x,\hat{y}))})>F(\mathbf{y}=\hat{y}\,|\,x).\]

The proof can be found in Appendix F. Lemma 3 demonstrates that an end user runs the risk of drawing false conclusions when interpreting an explanation generated for the predicted class with a class-dependent method. As an example, consider a model that predicts a patient's likelihood of all-cause mortality to be \(52\%\) from data for that patient including clinical notes. Let us say that a clinician is starting a shift in the hospital, and while they do not have time to read all of the patient's clinical notes, they would like to read the most critical portions of the clinical notes as they relate to the patient's likelihood of all-cause mortality. Now suppose the critical portions of the text are highlighted using a predicted-label-dependent method. Then for some instances, the clinician will miss those features that have a negative relationship with all-cause mortality, but that would still help to inform how they might choose to care for the patient during their shift.
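Lemma 3 can likewise be checked by hand on a toy example of our own construction: when two conditionally independent binary features of different strengths disagree, dropping the feature that opposes the predicted class makes that class look more likely than it is under the full feature set.

```python
# Toy check of Lemma 3 (our own construction). Two conditionally independent
# binary features copy the label with probabilities 0.8 and 0.7. When they
# disagree, the predicted class follows the stronger feature, and a
# predicted-label-dependent attribution keeps only that feature.
q1, q2 = 0.8, 0.7

# F(y = x1 | x1, x2) when the features disagree (Bayes' rule):
p_pred_full = q1 * (1 - q2) / (q1 * (1 - q2) + (1 - q1) * q2)   # ~0.632
# F(y = x1 | x1 alone), i.e. after dropping the disagreeing feature:
p_pred_top1 = q1                                                 # 0.8

print(p_pred_top1 > p_pred_full)  # True: the predicted class appears more likely
```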
### Distribution-aware methods

The challenge with the full space of class-independent methods is that class-independent methods need not respect the whole distribution of the label given the inputs, \(F(y\,|\,x)\). To limit to methods that consider the whole distribution, we define _distribution-aware_ feature attribution methods. A distribution-aware feature attribution method is a class-independent method \(e(x)\) that focuses on the data distribution of the label given the features, \(F(y\,|\,x)\). Formally, let \(D\) be a probability divergence, and \(h(x)\) be a perturbation function. Then for some distribution \(r\), a distribution-aware feature attribution method can be written in terms of the divergence \(D\big{(}F\big{(}\mathbf{y}\,|\,x\big{)}\,||\,r\big{(}\mathbf{y}\,|\,h(x)\big{)}\big{)}\) and prefers smaller divergences. In other words, a distribution-aware method generates feature attributions by measuring the effect of feature perturbation on the distribution of the label. The effect is measured by the divergence between the distribution of \(\mathbf{y}\) given the input and the distribution of \(\mathbf{y}\) given the perturbed input. An example perturbation function removes features from the input. The data distribution \(F(y\,|\,x)\) is unavailable, so practical distribution-aware feature attribution methods make use of distributions trained to approximate \(F(y\,|\,x)\), such as the surrogate \(p_{\text{surr}}(\mathbf{y}\,|\,x;\beta)\). How a distribution-aware method prefers a smaller divergence depends on the method. For example, REAL-X (Jethani et al., 2021) is a distribution-aware method that prefers smaller divergences directly through its optimization procedure; we show how the computation performed by REAL-X is distribution-aware in Appendix B.

As shown in Lemma 1, to avoid the potential for label leakage, a feature attribution method should not have access to the true labels when generating feature attributions. Given the constraint of not using the true labels, we show in Appendix G that the maximizer of iAUC assuming an optimal surrogate \(p_{\text{surr}}\) is _not_ a class-dependent method, but a distribution-aware method:

\[e^{*}=\operatorname*{arg\,min}_{e}\operatorname*{\mathbb{E}}_{F(\mathbf{x})}\operatorname*{\mathbb{E}}_{n\sim\text{Unif}(0,100)}\bigg{[}D_{\text{KL}}\Big{(}F\big{(}\mathbf{y}\,|\,x\big{)}\,||\,F(\mathbf{y}\,|\,x_{\texttt{top}_{n}(e(x))})\Big{)}\bigg{]}. \tag{4}\]

Equation (4) shows that the optimal feature attribution vector \(e^{*}(x)\) for an instance \(x\) is distribution-aware in that it minimizes the KL divergence between the likelihood of the label given all of the features and the likelihood of the target variable given the top \(n\)% of the features, averaged across all possible \(n\). Furthermore, we see that \(e^{*}(x)\) does not depend on a true label \(y\), but instead averages over a distribution of the label. The KL divergence, as with many divergences, measures the closeness of two distributions, and thus also measures the calibration in how well the distribution of the target given a subset of features matches the distribution of the target given the full feature set. Therefore, while a distribution-aware method--like a class-dependent method--returns a subset of the features, the subset that a distribution-aware method returns is calibrated according to the predicted probability.
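For reference, the inner objective of Equation (4) can be estimated for any candidate attribution vector by replacing the unavailable \(F\) with the trained surrogate, as the text suggests; `mask` and `top_n_subset` are reused from the iAUC sketch in Section 3, and the `surrogate_probs` interface is an assumption.

```python
import numpy as np

def distribution_aware_objective(surrogate_probs, x, e,
                                 n_grid=np.linspace(0, 100, 11)):
    """Average KL( p(y|x) || p(y|x_top_n) ) over inclusion levels n, with
    surrogate_probs(masked_x) returning a probability vector over classes.
    Smaller values indicate a better attribution in the sense of Eq. (4)."""
    p_full = surrogate_probs(x)
    kls = []
    for n in n_grid:
        p_sub = surrogate_probs(mask(x, top_n_subset(e, n)))
        kls.append(float(np.sum(p_full * (np.log(p_full) - np.log(p_sub)))))
    return float(np.mean(kls))
```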
In the all-cause mortality example in Section 4.1, a distribution-aware method would highlight an appropriate ratio of positive and negative features.

## 5 Distribution-aware Shapley value estimators

Gradient optimization is generally used to solve optimization problems such as the optimal explainer for iAUC. However, the function \(\texttt{top}_{n}\) in Equation (4) is not differentiable. We develop two baseline distribution-aware methods, **SHAP-KL** and **FastSHAP-KL**, that yield real-valued and, therefore, simpler optimization problems with a squared loss.

SHAP-KL and FastSHAP-KL estimate Shapley values. To compute a Shapley value for each input feature, one must first define how to value a subset of features. Given Equation (4), we propose valuing a subset of features according to the KL divergence between the distribution of \(\mathbf{y}\) given the full set of features and the distribution of \(\mathbf{y}\) given a subset of the features:

\[v_{x}(s)=-D_{\mathrm{KL}}\big{(}p_{\text{surr}}(\mathbf{y}\,|\,x;\beta)\,||\,p_{\text{surr}}(\mathbf{y}\,|\,m(x,s);\beta)\big{)}.\]

Notice that a higher value for a subset of features entails a smaller KL divergence, as required for distribution-aware methods. Letting \(n\sim\mathcal{U}(\mathcal{D})\) denote a uniform distribution over the set \(\mathcal{D}\) of the number of features \(\{0,\dots,d-1\}\) to include in a subset, and letting \(s\sim\mathcal{U}(\mathcal{P}_{i}(n))\) denote a uniform distribution over all possible feature subsets (represented as vectors in \(\{0,1\}^{d}\)) such that \(n\) features are included in the subset (\(|s|_{0}=n\)) and the \(i\)th feature is not included in the subset (\(s_{i}\neq 1\)), the definition of a Shapley value for the \(i\)th feature is

\[\phi_{i}(v)=\operatorname*{\mathbb{E}}_{n\sim\mathcal{U}(\mathcal{D})}\operatorname*{\mathbb{E}}_{s\sim\mathcal{U}(\mathcal{P}_{i}(n))}\Big{[}v_{x}(s+e_{i})-v_{x}(s)\Big{]} \tag{5}\]
\[=\operatorname*{\mathbb{E}}_{n\sim\mathcal{U}(\mathcal{D})}\operatorname*{\mathbb{E}}_{s\sim\mathcal{U}(\mathcal{P}_{i}(n))}\operatorname*{\mathbb{E}}_{y\sim F(\mathbf{y}\,|\,x)}\Big{[}\log p_{\text{surr}}\big{(}y\,|\,m(x,s+e_{i});\beta\big{)}-\log p_{\text{surr}}\big{(}y\,|\,m(x,s);\beta\big{)}\Big{]},\]

where \(e_{i}\) here denotes the indicator vector for feature \(i\). Equation (5) shows that this KL divergence-based Shapley value assigns an attribution to a feature based on how much it increases the log probability of the label when added to different subsets of the rest of the features. Note that the maximizer of iAUC (Equation (4)) is a weighted average across subsets that progressively increase in size (e.g. the top 1% of features is a strict subset of the top 2% of features); the Shapley value (Equation (5)) is a weighted average across all possible feature subsets.

Unfortunately, Shapley values introduce computational challenges: the expectation in Equation (5) involves an exponential number of subsets, making it infeasible to calculate for large \(d\). Therefore, SHAP-KL and FastSHAP-KL efficiently approximate the Shapley values. Following Lundberg and Lee (2017), SHAP-KL computes Shapley values using its least-squares characterization:

\[e_{\text{SHAP-KL}}(x)=\operatorname*{arg\,min}_{\phi}\operatorname*{\mathbb{E}}_{p(\mathbf{s})}\Big{[}\big{(}v_{x}(s)-s^{T}\phi-v_{x}(\mathbf{0})\big{)}^{2}\Big{]}\,. \tag{6}\]
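For reference, a minimal Monte Carlo sketch of the SHAP-KL estimator in Equation (6), reusing `mask` from the iAUC sketch in Section 3. The kernel-weighted subset sampling stands in for \(p(\mathbf{s})\), and the SHAP efficiency constraint is omitted for brevity; both simplifications are assumptions, with the exact forms in Appendix B.

```python
import numpy as np

def v_kl(surrogate_probs, x, s):
    """v_x(s) = -KL( p_surr(y|x) || p_surr(y|m(x,s)) )."""
    p_full = surrogate_probs(mask(x, np.ones_like(x)))
    p_sub = np.clip(surrogate_probs(mask(x, s)), 1e-12, 1.0)
    return -float(np.sum(p_full * (np.log(p_full) - np.log(p_sub))))

def shap_kl(surrogate_probs, x, num_samples=2048, rng=None):
    """Least-squares estimate of Eq. (6) from sampled subsets."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    sizes = np.arange(1, d)
    w = (d - 1) / (sizes * (d - sizes))   # Shapley-kernel weights over |s|
    w = w / w.sum()
    S = np.zeros((num_samples, d))
    targets = np.zeros(num_samples)
    v0 = v_kl(surrogate_probs, x, np.zeros(d))
    for t in range(num_samples):
        k = rng.choice(sizes, p=w)
        S[t, rng.choice(d, size=k, replace=False)] = 1.0
        targets[t] = v_kl(surrogate_probs, x, S[t]) - v0
    phi, *_ = np.linalg.lstsq(S, targets, rcond=None)  # min ||S phi - (v(s)-v(0))||^2
    return phi
```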
Following Jethani et al. (2022), FastSHAP-KL learns an explanation model \(\phi_{\text{fast-kl}}(x;\eta)\) that outputs Shapley values by minimizing the following objective:

\[\mathcal{L}_{\text{FastSHAP-KL}}(\eta)=\operatorname*{\mathbb{E}}_{F(\mathbf{x})}\operatorname*{\mathbb{E}}_{p(\mathbf{s})}\Big{[}\big{(}v_{x}(s)-\mathbf{s}^{\top}\phi_{\text{fast-kl}}(x;\eta)-v_{x}(\mathbf{0})\big{)}^{2}\Big{]} \tag{7}\]

where the feature attributions are generated in a single forward pass through the explanation model: \(e_{\text{FastSHAP-KL}}(x)=\phi_{\text{fast-kl}}(x;\eta)\). For both objectives (Equations (6) and (7)), the efficiency constraint and subset sampling distribution \(p(\mathbf{s})\) are the same as for SHAP and are presented in Appendix B.

## 6 Experiments

We validate our theoretical findings by performing a comprehensive evaluation of ten of the most commonly used feature attribution methods using three clinical datasets of different high-dimensional data types: biosignals, images, and text. We also compare SHAP-KL to its class-dependent counterpart SHAP-S using the general image dataset CIFAR10 (Krizhevsky et al., 2009), demonstrating similar findings as in the clinical datasets (Appendix H).

### Datasets and model tasks

For biosignal data, we use the PTB-XL ECG dataset (Wagner et al., 2020). We detect right bundle branch block (RBBB) from ECG inputs using a ResNet model adapted from Hannun et al. (2019) (we include details of the model architecture in Appendix I). For image data, we use the EyePACs retinal fundus imaging dataset (Graham, 2015). We detect the presence and severity of diabetic retinopathy in retinal images using a DenseNet121 model (Huang et al., 2017) pre-trained on ImageNet. For text data, we use the MIMIC-IV critical care dataset (Johnson et al., 2022). We predict 30-day readmission from patients' hospital discharge summaries using the pre-trained Bio+Discharge Summary BERT model (Alsentzer et al., 2019; Huang et al., 2019). We provide details on dataset processing and splits in Appendix J and details on training the prediction models in Appendix K.

### Feature attribution methods

We evaluate the following seven class-dependent methods: LIME, SHAP, Grad-CAM, IntGrad, SmoothGrad, FastSHAP, and SHAP-S (Covert et al., 2021; Frye et al., 2021). We evaluate the following three distribution-aware methods: SHAP-KL, REAL-X, and FastSHAP-KL. Because Grad-CAM was designed for CNNs, we did not evaluate Grad-CAM using MIMIC-IV. REAL-X failed to optimize on MIMIC-IV using five different regularization hyperparameters; therefore, we did not evaluate REAL-X on MIMIC-IV. REAL-X likely requires additional tuning for this task given that it uses score-function gradient optimization. We provide details on explanation generation in Appendix L; describe how iAUC is empirically calculated in Appendix M; and report training and explanation run-times for each method in Appendix N.

### Results

**Label leakage in class-dependent methods using the true label.** First, we plot the log-likelihood inclusion curves of the seven evaluated class-dependent methods when generating an attribution vector using the true label (Figure 2). In general, as important features are included in the input to the surrogate evaluation model, the likelihood of the true label (and therefore the log-likelihood across the entire dataset) should increase.
On all three datasets we find that the performance of many of the class-dependent methods when using a subset of the most relevant features exceeds performance when using the full set of features (represented by the horizontal dotted line in Figure 2). With finite data and an imperfect surrogate evaluation model \(p_{\text{surr}}\), the excess performance could be due to either estimation error or label leakage. Therefore, unless we know a priori how the features are related to the input, it is difficult to know whether the unexpectedly high performance of the class-dependent methods is due to label leakage or due to better estimation of the surrogate with fewer features.

Figure 2: When generating explanations using the true label, class-dependent methods can leak information about the true label that is not captured by the full feature set: performance when using a subset of the most relevant features exceeds performance when using the full feature set (represented by the horizontal dotted lines above).

**Distribution-aware methods do not demonstrate label leakage.** Next, we compare our baseline distribution-aware methods SHAP-KL and FastSHAP-KL to their class-dependent counterparts SHAP-S and FastSHAP. We plot the log-likelihood inclusion curves of the four methods using the true label to select which class to explain for SHAP-S and FastSHAP (Figure 3). We find that the performance of SHAP-KL and FastSHAP-KL when using a subset of the most relevant features generally does not exceed performance when using the full set of features, validating that distribution-aware feature attribution methods do not leak the label on average (Lemma 2). FastSHAP-KL on the retinal fundus imaging dataset and SHAP-KL on the discharge summaries dataset generate feature attributions that achieve slightly higher log-likelihoods when using a subset of the features than when using the full set of features (Figure 3). Since the performance of a distribution-aware method provably cannot exceed the performance using all features (Lemma 2), the amount SHAP-KL and FastSHAP-KL rise above the performance estimate using the full features (the horizontal dotted line) provides a window into the magnitude of relative model misestimation for different subset sizes. This magnitude of model misestimation is smaller than the excess performance over the full feature set in class-dependent methods, suggesting that label leakage, not model estimation, is the primary driver of excess performance in class-dependent methods.

Figure 3: The performance of SHAP-KL and FastSHAP-KL when using a subset of the most relevant features generally does not exceed performance when using the full feature set (represented by the horizontal dotted lines), validating that distribution-aware methods do not leak the label on average.

During training, the surrogate evaluation model takes as input a vector of masked features to approximate the probability distribution of the target given a possible subset of features. It is possible that the surrogate evaluation model is better able to optimize over subsets with fewer features. Furthermore, since there is an exponential number of subsets, learning to model each conditional distribution given each subset is a difficult task. However, as a sanity check, we ensure that the surrogate evaluation model performs as well as the original prediction model when evaluated on the full feature set (Appendix O).
**Predicted-label-dependent vs. distribution-aware methods.** Finally, we evaluate the iAUC of the ten feature attribution methods when using the predicted class (instead of the true label) to select which class to explain for the seven class-dependent methods (Table 1). As the most relevant features are included as input to the surrogate evaluation model, we expect the iAUC of a successful feature attribution method to increase. Though the theory shows that the best method for iAUC is distribution-aware (Equation (4)), the distribution-aware methods studied do not directly optimize iAUC, leaving open the possibility for a predicted-label-dependent method to have higher iAUC. We find that compared to predicted-label-dependent methods, distribution-aware methods have, on average, higher iAUCs on two of the three datasets: REAL-X obtained the highest iAUC (-0.068) on the ECG dataset and FastSHAP-KL obtained the highest iAUC (-1.400) on the retinal fundus imaging dataset. On the discharge summaries dataset, however, the predicted-label-dependent methods outperform the distribution-aware methods on average: LIME and SHAP-S obtained the highest iAUCs (both -0.614).

## 7 Discussion

### Choosing a feature attribution method

When using the true label, distribution-aware methods are recommended given that they do not demonstrate label leakage. When using the predicted label, however, it is not clear whether a predicted-label-dependent method or a distribution-aware method would be preferred. While in theory a class-dependent method does not perform optimally with respect to iAUC (Equation (4)), it can still outperform a distribution-aware method in practice because existing distribution-aware methods do not optimize iAUC directly (Section 6.3). In order to evaluate a feature attribution method given some fixed model, we recommend constructing an inclusion curve for the method under consideration as described in Section 3. The inclusion curve can then be used to determine how much of the model's performance is explained by different subsets of the top features. For example, an inclusion curve might reveal that the top \(10\%\) of features explains \(90\%\) of the model's accuracy under some attribution method. If the performance is high enough given the desired percentage of features, the feature attribution method can be used. If it is not high enough, alternative feature attribution methods should be evaluated.

### The merits of class-dependent methods

While our theoretical and empirical results demonstrate that class-dependent methods can make a given class appear overly likely, there are settings in which focusing on a single class, instead of on the full distribution across all classes, is a useful design feature (as opposed to a "bug") of class-dependent methods. Because iAUC measures how well the target distribution can be approximated using a subset of features, our paper focuses specifically on settings in which each data point can take on different values of the target distribution (because the true label or predicted class for one sample may not be the same for another sample). While class-dependent methods do not maximize iAUC and may leak the label, they are still useful when trying to understand which features increase or decrease the probability of a specific class, in which case explanations are generated using a fixed class for _all_ data points.
For example, given a model that predicts which molecules inhibit growth of a bacterial species, a class-dependent method might help highlight moieties that maximize the likelihood of that outcome in order to guide molecule development. It remains open what is the best evaluation for a class-specific explanation.

\begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{iAUC} \\ \cline{2-4} & PTB-XL: Biosignals & EyePACs: Images & MIMIC-IV: Text \\ \hline \multicolumn{4}{l}{_Distribution-aware_} \\ FastSHAP-KL & -0.075 (-0.081, -0.071) & **-1.400 (-1.419, -1.386)** & -0.634 (-0.638, -0.631) \\ REAL-X & **-0.068 (-0.075, -0.064)** & -1.879 (-1.897, -1.855) & \\ SHAP-KL & -0.073 (-0.080, -0.066) & -1.596 (-1.619, -1.578) & -0.618 (-0.623, -0.613) \\ \hline \multicolumn{4}{l}{_Predicted-label-dependent_} \\ FastSHAP & -0.088 (-0.097, -0.082) & -1.851 (-1.879, -1.825) & -0.627 (-0.632, -0.623) \\ Grad-CAM & -0.069 (-0.076, -0.064) & -1.988 (-2.018, -1.962) & \\ IntGrad & -0.128 (-0.141, -0.117) & -1.443 (-1.461, -1.422) & -0.635 (-0.638, -0.632) \\ LIME & -0.095 (-0.103, -0.087) & -1.594 (-1.609, -1.571) & **-0.614 (-0.620, -0.609)** \\ SHAP & -0.097 (-0.106, -0.089) & -1.598 (-1.612, -1.565) & -0.615 (-0.621, -0.608) \\ SHAP-S & -0.095 (-0.105, -0.085) & -1.623 (-1.650, -1.597) & **-0.614 (-0.618, -0.607)** \\ SmoothGrad & -0.130 (-0.143, -0.120) & -1.718 (-1.742, -1.695) & -0.634 (-0.637, -0.631) \\ \hline \hline \end{tabular} \end{table} Table 1: Evaluation of the feature attribution methods using iAUC when class-dependent methods use the predicted class. Parentheses indicate 95% confidence intervals.

### The limitations imposed by discrete optimization

As discussed in Section 5, directly maximizing evaluation metrics for feature attribution methods can be infeasible since they often involve discrete functions that are not differentiable, such as \(\texttt{top}_{n}\) in Equation (4). While SHAP-KL and FastSHAP-KL serve as distribution-aware baselines that yield real-valued optimization problems with a squared loss, neither optimizes iAUC directly, which could negatively affect their performances. The development of additional distribution-aware methods that make use of advances in discrete optimization to more directly optimize evaluation metrics such as Equation (4) is an important avenue for future work.

### Interpreting the feature attribution vector

As discussed in Section 3, feature attribution scores can produce many possible explanations, and often it is not known in advance which \(n\)% of features will ultimately be of interest. When this is the case, feature attribution method performance can be evaluated across different feature subset sizes and measured using a summary statistic such as iAUC. Eventually, a single feature attribution vector is produced that includes a score for each input feature. Because iAUC is a weighted average, if we were to use the single feature attribution vector to select the "top" \(k\) features, it is not guaranteed that we would in fact retrieve the most predictive feature subset of size \(k\). To see why, consider a scenario in which there are three input features \(x_{1}\), \(x_{2}\), and \(x_{3}\): _together_ \(x_{1}\) and \(x_{2}\) are perfectly predictive of the output, but _separately_ they are not very predictive of the output; \(x_{3}\) alone is almost, but not quite, perfectly predictive of the output. Given \(k=1\), the most predictive feature would be \(x_{3}\).
Given \(k=2\), the most predictive two features would be \(x_{1}\) and \(x_{2}\). However, given the constraint that all features are ranked and the relevant feature subsets monotonically increase in size so that each subset always includes the "top" \(n\)% of features, there is no way to choose \(x_{3}\) when \(k=1\) _and_ choose \(x_{1}\) and \(x_{2}\) when \(k=2\). Therefore, there is no single attribution vector with scores for all features such that the \(k\) highest ranked features are the most predictive \(k\) features for all values of \(k\). Care should be taken when referring to the features with the top scores in the attribution vector as the "most predictive" features. Future work might investigate ways to address this limitation when developing new attribution methods.

### Cognitive burden of class-dependent methods

Given a data point, a class-dependent method produces a set of feature attributions for every possible class. A distribution-aware method, on the other hand, produces for a data point a single set of feature attributions, taking into consideration the full distribution of class probabilities. However, this extra degree of freedom afforded by class-dependent methods comes at a cost. As discussed in Section 4.1, class-dependent methods can surface features that make the selected class appear more reasonable and obfuscate features that support other classes. Because class-dependent methods are miscalibrated and fail to adequately capture the uncertainty of a class label, it is important that any end user interpreting the results of a class-dependent method take into consideration not only the explanation generated for the selected class, but also the explanations generated for all other classes. In other words, the end user runs the risk of drawing inaccurate conclusions by only looking at the explanation for the selected class. However, considering the feature attributions generated for every class, and then reducing them to a single explanation for the task at hand, constitutes a significant--and perhaps unrealistic--cognitive burden on the part of the end user. Future work should explore the effect of miscalibrated explanations on human decision-making.

## 8 Conclusion

In this work, we introduce and define class-dependent and distribution-aware feature attribution methods. We demonstrate that class-dependent methods--but not distribution-aware methods--can leak information about the true label, causing evaluation metrics to overestimate the utility of their explanations. We show that explanations generated by class-dependent methods using the predicted label can make the predicted class appear more likely than it is. We establish that the maximizer of iAUC is a distribution-aware method. We present two baseline distribution-aware methods, SHAP-KL and FastSHAP-KL, that can be easily optimized. Finally, we validate our theoretical findings by evaluating seven class-dependent and three distribution-aware feature attribution methods on three clinical datasets.

## 9 Reproducibility

Formal statements and proofs for all theoretical results are provided in Appendices B and D to G. Experimental details for all empirical results are provided in Appendices H to P and code is available at [https://github.com/explanationleakage/xai](https://github.com/explanationleakage/xai). All datasets used are publicly available as outlined in Appendix J.

## Acknowledgments

We thank the reviewers for their thoughtful comments.
This work was supported by NIH T32GM007308, NIH T32GM136573, a DeepMind Scholarship, NIH/NHLBI Award R01HL148248, NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science, and NSF CAREER Award 2145542.
2303.14113
Incentive Mechanism in the Sponsored Content Market with Network Effect
We propose an incentive mechanism for the sponsored content provider market in which the communication of users can be represented by a graph and the private information of the users is assumed to have a continuous distribution function. The content provider stipulates incentive rewards to encourage users to reveal their private information truthfully and increase their content demand, which leads to an increase in advertising revenue. We prove that all users gain a non-negative utility and disclose their private information truthfully. Moreover, we study the effectiveness and scalability of the proposed mechanism in a case study with different network structures.
Mina Montazeri, Pegah Rokhforoz, Hamed Kebriaei, Olga Fink
2023-03-24T16:20:42Z
http://arxiv.org/abs/2303.14113v1
# Incentive Mechanism in the Sponsored Content Market with Network Effects

###### Abstract

We propose an incentive mechanism for the sponsored content provider market in which the communication of users can be represented by a graph and the private information of the users is assumed to have a continuous distribution function. The content provider stipulates incentive rewards to encourage users to reveal their private information truthfully and increase their content demand, which leads to an increase in the advertising revenue. We prove that all users gain a non-negative utility and disclose their private information truthfully. Moreover, we study the effectiveness and scalability of the proposed mechanism in a case study with different network structures.

Sponsored content market, mechanism design, continuous private information, network system.

## I Introduction

Smartphones facilitate people's interactions and information sharing with their friends, which leads to a huge cellular data flow. Due to the increasing use of cellular data services, the data cost increases and becomes one of the critical concerns of the users. This increasing data cost reduces data consumption in social networks [1], content markets [2], and crowdsourcing and crowdsensing tasks [3, 4]. For this reason, content/service providers offer incentives to partially support mobile users' data usage [2, 3, 4]. In this paper, we focus on the problem of designing incentive rewards in the sponsored content market [5]. The concept of the sponsored content market has been introduced in setups where the Content Provider (CP) designs an incentive mechanism by providing incentive rewards to motivate users to consume more sponsored content, and hence maximizes his revenue by displaying more advertisements [5]. However, designing optimal incentive rewards in this market can be challenging and has been the subject of numerous studies [2, 6]. Previous research studies on mechanism design for sponsored content markets rely on several limiting assumptions, such as the assumption that the CP has full knowledge of the users' information [5] or that the users will truthfully report their private information to the CP. The latter assumption is not realistic in real-world scenarios, since users have an incentive to misreport their information to gain more profit. If the users do not reveal their information truthfully, it is difficult for the CP to design an optimal incentive reward. To address this challenge, contract theory offers a framework to design mechanisms under asymmetric information, in which the mechanism designer is not aware of the users' private information [7]. In most of the previous studies that applied contract theory to the content provider market, the authors assumed that the users' private information is drawn from a discrete distribution, which is not realistic since it cannot capture a wide range of possible values [8, 2]. Moreover, previous research did not consider the influence of users' interactions on each other. This again limits applicability in real-world scenarios, since in many social network systems, such as Facebook or Twitter, the behaviors of the users influence each other. To address the outlined limitations, in this paper, we propose an optimal incentive mechanism for the content provider market when the users have private information.
The proposed incentive mechanism aims to achieve the following three goals: 1) maximize the content provider's utility function; 2) guarantee the participation of the users (individual rationality); 3) motivate users to share their private information truthfully (incentive compatibility). To accurately reflect real setups in sponsored content markets, we assume that the users can influence each other's behavior through social interactions that are represented by a graph. Moreover, we assume that the private information of the users, which is also referred to as the "type" of the users, can be modeled by a continuous distribution function. This private information implies the strength of social network ties between the users. We model the sponsored content market as an optimization problem in which the CP aims to obtain incentive rewards which maximize his utility and also satisfy the incentive compatibility and individual rationality constraints. The proposed mechanism leads to a tractable constrained functional optimization which links the demand and the incentive reward function. We prove that the proposed mechanism fulfills the incentive compatibility and individual rationality properties. In addition, we evaluate the effectiveness and scalability of the proposed mechanism in a case study in which we consider different numerical examples with different network structures. In summary, the main contributions of the paper are as follows:

* We propose an incentive mechanism for users whose social interaction is represented by a graph with continuous private information.
* We reformulate the mechanism design problem to a tractable optimization which links the incentive reward to the demand function.
* We prove that the proposed mechanism fulfills the individual rationality and incentive compatibility properties.

## II Related work

Previous research studies have focused on different aspects of designing optimal incentive rewards in the content provider market. One of the previously proposed approaches for the sponsored content market is based on (Bayesian) Stackelberg games. In [5], the interaction among the network operator, the content provider, and the users is formulated as a Stackelberg game. In this setup, the CP aims to design incentive rewards that maximize his utility based on the assumption that the CP knows all the information of the users. However, this assumption is not realistic in real-world scenarios. To address this limitation, [6] used Bayesian Stackelberg games to allocate incentive rewards in the sponsored content market. Bayesian Stackelberg games are particularly applicable in setups where the CP is uncertain about some parameters in the objective function of the users. Even though [6] assumes that the CP has incomplete information about the agents, which is more realistic compared to previous research, the proposed method still has several unrealistic assumptions which preclude its application in real-world scenarios. Firstly, the authors assume that all the users voluntarily participate in the market. Secondly, in the Bayesian Stackelberg game, the optimal strategy of the CP is obtained by averaging with respect to the distribution of the users' private information. This means that all users receive the same amount of reward, which is again unrealistic since users prefer to receive a reward based on their own utility functions.
Building upon the efforts to tackle the challenge of designing rewards based on the users' utility functions, several recent research studies have focused on designing an optimal incentive reward that encourages users to report their private information truthfully. These efforts have been seen in various applications, including resource/task allocation markets [9] and blockchain-based environments [10, 11]. To encourage truthful sharing, contract theory offers a framework to design incentive rewards under asymmetric information in such a way that users are motivated to report their private information truthfully [9]. Several research studies have applied contract theory to the content provider market [8, 2]. However, these studies assumed that the users' private information follows a discrete distribution function. While this assumption simplifies the incentive mechanism, it is restrictive because in many applications the private information of the users follows a continuous distribution [12]. The only study that did take a continuous distribution into consideration proposed an incentive mechanism for the sponsored content market where users have continuous private information [13]. Yet, the study did not consider the influence of users' interactions on each other. This again limits its applicability in real-world scenarios, since in many social network systems, such as Facebook or Twitter, the behaviors of the users influence each other. In such systems, the users' interactions can be modeled as a graph where the nodes represent the users and the edges represent the influence strength of the users' social ties [14]. However, the interactions between users have not been considered in any of the previous works that applied contract theory in the content provider market.

## III Model and Problem Formulation

### _Model of Sponsored Content Markets_

We model the sponsored content platform as a market consisting of two entities: a CP and a set of users \(\mathcal{N}=\{1,2,\ldots,N\}\). To accurately reflect real setups in sponsored content markets, we assume that the users can influence each other's behavior through social interactions that are represented by a graph. However, based on each user's personality, the strength of the network effect differs across users. We model the strength of the network effect for user \(i\) by the parameter \(\theta_{i}\in\Theta_{i}\), where \(\Theta_{i}=[\underline{\theta},\bar{\theta}]\). This parameter is private information of each user and is neither known to the content provider nor to the other users. To reflect reality more accurately, private information is modeled as a continuous variable, as it offers more versatility and can encompass a broader range of values [15]. In this market, the CP first obtains the content demand \(x_{i}:\Theta\rightarrow\mathbb{R}^{+}\) and incentive reward \(R_{i}:\Theta\rightarrow\mathbb{R}^{+}\), where \(\Theta=\Theta_{1}\times\cdots\times\Theta_{N}\), as functions of the users' private information. This incentive reward encourages users to consume more content. After that, users share their private information, denoted as \(\hat{\theta}_{i}\in\Theta_{i}\), with the CP such that they maximize their profit. In general, their shared private information may not necessarily be their actual private information, i.e., \(\theta_{i}\), unless it is intrinsically optimal for them to share it truthfully. The general schematic of the sponsored content market is shown in Figure 1.
Fig. 1: The schematic of the sponsored content market.

### _Problem Formulation_

Consider agent \(i\) who is embedded in a given social network and participates in the sponsored content market. In this case, we define the user's utility as follows [16]:

\[U_{i}(x(\hat{\theta}),R_{i}(\hat{\theta}),\theta_{i})=\psi(x_{i}(\hat{\theta}))+\theta_{i}\sigma(x(\hat{\theta}))-px_{i}(\hat{\theta})+R_{i}(\hat{\theta}), \tag{1}\]

where \(\hat{\theta}=\{\hat{\theta}_{1},\cdots,\hat{\theta}_{N}\}\) and \(x(\hat{\theta})=\{x_{1}(\hat{\theta}),\cdots,x_{N}(\hat{\theta})\}\) are the shared private information profile and the content demand profile, which collect the shared private information and content demand of each agent, respectively. The first term \(\psi(x_{i}(\hat{\theta}))\) represents the internal utility that user \(i\) gains from consuming and enjoying the content. This term derives from the user's own content consumption, independent of the consumption of the user's neighbors. Inspired by [17, 14], we model this term as a linear-quadratic function: \(\psi(x_{i}(\hat{\theta}))=ax_{i}(\hat{\theta})-\frac{b}{2}x_{i}^{2}(\hat{\theta})\), where \(a\geq 0\) models the maximum internal demand willingness rate, and \(b\geq 0\) models the willingness elasticity factor [17]. We model the second term, which represents the external utility due to the network effects, as \(\sigma(x(\hat{\theta}))=x_{i}(\hat{\theta})\sum_{j=1}^{N}g_{ij}x_{j}(\hat{\theta})\) [17, 14]. In this formulation, \(g_{ij}\geq 0\) represents the influence strength of the social tie of user \(i\) on user \(j\). Thus, the users' behaviors in terms of content demand are influencing each other. As mentioned before, the parameter \(\theta_{i}\), which is the private information of user \(i\), controls the strength of the network effect for user \(i\) and is known neither to the CP nor to the other users. However, it can be assumed that it is commonly known that \(\theta_{i}\) follows a distribution \(F(\theta_{i})\). Note that this assumption is reasonable since the CP can estimate the statistical information about the distribution of users' private information by learning from users' historical behavior or conducting a user survey. We also assume that \(F(\theta_{i})\) is continuously differentiable. The term \(px_{i}(\hat{\theta})\) indicates the cost that user \(i\) has to pay to the mobile operator for the consumption of \(x_{i}\).

The CP's utility comprises two parts: the total advertisement revenue gained from the users' content consumption and the total rewards paid to all the users. Thus, the CP's utility can be formulated as:

\[U^{CP}(x(\hat{\theta}),R(\hat{\theta}))=\mathbf{E}_{\hat{\theta}}[\sum_{i\in\mathcal{N}}\big{(}Q(x_{i}(\hat{\theta}))-R_{i}(\hat{\theta})\big{)}], \tag{2}\]

where \(R(\cdot)=\{R_{1}(\cdot),\cdots,R_{N}(\cdot)\}\). The function \(Q\) is the advertisement revenue gained from the data usage of user \(i\) and is given by \(Q(x_{i}(\hat{\theta}))=sx_{i}(\hat{\theta})-\frac{t}{2}x_{i}^{2}(\hat{\theta})\), in which \(s,t\geq 0\) are predefined coefficients characterizing the extent of the concavity of the function [17]. Thus, the CP offers \((x(\cdot),R(\cdot))\), which provides the content demand consumption and incentive reward for users based on the users' shared private information.
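To fix notation, here is a small sketch of Equations (1) and (2) in code. The coefficient values are taken from the case study in Section V, while the function interfaces and variable names are our own.

```python
import numpy as np

a, b, p = 0.5, 6.0, 0.1   # user coefficients (case-study values)
s_q, t_q = 1.0, 1.0       # coefficients of the CP revenue Q (case-study values)

def user_utility(i, x, R, theta, G):
    """Eq. (1): psi(x_i) + theta_i * x_i * sum_j g_ij x_j - p x_i + R_i."""
    psi = a * x[i] - 0.5 * b * x[i] ** 2
    network = theta[i] * x[i] * (G[i] @ x)   # external, network-effect term
    return psi + network - p * x[i] + R[i]

def cp_utility(x, R):
    """Eq. (2) for one realization of reported types: sum_i Q(x_i) - R_i."""
    Q = s_q * x - 0.5 * t_q * x ** 2
    return float(np.sum(Q - R))
```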
### _Designing the Incentive Mechanism_

We address the mechanism design problem from the content provider's perspective. The goal of the CP is to design a utility-maximizing mechanism which receives shared private information from the users and determines both the incentive rewards and the demand consumption. Solving this problem presents two main challenges: the CP has to ensure that the users participate in the market and that they share their private information truthfully. In the following, we provide the definitions of these properties.

**Definition 1**.: _A mechanism is individually rational if the users gain a non-negative utility by sharing their private information truthfully, i.e._

\[IR_{\theta_{i}}:\quad\mathbf{E}_{\hat{\theta}_{-i}}[U_{i}(x_{i}(\theta_{i},\hat{\theta}_{-i}),x_{-i}(\theta_{i},\hat{\theta}_{-i}),R_{i}(\theta_{i},\hat{\theta}_{-i}),\theta_{i})]\geq 0, \tag{3}\]

_where \(x_{-i}(\cdot)=\{x_{1}(\cdot),\cdots,x_{i-1}(\cdot),x_{i+1}(\cdot),\cdots,x_{N}(\cdot)\}\) is the content demand of all users except user \(i\). A similar definition holds for \(\hat{\theta}_{-i}=\{\hat{\theta}_{1},\cdots,\hat{\theta}_{i-1},\hat{\theta}_{i+1},\cdots,\hat{\theta}_{N}\}\)._

**Definition 2**.: _A mechanism is incentive-compatible if the users achieve an equal or higher utility by sharing their private information truthfully, i.e._

\[IC_{\theta_{i},\hat{\theta}_{i}}:\quad\mathbf{E}_{\hat{\theta}_{-i}}[U_{i}(x_{i}(\theta_{i},\hat{\theta}_{-i}),x_{-i}(\theta_{i},\hat{\theta}_{-i}),R_{i}(\theta_{i},\hat{\theta}_{-i}),\theta_{i})]\geq\mathbf{E}_{\hat{\theta}_{-i}}[U_{i}(x_{i}(\hat{\theta}_{i},\hat{\theta}_{-i}),x_{-i}(\hat{\theta}_{i},\hat{\theta}_{-i}),R_{i}(\hat{\theta}_{i},\hat{\theta}_{-i}),\theta_{i})]. \tag{4}\]

The CP designs the demand consumption and incentive reward functions such that the following three objectives are achieved simultaneously: (i) motivate users to participate in the market, (ii) ensure that users truthfully share their private information, (iii) maximize the CP's utility. Thus, the optimal mechanism is obtained by solving the following optimization problem:

\[\max_{\{R(\cdot),x(\cdot)\}}U^{CP}(x(\hat{\theta}),R(\hat{\theta}))\quad\text{s.t.}\quad IR_{\theta_{i}},\quad IC_{\theta_{i},\hat{\theta}_{i}}. \tag{5}\]

Since the optimization (5) is not convex and also not straightforward to solve, we propose in the following how we can reformulate it in order to find optimal decisions for the CP.

## IV Solution of the Mechanism

In this section, we use the following propositions to investigate the solution of the optimization problem (5). For ease of presentation, we define:

\[V_{i}(\hat{\theta}_{i})\equiv\mathbf{E}_{\hat{\theta}_{-i}}[(a-p)x_{i}(\hat{\theta}_{i},\hat{\theta}_{-i})-\frac{b}{2}x_{i}^{2}(\hat{\theta}_{i},\hat{\theta}_{-i})], \tag{6}\]
\[\gamma_{i}(\hat{\theta}_{i})\equiv\mathbf{E}_{\hat{\theta}_{-i}}[x_{i}(\hat{\theta}_{i},\hat{\theta}_{-i})\sum_{j\in\mathcal{N}}g_{ij}x_{j}(\hat{\theta}_{i},\hat{\theta}_{-i})],\]
\[r_{i}(\hat{\theta}_{i})\equiv\mathbf{E}_{\hat{\theta}_{-i}}[R_{i}(\hat{\theta}_{i},\hat{\theta}_{-i})],\]
\[C_{i}(\hat{\theta}_{i})\equiv\mathbf{E}_{\hat{\theta}_{-i}}[sx_{i}(\hat{\theta}_{i},\hat{\theta}_{-i})-\frac{t}{2}x_{i}^{2}(\hat{\theta}_{i},\hat{\theta}_{-i})],\]
\[\tilde{U}_{i}(\theta_{i},\hat{\theta}_{i})\equiv\mathbf{E}_{\hat{\theta}_{-i}}[U_{i}(x_{i}(\hat{\theta}_{i},\hat{\theta}_{-i}),x_{-i}(\hat{\theta}_{i},\hat{\theta}_{-i}),R_{i}(\hat{\theta}_{i},\hat{\theta}_{-i}),\theta_{i})].\]

This results in \(\tilde{U}_{i}(\theta_{i},\hat{\theta}_{i})=V_{i}(\hat{\theta}_{i})+\theta_{i}\gamma_{i}(\hat{\theta}_{i})+r_{i}(\hat{\theta}_{i})\).
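Definitions 1 and 2 can be checked numerically for any candidate mechanism using the \(\tilde{U}_{i}\) notation just introduced. Below is a sketch of our own: `x_fn` and `R_fn` are hypothetical mechanism functions mapping a reported type profile to demands and rewards, the type distribution is assumed uniform, and `a`, `b`, `p` are the coefficients from the sketch above.

```python
import numpy as np

def u_tilde(i, theta_true_i, theta_hat_i, x_fn, R_fn, G, rng,
            n_mc=2000, lo=0.5, hi=0.8):
    """Monte Carlo estimate of E_{theta_-i}[ U_i ] when user i with true type
    theta_true_i reports theta_hat_i, the others report truthfully, and
    types ~ Uniform[lo, hi] (an illustrative assumption)."""
    total, N = 0.0, G.shape[0]
    for _ in range(n_mc):
        report = rng.uniform(lo, hi, N)
        report[i] = theta_hat_i
        x, R = x_fn(report), R_fn(report)
        psi = a * x[i] - 0.5 * b * x[i] ** 2
        total += psi + theta_true_i * x[i] * (G[i] @ x) - p * x[i] + R[i]
    return total / n_mc

# IR: u_tilde(i, th, th, ...) >= 0 for every th on a grid of types;
# IC: over a grid of reports, u_tilde(i, th, th_hat, ...) peaks at th_hat = th.
```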
**Proposition 1**.: _In optimization (5), if the \(IR_{\theta_{i}}\) constraint is satisfied for \(\underline{\theta}\) and the \(IC_{\theta_{i},\hat{\theta}_{i}}\) constraint is satisfied as well, for all \(\hat{\theta}_{i},\theta_{i}\), then the \(IR_{\theta_{i}}\) is satisfied for all \(\theta_{i}\). Also, in any optimal solution, we have \(\tilde{U}_{i}(\underline{\theta},\underline{\theta})=0\), i.e., the \(IR_{\theta_{i}}\) constraint is active for \(\underline{\theta}\)._

Proof.: Let us use the following inequality:

\[\tilde{U}_{i}(\theta_{i},\theta_{i})\geq\tilde{U}_{i}(\theta_{i},\underline{\theta})\geq\tilde{U}_{i}(\underline{\theta},\underline{\theta}), \tag{7}\]

where the first inequality holds as the result of \(IC_{\theta_{i},\underline{\theta}}\) and the second inequality follows from \(\frac{\partial\tilde{U}_{i}(\theta_{i},\hat{\theta}_{i})}{\partial\theta_{i}}=\gamma_{i}(\hat{\theta}_{i})\geq 0\). Thus, it follows \(\tilde{U}_{i}(\theta_{i},\theta_{i})\geq\tilde{U}_{i}(\underline{\theta},\underline{\theta})\). Hence, we conclude that \(IR_{\underline{\theta}}\) implies \(IR_{\theta_{i}}\). To finalize the proof, it is left to demonstrate that \(IR_{\underline{\theta}}\) should be binding. If \(IR_{\underline{\theta}}\) is not binding, there exists \(\epsilon>0\) by which \(R_{i}(\theta)\) could be decreased such that all constraints of (5) are still satisfied and the utility of the CP is increased.

The previous proposition suggests that we can safely remove all the \(IR_{\theta_{i}}\) constraints in which \(\theta_{i}\neq\underline{\theta}\). In other words, the infinite number of inequality constraints of \(IR_{\theta_{i}}\) can be converted to a single equality constraint. Next, we provide the following equivalent formulation of the problem (5):

\[\max_{\{R(\cdot),x(\cdot)\}}U^{CP}(x(\hat{\theta}),R(\hat{\theta})) \tag{8a}\]
\[s.t.\quad\gamma_{i}^{\prime}(\theta_{i})\geq 0, \tag{8b}\]
\[r_{i}(\theta_{i})=[\int_{\underline{\theta}}^{\theta_{i}}\gamma_{i}(y)dy]-\theta_{i}\gamma_{i}(\theta_{i})-V_{i}(\theta_{i}). \tag{8c}\]

**Proposition 2**.: _Optimizations (5) and (8) are equivalent._

Proof.: To show the equivalence between the two optimization problems, it is sufficient to show that for each optimal solution of optimization (5), there exists a solution of optimization (8) with the same objective, and vice versa. First, we show that given a solution to optimization (8), we can find a solution to optimization (5) with the same objective. If Equation (8c) is satisfied, we can write \(IC_{\theta_{i},\hat{\theta}_{i}}\) as:

\[\begin{split}& V_{i}(\theta_{i})+\theta_{i}\gamma_{i}(\theta_{i})+[\int_{\underline{\theta}}^{\theta_{i}}\gamma_{i}(y)dy]-\theta_{i}\gamma_{i}(\theta_{i})-V_{i}(\theta_{i})\geq\\ & V_{i}(\hat{\theta}_{i})+\theta_{i}\gamma_{i}(\hat{\theta}_{i})+[\int_{\underline{\theta}}^{\hat{\theta}_{i}}\gamma_{i}(y)dy]-\hat{\theta}_{i}\gamma_{i}(\hat{\theta}_{i})-V_{i}(\hat{\theta}_{i}).\end{split}\]

Thus, we have:

\[\begin{cases}\int_{\hat{\theta}_{i}}^{\theta_{i}}\gamma_{i}(y)dy\geq(\theta_{i}-\hat{\theta}_{i})\gamma_{i}(\hat{\theta}_{i}),&\theta_{i}>\hat{\theta}_{i},\\ (\hat{\theta}_{i}-\theta_{i})\gamma_{i}(\hat{\theta}_{i})\geq\int_{\theta_{i}}^{\hat{\theta}_{i}}\gamma_{i}(y)dy,&\theta_{i}<\hat{\theta}_{i}.\end{cases} \tag{9}\]

Regarding (8b), both inequalities in (9) hold true. Next, we show that given an optimal solution to optimization (5), we can find a solution to optimization (8) with the same objective. As a first step, we prove that truthfulness implies monotonicity of \(\gamma_{i}(\theta_{i})\).
According to \(IC_{\theta_{i},\hat{\theta}_{i}}\) and \(IC_{\hat{\theta}_{i},\theta_{i}}\), we obtain:

\[\begin{split}& V_{i}(\theta_{i})+\theta_{i}\gamma_{i}(\theta_{i})+r_{i}(\theta_{i})\geq V_{i}(\hat{\theta}_{i})+\theta_{i}\gamma_{i}(\hat{\theta}_{i})+r_{i}(\hat{\theta}_{i}),\\ & V_{i}(\hat{\theta}_{i})+\hat{\theta}_{i}\gamma_{i}(\hat{\theta}_{i})+r_{i}(\hat{\theta}_{i})\geq V_{i}(\theta_{i})+\hat{\theta}_{i}\gamma_{i}(\theta_{i})+r_{i}(\theta_{i}).\end{split} \tag{10}\]

By summing these two inequalities of Equation (10), we get \(\gamma_{i}(\theta_{i})(\theta_{i}-\hat{\theta}_{i})\geq\gamma_{i}(\hat{\theta}_{i})(\theta_{i}-\hat{\theta}_{i})\), which implies monotonicity of \(\gamma_{i}(\theta_{i})\) (constraint (8b)). To derive Equation (8c), we can rearrange Equation (10) as follows:

\[\begin{split}& V_{i}(\hat{\theta}_{i})+\theta_{i}\gamma_{i}(\hat{\theta}_{i})-V_{i}(\theta_{i})-\theta_{i}\gamma_{i}(\theta_{i})\leq r_{i}(\theta_{i})-r_{i}(\hat{\theta}_{i})\\ &\leq V_{i}(\hat{\theta}_{i})+\hat{\theta}_{i}\gamma_{i}(\hat{\theta}_{i})-V_{i}(\theta_{i})-\hat{\theta}_{i}\gamma_{i}(\theta_{i}).\end{split} \tag{11}\]

By setting \(\hat{\theta}_{i}=\theta_{i}+\epsilon\), dividing Equation (11) by \(\epsilon\), and letting \(\epsilon\to 0\), we get:

\[-\theta_{i}\frac{d}{d\theta_{i}}\gamma_{i}(\theta_{i})-\frac{d}{d\theta_{i}}V_{i}(\theta_{i})\leq\frac{d}{d\theta_{i}}r_{i}(\theta_{i})\leq-\theta_{i}\frac{d}{d\theta_{i}}\gamma_{i}(\theta_{i})-\frac{d}{d\theta_{i}}V_{i}(\theta_{i}).\]

This results in

\[-\theta_{i}\frac{d}{d\theta_{i}}\gamma_{i}(\theta_{i})-\frac{d}{d\theta_{i}}V_{i}(\theta_{i})=\frac{d}{d\theta_{i}}r_{i}(\theta_{i}). \tag{12}\]

Integrating Equation (12) with respect to \(\theta_{i}\) results in:

\[r_{i}(\theta_{i})-r_{i}(\underline{\theta})=-\int_{\underline{\theta}}^{\theta_{i}}\Big{[}y\frac{d}{d\theta_{i}}\gamma_{i}(\theta_{i})\bigg{|}_{\theta_{i}=y}+\frac{d}{d\theta_{i}}V_{i}(\theta_{i})\bigg{|}_{\theta_{i}=y}\Big{]}dy. \tag{13}\]

Using Proposition 1, which gives \(r_{i}(\underline{\theta})=-V_{i}(\underline{\theta})-\underline{\theta}\gamma_{i}(\underline{\theta})\), the integration by parts of (13) gives us:

\[r_{i}(\theta_{i})=[\int_{\underline{\theta}}^{\theta_{i}}\gamma_{i}(y)dy]-\theta_{i}\gamma_{i}(\theta_{i})-V_{i}(\theta_{i}).\]

In the following, we derive the optimal solution for the content demand consumption. Given the fact that \(\mathbf{E}[\cdot]\) is a linear operator, Equation (2) can be rewritten as follows:

\[U^{CP}=\sum_{i\in\mathcal{N}}\mathbf{E}_{\theta_{i}}[C_{i}(\theta_{i})-r_{i}(\theta_{i})]. \tag{14}\]

By substituting Equation (8c) into Equation (14), we obtain:

\[U^{CP}=\sum_{i\in\mathcal{N}}\int_{\underline{\theta}}^{\bar{\theta}}[C_{i}(\theta_{i})-\int_{\underline{\theta}}^{\theta_{i}}\gamma_{i}(s)ds+V_{i}(\theta_{i})+\theta_{i}\gamma_{i}(\theta_{i})]f(\theta_{i})d\theta_{i}.\]

We can rewrite the term \(\int_{\underline{\theta}}^{\bar{\theta}}\int_{\underline{\theta}}^{\theta_{i}}[\gamma_{i}(s)ds]f(\theta_{i})d\theta_{i}\) as \(\int_{\underline{\theta}}^{\bar{\theta}}\gamma_{i}(\theta_{i})\frac{1-F(\theta_{i})}{f(\theta_{i})}f(\theta_{i})d\theta_{i}\), where \(F(\cdot)\) is the cumulative distribution function of \(f(\cdot)\). We define \(h(\theta_{i})\equiv\frac{f(\theta_{i})}{1-F(\theta_{i})}\) and \(\phi(\theta_{i})\equiv\theta_{i}-1/h(\theta_{i})\).
Thus, the optimization (8) can be rewritten as follows:

\[\begin{split}&\underset{x(\cdot)}{\max}\mathbf{E}_{\theta}\sum_{i\in\mathcal{N}}[C_{i}(\theta_{i})+V_{i}(\theta_{i})+\phi(\theta_{i})\gamma_{i}(\theta_{i})]\\ & s.t.\quad\gamma_{i}^{\prime}(\theta_{i})\geq 0.\end{split} \tag{15}\]

We make the following assumptions to guarantee the boundedness of the content demand of each user.

**Assumption 1**.: _For each \(\theta_{i}\in[\underline{\theta},\bar{\theta}]\), \(h(\theta_{i})\) is increasing1 and \(\phi(\theta_{i})\geq 0\)[7]._

Footnote 1: [18] showed that \(h(\theta_{i})\) is increasing for many common probability density functions, including normal and uniform distributions.

**Assumption 2**.: _For each \(i\in\mathcal{N}\), \(t+b>(\bar{\theta}\sum_{j\in\mathcal{N},j\neq i}(g_{ij}+g_{ji}))\) and \(s+a>p\)._

In the following, we present a lemma that will be useful to characterize the optimal mechanism.

**Lemma 1**.: _Let us define the matrix \(K=[(t+b)I_{N}-(M_{\phi}G+G^{T}M_{\phi})]^{-1}\), where \(M_{\phi}\equiv\text{diag}(\phi(\theta_{1}),\phi(\theta_{2}),...,\phi(\theta_{N}))\), \(G=[g_{ij}],i,j\in\mathcal{N}\) and \(I_{N}\) is the \(N\times N\) identity matrix. Then, \(\frac{\partial K}{\partial\theta_{i}}\) is a matrix with non-negative entries._

Proof.: Differentiating the identity \(KK^{-1}=I_{N}\), we get:

\[0=\frac{\partial KK^{-1}}{\partial\theta_{i}}=\frac{\partial K}{\partial\theta_{i}}K^{-1}+K\frac{\partial K^{-1}}{\partial\theta_{i}}. \tag{16}\]

Furthermore, \(\frac{\partial K^{-1}}{\partial\theta_{i}}=-(E_{i}G+G^{T}E_{i})\), where \(E_{i}=\frac{\partial M_{\phi}}{\partial\theta_{i}}\) is a matrix with \(\frac{\partial\phi(\theta_{i})}{\partial\theta_{i}}\) at the \(ii^{th}\) entry, and zero otherwise. Hence, using Equation (16) we obtain:

\[\frac{\partial K}{\partial\theta_{i}}=-K\frac{\partial K^{-1}}{\partial\theta_{i}}K=K(E_{i}G+G^{T}E_{i})K. \tag{17}\]

Due to Assumption 1, we have \(\frac{\partial\phi(\theta_{i})}{\partial\theta_{i}}\geq 0\). Moreover, under Assumption 2 the Neumann series \(K=\sum_{k\geq 0}(M_{\phi}G+G^{T}M_{\phi})^{k}/(t+b)^{k+1}\) converges, so \(K\) itself has non-negative entries. Thus, since the right-hand side of Equation (17) is a matrix with non-negative entries, \(\frac{\partial K}{\partial\theta_{i}}\) is a matrix with non-negative entries as well. 

Next, we characterize the optimal content demand function and show that the monotonicity constraint is indeed satisfied. Let \(\theta_{i}\in[\underline{\theta},\bar{\theta}]\) be fixed and given. Hence, \(\{x(\theta)\}\) solves the following problem:

\[\max_{x(\theta)}\sum_{i\in\mathcal{N}}[C_{i}(\theta_{i})+V_{i}(\theta_{i})+\phi(\theta_{i})\gamma_{i}(\theta_{i})] \tag{19}\]

The Hessian of the objective in optimization (19) is given as:

\[\begin{pmatrix}-t-b&\cdots&\phi(\theta_{1})g_{1N}+\phi(\theta_{N})g_{N1}\\ \vdots&\ddots&\vdots\\ \phi(\theta_{1})g_{1N}+\phi(\theta_{N})g_{N1}&\cdots&-t-b\end{pmatrix}\]

Considering Assumption 2, this matrix is Hermitian and strictly diagonally dominant. Thus, it is negative semi-definite [19]. Therefore, the objective of optimization (19) is concave. The first order optimality condition of optimization (19) yields:

\[\begin{split}&(s+a-p)-(t+b)x_{i}(\theta)+\phi(\theta_{i})\sum_{j\in\mathcal{N}}(g_{ij}x_{j}(\theta))\\ &+\sum_{j\in\mathcal{N}}(\phi(\theta_{j})g_{ji}x_{j}(\theta))=0.\end{split} \tag{20}\]

Equation (20) can be written in matrix form as follows:

\[(s+a-p)\mathbf{1}_{N,1}+(M_{\phi}G+G^{T}M_{\phi})x(\theta)=(t+b)x(\theta).\]

Since \([(t+b)I_{N}-(M_{\phi}G+G^{T}M_{\phi})]\) is a strictly diagonally dominant matrix, it is invertible. 
Hence,

\[x(\theta)=(s+a-p)[(t+b)I_{N}-(M_{\phi}G+G^{T}M_{\phi})]^{-1}\mathbf{1}_{N,1}.\]

To finalize the proof, it is left to show that \(\gamma_{i}^{\prime}=\frac{\partial\gamma_{i}(\theta_{i})}{\partial\theta_{i}}\geq 0\). Given the definition of \(\gamma_{i}\) in Equation (6), we obtain:

\[\begin{split}&\frac{\partial\gamma_{i}(\theta_{i})}{\partial\theta_{i}}=E_{\theta_{-i}}[\frac{\partial x_{i}(\theta_{i},\theta_{-i})}{\partial\theta_{i}}\sum_{j\in\mathcal{N}}g_{ij}x_{j}(\theta_{i},\theta_{-i})+\\ & x_{i}(\theta_{i},\theta_{-i})\sum_{j\in\mathcal{N}}g_{ij}\frac{\partial x_{j}(\theta_{i},\theta_{-i})}{\partial\theta_{i}}]\geq 0.\end{split} \tag{21}\]

According to Lemma 1, we have \(\frac{\partial x_{i}(\theta_{i})}{\partial\theta_{i}}\geq 0\) and \(\frac{\partial x_{j}(\theta_{i})}{\partial\theta_{i}}\geq 0\). Thus, by considering Lemma 1 and Equation (21), we can conclude that \(\gamma_{i}^{\prime}\geq 0\). 

**Remark 1**.: _The calculation of the optimal content demand from Equation (18) has a complexity of \(O(N^{3})\), where \(N\) is the number of users in the network._

## V Case study

In this section, we investigate the performance of our proposed mechanism in a case study and evaluate the effect of the network structure on the utility of the users and the CP. We set \(s=1\), \(t=1\), \(a=0.5\), \(b=6\), and \(p=0.1\) as the parameters of the CP and the users. In addition, we assume \(g_{ij}=1\) when agents \(i\) and \(j\) are connected, and \(g_{ij}=0\) otherwise. In the first step, we assess the validity of individual rationality and incentive compatibility of the proposed mechanism on a simple example. We consider a case with five users who can communicate through a fully connected graph (Figure 1(a)). In this network, four users share their private information truthfully, while User \(5\) may share his private information untruthfully. The utility of User \(5\) when he shares values of private information different from his actual private information is shown in Figure 3. Each curve in Figure 3 corresponds to a different value of the actual private information of User \(5\), while the private information of the other four agents is fixed across all curves. This figure shows that User \(5\) gains his maximum utility when he shares his private information truthfully, which is marked by the black stars on the curves. Therefore, the mechanism satisfies the incentive compatibility constraint. In addition, the utility of User \(5\) when he shares his information truthfully is positive, which implies that individual rationality is satisfied as well. In the next step, we evaluate the impact of the network architecture on the utility of the users. We consider three kinds of users: a fully connected user, a branch user, and a central user. Two types of networks are considered for this evaluation: a fully connected network (Figure 1(a)) and a star network (Figure 1(b)). We assume that the users in both of these networks have the same parameters and private information and differ only in terms of their connectivity. Thus, the utilities of all the agents in Figure 1(a) are the same. In this case, to investigate the behavior of fully connected users, we consider as an example the utility of user a.1, which is the same as that of all other agents in Figure 1(a). In addition, since users \(b.2,b.3,b.4\), and \(b.5\) have the same parameters as well as the same connections, we only consider the utility of user b.2 in order to investigate the attributes of branch users. Also, we consider agent b.1 to be the central agent. 
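Before turning to the results, the following minimal Python sketch evaluates the closed-form demand \(x(\theta)\) under the case-study parameters above. The concrete type vector `theta` and the uniform-distribution assumption behind \(\phi(\theta_{i})=2\theta_{i}-\bar{\theta}\) (with \(\bar{\theta}=1\)) are our own illustrative assumptions, not values taken from the case study.

```python
import numpy as np

# Sketch: closed-form optimal content demand
# x(theta) = (s + a - p) * [(t + b) I_N - (M_phi G + G^T M_phi)]^{-1} 1.
s, t, a, b, p = 1.0, 1.0, 0.5, 6.0, 0.1   # case-study parameters
N = 5
G = np.ones((N, N)) - np.eye(N)            # fully connected graph, g_ij = 1 for i != j

# Hypothetical types; assuming theta_i ~ Uniform[0, 1] gives phi(theta) = 2*theta - 1.
theta = np.array([0.55, 0.60, 0.65, 0.70, 0.75])
M_phi = np.diag(2 * theta - 1)             # M_phi = diag(phi(theta_1), ..., phi(theta_N))

A = M_phi @ G + G.T @ M_phi
K_inv = (t + b) * np.eye(N) - A            # strictly diagonally dominant, hence invertible
x = (s + a - p) * np.linalg.solve(K_inv, np.ones(N))   # O(N^3) solve, cf. Remark 1
print(x)
```

Consistent with Lemma 1, increasing any \(\theta_{i}\) (and hence \(\phi(\theta_{i})\)) weakly increases every entry of the resulting demand vector in this sketch.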
As Figure 4 demonstrates, fully connected users have the highest utility compared to the other two types of users, since the mutual influence between the users in such a network is the highest. Moreover, the utility of the central user is higher than that of the branch user in the star network, since the central user has a greater mutual influence on other users in this network as compared to the branch user. Hence, we can conclude that the greater the influence of a user on other users, the more content is allocated to that user (and, consequently, the more utility). Note that we also account for the strength of connectivity between users; here, all users are assumed to have the same connectivity strength.

Fig. 2: The bidirectional network structure between different users.

Fig. 3: Utility of User \(5\) when sharing different levels of private information. Each curve corresponds to a different level of the actual private information (i.e., \(\theta_{5}\)) of User \(5\).

In the third step, we investigate the impact on the CP's utility of a case in which a user shares his private information untruthfully. The bidirectional network structure for this case study is depicted in Figure 5. The utility of the CP when one user shares his private information untruthfully is displayed in Table I. The result when all users share their information truthfully is displayed in the first row. In the other rows, it is assumed that only one user shares the information untruthfully. As we can see, the untruthfully shared private information of User \(1\), who is connected to all other users, has the highest impact on the CP's utility. In contrast, Users \(2\) and \(5\), who connect to only one user, have the lowest effect on the CP's utility. Hence, we conclude that the CP should prioritize designing the mechanism such that it guarantees incentive compatibility for the user with the greatest influence on others.

In the last evaluation step, the scalability and robustness of the proposed algorithm are demonstrated through tests on various network sizes, including large networks with many users. In each network, it is assumed that users connect randomly to half of the total users. For simplicity, we also assume that the users in all networks have the same parameters and the same strength of connectivity. With a maximum network size of 800 users, each user is connected to 400 users, which is similar to the average number of friends per user on Facebook as reported by the Pew Research Center [20]. The impact of the number of users on the calculation time for the optimal content demand is displayed in Table II. These results support our statement in Remark 1. Figure 6 displays a comparison of the utility of the users in networks with different numbers of users. As shown in this figure, a user in a network with more nodes has a higher utility than users in networks with fewer nodes. This is because, as the number of users in the network increases, each user influences more users.

## VI Conclusion

In this paper, we proposed an incentive mechanism for the sponsored content market by considering the social interactions of the agents. The strength of the social network ties of each user is considered to be his private information. We formulated the problem as a constrained functional optimization, which yields the content demand and incentive reward functions as the CP's decisions. The optimal decisions satisfy incentive compatibility and individual rationality. 
The results in the case study verify the effectiveness and scalability of the proposed mechanism. A future research direction would be to formulate the market as a mechanism whereby the users decide about their content demand and the content provider obtains the incentive reward.
2310.03557
Mobility Segregation Dynamics and Residual Isolation During Pandemic Interventions
External shocks embody an unexpected and disruptive impact on the regular life of people. This was the case during the COVID-19 outbreak that rapidly led to changes in the typical mobility patterns in urban areas. In response, people reorganised their daily errands throughout space. However, these changes might not have been the same across socioeconomic classes, leading to possible additional detrimental effects on inequality due to the pandemic. In this paper we study the reorganisation of mobility segregation networks due to external shocks and show that the diversity of visited places in terms of locations and socioeconomic status is affected by the enforcement of mobility restrictions during the pandemic. We use the case of COVID-19 as a natural experiment in several cities to observe not only the effect of external shocks but also their mid-term consequences and residual effects. We build on anonymised and privacy-preserved mobility data in four cities: Bogota, Jakarta, London, and New York. We couple mobility data with socioeconomic information to capture inequalities in mobility among different socioeconomic groups and see how they change dynamically before, during, and after different lockdown periods. We find that the first lockdowns induced considerable increases in mobility segregation in each city, while loosening mobility restrictions did not necessarily diminish isolation between different socioeconomic groups, as mobility mixing has not recovered fully to its pre-pandemic level even weeks after the interruption of interventions. Our results suggest that a one-fits-all policy does not equally affect the way people adjust their mobility, which calls for socioeconomically informed intervention policies in the future.
Rafiazka Millanida Hilman, Manuel García-Herranz, Vedran Sekara, Márton Karsai
2023-10-05T14:08:44Z
http://arxiv.org/abs/2310.03557v1
# Mobility Segregation Dynamics and Residual Isolation During Pandemic Interventions

###### Abstract

External shocks embody an unexpected and disruptive impact on the regular life of people. This was the case during the COVID-19 outbreak that rapidly led to changes in the typical mobility patterns in urban areas. In response, people reorganised their daily errands throughout space. However, these changes might not have been the same across socioeconomic classes, leading to possible additional detrimental effects on inequality due to the pandemic. In this paper we study the reorganisation of mobility segregation networks due to external shocks and show that the diversity of visited places in terms of locations and socioeconomic status is affected by the enforcement of mobility restrictions during the pandemic. We use the case of COVID-19 as a natural experiment in several cities to observe not only the effect of external shocks but also their mid-term consequences and residual effects. We build on anonymised and privacy-preserved mobility data in four cities: Bogota, Jakarta, London, and New York. We couple mobility data with socioeconomic information to capture inequalities in mobility among different socioeconomic groups and see how they change dynamically before, during, and after different lockdown periods. We find that the first lockdowns induced considerable increases in mobility segregation in each city, while loosening mobility restrictions did not necessarily diminish isolation between different socioeconomic groups, as mobility mixing has not recovered fully to its pre-pandemic level even weeks after the interruption of interventions. Our results suggest that a one-fits-all policy does not equally affect the way people adjust their mobility, which calls for socioeconomically informed intervention policies in the future.

_Keywords--_ COVID-19, mobility response, segregation dynamics, residual isolation

## 1 Introduction

Inequality is a prominent feature of today's society. The unequal distribution of, and access to, resources stands as a preliminary setting. Unequal paths to income [1], education [2], and employment [3] seed inequality, which is further moulded into behavioural preferences in daily life, mostly reflecting proximity to one's own socioeconomic and demographic background. Eventually, these unequal configurations can lead to segregation that potentially limits social dynamics. Socioeconomic segregation is not the only form of segregation linked to inequality. There are a number of dimensions, such as residence [4], employment [3], income [1] or race, along which people are segregated, to mention a few. Residential segregation is manifested as the separation of different groups of people into different neighbourhoods within a city. It is fuelled by the quality of neighbourhoods moving farther away from each other, and results in highly segmented residential profiles between low- and high-income neighbourhoods [5, 6]. Therefore, housing plays an intermediary role in reproducing inequality through the coupling effects between income inequality and residential segregation [6]. It has also been shown that a growing proportion of high-income workers increases demand for residential units located in inner-city neighbourhoods, due to the centrality of location and the accessibility of urban living [4, 7]. Mobility patterns then follow from these residential and employment constraints and preferences as people carry out their daily errands. 
An interplay between inequality and the way people organise their mobility in urban space is inevitable. In line with Urry [8], Olvera et al. [9] define inequality in mobility as behavioural differences in the level of transport use due to differences in the distribution of monetary ownership, such as income or wealth. Furthermore, they find that car ownership is a strong determinant of mobility patterns and residential locations, and diminishes potential interactions with people of heterogeneous backgrounds (compared with the shared space of public transportation). As a result, segregation patterns emerge as an entanglement between inequality and mobility. In urban mobility networks, social stratification in conjunction with unequal access to transport infrastructures brings social exclusion [10, 11] and social segregation [12, 13]. Such inequalities may change due to external shocks, such as the COVID-19 outbreak, natural catastrophes (earthquakes and floods), or political unrest (such as wars and conflicts). The consequences of such events can dramatically change the existing socioeconomic configuration and individual mobility patterns, which in themselves are already constrained by socioeconomic stratification [14, 15, 16]. People's capacities to adjust their preferences and way of living in response to disruptions are limited by their socioeconomic status, limited financial resources, or jobs that demand physical presence. As the existing literature suggests, people with higher income may have the capacity for larger mobility reductions, while mobility inflexibility and less social distancing are observable among low-income groups, raising the disparity in mobility [17, 18, 19]. In the literature, it is argued that the social fabric and inequality shape mobility patterns [8, 20]. The spatial distribution of commercial areas, residential units, workplaces, and schools, among others, encourages people to move across the urban landscape. Building on the notion of unequal distribution at the individual level, mobility is also engendered and reinforced by inequality [21]. The presence of individual preferences over the socioeconomic characteristics of places can be further quantified at the socioeconomic (SE) level by taking the visit ratio of people coming from a particular SE class to places distributed across various other classes [22, 23]. We build our approach on this finding by using mobility as an operational concept to analyse socioeconomic stratification and the spatial isolation brought about by external shocks. This research investigates the impact of the COVID-19 outbreak, and the non-pharmaceutical interventions (NPIs) that followed, in the urban areas of Bogota, Jakarta, London, and New York. Our ultimate goal is to study the changing dynamics of isolation and segregation patterns in mobility due to external shocks. We also observe whether such phenomena are temporary, caused by timely restrictions such as lockdowns, or whether they induce long-term residual effects. To test this, firstly, we capture the changing segregation pattern by quantifying mobility stratification in every sequence of pandemic periods. Secondly, we empirically point out the behavioural effects of spatial and socioeconomic exploration in mobility by computing entropy measures derived from the spatial and socioeconomic properties of visited places. Moreover, we identify the types of interventions contributing to the aforementioned behavioural effects and their impacts on mobility segregation. 
Interestingly, these procedures reveal the persistent presence of residual effects of shocks even after the removal of interventions.

## 2 Results

In this study we focus on aggregated mobility data provided by Cuebiq [24], a location intelligence and measurement platform (for more details on the data see Materials and Methods). The dataset contains the geolocations of places, uplevelled at the census block level, which were visited by anonymous smartphone users, along with timestamps. The time period starts from 1 January 2020, with the last day of observation varying between cities. Despite the differences in observation lengths among them, each city's time window adequately covers an extensive period of the pandemic: before lockdown, during lockdown, and after reopening, as presented in Supplementary Material (SM) Section A. From this dataset, we acquire the individual trajectories of 995,000 people, with different sample sizes between cities. To detect home locations, we use a home inference algorithm [25, 26, 27] where the home location is defined as the most frequent location visited by each individual during night time (between 9PM and 6AM). Using this method, we obtain 597,000 people with identified home locations. Consequently, places other than home locations found in the trajectories are classified as places of interest (POIs). Details of the dataset coverage and the home inference algorithm are specified respectively in Materials and Methods (Section 4.1 and Section 4.2). At the same time, we use income-related features at spatial resolutions comparable to census tracts, which are released by the respective bureaus of statistics: the multidimensional poverty index in Bogota [28], the poverty rate in Jakarta [29], total annual income in London [30], and per capita income [31] in New York. We combine the mobility data with socioeconomic maps using geospatial information to infer socioeconomic indicators for both people and places. The algorithm pipeline and inference of this study are provided in Materials and Methods (Section 4.2) and SM Section A. In addition, to quantify policy responses, we use the stringency index released in the Oxford COVID-19 Government Response Tracker (OxCGRT) dataset [32]. Using this data we identify different intervention periods with more or less homogeneous policy restrictions: before lockdown, lockdown, and reopening.

### Mobility stratification

To quantify socioeconomic stratification in mobility, we take the strategy proposed earlier [23, 33] by constructing a stratification matrix from the mobility network that codes the frequency of visits of people to places. It is defined from their mobility trajectories and indicates the existence of socioeconomic assortativity in visiting patterns. A stratified mobility network is formally constructed as a bipartite structure \(G=(U,P,E)\) where an individual \(u\) is an element of a node set \(U\) and a place \(p\) belongs to a node set \(P\). A visit to \(p\) by \(u\) is recorded as an edge \(e_{u,p}\in E\), weighted by the frequency of visit occurrences \(w_{u,p}\). In addition, the SES of people is defined in terms of the socioeconomic status \(c_{u}=i\in C_{U}\) of their home location. Following a similar method, places are also assigned a \(c_{p}=j\in C_{P}\) associated with the socioeconomic status of the census tract of their location. 
### Baseline mobility segregation

Segregation in the socioeconomic network appears as a pattern of assortativity whereby people of different socioeconomic characters are less likely to meet than similar others at the same socioeconomic level. We take the first step to capture this stratification tendency by transforming the _mobility network_ into the _mobility stratification matrix_ \(M_{i,j}\), denoting the probability of people from a given socioeconomic class visiting places of a given socioeconomic class. As a result, mobility stratification in each period is summarised in a single matrix. To standardise the assortativity measure for the sake of comparability and reproducibility, we compute the mobility assortativity index \(r\), defined as a correlation coefficient of \(M_{i,j}\)[22, 34, 35]. Assortativity index values closer to one signal a higher concentration of visited venues closer to one's own socioeconomic range (assortative mobility), while 0 pinpoints dispersion in the visiting pattern throughout classes (non-assortative mobility). Otherwise, negative values indicate a tendency to visit places opposite to one's own socioeconomic class (disassortative mobility). A complete technical note on the transformation technique and the assortativity computation is given in Materials and Methods (Section 4.3). To demonstrate these metrics and to follow up on the dynamical changes of segregation during different phases of crisis interventions, we take the example of London. Fig. 1a provides snapshots of mobility stratification patterns in London, starting from before lockdown and followed by the alternating periods of lockdowns and reopenings. In Fig. 1a, the x-axis represents the socioeconomic classes of people \(i\) while the y-axis denotes the socioeconomic classes of the places \(j\) they visited. As people move, we calculate the frequency of visits for each pair of classes (people-place), proportional to the total visits made by everyone who belongs to \(c_{u}=i\) (column-wise normalisation). Colour shades encode the visit magnitude, becoming lighter as the visit proportion gets larger. Fig. 1a contains all locations in the trajectories, regardless of being home or non-home areas. Note that to refine the observation, we isolate home location effects on the visiting pattern by removing each individual's own home location from their mobility trajectory. The computational result of this sanity check shows a weaker but consistent segregation pattern (see SM Section B.2). Assortative mixing is consistently pronounced regardless of the type of policy imposed on mobility restriction, for instance lockdown and reopening. Moreover, it validates the finding, as the recurring pattern persists even after we exclude each individual's own home location from their mobility trajectory. We consider next the persistence of the segregation patterns during the baseline period. Here we use the baseline segregation level, shown by the mobility assortativity \(r\) value during Before Lockdown (BL), as the reference point to which the changing patterns in segregation can be adequately compared. Looking at the first matrix in Fig. 1a, we obtain an assortativity index \(r=0.416\), indicating that baseline segregation in mobility is fairly large, due to visits being concentrated in areas with SE status similar to the visitors', even when far from their home location. Subsequently, we continuously observe how segregation changed daily over an extensive period before the COVID-19 pandemic. 
In Fig. 1b, we look at a more granular temporal scale by using sliding windows to construct a sequence of daily mobility stratification matrices. For every 2-week window with a 1-day slide interval, we create a matrix and measure its assortativity index \(r\). Line colours denote the cities: Bogota (green), Jakarta (orange), London (light blue), and New York (purple). Looking at the baseline assortativity index values, New York stands out with \(r\) around 0.571, while Bogota reaches an \(r\) value around 0.317. The assortativity degree in daily individual mobility in Jakarta is about 0.366, and London records an \(r\) value of approximately 0.416. Apart from that, we see that the assortativity level in mobility during the baseline period tends to be constant, without remarkable jumps or drops between days.

Figure 1: **Mobility stratification matrix \(M_{i,j}\).** The structure of empirical socioeconomic stratification in London is visualised in matrix form, composing the visit probabilities of individuals in each class to places located in various other classes. Fig. 1a reveals that larger visit proportions appear as bins with lighter colour grades along the diagonal elements across periods: Before Lockdown (BL), Lockdown (L1/L2), and Reopening (R). The strength of assortative mixing is quantified by a correlation coefficient between \(i\) and \(j\) denoted as \(r\). We find stronger diagonal concentration during lockdown, denoting considerable visits to locations within one's own SES. Therefore, enforcing lockdown levels up assortative mixing. This is considered a change in mobility preference due to NPIs. Fig. 1b is constructed by implementing a sliding window algorithm. For every 1-week window with a 1-day slide interval, a mobility matrix is generated with computed \(r\). Increasing \(r\) overlaps with the lockdown periods. Colour shades of lines and blocks denote the city.

### Segregation dynamics due to external shocks

As we can see in Fig. 1b, the assortativity index \(r\) sensitively reflects changes in mobility segregation during different intervention periods. More prominently, the implementation of lockdown (L1 and L2) harnessed mobility at large and encouraged people to visit POIs within their own socioeconomic spectrum. This leads the coefficient \(r\) to reach its peak at 0.608 during the first lockdown (L1) in London, after a 46% increase from its baseline level at 0.416. In this city, mobility was reintroduced during reopening (R1), and visiting more places became possible again. The chance for higher socioeconomic mixing in mobility opened up, resulting in a lower \(r\) at 0.474. However, it did not retrieve its original level before lockdown, remaining 14% higher than the baseline level. A weaker impact of lockdown was found during the second phase (L2), resulting in an \(r\) at 0.461, still 11% higher than the baseline level. We recognise this phenomenon as induced assortativity. Similar matrices computed for other urban areas are presented in SM Section A.1. The general overview of assortativity dynamics in Fig. 1b indicates that increased mobility assortativity is found in all investigated cities except New York. From the implementation of the lockdown policy onward, an increase in the \(r\) value in Bogota was visible, with the highest value recorded at 0.613 during the first phase of lockdown. This suggests a large spike of visits to places located in one's own socioeconomic status. In the following periods, the \(r\) value tended to stabilise around 0.5, still higher than the baseline level. 
In Jakarta, once the lockdown was introduced, the \(r\) value hovered around 0.6 in the periods that came after. The intermittent reopening phase only decreased the \(r\) value temporarily, and it surged again once the second phase of lockdown was taken into account. In the end, the \(r\) value was still twice as large as the original magnitude before lockdown. Mobility assortativity in New York remained relatively stable over time, without any significant temporal cycle. This invariant pattern in New York could be attributed to the imbalanced and asymmetric mobility between the five boroughs within its territory: Manhattan, Brooklyn, Queens, Bronx, and Staten Island. In related studies, Rajput and others [36] state that stay-at-home orders implemented in the midst of the COVID-19 outbreak disturbed 80% of typical daily movement within the city of New York from as early as the second half of March 2020. Recalling that Manhattan is the epicentre of the city's human dynamics, where various mobility motifs and activities occur, we observe the case of Manhattan separately, along with mobilities within and between the other boroughs of New York. Our results are summarised in SM Section E to clarify the upsurge in assortativity during lockdown that was already found in other cities.

### Residual isolation

To further refine the observation related to the changing segregation pattern, we measure the presence of _residual isolation_. Ultimate recovery is expected when the mobility pattern and assortative mixing during the reopening stage are on the same level as before lockdown. If such conditions hold, the sudden changes triggered by an external shock, namely the COVID-19 outbreak, might only carry a short-term effect without inducing any lasting barrier for people to return to the normal pre-pandemic configuration. To quantify such effects we define the _mobility adjustment matrix_ \(S_{i,j}=M_{i,j}^{t_{1}}-M_{i,j}^{t_{2}}\), set by taking the difference between the _mobility stratification matrices_ \(M_{i,j}\) of two consecutive periods, for instance between the baseline period \(M_{i,j}^{BL}\) and the first lockdown \(M_{i,j}^{L1}\). Therefore, each matrix element of \(S_{i,j}\) entails the difference in the proportion of visit frequencies between a pair of consecutive periods, as seen in Fig. 2. Fig. 2a reveals the difference between the pair of intervention periods before lockdown and the first lockdown, inferring that the first lockdown is the most stringent among the interventions. It tells us that the induced assortativity develops into isolation. In the case of London, the upper diagonal elements of \(S_{i,j}\) are dominated by negative values, indicating far fewer visits to places located in higher socioeconomic classes during the first lockdown as compared to the baseline level. The arrival of the second lockdown period pushes the visiting proportion to higher SES places to a lower level again, but not as strongly as in the first lockdown. The relaxation of mobility restrictions during the reopening period increases the visits to these places to an extent, although negative values are still found in some cells. A quantitative measure of residual isolation, \(\mu_{re}=\frac{\mathrm{tr}[S_{i,j}]}{|C_{P}|}\), is provided by taking the summation over the main diagonal elements of \(S_{i,j}\) and dividing the value by the number of socioeconomic classes, which is ten. It results in the average value of the matrix diagonal elements, as shown in Fig. 2b. 
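To make the bookkeeping explicit, below is a short sketch (with hypothetical function names of our own) of how \(S_{i,j}\) and \(\mu_{re}\) could be computed from two column-normalised stratification matrices.

```python
import numpy as np

def adjustment_matrix(M_t1: np.ndarray, M_t2: np.ndarray) -> np.ndarray:
    """S_ij = M_ij(t1) - M_ij(t2), e.g. t1 = before lockdown, t2 = reopening."""
    return M_t1 - M_t2

def residual_isolation(S: np.ndarray) -> float:
    """Average of the main-diagonal elements: mu_re = tr(S) / |C_P| (here |C_P| = 10)."""
    return float(np.trace(S)) / S.shape[0]
```

A \(\mu_{re}\) that stays away from zero when comparing the baseline period with reopening is then read as residual isolation.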
In each city, even New York, individuals during lockdown restrict their preference, in the extreme degree, to being present in areas within their own socioeconomic boundary more than they used to. As reopening is imposed after the first lockdown, the pattern is reversed. The difference between reopening and the second lockdown is very subtle. Interestingly, the reopening is not necessarily able to restore the typical configuration from before lockdown. We still see negative values along the main diagonal traces, some even beyond -0.2, as shown by the negative diagonal gradient, revealing the existence of a residual isolation effect. In Jakarta, people tend to concentrate close to 30% more of their activities in the class they belong to. The average residual effect in Bogota is captured around 20%, and nearly 10% in New York. However, the reopening (compared to before lockdown, BL-R) does not directly bring \(\mu_{re}\) to zero in any of the cities we observe, indicating prevalent residual isolation. Weaker average residual isolation is found after removing local visits (see SM Section B.2), which pushes \(\mu_{re}\) to be distributed closely around zero.

Figure 2: **Mobility adjustment matrix \(S_{i,j}\).** It shows the difference in the stratified mobility pattern between a period during the pandemic, namely Lockdown (L1/L2) or Reopening (R), and Before Lockdown (BL). Green shades indicate more visits made before the enforcement of lockdown, white blocks constitute equal visits, and brown blocks appear otherwise. Therefore, we observe a contrasting proportion in the upper diagonal elements in London, as visits to these places touch their lowest level in L1 relative to BL, burst in R1, and drop in L2 (Fig. 2a). Residual isolation effects are measured by the average value of the main diagonal trace in each matrix, \(\mu_{re}\). A comparative measure across cities in terms of the average residual isolation effect \(\mu_{re}\) is provided in Fig. 2b. The purple block shows the difference between the before-lockdown baseline and the reopening stage.

### Restriction and behavioural effects

The pandemic brings another complexity to the way people move from one location to numerous others across space. During the COVID-19 outbreak, mobility is not merely driven by established personal preference but also by the supplementary necessity to align with prescribed mobility restrictions. With this in mind, we look at heterogeneities of the where-to-go decision from two different aspects: spatial and socioeconomic composition. We use entropy-based measures, which we develop on top of Shannon's formula, to quantify the heterogeneity of mobility traces. Here we define the _spatial mobility entropy_ \(H_{m}(X)=-\sum_{x\in X}p_{(x)}\log_{2}p_{(x)}\), where \(x\in X\) ranges over geolocations, and the _SES mobility entropy_ \(H_{s}(X)=-\sum_{x\in X}p_{(x)}\log_{2}p_{(x)}\), where \(x\in X\) ranges over socioeconomic classes. In the formalisation of the _spatial mobility entropy_ \(H_{m}(X)\), we compose a scalar for each individual trajectory containing the geographic locations of places visited by a single person. For the _SES mobility entropy_, we replace the geographic location information with the socioeconomic classes the visited places belong to. In both types of entropy, lower values correspond to a higher domination of particular locations (or SES of locations) in the visit pattern, signalling extensive locational (or socioeconomic) isolation. Given that the measure is normalised by period, the upper cut-off is 1 (absolute heterogeneity) and the lower cut-off is 0 (absolute homogeneity). 
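Below is a minimal sketch of the entropy computation just described; normalising by the maximum attainable entropy of the observed support is one plausible reading of the per-period normalisation, and the toy trajectories are purely hypothetical.

```python
import numpy as np
from collections import Counter

def mobility_entropy(labels) -> float:
    """Normalised Shannon entropy of one trajectory; `labels` holds either
    location IDs (for H_m) or SES classes of visited places (for H_s)."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    H = -(p * np.log2(p)).sum()
    # scale into [0, 1] by the maximum entropy of the observed support
    return H / np.log2(len(counts)) if len(counts) > 1 else 0.0

print(mobility_entropy(["A", "A", "B", "C"]))  # spatial: location identifiers
print(mobility_entropy([3, 3, 3, 7]))          # SES classes of the visited places
```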
Formal formulation of the entropy is available in Materials and Methods (Section 4.4). As shown in Fig. 3a and b, in London we deal with four phases of the pandemic: Before Lockdown (BL), Lockdown I (L1), Reopening (R1), and Lockdown II (L2). While Fig. 3a reveals the distribution of the degree of locational mixing in individual trajectories, Fig. 3b proceeds in a similar way but emphasises the socioeconomic setting of those visited locations. In both figures, the skewness of the curve moves to the left (towards zero) in the first lockdown (light green), as it does in the second lockdown (dark green). It points out the tendency to uphold more homogeneous visiting patterns. With respect to the spatial scale, urban explorability drops once policies limiting mobility flows are implemented. Consequently, the set of visited places becomes narrower (centred on a smaller set of places) and more localised (closer to where home is located). A similar pattern also holds with regard to socioeconomic range. As the set of locations is shrunken by distance, it becomes highly concentrated on the particular socioeconomic level that reflects one's own well-being. We check the magnitude of this shift by computing the average value (\(\mu\)) and standard deviation (\(\sigma\)) of the two entropy distributions for the different cities. In Fig. 3c, the initial phase of lockdown (L1) renders the mobility pattern locationally more homogeneous, since the spatial mobility entropy \(H_{m}(X)\) is lower than in the before-lockdown period (BL). Spatial concentration largely happened in Bogota during L1, with the average value reaching 0.35. Jakarta recorded an average spatial diversity of 0.37. In addition, the average values in New York and London were around 0.4 and 0.5, respectively. The reopening phase that follows (R1) does not bounce the variability of locational and socioeconomic preference back to its original level before lockdown, even though it moves in the direction of recovery. Compared to the spatial mobility entropy, the SES mobility entropy \(H_{s}(X)\) in Fig. 3d suffers even graver repercussions from the outbreak, as \(\mu\) ranges from about 0.5 to lower values. During L1, people in Bogota and Jakarta experience deeper socioeconomic isolation as \(H_{s}(X)\) falls below 0.2. London is close to 0.35, while New York is around 0.4.

Figure 3: **Spatial and SES mobility entropy.** Spatial mobility entropy \(H_{m}(X)\) (Fig. 3a) takes into account the heterogeneity of places in an individual trajectory, with values ranging from 0 (visiting the same locations) to 1 (visiting various locations). SES mobility entropy \(H_{s}(X)\) (Fig. 3b) follows a similar computation after replacing the set of locations with the socioeconomic status of the areas where those places are located, implying visit variation between socioeconomic isolation (0) and socioeconomic diversity (1). In London, we observe less heterogeneity in both the locations and the socioeconomic status of places visited by individuals during lockdown. Even after some relaxations are allowed, people do not experience mobility at the pre-pandemic level. Similar observations also become evident in other cities globally (Fig. 3c).

### Mobility interventions

To this point, we have revealed residual isolation effects of shocks even after mobility restrictions were gradually lifted. However, the kind of restriction that significantly contributes to such a configuration is still unknown. 
Data on NPIs [32] contain the strictness level of each of the \(k=9\) restriction categories over time, including the closing of main venues such as schools, workplaces, and others. For a complete list see Table 1 in SM Section A. We weight the impact of the restrictions listed as NPIs by running a multivariate linear regression where the dependent variable is an entropy (\(H_{m}(X)\) or \(H_{s}(X)\)) and the independent variables are the stringency levels of each restriction \(s_{k}\in S_{K}\). The methodological definition of this approach is further explained in Materials and Methods (Section 4.5). Individual exploration occurs not only over the socioeconomic dimension, but also across physical space. Therefore, the enforcement of mobility-restricting NPIs also reduces the socioeconomic diversity of visited places. Indeed, from the results shown in Fig. 4, the public information campaign (H1/light purple) is the most preponderant in each city, simultaneously affecting mobility in terms of the spatial and socioeconomic diversity of visited places. However, the magnitude that the public information campaign brings to mobility is not uniform between physical and socioeconomic space. The covariates ratio is defined as \(\beta_{m,s}=\frac{\beta_{H_{m}(X)}^{\mu}}{\beta_{H_{s}(X)}^{\mu}}\) to indicate the relative impact of a type of restriction on those two aspects of exploration. Once this restriction is imposed in London, for instance, its impact on the shrinking spatial diversity of individual trajectories is 3.33 times higher. This ratio is 3.08 in Bogota and 3.47 in Jakarta. Meanwhile in New York, the cancellation of public events (C3) concurrently diminishes spatial exploration 1.33 times more than socioeconomic exploration.

Figure 4: **Multivariate regression.** The effectiveness of NPIs in constraining spatial exploration \(H_{m}(X)\) (Fig. 4a) and socioeconomic exploration \(H_{s}(X)\) (Fig. 4b) is presented as covariates \(\beta\). In all cities except New York, the public information campaign (H1/light purple) is the most influential instrument, strongly affecting both spatial and socioeconomic exploration. The \(R^{2}\) of the respective regression models, namely for \(H_{m}(X)\) and \(H_{s}(X)\), differs across cities. The nine types of restrictions explain around 59% to 76% of the variance in spatial exploration, and a lower share, from 36% to 47%, of the variance in socioeconomic exploration.

Looking at the \(R^{2}\), we find that the overall values are lower for the model with the SES mobility entropy \(H_{s}(X)\) as dependent variable, compared to the one fitted on the spatial mobility entropy \(H_{m}(X)\). We compute ratio values of the \(R^{2}\) for \(H_{m}(X)\) over the \(R^{2}\) for \(H_{s}(X)\), formally expressed as \(R^{2}_{m,s}=\frac{R^{2}_{H_{m}(X)}}{R^{2}_{H_{s}(X)}}\). In Bogota, the same set of NPIs explains a much higher share of the variance of \(H_{m}(X)\), 1.76 times more than the variance of \(H_{s}(X)\). A similar range of ratio values of \(R^{2}_{m,s}\) is also obtained in London (2.10), Jakarta (1.93), and New York (1.25). As the results show, the composition of socioeconomic preferences over places in individual visiting patterns is still largely shaped by unobserved factors other than mobility restrictions, which could indicate that socioeconomic exploration incorporates more complex dimensions than the delineation of spatial boundaries alone. 
## 3 Discussion and conclusions

In this study, we took a step towards analysing the impact of the COVID-19 outbreak on the structural preferences reflected in mobility patterns by looking at the mobility dynamics in Bogota, Jakarta, London, and New York. We found that in-class visits dominate the mobility pattern in every temporal snapshot, ranging from before lockdown, through lockdown, to reopening. Assortative behaviour was also detected, as the assortativity coefficient \(r\) remained highest during lockdown. Subsequently, the onset of reopening did not directly bring the typical mobility mixing pattern back to the original level observed before the enforcement of lockdown, indicating the existence of a residual isolation effect. We further measured the degree of residual isolation by comparing the stratification of mobility patterns between two consecutive periods (see Fig. 2a). It validated the presence of residual isolation effects, whereby visits within one's own class during reopening are still more frequent than the usual rate. Another feature of isolation in mobility presented in this study is the decreasing heterogeneity of the where-to-go decision along two distinctive aspects: spatial and socioeconomic composition (see Fig. 3). Entropy measures revealed that visits became highly concentrated on particular locations and socioeconomic classes. To understand which types of NPIs constrain mobility across the time window, we proposed a multivariate regression model composing all mobility restrictions to examine their magnitudes in shaping the diversity configuration of visiting patterns. In all cities except New York, we observed that the public information campaign (H1) gained the highest importance among all types of restrictions. The observed variability in magnitude could be related to the structure of the urban fabric in the respective city as well as the level of socioeconomic well-being. Apart from the computations demonstrated to this point, we note that stronger evidence for residual isolation in the longer term could be presented if access to more recent data were available. Our latest data only covers the initial period of reopening, when NPIs and COVID-19 protocols were still at the frontier of controlling the outbreak, relying solely on behavioural conformity and attitudes towards mask wearing and social distancing, without any intervention from vaccination policy. Another limitation that we would like to underline concerns the direct comparison between cities. This issue arises due to the different metrics and levels of spatial resolution we use to define the SES indicators, which strongly depend on the availability of data. This study contributes scientifically by refining the impact of the pandemic on the reorganisation of mobility segregation. It allowed us to comprehensively understand the potential occurrence of residual isolation during pandemic interventions at higher spatial and temporal resolution. Moreover, it taps a pivotal aspect of societal impact, as the additional detrimental effects induced by residual isolation might not be equally distributed across socioeconomic classes, indicating a higher vulnerability faced by lower socioeconomic classes that should be better mitigated by adaptive policy design in the future. Therefore, as a future goal, we consider it important to conduct class-wise analysis to study how different classes are impacted differently. 
## 4 Materials and methods

### Data description

Mobility data is provided by Cuebiq, a location intelligence and measurement platform. Data were shared under a strict contract with Cuebiq through their Data for Good COVID-19 Collaborative program, where they provide access to de-identified and privacy-enhanced mobility data for academic research and humanitarian initiatives only. Mobility data are derived from anonymous users who opted to share their data anonymously through a General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) compliant framework. All final outputs provided to partners are aggregated in order to preserve privacy. The aggregation procedure is specified as data upleveling, where some proportion of real locations is deterministically shuffled within a Census Block Group (CBG) in the US or at geohash level 6 in other countries. This protocol aims to mitigate the risk of re-identification without affecting the analysis in this study, since we infer socioeconomic status at a broader spatial delineation, namely the census tract level, as we discuss further in the following section. In the analysed dataset, the starting point for all observed cities is January 2020. Bogota retains the longest temporal observation, until May 2021, followed by London (February 2021), Jakarta (December 2020), and New York (July 2020). Each individual in every city has a set of trajectories constituting timestamps (start and end) whenever detected at a certain location (latitude and longitude). We focus on the mobility traces of people whose home locations are successfully identified at the census tract level, as discussed in detail in Materials and Methods (Section 4.2). In Bogota, there are approximately 55,000 people contributing 25 million trajectories. The number of people varies among cities, as do the total trajectories: Jakarta (around 65,000 people/26 million trajectories), London (almost 200,000 people/115 million trajectories), and New York (about 277,000 people/30 million trajectories). To check the general reproducibility of the mobility pattern in New York, we also use the SafeGraph dataset [37], which is available at a coarser resolution (census tract level) and with longer temporal coverage (until May 2021), as presented in SM Section F. We overlay a socioeconomic layer on top of the existing mobility layer. Income-related features are suited for this purpose. In Bogota, the multidimensional poverty index [28] at the urban-section level developed by the Colombian Bureau of Statistics (DANE) becomes the basis for the socioeconomic status computation. It captures quite comprehensive dimensions of individual well-being: health, education, utilities and housing, as well as employment. A simpler version of the poverty index, called the poverty rate [29], is used in Jakarta at village-level resolution, taking the proportion of people living below a particular amount of average monthly income. Meanwhile, the socioeconomic configuration of London and New York is mapped respectively based on total annual income recorded by the Office for National Statistics (ONS) [30] in 2015 at the middle layer super output area (MSOA) level and per capita income in 2018 at the census tract level taken from the American Community Survey (ACS) [31]. In each city, we group the people by the income distribution in the dataset into 10 equally populated groups from the lowest SES/poorest (1) to the highest SES/richest (10). 
It should be taken into account that a direct comparison between cities cannot be fully established because of the diverse characterisation by non-identical SES indicators and the different spatial resolutions they are provided at. Nevertheless, comparison across periods of the same city is possible to derive in this context. To synchronise the movement along mobility points and to derive observable structural breaks in mobility patterns induced by the epidemiological outbreak and the policies coming after, we refer to the stringency index of the Oxford COVID-19 Government Response Tracker (OxCGRT) dataset [32]. We validate this with the actual implementation at the city level to ensure policy alignment between national and local governments.

\begin{table} \begin{tabular}{l c c} \hline Urban Area & Number of People & Number of Trajectories \\ \hline Bogota & 55,000 & 25 million \\ \hline Jakarta & 65,000 & 26 million \\ \hline London & 200,000 & 115 million \\ \hline New York & 277,000 & 30 million \\ \hline \end{tabular} \end{table} Table 1: **Sample size**. We have different sample sizes across cities, but each preserves the temporal representation of the pandemic cycle: before lockdown, lockdown, and reopening.

### Algorithm pipeline and inference

We construct an algorithm to detect home and POI (non-home) locations. Our methodology combines spatial and temporal attributes such as the frequency of visits, the time window of visits, as well as the duration of stay at given locations. We take a further step to infer the socioeconomic status of each person (based on home location and POI) by performing spatial projection and merging it with demographic data (average income) from the bureaus of statistics.

Figure 5: **Inference Algorithm.** Mobility data contains information regarding the whereabouts of people, namely geographic locations and timestamps (trajectory). Demographic data covers the average income of a given spatial unit (e.g., census tract). We build an algorithm to separate home \(u\) and POI locations \(p\) and identify the inferred income based on their spatial delineation. Discretisation of the distribution of inferred income results in two separate SES labels: SES People \(i\) and SES POI \(j\).

_Home Location:_ Detecting the home location is a primary step in dealing with mobility data because this spatial identifier serves as intermediary information that allows coupling heterogeneous sources of data, including census data. Various decision rules have been developed to identify where people reside. In the mobility literature, single-rule home detection algorithms are widely applied to both continuous (e.g., global positioning system/GPS data) and non-continuous location traces (e.g., call detail record/CDR data) [25, 26, 27]. Home is defined as the location where the highest proportion of activities occurs during night hours, with variations regarding the time window. To compensate for the unavailability of ground truth to be used as a validation set, we design a more conservative algorithm for determining the home location by combining these criteria: a point where an individual is mostly located between 9PM and 6AM, for an uninterrupted duration of at least 6 hours. This results in home locations being successfully identified for 50% of the people in our dataset. 
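A minimal sketch of the night-time home inference rule described above follows; the record format (location, start hour, stay duration in hours) and the example stays are simplified, hypothetical stand-ins for the real trajectory data.

```python
from collections import Counter

def infer_home(stays):
    """Most frequent location observed between 9PM and 6AM,
    keeping only uninterrupted stays of at least 6 hours."""
    def starts_at_night(hour):
        return hour >= 21 or hour < 6
    night = [loc for (loc, start, dur) in stays
             if starts_at_night(start) and dur >= 6]
    return Counter(night).most_common(1)[0][0] if night else None

stays = [("cell_42", 22, 8), ("cell_42", 23, 7), ("office_7", 9, 6)]
print(infer_home(stays))   # -> "cell_42"
```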
_POI Location:_ Apart from home, individual human activities revolve around other areas for various reasons, including work. Trips between home and work locations dominate daily mobility, while visits to other locations are broadly distributed with short inter-event times [38]. We set the criteria for a POI location as a place other than home where people with identified home locations are present during weekdays from 9AM until 3PM. Afterwards, the rest of the locations that do not fall into either the home or work category are labelled as others.

_Socioeconomic Status (SES):_ We assign an SES label to every individual and POI based on socioeconomic data. The first step for the SES of people is to identify the socioeconomic features of the area where they live (home location). Similarly, the SES of a POI is inferred by mapping out the area where the points (work and other locations) are spatially positioned. We sort the values in ascending order and split them into equally populated bins of 10 SES labels, making SES 1 the poorest and SES 10 the richest.

### Mobility matrix

In Section 2, we rely on the basic formulation of stratification extracted from the _mobility stratification matrix_ \(M_{i,j}\) that is defined based on the _mobility network_ \(G=(U,P,E)\). The network \(G\) is a bipartite graph that connects a person \(u\) from the node set \(u\in U\) and a POI \(p\) from the node set \(p\in P\) if \(u\) visited \(p\), represented as an existing link \(e_{u,p}\in E\). The frequency of visits is counted as the edge weight \(w_{u,p}\). Stratification is introduced in the network by labelling a class membership \(c_{u}=i\in C_{U}\) for every person and \(c_{p}=j\in C_{P}\) for every POI based on their inferred income. As defined earlier in [23], we have:

\[M_{i,j}=\frac{\sum_{U,c_{u}=i}\sum_{P,c_{p}=j}w_{u,p}}{\sum_{j\in C_{P}}\sum_{U,c_{u}=i}\sum_{P,c_{p}=j}w_{u,p}}, \tag{1}\]

where the probability of the frequency of visits (the matrix elements) is generated by column-wise normalisation (over SES People \(i\)) of the frequency matrix. For an example of a mobility stratification matrix see Fig. 1. Given a pair of _mobility stratification matrices_ \(M_{i,j}\) in two consecutive periods, we define the _mobility adjustment matrix_ \(S_{i,j}\), whose matrix elements entail the difference in the proportion of visit frequencies. More formally:

\[S_{i,j}=M_{i,j}^{t_{1}}-M_{i,j}^{t_{2}}, \tag{2}\]

where \(t_{1}\) denotes the initial period and \(t_{2}\) is the succeeding rolling period. For instance, if we have three periods, namely Before Lockdown (BL), Lockdown (L1) and Reopening (R1), we can generate three \(S_{i,j}\) respectively:

\[S_{i,j}^{BL-L1}=M_{i,j}^{BL}-M_{i,j}^{L1}, \tag{3}\]
\[S_{i,j}^{L1-R1}=M_{i,j}^{L1}-M_{i,j}^{R1}, \tag{4}\]
\[S_{i,j}^{BL-R1}=M_{i,j}^{BL}-M_{i,j}^{R1}, \tag{5}\]

where \(S_{i,j}^{BL-R1}\) shows the difference between the period before the enforcement of lockdown and reopening (the removal of some mobility restrictions post-lockdown). The result of this computation is provided in Fig. 2. The degree of socioeconomic isolation is computed via the assortativity of the mobility stratification matrix. This _mobility assortativity coefficient_ \(r\)[22, 34, 35] is computed as the Pearson correlation between the row \(i\in c_{u}\) and column \(j\in c_{p}\) indices:

\[r=\frac{\sum_{i,j}ij\,M_{i,j}-\sum_{i,j}i\,M_{i,j}\sum_{i,j}j\,M_{i,j}}{\sqrt{\sum_{i,j}i^{2}M_{i,j}-\left(\sum_{i,j}i\,M_{i,j}\right)^{2}}\sqrt{\sum_{i,j}j^{2}M_{i,j}-\left(\sum_{i,j}j\,M_{i,j}\right)^{2}}}, \tag{6}\]

where \(M_{i,j}\) is renormalised to sum to one before taking the correlation. Values closer to 1 indicate a higher concentration of visited venues within one's own socioeconomic range, while values at the lower cutoff of -1 reveal the tendency to visit places outside one's own class. If the value is equal to 0, this measure indicates dispersion in the visiting pattern throughout classes, without any structural preference regarding the socioeconomic status of places. 
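As a minimal illustration of Equations (1) and (6), the following sketch builds a column-normalised stratification matrix from a synthetic visit list and evaluates the assortativity coefficient; the random data stand in for the real (non-public) trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_users = 10, 1000
c_u = rng.integers(0, n_classes, size=n_users)     # SES class of each person

counts = np.zeros((n_classes, n_classes))          # rows: SES of place j, cols: SES of people i
for u in range(n_users):
    for c_p in rng.integers(0, n_classes, size=5): # five synthetic visits per person
        counts[c_p, c_u[u]] += 1.0

M = counts / counts.sum(axis=0, keepdims=True)     # column-wise normalisation, Eq. (1)

# Pearson-style assortativity, Eq. (6), after renormalising M to sum to one
w = M / M.sum()
j_idx, i_idx = np.indices(w.shape)                 # row index j (places), column index i (people)
mi, mj = (w * i_idx).sum(), (w * j_idx).sum()
cov = (w * i_idx * j_idx).sum() - mi * mj
sd_i = np.sqrt((w * i_idx**2).sum() - mi**2)
sd_j = np.sqrt((w * j_idx**2).sum() - mj**2)
print(round(cov / (sd_i * sd_j), 3))               # ~0 for this unstructured toy data
```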
If the value equals 0, the measure indicates dispersion in visiting patterns throughout classes, without any structural preference regarding the socioeconomic status of places.

### Mobility entropy

Mobility entropy is measured on the basis of the generic Shannon formula [39]. In the context of mobility, entropy can be employed to quantify the predictability of a visiting pattern. Generally, higher entropy corresponds to lower predictability, reflecting a more heterogeneous preference over places to visit in an individual trajectory. First, we define the _spatial mobility entropy_ \(H_{m}(X)\), where \(m\) denotes spatial mobility at the individual level in Fig. 5a, as:

\[H_{m}(X)=-\sum_{x\in X}p(x)\log_{2}p(x)=E[-\log_{2}p(X)], \tag{7}\]

where \(x\) is a discrete random variable representing a geographic location from the set \(X\) of all possible POI locations visited by a person. We replicate the above formulation to measure the _SES mobility entropy_ \(H_{s}(X)\) in Fig. 5b:

\[H_{s}(X)=-\sum_{x\in X}p(x)\log_{2}p(x)=E[-\log_{2}p(X)], \tag{8}\]

where \(x\) is replaced by a discrete random variable representing the SES of the POIs a user visited. The value is normalised for each period, so that the maximum value 1 and minimum value 0 are comparable across temporal snapshots. The upper bound \(H_{m}(X)=1\) implies sporadic visits to heterogeneous POI locations, while the lower bound \(H_{m}(X)=0\) indicates a homogeneous visiting pattern concentrated on a limited set of POI locations. In parallel, \(H_{s}(X)=1\) (heterogeneous SES POI) indicates visits to places located in various socioeconomic classes, and \(H_{s}(X)=0\) signifies a visiting pattern characterised by a strictly preferred socioeconomic class (homogeneous SES POI).

### Restriction impact

We aim to identify the kinds of restriction that contribute significantly to changes in the diversity of visiting patterns and to quantify the magnitude of the effect brought by those interventions. To disentangle the effectiveness of each type of restriction, we set up a multivariate linear regression model. There are \(k=[1,...,9]\) restrictions listed as NPIs, respectively: closings of schools and universities (C1), closings of workplaces (C2), cancelling public events (C3), limits on gatherings (C4), closing of public transport (C5), orders to stay at home (C6), restrictions on movement between cities/regions (C7), restrictions on international travel (C8), and presence of public information campaigns (H1). The stringency value \(S\) for every restriction in each temporal snapshot is obtained from the OxCGRT dataset and used as an independent variable. The dependent variables are the two types of mobility entropy, computed separately: geographic space-based \(H_{m}(X)\) and socioeconomic space-based \(H_{s}(X)\). To further understand the impact magnitude of a single restriction \(k\in K\) at timestamp \(t\in T\), we fit the data to this form:

\[H_{m}(X)^{t}\sim\{S_{k}^{t}\} \tag{9}\]

\[H_{s}(X)^{t}\sim\{S_{k}^{t}\}. \tag{10}\]

In the equations above, \(\{S_{k}^{t}\}\) denotes a set of variables representing each type of mobility restriction in the NPIs. The regression coefficients indicate the magnitude of the restriction's impact on segregation. In detail, negative coefficients imply a reduction in the degree of individual spatial and socioeconomic exploration due to the respective mobility restriction. Therefore, the ratio between a pair of restriction coefficients allows us to compare their impact sizes. A minimal sketch of these computations follows below.
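The entropy measures in Eqs. (7)-(8) and the regressions in Eqs. (9)-(10) can be sketched as follows. The per-sample normalisation used here (dividing by the maximum attainable entropy) is one plausible choice and an assumption, as are the use of statsmodels OLS and the variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def mobility_entropy(visited: pd.Series) -> float:
    """Eqs. (7)-(8): Shannon entropy of a user's visited POI locations (for
    H_m) or of the SES classes of those POIs (for H_s), rescaled to [0, 1]."""
    p = visited.value_counts(normalize=True).to_numpy()
    H = -(p * np.log2(p)).sum()
    k = len(p)
    return H / np.log2(k) if k > 1 else 0.0

def restriction_impact(entropy: pd.Series, stringency: pd.DataFrame) -> pd.Series:
    """Eqs. (9)-(10): regress entropy at time t on the nine OxCGRT
    stringency indicators S_k (columns C1..C8, H1) at time t."""
    model = sm.OLS(entropy, sm.add_constant(stringency)).fit()
    return model.params  # negative coefficients = reduced exploration
```

Fitting this once with \(H_{m}(X)^{t}\) and once with \(H_{s}(X)^{t}\) as the dependent variable yields the two sets of coefficients whose ratios are compared in the text.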
Acknowledgement: The authors are thankful to CUEBIQ for providing access to the mobility data. MK acknowledges support from the DataRedux ANR project (ANR-19-CE46-0008), the SoBigData++ H2020 project (SoBigData++ H2020-871042), the SAI Horizon 2020/CHIST-ERA project that was supported by FWF (I 5205-N), the EmoMap CIVICA project, and the National Laboratory of Health Security (RRF-2.3.1-21-2022-00006).

Author contributions: RMH, VS, MGH and MK conceived the study. RMH and MK designed the methodology and analyzed the data. RMH, VS, MGH and MK wrote the manuscript with input from all co-authors.

## Supplementary Materials

### Data and Pipeline

Human mobility captures multi-layer information with high spatiotemporal resolution. Beyond physical movement from one point to millions of others, it summarises individual behavioural dynamics in exploring spatial boundaries. In order to make meaningful observations related to individual mobility patterns within the urban landscape, we map out the socioeconomic condition of people and the places they visit by inferring income-based metadata gathered from the bureaus of statistics of the respective locations. This method allows us to comprehensively analyse two aspects of individual trajectories over places: their spatial and socioeconomic status (SES) distribution. We construct a pipeline comprising data collection, data processing, and data analysis, as depicted in Fig. 6. The mobility interventions are retrieved from the Oxford Covid-19 Government Response Tracker (OxCGRT) dataset, containing nine categories over the pandemic period. The list of interventions is provided in Table 2.

## Appendix B Mobility stratification matrix

### All visits

The distribution of visit frequencies with regard to socioeconomic stratification between SES People \(i\) and SES POI \(j\) is conceptually introduced in Section 5.2 as the Mobility Stratification Matrix \(M_{ij}\). Normalisation is performed by own SES (column-wise). Fig. 7 reveals the generic pattern in which assortative mixing increases during the lockdown, as an increasing \(r\) is found across cities. It reflects the extent to which individuals respond to the pandemic by reorganising their typical mobility configuration. In cases where more than one period of lockdown appears (L1 and L2), the first seems to be stronger in inducing the isolation effect. Once the reopening (R1) phase starts, assortative visiting remains higher than the level before lockdown (BL).

### Without home area visit

We repeat the procedure used to generate Fig. 7 after excluding local visits to one's own neighbourhood, generating the Mobility Stratification Matrix for visits outside the home area, \(Mc_{ij}\). This step serves as a robustness control for the persistent assortative mixing. In Fig. 8 we see that the first lockdown is still the most stringent, as it alters the preference towards visiting more places within one's own socioeconomic class. Compared to Fig. 7, the assortativity coefficient \(r\) is in general considerably lower, indicating that short-distance visits in the surrounding neighbourhood account for a considerable proportion of the mobility pattern.

## Appendix C Mobility adjustment matrix

The mobility adjustment matrix \(S_{ij}\) is constructed to detect indications of a residual isolation effect. We operationalise the computation described in Section 5.2, in which the difference in the proportion of visit frequencies between two consecutive periods is visible in Fig. 9.
### All visits

None of the cities in this study exhibits full recovery after the occurrence of reopening, as the bin colours remain in brown shades, indicating a larger ratio of visits to places in one's own socioeconomic class compared with the before-lockdown period. This leads to the notion of a residual isolation induced by the COVID outbreak.

\begin{table} \begin{tabular}{l r} \hline Code & Description \\ \hline C1 & School closing \\ \hline C2 & Workplace closing \\ \hline C3 & Cancel public events \\ \hline C4 & Restrictions on gatherings \\ \hline C5 & Close public transport \\ \hline C6 & Stay at home requirements \\ \hline C7 & Restrictions on internal movement \\ \hline C8 & International travel controls \\ \hline H1 & Public information campaigns \\ \hline \end{tabular} \end{table} Table 2: **Non-pharmaceutical interventions (NPI)**. There are nine restrictions included in this data.

### Without home area visit

The mobility adjustment matrix \(S_{ij}\) is transformed into \(Sc_{ij}\) by eliminating visits to one's own neighbourhood. This follows the same motivation as the robustness check in Appendix B, given the large share of local visits in individual trajectories. Fig. 10 aims to uncover the main source of the residual isolation effect by eliminating visits to one's own neighbourhood/home area. This procedure dilutes the magnitude of the assortativity force; we therefore interpret the residual isolation effect as a longer-term consequence of mobility localised due to COVID restrictions. Interestingly, BL-R shows a segregated visiting pattern: before lockdown, people tend to explore more places in higher socioeconomic ranks (top rows/green shades), while during the reopening, places in lower classes contribute more to the visit proportion (brown shades) in every city. Beyond that, Bogota exhibits bimodal segregation, where dominant visits before lockdown occur not only in the upper class but also in the lower class.

Figure 7: **Mobility Stratification Matrix for all visits \(M_{ij}\).** Matrix elements in Fig. 7a-d represent the magnitude of visit frequencies for each pair of SES People \(i\) and SES POI \(j\), where lighter colours show larger visit proportions. All locations found in individual trajectories are taken into account.

Figure 8: **Mobility Stratification Matrix for visits outside home area \(Mc_{ij}\).** The proportion of visit frequencies of people from SES \(i\) to places in SES \(j\) is computed after removing places located in one's own neighbourhood. The lighter the bin colour, the higher the visit probability.

Figure 9: **Mobility Adjustment Matrix for all visits \(S_{ij}\).** The difference in visit probability between a pair of two consecutive Mobility Stratification Matrices \(M_{ij}\) is measured. The presence of white bins indicates an indifferent visiting pattern, while green shows more visits during the first period; otherwise, brown shades appear. All locations found in individual trajectories are taken into account.

## Appendix D Mobility entropy

### Spatial mobility entropy

The heterogeneity of places visited by an individual is quantified by the Spatial Mobility Entropy \(H_{m}(X)\) proposed in Section 5.3. The value may disperse towards 0, signifying a strict preference for particular locations over the rest and making the trajectory spatially more homogeneous. In contrast, as the value approaches 1, no strict preference is presumed and visits are widely distributed across locational space.
We find that people become more restricted in deciding which locations to visit, as the average value of \(H_{m}(X)\) hits its lowest point ever in all cities. The introduction of the reopening phase does not directly bounce the value back to the normal pre-lockdown level, in line with the conditions suggested in Fig. 7 and Fig. 9.

### Socioeconomic mobility entropy

In this section, we redo the computation of trajectory heterogeneity in terms of the socioeconomic factor, based on the entropy formulation in Section 5.3. To measure the Socioeconomic Mobility Entropy \(H_{s}(X)\), we substitute the geolocation feature with the SES of places. The result in Fig. 12 confirms the previous finding that people have stricter preferences over places during lockdown. This goes beyond spatial boundaries, since the socioeconomic profile of those places is now also heavily skewed, making the average value of \(H_{s}(X)\) touch its lowest record in comparison to the other periods. It therefore reaffirms the conditions stipulated in Fig. 7, Fig. 9 and Fig. 11.

Figure 10: **Mobility Adjustment Matrix for visits outside home area \(Sc_{ij}\).** Every Mobility Stratification Matrix for visits outside the home area \(Mc_{ij}\) is paired with the one in the following period. There are three patterns to detect: no difference between the two periods (white), dominant visits in the first period (green), and dominant visits in the second period (brown).

## Appendix E Robustness of mobility adjustment

We perform a robustness check of the isolation effect by applying the Kruskal-Wallis H test (non-parametric one-way ANOVA) to the Mobility Stratification Matrix both before (\(M_{i,j}\)) and after removing visits to the home area (\(Mc_{i,j}\)). The null hypothesis (\(H_{0}\)) is defined as an equal median between the before-lockdown period and a period that comes after. If the \(p\)-value is smaller than the significance level \(\alpha=0.05\), \(H_{0}\) is rejected in favour of the alternative hypothesis (\(H_{a}\)); otherwise \(H_{0}\) is retained. A minimal sketch of this test is given at the end of this section. Tables 3 and 4 provide justification for the presence of different degrees of the isolation effect due to the variability of mobility in response to the dynamics of mobility restrictions. New York stands out with a strikingly opposite pattern: a statistically significant difference is seen after removing local visits to the area where home is located, while other cities exhibit such a pattern for broad visits to any locations.

Figure 11: **Spatial Mobility Entropy \(H_{m}(X)\).** We measure the heterogeneity of individual preferences regarding the location of places visited. The presence of commonly repeated places pushes the value closer to zero, denoting a lower degree of heterogeneity. On the other hand, higher variability of locations is represented by values near 1.

## Appendix F Manhattan Effect

New York is made up of five boroughs: Manhattan, Brooklyn, Queens, Bronx, and Staten Island. Among them, Manhattan is the centre of the agglomeration of human activity. Manhattan, as the borough with the highest economic pull factors in New York, is massively affected, because the mobility disruption hits not only the movement of people inside the borough, but also the interborough movement usually found in commuting patterns to workplaces. People who reside in Brooklyn and Queens, for example, stopped commuting to Manhattan as many of them switched to working from home. This is also reflected in the lower use of public transportation and reduced levels of road traffic.
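Returning to the Kruskal-Wallis check of Appendix E, the sketch below mirrors the tests reported in Tables 3-4. The function name is hypothetical and the matrices are assumed to be NumPy arrays.

```python
import numpy as np
from scipy.stats import kruskal

def kw_test(M_a: np.ndarray, M_b: np.ndarray, diagonal_only: bool = False):
    """H0: equal medians of matrix elements across two policy periods
    (e.g., BL vs. L1); reject when p < 0.05, as in Tables 3-4."""
    if diagonal_only:
        a, b = np.diagonal(M_a), np.diagonal(M_b)
    else:
        a, b = M_a.ravel(), M_b.ravel()
    H, p = kruskal(a, b)  # returns the H statistic and its p-value
    return H, p
```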
The segregation pattern changes in response to the mobility restrictions imposed due to the pandemic. In Section 2.2, we saw that the mobility assortativity \(r\) in New York is relatively flat compared to other cities such as Bogota, Jakarta, and London, but a more substantial mechanism shaping urban human dynamics might contribute as well. In this section we take two strategies to disentangle the spatial scale. First, we focus on the area of Manhattan, where activities and mobility are heavily concentrated. Then, we analyse mobility within each of the boroughs that together form New York (intra-mobility), followed by mobility between pairs of boroughs (inter-mobility).

Figure 12: **Socioeconomic Mobility Entropy \(H_{s}(X)\).** After replacing the geolocation of places in individual trajectories with SES information, we recompute the entropy. As the value skews towards 0, the visiting pattern tends to be concentrated on a particular SES; otherwise it is somewhere close to 1.

Mobility stratification in Manhattan is visualised as a matrix in Fig. 13a. Homophilic mobility, defined as movement within one's own socioeconomic class, is 26% higher during the lockdown than before it. The emergence of the reopening phase does not directly bring back the normal condition, since homophilic mobility still exceeds the original level by 20%. Even after removing local visits (Fig. 13d), the pattern persists. This finding is consistent with the global pattern previously captured in the other cities in this study, such as Bogota, Jakarta, and London (see Section B.1). Taking a pair of matrices from two consecutive periods, we obtain another matrix showing the mobility adjustment, as seen in Fig. 13b.

\begin{table} \begin{tabular}{l r r r r r r r} \hline Urban Area & Matrix Element & BL \& L1 & L1 \& R1 & R1 \& L2 & L2 \& R2 & BL \& R1 & BL \& R2 \\ \hline Bogota & all & 1.728 & 0.795 & 0.202 & — & 6.595\({}^{*}\) & — \\ \hline Bogota & diagonal & 1.851 & 0.006 & 0.001 & — & 1.463 & — \\ \hline Jakarta & all & 6.090\({}^{*}\) & 0.006 & 0.013 & 0.160 & 4.550\({}^{*}\) & 3.252 \\ \hline Jakarta & diagonal & 1.286 & 0.051 & 0.001 & 0.281 & 0.691 & 0.966 \\ \hline London & all & 0.294 & 0.199 & 0.001 & — & 0.638 & — \\ \hline London & diagonal & 1.286 & 0.463 & 0.023 & — & 1.286 & — \\ \hline New York & all & 0.119 & 0.084 & — & — & 0.001 & — \\ \hline New York & diagonal & 0.206 & 0.051 & — & — & 0.206 & — \\ \hline \multicolumn{10}{l}{\({}^{*}p<0.05\)} \\ \end{tabular} \end{table} Table 4: **Kruskal-Wallis H Test on Mobility Stratification Matrix after removing visits to home area across pairs of policy periods (\(Mc_{i,j}\)).** Mobility patterns differ significantly between before and during the first lockdown (BL & L1) in Jakarta (for all elements), where the \(p\)-value falls below the significance level \(\alpha=0.05\), but this is not apparent in the other urban areas. A similar direction is visible between before lockdown and the first reopening (BL & R1). Strict isolation along the diagonal elements is not found anywhere, which elevates the contribution of local visits around home locations to the isolation effect.
\begin{table} \begin{tabular}{l r r r r r r r} \hline Urban Area & Matrix Element & BL \& L1 & L1 \& R1 & R1 \& L2 & L2 \& R2 & BL \& R1 & BL \& R2 \\ \hline Bogota & all & 7.556\({}^{*}\) & 3.567\({}^{*}\) & 1.664 & — & 0.435 & — \\ \hline Bogota & diagonal & 11.063\({}^{*}\) & 2.063 & 7.406\({}^{*}\) & — & 9.606\({}^{*}\) & — \\ \hline Jakarta & all & 9.135\({}^{*}\) & 5.108\({}^{*}\) & 0.748 & 0.043 & 1.504 & 2.720 \\ \hline Jakarta & diagonal & 12.091\({}^{*}\) & 10.079\({}^{*}\) & 1.651 & 0.571 & 9.143\({}^{*}\) & 9.606\({}^{*}\) \\ \hline London & all & 10.832\({}^{*}\) & 12.362\({}^{*}\) & 0.215 & — & 0.299 & — \\ \hline London & diagonal & 14.286\({}^{*}\) & 13.719\({}^{*}\) & 1.286 & — & 9.143\({}^{*}\) & — \\ \hline New York & all & 1.404 & 5.970 & — & — & 1.381 & — \\ \hline New York & diagonal & 7.406\({}^{*}\) & 0.143 & — & — & 6.606\({}^{*}\) & — \\ \hline \multicolumn{10}{l}{\({}^{*}p<0.05\)} \\ \end{tabular} \end{table} Table 3: **Kruskal-Wallis H Test on Mobility Stratification Matrix before removing visits to home area across pairs of policy periods (\(M_{i,j}\)).** The induced isolation effect largely takes place between before lockdown and the first lockdown (BL & L1): the \(p\)-values fall below the significance level \(\alpha=0.05\) in all urban areas for the diagonal elements, and in all urban areas except New York for all elements. Even after the introduction of the first reopening, the distribution of mobility patterns still does not revert to the pre-pandemic level (BL & R1).

Measures taken during lockdown affect individual preferences regarding mobility. There is an increase of at least 15% in visits to places within one's own socioeconomic range (see left matrix). Reopening happens at some point; however, no full recovery occurs. We still find that the average value of the diagonal elements is 12% higher than before lockdown (see BL-R1). When disregarding the dominant local visits to one's own neighbourhood (Fig. 13e), the average residual isolation effect \(\mu_{re}\) in the reopening still surpasses the baseline before-lockdown period by 19%. All in all, the residual isolation effect remains prominent in Manhattan.

A sliding-window algorithm is implemented to generate Fig. 13c and Fig. 13f: for every 1-week window with a 1-day slide interval, a mobility matrix is generated and its mobility assortativity \(r\) is computed (a minimal sketch is given at the end of this section). For both all visits (Fig. 13c) and visits to places other than one's own neighbourhood (Fig. 13f), the increase in \(r\) overlaps with the lockdown period.

The computations for mobility in New York based on the Cuebiq dataset (Fig. 14a) are reproduced with the SafeGraph dataset (Fig. 14b). The two agree in terms of the proportion of mobility categories, in which individual flows within a single borough (intra-mobility) surpass the fluxes across different territories (inter-mobility). The former is presented in Fig. 14c (Cuebiq) and Fig. 14d (SafeGraph). A strikingly similar degree of mobility assortativity \(r\) within Manhattan is seen, ranging from 0.6 before the implementation of the lockdown to 0.8 in its aftermath. While the value of \(r\) is slightly different in the Bronx (light green), Brooklyn (orange), Queens (purple), and Staten Island (pink), the pattern stays the same: increasing segregation since the lockdown period. One reason is that once people stay in residential areas, they are bounded not only by spatial scale, but also by the socioeconomic homogeneity of the surrounding neighbourhoods.
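The sliding-window procedure behind Fig. 13c/f can be sketched as below, reusing the `stratification_matrix` and `assortativity` helpers from the earlier sketch; the `date` column in the visits table is an assumed schema.

```python
import pandas as pd

def sliding_assortativity(visits: pd.DataFrame, window_days: int = 7,
                          step_days: int = 1) -> pd.DataFrame:
    """One stratification matrix and one assortativity value r per 1-week
    window, slid forward by 1 day (cf. Fig. 13c and 13f)."""
    t, end = visits["date"].min(), visits["date"].max()
    rows = []
    while t + pd.Timedelta(days=window_days) <= end:
        win = visits[(visits["date"] >= t) &
                     (visits["date"] < t + pd.Timedelta(days=window_days))]
        rows.append({"start": t, "r": assortativity(stratification_matrix(win))})
        t += pd.Timedelta(days=step_days)
    return pd.DataFrame(rows)
```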
On the contrary, individual flows across boroughs (inter-mobility) exhibit decreasing segregation, as shown in Fig. 14e for the mobility flux between Manhattan and the Bronx. As an undirected mobility network, the mobility recorded in the Cuebiq dataset (dark green) and the SafeGraph dataset (dark blue) indicates the emergence of disassortative mixing, with values lower than 0, implying that people visit places that differ from their own socioeconomic status whenever they need to step out of the territory/borough where they reside, for various mobility reasons (e.g., work or school).

Figure 13: **Mobility stratification matrix \(M_{i,j}\), mobility adjustment matrix \(S_{i,j}\), and mobility assortativity \(r\).** We impose an additional layer of filtering in New York by only looking at locations within the Manhattan boundary. On the left (Fig. 13a-c), we take into account all visits, while on the right (Fig. 13d-f), we remove local visits to the home area. Assortative mixing touches its highest level during the lockdown (\(r=0.656\)). After reopening, the average residual isolation effect \(\mu_{re}\) is still 12.8% higher compared to the before-lockdown period.
2302.03020
RLSbench: Domain Adaptation Under Relaxed Label Shift
Despite the emergence of principled methods for domain adaptation under label shift, their sensitivity to shifts in class conditional distributions is precariously under explored. Meanwhile, popular deep domain adaptation heuristics tend to falter when faced with label proportions shifts. While several papers modify these heuristics in attempts to handle label proportions shifts, inconsistencies in evaluation standards, datasets, and baselines make it difficult to gauge the current best practices. In this paper, we introduce RLSbench, a large-scale benchmark for relaxed label shift, consisting of $>$500 distribution shift pairs spanning vision, tabular, and language modalities, with varying label proportions. Unlike existing benchmarks, which primarily focus on shifts in class-conditional $p(x|y)$, our benchmark also focuses on label marginal shifts. First, we assess 13 popular domain adaptation methods, demonstrating more widespread failures under label proportion shifts than were previously known. Next, we develop an effective two-step meta-algorithm that is compatible with most domain adaptation heuristics: (i) pseudo-balance the data at each epoch; and (ii) adjust the final classifier with target label distribution estimate. The meta-algorithm improves existing domain adaptation heuristics under large label proportion shifts, often by 2--10\% accuracy points, while conferring minimal effect ($<$0.5\%) when label proportions do not shift. We hope that these findings and the availability of RLSbench will encourage researchers to rigorously evaluate proposed methods in relaxed label shift settings. Code is publicly available at https://github.com/acmi-lab/RLSbench.
Saurabh Garg, Nick Erickson, James Sharpnack, Alex Smola, Sivaraman Balakrishnan, Zachary C. Lipton
2023-02-06T18:57:14Z
http://arxiv.org/abs/2302.03020v2
# RLSbench: Domain Adaptation Under Relaxed Label Shift

###### Abstract

Despite the emergence of principled methods for domain adaptation under label shift, the sensitivity of these methods to minor shifts in the class-conditional distributions remains precariously underexplored. Meanwhile, popular deep domain adaptation heuristics tend to falter when faced with shifts in label proportions. While several papers attempt to adapt these heuristics to accommodate shifts in label proportions, inconsistencies in evaluation criteria, datasets, and baselines make it hard to assess the state of the art. In this paper, we introduce RLSbench, a large-scale _relaxed label shift_ benchmark, consisting of \(>\)500 distribution shift pairs that draw on 14 datasets across vision, tabular, and language modalities and compose them with varying label proportions. First, we evaluate 13 popular domain adaptation methods, demonstrating more widespread failures under label proportion shifts than were previously known. Next, we develop an effective two-step meta-algorithm that is compatible with most deep domain adaptation heuristics: (i) _pseudo-balance_ the data at each epoch; and (ii) adjust the final classifier with (an estimate of) the target label distribution. The meta-algorithm improves existing domain adaptation heuristics, often by 2-10% accuracy points, under extreme label proportion shifts, and has little (i.e., \(<\)0.5%) effect when label proportions do not shift. We hope that these findings and the availability of RLSbench will encourage researchers to rigorously evaluate proposed methods in relaxed label shift settings. Code is publicly available at [https://github.com/acmi-lab/RLSbench](https://github.com/acmi-lab/RLSbench).

## 1 Introduction

Real-world deployments of machine learning models are typically characterized by distribution shift, where data encountered in production exhibit statistical differences from the training data (Quinonero-Candela et al., 2008; Torralba and Efros, 2011; Koh et al., 2021). Because continually labeling data can be prohibitively expensive, researchers have focused on the unsupervised Domain Adaptation (DA) setting, where only labeled data from the _source_ distribution and unlabeled data from the _target_ distribution are available for training. Absent further assumptions, the DA problem is well known to be underspecified (Ben-David et al., 2010), and thus no method is universally applicable. Researchers have responded to these challenges in several ways. One approach is to investigate additional assumptions that render the problem well-posed. Popular examples include covariate shift and label shift, for which identification strategies and principled methods exist whenever the source and target distributions have overlapping support (Shimodaira, 2000; Scholkopf et al., 2012; Gretton et al., 2009). Under label shift in particular, recent research has produced effective methods that are applicable in deep learning regimes and yield both consistent estimates of the target label marginal and principled ways to update the resulting classifier (Lipton et al., 2018; Alexandari et al., 2021; Azizzadenesheli et al., 2019; Garg et al., 2020). However, these assumptions are typically violated, to some degree, in practice. Even for archetypal cases like shifts in disease prevalence, the label shift assumption can be violated.
For example, over the course of the COVID-19 epidemic, changes in disease positivity have been coupled with shifts in the age distribution of the infected and subtle mutations of the virus itself.

A complementary line of research focuses on constructing benchmark datasets for evaluating methods, in the hopes of finding heuristics that tend to incorporate the unlabeled target data profitably for the kinds of problems that arise in practice. Examples of such benchmarks include OfficeHome (Venkateswara et al., 2017), DomainNet (Peng et al., 2019), and WILDS (Sagawa et al., 2021). However, most academic benchmarks exhibit little or no shift in the label distribution \(p(y)\). Consequently, benchmark-driven research has produced a variety of heuristic methods (Ganin et al., 2016; Sohn et al., 2020; Wang et al., 2021; Li et al., 2016) that, despite yielding gains in benchmark performance, tend to break when \(p(y)\) shifts. While this has previously been shown for domain-adversarial methods (Wu et al., 2019; Zhao et al., 2019), we show that the problem is more widespread than previously known. Several recent papers attempt to address shifts in the label distribution compounded by natural variations in \(p(x|y)\) (Tan et al., 2020; Tachet des Combes et al., 2020; Prabhu et al., 2021). However, the experimental evaluations are hard to compare across papers owing to discrepancies in how shifts in \(p(y)\) are simulated and in the choice of evaluation metrics. Moreover, many methods violate the unsupervised contract by peeking at target validation performance during model selection and hyperparameter tuning. In short, there is a paucity of comprehensive and fair comparisons between DA methods for settings with shifts in the label distribution.

In this paper, we develop RLSbench, a standardized test bed of _relaxed label shift_ settings, where \(p(y)\) can shift arbitrarily and the class conditionals \(p(x|y)\) can shift in seemingly natural ways (following the popular DA benchmarks). We evaluate a collection of popular DA methods based on domain-invariant representation learning, self-training, and test-time adaptation across 14 multi-domain datasets spanning vision, Natural Language Processing (NLP), and tabular modalities. The different domains in each dataset present different shifts in \(p(x|y)\). Since these datasets exhibit minor to no shift in the label marginal, we simulate shifts in the target label marginal via stratified sampling with varying severity. Overall, we obtain 560 different source and target distribution shift pairs and train \(>30k\) models in our testbed.

Based on our experiments on RLSbench, we make several findings. First, we observe that while popular DA methods often improve over a source-only classifier absent a shift in the target label distribution, their performance tends to degrade, dropping below source-only classifiers under severe shifts in the target label marginal. Next, we develop a meta-algorithm with two simple corrections: (i) re-sampling the data to balance the source and pseudo-balance the target; and (ii) re-weighting the final classifier using an estimate of the target label marginal.

Figure 1: _Domain adaptation under Relaxed Label Shift._ **(Left)** Overview of our RLSbench setup, where \(p(y)\) can shift arbitrarily and the class conditionals \(p(x|y)\) can shift in seemingly natural ways (following the popular DA benchmarks). RLSbench draws on 14 multi-domain datasets spanning vision, NLP, and tabular modalities. **(Right)** As the severity of the shift in target label proportions increases, the performance of existing popular DA methods degrades, often dropping below source-only classifiers. The performance of DA methods, when paired with our meta-algorithm, significantly improves over a source-only classifier.
We observe that in these relaxed label shift settings, the performance of existing DA methods (e.g., CDANN, FixMatch, and BN-adapt), when paired with our meta-algorithm, significantly improves over a source-only classifier. On the other hand, existing methods specifically proposed for relaxed label shift (e.g., IW-CDANN and SENTRY) often fail to improve over a source-only classifier and significantly underperform compared to existing DA methods paired with our meta-algorithm. Finally, we observe that (blackbox) classifiers obtained via DA methods yield better target label marginal estimates in step (ii), and hence larger accuracy improvements in target performance, than source-only classifiers. Our findings highlight the efficacy of simple baselines missed in prior work. We hope that the RLSbench testbed and our meta-algorithm (which can be paired with any DA method) provide a framework for rigorous and reproducible future research in relaxed label shift scenarios.

## 2 Preliminaries and Prior Work

We first set up the notation and formally define the problem. Let \(\mathcal{X}\) be the input space and \(\mathcal{Y}=\{1,2,\ldots,k\}\) the output space. Let \(\mathrm{P}_{s},\mathrm{P}_{t}:\mathcal{X}\times\mathcal{Y}\to[0,1]\) be the source and target distributions and let \(p_{s}\) and \(p_{t}\) denote the corresponding probability density (or mass) functions. Unlike the standard supervised setting, in unsupervised DA we possess labeled source data \(\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{n},y_{n})\}\) and unlabeled target data \(\{x_{n+1},x_{n+2},\ldots,x_{n+m}\}\). With \(f:\mathcal{X}\to\Delta^{k-1}\), we denote a predictor function which predicts \(\hat{y}=\operatorname*{arg\,max}_{y}f_{y}(x)\) on an input \(x\). For a vector \(v\), we use \(v_{y}\) to access the element at index \(y\). In the traditional label shift setting, one assumes that \(p(x|y)\) does not change but that \(p(y)\) can. Under label shift, two challenges arise: (i) estimating the target label marginal \(p_{t}(y)\); and (ii) training a classifier \(f\) to maximize performance on the target domain. This paper focuses on the _relaxed label shift_ setting. In particular, we assume that the label distribution can shift from source to target arbitrarily, but that \(p(x|y)\) varies between source and target in some comparatively restricted way (e.g., shifts arising naturally in the real world, like ImageNet (Russakovsky et al., 2015) to ImageNetV2 (Recht et al., 2019)). Mathematically, we assume a divergence-based restriction on \(p(x|y)\). That is, for some small \(\epsilon>0\) and distributional distance \(\mathcal{D}\), we have \(\max_{y}\mathcal{D}(p_{s}(x|y),p_{t}(x|y))\leq\epsilon\) and allow an arbitrary shift in the label marginal \(p(y)\). We discuss several precise instantiations in App. F (an illustrative sketch of one such distance is given below). However, in practice, it is hard to empirically verify these distribution distances for small enough \(\epsilon\) with finite samples. Moreover, we lack a rigorous characterization of the sense in which these shifts arise in popular DA benchmarks, and since the focus of our work is on empirical evaluation with real-world datasets, we leave a formal investigation for future work.
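As an illustration of the condition \(\max_{y}\mathcal{D}(p_{s}(x|y),p_{t}(x|y))\leq\epsilon\), the sketch below estimates a per-class kernel MMD from labelled samples of both domains. This is purely illustrative: as noted above, target labels are unavailable in practice and verifying the condition with finite samples is hard; the RBF kernel, bandwidth, and function names are assumptions, not the paper's instantiation.

```python
import numpy as np

def rbf_mmd2(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimate of squared MMD between samples X and Y under an
    RBF kernel -- one possible choice for the distance D."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

def max_classwise_divergence(Xs, ys, Xt, yt) -> float:
    """max over classes y of D(p_s(x|y), p_t(x|y)), D approximated by MMD."""
    classes = np.intersect1d(np.unique(ys), np.unique(yt))
    return max(rbf_mmd2(Xs[ys == c], Xt[yt == c]) for c in classes)
```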
The goal in DA is to adapt a predictor from a source distribution with labeled data to a target distribution from which we only observe unlabeled examples. While prior work addressing relaxed label shift has primarily focused on classifier performance, we also separately evaluate methods for estimating the target label marginal. This can be beneficial for two reasons. First, it can shed more light on how improving estimates of the target class proportions improves target performance. Second, understanding how the class proportions are changing can be of independent interest.

### Prior Work

**Unsupervised domain adaptation** Two popular settings for which DA is well-posed include (i) _covariate shift_ (Zhang et al., 2013; Zadrozny, 2004; Cortes et al., 2010; Cortes and Mohri, 2014; Gretton et al., 2009), where \(p(x)\) can change from source to target but \(p(y|x)\) remains invariant; and (ii) _label shift_ (Saerens et al., 2002; Lipton et al., 2018; Azizzadenesheli et al., 2019; Alexandari et al., 2021; Garg et al., 2020; Zhang et al., 2021; Roberts et al., 2022), where the label marginal \(p(y)\) can change but \(p(x|y)\) is shared across source and target. Principled methods with strong theoretical guarantees exist for adaptation under these settings when the target distribution's support is a subset of the source support. Ben-David et al. (2010, 2010); Mansour et al. (2009); Zhao et al. (2019); Wu et al. (2019); Johansson et al. (2019) present theoretical analyses for when the assumption of contained covariate support is violated. In another line of work, Elkan and Noto (2008); Bekker and Davis (2020); Garg et al. (2021, 2022a) extend the label shift setting to problems where previously unseen classes may appear in the target and \(p(x|y)\) remains invariant among seen classes. More recently, a massive literature has emerged exploring a benchmark-driven heuristic approach (Long et al., 2015, 2017; Sun and Saenko, 2016; Sun et al., 2017; Zhang et al., 2019, 2018; Ganin et al., 2016; Sohn et al., 2020). However, rigorous evaluation of popular DA methods is typically restricted to carefully curated benchmark datasets where there is minor to no shift in the label marginal from source to target.

**Relaxed Label Shift** Exploring the problem of shifts in the label marginal from source to target with natural variations in \(p(x|y)\), a few papers highlighted theoretical and empirical failures of DA methods based on domain-adversarial neural network training (Yan et al., 2017; Wu et al., 2019; Zhao et al., 2019; Johansson et al., 2019). Subsequently, several papers attempted to handle these problems in domain-adversarial training (Tachet et al., 2020; Prabhu et al., 2021; Liu et al., 2021; Tan et al., 2020; Manders et al., 2019). However, these methods often lack comparisons with other prominent DA methods and are evaluated on different datasets and with different model selection criteria. To this end, we perform a large-scale, rigorous comparison of prominent representative DA methods in a standardized evaluation framework.

**Domain generalization** In domain generalization, the model is given access to data from multiple different domains, and the goal is to generalize to a previously unseen domain at test time (Blanchard et al., 2011; Muandet et al., 2013). For a survey of different algorithms for domain generalization, we refer the reader to Gulrajani and Lopez-Paz (2020). A crucial distinction is that, unlike the domain generalization setting, in DA problems we have access to unlabeled examples from the test domain.
**Distinction from previous distribution shift benchmark studies** Previous studies evaluating robustness under distribution shift predominantly focus on transfer learning and domain generalization settings (Wenzel et al., 2022; Gulrajani and Lopez-Paz, 2020; Djolonga et al., 2021; Wiles et al., 2021; Koh et al., 2021). Taori et al. (2020) and Hendrycks et al. (2021) study the impact of robustness interventions (e.g., data augmentation techniques, adversarial training) on target (out-of-distribution) performance. Notably, Sagawa et al. (2021) focused on evaluating DA methods on WILDS-2.0. Our work is complementary to these studies, as we present the first extensive study of DA methods under shift in \(p(y)\) and natural variations in \(p(x|y)\).

## 3 RLSbench: A Benchmark for Relaxed Label Shift

In this section, we introduce RLSbench, a suite of datasets and DA algorithms that are at the core of our study. Motivated by correction methods for the (stricter) label shift setting (Saerens et al., 2002; Lipton et al., 2018) and by learning under imbalanced datasets (Wei et al., 2021; Cao et al., 2019), we also present a meta-algorithm with simple corrections compatible with almost any DA method.

### Datasets

RLSbench builds on 14 multi-domain datasets for classification, including tasks across applications in object classification, satellite imagery, medicine, and toxicity detection. Across these datasets, we obtain a total of 56 different source and target pairs. More details about the datasets are in App. D. (i) **CIFAR-10**, which includes the original CIFAR-10 (Krizhevsky and Hinton, 2009), CIFAR-10-C (Hendrycks and Dietterich, 2019), and CIFAR-10v2 (Recht et al., 2018); (ii) **CIFAR-100**, including the original dataset and CIFAR-100-C; (iii) all four BREEDs datasets (Santurkar et al., 2021), i.e., **Entity13**, **Entity30**, **Nonliving26**, and **Living17**. BREEDs leverages the class hierarchy in ImageNet (Russakovsky et al., 2015) to repurpose original classes as subpopulations and define a classification task on superclasses. We consider subpopulation shift, natural shifts induced by differences in the data collection process of ImageNet, i.e., ImageNetv2 (Recht et al., 2019), and a combination of both; (iv) **Office-Home** (Venkateswara et al., 2017), which includes four domains: art, clipart, product, and real; (v) **DomainNet** (Peng et al., 2019), where we consider four domains: clipart, painting, real, and sketch; (vi) **Visda** (Peng et al., 2018), which contains three domains: train, val, and test; (vii) **FMoW** (Koh et al., 2021; Christie et al., 2018) from the WILDS benchmark, which includes three domains: train, OOD val, and OOD test, with satellite images taken in different geographical regions and at different times; (viii) **Camelyon** (Bandi et al., 2018) from the WILDS benchmark, which includes three domains: train, OOD val, and OOD test, for tumor identification, with domains corresponding to different hospitals; (ix) **Civilcomments** (Borkan et al., 2019), which includes three domains: train, OOD val, and OOD test, for toxicity detection, with domains corresponding to different demographic subpopulations; (x) **Retiring Adults** (Ding et al., 2021), where we consider the ACSIncome prediction task, with domains representing different states and time periods; and (xi) **Mimic Readmission** (Johnson et al., 2020; PhysioBank, 2000), where the task is to predict readmission risk, with domains representing data from different time periods.
**Simulating a shift in target marginal.** The above datasets present minor to no shift in the label marginal. Hence, we simulate such a shift by altering the target label marginal while keeping the source label distribution fixed (at the original source label distribution). Note that, unlike some previous studies, we do not alter the source label marginal because, in practice, we may have the option to carefully curate the training distribution but might have little to no control over the target label marginal. For each target dataset, we have the true labels, which allow us to vary the target label distribution. In particular, we sample the target label marginal from a Dirichlet distribution with a parameter \(\alpha\in\{0.5,1,3.0,10\}\) applied as a multiplier to the original target marginal. Specifically, \(p_{t}(y)\sim\text{Dir}(\beta)\), where \(\beta_{y}=\alpha\cdot p_{t,0}(y)\) and \(p_{t,0}(y)\) is the original target label marginal. The Dirichlet parameter \(\alpha\) controls the severity of the shift in the target label marginal; intuitively, as \(\alpha\) decreases, the severity of the shift increases. For completeness, we also include the target dataset with the original target label marginal. For ease of exposition, we denote this unshifted setting as None in the set of Dirichlet parameters, i.e., the limiting distribution as \(\alpha\rightarrow\infty\). After simulating the shift in the target label marginal (with two seeds for each \(\alpha\)), we obtain 560 pairs of different source and target datasets. A minimal sketch of this sampling procedure follows.
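The sketch below illustrates the Dirichlet sampling and stratified subsampling described above. The function names are hypothetical, and the actual RLSbench implementation may differ in details such as rounding.

```python
import numpy as np

def simulate_target_marginal(p_t0: np.ndarray, alpha: float, seed=None) -> np.ndarray:
    """Draw p_t(y) ~ Dir(alpha * p_t0); a smaller alpha gives a more severe shift."""
    rng = np.random.default_rng(seed)
    return rng.dirichlet(alpha * p_t0)

def subsample_target(labels: np.ndarray, p_t: np.ndarray, n: int, seed=None) -> np.ndarray:
    """Stratified subsampling of target indices to realise the marginal p_t."""
    rng = np.random.default_rng(seed)
    idx = []
    for y, p_y in enumerate(p_t):
        pool = np.flatnonzero(labels == y)
        take = min(int(round(p_y * n)), len(pool))
        idx.extend(rng.choice(pool, size=take, replace=False))
    return np.asarray(idx)
```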
### Domain Adaptation Methods

We implement the following algorithms (a more detailed description of each method is included in App. K):

**Source only.** As a baseline, we include models trained with empirical risk minimization (Vapnik, 1999) using the cross-entropy loss on the source domain. We include source-only models trained with and without augmentations. We also include adversarially robust models trained on source data with augmentations (**Source (adv)**). In particular, we use models adversarially trained against \(\ell_{2}\)-perturbations.

**Domain alignment methods.** These methods employ domain-adversarial training schemes that aim to learn invariant representations across different domains (Ganin et al., 2016; Zhang et al., 2019; Tan et al., 2020). For our experiments, we include the following _five_ methods: Domain Adversarial Neural Networks (**DANN** (Ganin et al., 2016)), Conditional DANN (**CDANN** (Long et al., 2018)), Maximum Classifier Discrepancy (**MCD** (Saito et al., 2018)), and importance-reweighted DANN and CDANN (i.e., **IW-DANN** & **IW-CDANN** (Tachet des Combes et al., 2020)).

**Self-training methods.** These methods "pseudo-label" unlabeled examples with the model's own predictions and then train on them as if they were labeled examples. For vision datasets, these methods often also use consistency regularization, which encourages the model to make consistent predictions on augmented views of unlabeled examples (Lee et al., 2013; Xie et al., 2020; Berthelot et al., 2021). We include the following three algorithms: **FixMatch** (Sohn et al., 2020), **Noisy Student** (Xie et al., 2020), and Selective Entropy Optimization via Committee Consistency (**SENTRY** (Prabhu et al., 2021)). For NLP and tabular datasets, where strong augmentations are not defined, we consider the **PseudoLabel** algorithm (Lee et al., 2013).

**Test-time adaptation methods.** These methods take a source model and adapt a few parameters (e.g., batch norm parameters) on the unlabeled target data with the aim of improving target performance. We include: **CORAL** (Sun et al., 2016) or Domain Adjusted Regression (DARE (Rosenfeld et al., 2022)), BatchNorm adaptation (**BN-adapt** (Li et al., 2016; Schneider et al., 2020)), and test entropy minimization (**TENT** (Wang et al., 2021)).

### Meta algorithm to handle target label marginal shift

Here we discuss two simple general-purpose corrections that we implement in our framework. First, note that as the severity of the shift in the target label marginal increases, the performance of DA methods can falter, since training is done over source and target datasets with different class proportions. Indeed, the failure of domain-adversarial training methods (one category of deep DA methods) has been shown both theoretically and empirically in the literature (Wu et al., 2019; Zhao et al., 2019). In our experiments, we show that failure due to a shift in the label distribution is not limited to domain-adversarial training methods, but is common to all the popular DA methods (Sec. 4).

**Re-sampling.** To handle label imbalance in standard supervised learning, re-sampling the data to balance the class marginal is a known successful strategy (Chawla et al., 2002; Buda et al., 2018; Cao et al., 2019). In relaxed label shift, we seek to handle the imbalance in the target data (with respect to the source label marginal), where we do not have access to true labels. We adopt the alternative strategy of leveraging pseudolabels for the target data to perform pseudo class-balanced re-sampling1 (Zou et al., 2018; Wei et al., 2021). For relaxed label shift problems, Prabhu et al. (2021) employed this technique with their committee consistency objective, SENTRY. However, they did not explore re-sampling-based corrections for existing DA techniques. Since this technique can be used in conjunction with any DA method, we employ it with existing DA methods and find that re-sampling benefits all of them, often improving over SENTRY in our testbed (Sec. 4). A minimal sketch of this step is given below.

Footnote 1: A different strategy could be to re-sample the target pseudolabel marginal to match the source label marginal. For simplicity, we choose to balance the source label marginal and the target pseudolabel marginal.
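A minimal PyTorch sketch of the pseudo-balancing step (step (i) of the meta-algorithm) follows. The pseudolabels would come from the current model's predictions on the target data; the helper name is hypothetical and the actual implementation may differ.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def pseudo_balanced_sampler(pseudolabels: np.ndarray,
                            num_classes: int) -> WeightedRandomSampler:
    """Sample target examples with probability inversely proportional to
    their pseudolabel frequency, so each class appears roughly equally
    often per epoch."""
    counts = np.bincount(pseudolabels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0                 # guard against empty classes
    weights = 1.0 / counts[pseudolabels]      # inverse-frequency weights
    return WeightedRandomSampler(
        torch.as_tensor(weights, dtype=torch.double),
        num_samples=len(pseudolabels),
        replacement=True,
    )
```

The returned sampler can be passed to a target `DataLoader` and refreshed each epoch as the pseudolabels are updated.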
**Re-weighting.** With re-sampling, we can hope to train the classifier \(\widehat{f}\) on a mixture of balanced source and balanced target datasets in the ideal case. However, this still leaves open the problem of adapting the classifier \(\widehat{f}\) to the original target label distribution, which is not available. If we can estimate the target label marginal, we can adapt the classifier \(\widehat{f}\) post hoc with a simple re-weighting correction (Lipton et al., 2018; Alexandari et al., 2021). To estimate the target label marginal, we turn to techniques developed under the stricter label shift assumption (recall, the setting where \(p(x|y)\) remains domain invariant). These approaches leverage off-the-shelf classifiers to estimate the target marginal and provide \(\mathcal{O}(1/\sqrt{n})\) convergence rates under the label shift condition, with mild assumptions on the classifier (Lipton et al., 2018; Azizzadenesheli et al., 2019; Garg et al., 2020). While the relaxed label shift scenario violates the conditions required for the consistency of label shift estimation techniques, we nonetheless employ these techniques and empirically evaluate their efficacy in our testbed.

In particular, to estimate the target label marginal, we experiment with: (i) RLLS (Azizzadenesheli et al., 2019); (ii) MLLS (Alexandari et al., 2021); and (iii) a _baseline estimator_ that simply averages the predictions of a classifier \(f\) on unlabeled target data. We provide precise details about these methods in App. E. Since these methods leverage off-the-shelf classifiers, classifiers obtained with any DA method can be used in conjunction with these estimation methods. A minimal sketch of the baseline estimator and the re-weighting correction appears at the end of this section.

**Summary.** Overall, in Algorithm 1, we illustrate how to incorporate the re-sampling and re-weighting corrections into existing DA techniques. Algorithm \(\mathcal{A}\) can be any DA method, and in Step 7, we can use any of the three methods listed above to estimate the target label marginal. We instantiate Algorithm 1 with several algorithms from Sec. 3.2 in App. K. Intuitively, in an ideal scenario where the re-sampling step in our meta-algorithm perfectly corrects for the label imbalance between source and target, we expect DA methods to adapt the classifier \(f\) to the shift in \(p(x|y)\). The re-weighting step in our meta-algorithm can then adapt the classifier \(f\) to the target label marginal \(p_{t}(y)\). We emphasize that we _do not_ claim to propose these corrections. But, to the best of our knowledge, our work is the first to combine these two corrections and perform extensive experiments across diverse datasets.

### Other choices for realistic evaluation

For a fair evaluation and comparison across different datasets and DA algorithms, we re-implemented all the algorithms with consistent design choices whenever applicable. We also make several additional implementation choices, described below. We defer additional details to App. L.

**Model selection criteria and hyperparameters.** Given that we lack i.i.d. validation data from the target distribution, model selection in DA problems _cannot_ follow the standard workflow used in supervised training. Prior works often omit details on how hyperparameters are chosen, leaving open the possibility of choosing hyperparameters using the test set, which can provide a false and unreliable sense of improvement. Moreover, inconsistent hyperparameter selection strategies can complicate fair evaluations, mis-attributing improvements to the algorithm under study. In our work, we use source hold-out performance to pick the best hyperparameters. First, for \(\ell_{2}\) regularization and learning rate, we perform a sweep over random hyperparameters to maximize the performance of the source-only model on the held-out source data. Then, for each dataset, we keep these hyperparameters fixed across DA algorithms. For DA-method-specific hyperparameters, we use the same hyperparameters across all the methods, incorporating the suggestions made in the corresponding papers. Within a run, we use hold-out performance on the source to pick the early stopping point. In the appendices, we report _oracle_ performance obtained by choosing the early stopping point with target accuracy.

**Evaluation criteria.** To evaluate target label marginal estimation, we report the \(\ell_{1}\) error between the estimated and true label distributions. To evaluate classifier performance on target data, we report the performance of the (adapted) classifier on a hold-out partition of the target data.
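As promised above, here is a minimal sketch of the baseline marginal estimator and the re-weighting correction (step (ii)). Since training is (pseudo-)balanced, the training label marginal is taken as uniform by default; this default and the function names are assumptions.

```python
import numpy as np

def estimate_target_marginal(probs_target: np.ndarray) -> np.ndarray:
    """Baseline estimator: average the classifier's predicted probabilities
    over the unlabeled target set (RLLS/MLLS would be drop-in alternatives)."""
    return probs_target.mean(axis=0)

def reweight_predictions(probs: np.ndarray, p_target: np.ndarray,
                         p_train=None) -> np.ndarray:
    """Scale f's outputs by p_t(y) / p_train(y) and re-normalise
    (cf. Lipton et al., 2018; Alexandari et al., 2021)."""
    if p_train is None:  # balanced training => uniform label marginal
        p_train = np.full(probs.shape[1], 1.0 / probs.shape[1])
    adj = probs * (p_target / np.clip(p_train, 1e-12, None))
    return adj / adj.sum(axis=1, keepdims=True)
```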
**Architectural and pretraining details.** We experiment with different architectures (e.g., DenseNet121, ResNet18, ResNet50, DistilBERT, MLP, and Transformer). We experiment with randomly-initialized models and with ImageNet- and DistilBERT-pretrained models. Given a dataset, we use the same architecture across different DA algorithms.

**Data augmentation.** Data augmentation is a standard ingredient in training vision models and can approximate some of the variations between domains. Unless stated otherwise, we train on all the vision datasets using standard strong augmentation techniques: random horizontal flips, random crops, augmentation with Cutout (DeVries and Taylor, 2017), and RandAugment (Cubuk et al., 2020). To understand the help provided by data augmentations alone, we also experiment with source-only models trained without any data augmentation. For tabular and NLP datasets, we do not use any augmentations.

## 4 Main Results

We present aggregated results on the vision datasets in our testbed in Fig. 2. In App. B, we present aggregated results on the NLP and tabular datasets. We include results on each dataset in App. I.

Figure 2: _Performance of different DA methods relative to a source-only model across all distribution shift pairs in vision datasets, grouped by shift severity in label marginal._ For each distribution shift pair and DA method, we plot the relative accuracy of the model trained with that DA method by subtracting the accuracy of the source-only model. Hence, the black dotted line at \(0\) captures the performance of the source-only model. The smaller the Dirichlet shift parameter, the more severe the shift in target class proportions. **(a)** Shifts with \(\alpha=\{\text{None},10.0,3.0\}\) have little to no impact on the different DA methods, whereas the performance of all DA methods degrades when \(\alpha\in\{1.0,0.5\}\), often falling below the performance of a source-only classifier (except for Noisy Student). **(b)** RS and RW (in our meta-algorithm) together significantly improve aggregate performance over no correction for all DA methods. While RS consistently helps (over no correction) across different label marginal shift severities, RW hurts slightly for BN-adapt, TENT, and NoisyStudent when the shift severity is small. However, for severe shifts (\(\alpha\in\{3.0,1.0,0.5\}\)), RW significantly improves performance for all the methods. Parallel results on tabular and language datasets are in App. B. Detailed results with all methods on individual datasets are in App. I. A more detailed description of the plotting technique is in App. A.

Note that we do not include RS results with a source-only model, as it is trained only on source data, and we observed no differences from simply balancing the source data (for most datasets the source is already balanced) in our experiments. Unless specified otherwise, we use source validation performance as the early stopping criterion. Based on running our entire RLSbench suite, we distill our findings into the following takeaways.

**Popular deep DA methods without any correction falter.** While DA methods often improve over a source-only classifier when the shift in the target label marginal is absent or mild, the performance of these methods (except Noisy Student) drops below that of a source-only classifier when the shift in the target label marginal is severe (i.e., when \(\alpha=0.5\) in Fig. 2(a), 5(a), and 6(a)). On the other hand, DA methods, when paired with the RS and RW corrections, significantly improve over a source-only model even when the shift in the target label marginal is severe (Fig. 2(b), 5(b), and 6(b)).
**Re-sampling to pseudobalance the target often helps all DA methods across all modalities.** When the shift in the target label marginal is absent or very small (i.e., \(\alpha\in\{\text{None},10.0\}\) in Fig. 2(b), 5(b), and 6(b)), we observe no (significant) differences in performance with re-sampling. However, as the shift severity in the target label marginal increases (i.e., \(\alpha\in\{3.0,1.0,0.5\}\) in Fig. 2(b), 5(b), and 6(b)), we observe that re-sampling typically improves all DA methods in our testbed.

**Benefits of post-hoc re-weighting of the classifier depend on the shift severity and the underlying DA algorithm.** For domain alignment methods (i.e., DANN and CDANN) and self-training methods, in particular FixMatch and PseudoLabel, we observe that the RW correction typically improves significantly (over no correction) when the target label marginal shift is severe (i.e., \(\alpha\in\{3.0,1.0,0.5\}\) in Fig. 2(b), 5(b), and 6(b)) and has no (significant) effect when the shift in the target label marginal is absent or very small (i.e., \(\alpha\in\{\text{None},10.0\}\) in Fig. 2(b), 5(b), and 6(b)). For BN-adapt, TENT, and NoisyStudent, the RW correction can slightly hurt when the target label marginal shift is absent or low (i.e., \(\alpha\in\{\text{None},10.0\}\) in Fig. 2(b)) but continues to improve significantly when the target label marginal shift is severe (i.e., \(\alpha\in\{3.0,1.0,0.5\}\) in Fig. 2(b)). Additionally, we observe that in specific scenarios of real-world shift in \(p(x|y)\) (e.g., subpopulation shift in the BREEDs datasets, Camelyon shifts, and the replication study in CIFAR-10, which are benign relative to the other vision dataset shifts in our testbed), the RW correction does no harm to the performance of BN-adapt, TENT, and NoisyStudent even when the target label marginal shift is less severe or absent (refer to the datasets in App. I).

Figure 3: _Average accuracy of different DA methods aggregated across all distribution pairs in each modality._ Parallel results with all methods on individual datasets are in App. I.

Figure 4: _Target label marginal estimation (\(\ell_{1}\)) error and accuracy with RLLS and classifiers obtained with different DA methods._ **(Left)** Across all shift severities in vision datasets, RLLS with classifiers obtained with DA methods improves over RLLS with a source-only classifier. **(Right)** For tabular datasets, RLLS with classifiers obtained with DA methods improves over RLLS with a source-only classifier for severe target label marginal shifts. Plots for each DA method and all datasets are in App. G.

**DA methods paired with our meta-algorithm often improve over a source-only classifier, but no one method consistently performs the best.** First, we observe that our source-only numbers are better than previously published results. Similar to previous studies (Gulrajani and Lopez-Paz, 2020), this can be attributed to improved design choices (e.g., data augmentation, hyperparameters), which we make consistent across all methods. While no single method does the best across all datasets, overall, FixMatch with RS and RW (our meta-algorithm) performs the best for vision datasets. For NLP datasets, source-only with RW (our meta-algorithm) performs the best overall. For tabular datasets, CDANN with RS and RW (our meta-algorithm) performs the best overall (Fig. 3).
**Existing DA methods when paired with our meta-algorithm significantly outperform other DA methods specifically proposed for relaxed label shift.** We observe that, with consistent experimental design across different methods, existing DA methods with RS and RW corrections often improve over previously proposed methods specifically aimed to tackle relaxed label shift, i.e., IW-CDANN, IW-DANN, and SENTRY (Fig. 7). For severe target label marginal shifts, the performance of IW-DANN, IW-CDANN, and SENTRY often falls below that of the source-only model. Moreover, while the importance weighting (i.e., IW-CDANN and IW-DANN) improves over CDANN and DANN resp. (Fig. 2a, 5a and 6a), RS and RW corrections significantly outweigh those improvements (Fig. 7). **BN-adapt and TENT with our meta-algorithm are simple and strong baselines.** For models with batch norm parameters, BN-adapt (and TENT) with RS and RW steps is a computationally efficient and strong baseline. We observe that while the performance of BN-adapt (and TENT) can drop substantially when the target label marginal shifts (i.e., \(\alpha\in\{1.0,0.5\}\) in Fig. 2a), RS and RW correction improves the performance, often improving BN-adapt (and TENT) over all other DA methods when the shift in target label marginal is extreme (i.e., \(\alpha=0.5\) in Fig. 2b). **DA methods yield better target label marginal estimates, and hence larger accuracy improvements with re-weighting, than source-only classifiers.** Recall that we experiment with target label marginal estimation methods that leverage off-the-shelf classifiers to obtain an estimate. We observe that estimators leveraging DA classifiers tend to perform better than those using source-only classifiers for tabular and vision datasets (Fig. 4). For NLP, we observe that DA classifiers and source-only classifiers have similar performance (with source-only often performing slightly better). Correspondingly, as one might expect, better estimation yields greater accuracy improvements when applying our RW correction. In particular, RW correction with DA methods improves over the source-only classifier for vision and tabular datasets and vice-versa for NLP datasets (Fig. 4). **Early stopping criterion matters.** We observe a consistent \(\approx\)2% and \(\approx\)8% accuracy difference between early stopping criteria on vision and tabular datasets respectively with all methods (Fig. 13). On NLP datasets, while the early stopping criteria have \(\approx\)2% accuracy difference when RW and RS corrections are not employed, the difference becomes negligible when these corrections are employed (Fig. 13). These results highlight that subsequent works should describe the early stopping criteria used within their evaluations. **Data augmentation helps.** Corroborating findings from previous studies in other settings (Gulrajani and Lopez-Paz, 2020; Sagawa et al., 2021), we observe that data augmentation can improve the performance of a source-only model on vision datasets in relaxed label shift scenarios (refer to results on each dataset in App. I). Thus, whenever applicable, subsequent methods should use data augmentations.

## Conclusion

Our work is the first large-scale study investigating methods under the relaxed label shift scenario. Relative to works operating strictly under the label shift assumption, RLSbench provides an opportunity for sensitivity analysis, allowing researchers to measure the robustness of their methods under various sorts of perturbations to the class-conditional distributions.
Relative to the benchmark-driven deep domain adaptation literature, our work provides a comprehensive and standardized suite for evaluating under shifts in label distributions, bringing these benchmarks one step closer to exhibiting the sort of diversity that we should expect to encounter when deploying models in the wild. On one hand, the consistent improvements observed from label shift adjustments are promising. At the same time, given the underspecified nature of the problem, practitioners must remain vigilant and take performance on any benchmark with a grain of salt, considering the various ways that it might (or might not) be representative of the sorts of situations that might arise in their application of interest. In the future, we hope to extend RLSbench to datasets from real applications in consequential domains such as healthcare and self-driving, where label marginals and class conditionals can be expected to shift across locations and over time. We also hope to incorporate self-supervised methods that learn representations by training on a union of unlabeled data from source and target via proxy tasks like reconstruction (Gidaris et al., 2018; He et al., 2022) and contrastive learning (Caron et al., 2020; Chen et al., 2020). While re-weighting predictions using estimates of the target label distribution yields significant gains, the remaining gap between our results and oracle performance should motivate future work geared towards improved estimators. Also, we observe that the success of target label marginal estimation techniques depends on the nature of the shifts in \(p(x|y)\). Mathematically characterizing the behavior of label shift estimation techniques when the label shift assumption is violated would be an important contribution.

## Reproducibility Statement

Our code with all the results will be released on GitHub: [https://github.com/acmi-lab/RLSbench](https://github.com/acmi-lab/RLSbench). We implement our RLSbench library in PyTorch (Paszke et al., 2017) and provide an infrastructure to run all the experiments to generate corresponding results. We have stored all models and logged all hyperparameters to facilitate reproducibility. In our appendices, we provide additional details on datasets and experiments. In App. D, we describe the datasets and in App. L, we provide hyperparameter details.

## Acknowledgments

We thank Amrith Setlur for providing feedback on an earlier draft of RLSbench. We also thank Xingjian Shi and Weisu Yin for their initial help with running the large-scale experiments. SG acknowledges Amazon Graduate Fellowship and JP Morgan AI Ph.D. Fellowship for their support.
2308.16310
On invariants of foliated sphere bundles
Morita showed that for each power of the Euler class, there are examples of flat $\mathbb{S}^1$-bundles for which the power of the Euler class does not vanish. Haefliger asked if the same holds for flat odd-dimensional sphere bundles. In this paper, for a manifold $M$ with a free torus action, we prove that certain $M$-bundles are cobordant to a flat $M$-bundle and as a consequence, we answer Haefliger's question. We show that the powers of the Euler class and Pontryagin classes $p_i$ for $i\leq n-1$ are all non-trivial in $H^*(\text{BDiff}^{\delta}_+(\mathbb{S}^{2n-1});\mathbb{Q})$. In the appendix, Nils Prigge corrects a claim by Haefliger about the vanishing of certain classes in the smooth group cohomology of $\text{Diff}_+(\mathbb{S}^3)$.
Sam Nariman
2023-08-30T20:36:05Z
http://arxiv.org/abs/2308.16310v2
# On invariants of foliated sphere bundles

###### Abstract.

Morita ([10]) showed that for each power of the Euler class, there are examples of flat \(\mathbb{S}^{1}\)-bundles for which the power of the Euler class does not vanish. Haefliger asked ([12, Page 154]) if the same holds for flat odd-dimensional sphere bundles. In this paper, for a manifold \(M\) with a free torus action, we prove that certain \(M\)-bundles are cobordant to a flat \(M\)-bundle and as a consequence, we answer Haefliger's question. We show that the powers of the Euler class and Pontryagin classes \(p_{i}\) for \(i\leq n-1\) are all non-trivial in \(H^{*}(\operatorname{BDiff}^{\delta}_{+}(\mathbb{S}^{2n-1});\mathbb{Q})\). In the appendix, Nils Prigge corrects a claim by Haefliger ([12, Page 154]) about the vanishing of certain classes in the smooth group cohomology of \(\operatorname{Diff}_{+}(\mathbb{S}^{3})\).

## 1. Introduction

Let \(\operatorname{Diff}_{+}(\mathbb{S}^{n})\) be the group of orientation-preserving smooth diffeomorphisms of the sphere \(\mathbb{S}^{n}\). Haefliger ([11, Page 242, problem 4]) asked whether, for a given integer \(k>1\), there exists a manifold \(M\) and a representation \(\pi_{1}(M)\to\operatorname{Diff}_{+}(\mathbb{S}^{1})\) such that the \(k\)-th power of the Euler class of the associated flat circle bundle is non-trivial. For \(k=1\), Benzécri ([1]) and Milnor ([13]) constructed flat circle bundles over surfaces with non-trivial Euler classes. Morita in [10] answered Haefliger's question affirmatively by proving a more general theorem (see also [14]). Let \(\operatorname{BDiff}^{\delta}_{+}(\mathbb{S}^{n})\) denote the classifying space of orientable flat \(\mathbb{S}^{n}\)-bundles (\(\delta\) denotes the discrete topology on a given topological group) and let \(\mathcal{L}_{\mathbb{S}^{n}}\) denote the topological Lie algebra of smooth vector fields on \(\mathbb{S}^{n}\). There is a natural map \[\Phi\colon H^{*}(\mathcal{L}_{\mathbb{S}^{1}},\operatorname{so}(2))\to H^{*}(\operatorname{BDiff}^{\delta}_{+}(\mathbb{S}^{1});\mathbb{R}),\] where \(H^{*}(\mathcal{L}_{\mathbb{S}^{1}},\operatorname{so}(2))\) is the relative continuous Lie algebra cohomology (see [12, Page 44]). The Lie algebra cohomology \(H^{*}(\mathcal{L}_{\mathbb{S}^{1}},\operatorname{so}(2))\) is isomorphic to \(\mathbb{R}[e,\operatorname{gv}]/(e\cdot\operatorname{gv}=0)\) where \(e\) is the Euler class and \(\operatorname{gv}\) is also a degree \(2\) class known as the Godbillon-Vey class. Morita ([10, Theorem 1.1]) showed that this map is injective. Later Haefliger [12] used rational homotopy theory models to study the image of \[H^{*}(\operatorname{BSO}(n+1);\mathbb{R})\to H^{*}(\mathcal{L}_{\mathbb{S}^{n}},\operatorname{so}(n+1)),\] and he realized that for \(n\) odd the images of the powers of the Euler class in \(H^{*}(\mathcal{L}_{\mathbb{S}^{n}},\operatorname{so}(n+1))\) are non-trivial. Hence, he posed a more general version of his previous question ([12, Page 154]).

**Question 1.1** (Haefliger).: _Are there flat \((2n+1)\)-sphere bundles with a non-zero power of the Euler class?_

We answer this question affirmatively and in fact, we also prove the non-vanishing of the powers of the Pontryagin classes.
**Theorem 1.2**.: _All the monomials in \(e\) and \(p_{i}\) for \(i\leq n-1\) are non-trivial in \(H^{*}(\operatorname{BDiff}^{\delta}_{+}(\mathbb{S}^{2n-1});\mathbb{Q})\)._

_Remark 1.3_.: Morita pointed out to the author that the involution on \(\mathbb{S}^{2n-1}\) induced by reflecting through a hyperplane changes the sign of the Euler class but it does not change the Pontryagin classes. Hence, the classes \(e^{k}\) and monomials of Pontryagin classes are linearly independent when \(k\) is odd. However, the author does not know whether in general the monomials are linearly independent in \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q})\). The techniques that we use also work for volume-preserving diffeomorphisms \(\operatorname{Diff}_{\operatorname{vol}}(\mathbb{S}^{2n-1})\). So these classes are also non-trivial in \(H^{*}(\operatorname{BDiff}_{\operatorname{vol}}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q})\).

_Remark 1.4_.: There is a van Est type theorem ([10, Page 43]) that implies that \(H^{*}(\mathcal{L}_{\mathbb{S}^{3}},\operatorname{so}(4))\) is isomorphic to the smooth group cohomology \(H^{*}_{\operatorname{sm}}(\operatorname{Diff}_{+}(\mathbb{S}^{3});\mathbb{R})\). In higher dimensions, it is expected from Bott's belief in [10, Page 217] that there is at least a map \[H^{*}_{\operatorname{sm}}(\operatorname{Diff}_{+}(\mathbb{S}^{n});\mathbb{R})\to H^{*}(\mathcal{L}_{\mathbb{S}^{n}},\operatorname{so}(n+1)).\] Haefliger in [10, Page 154] sketched his method to prove a claim that the kernel of the map \[H^{*}(\operatorname{BSO}(n+1);\mathbb{R})\to H^{*}(\mathcal{L}_{\mathbb{S}^{n}},\operatorname{so}(n+1)),\] is generated by the monomials in Pontryagin classes \(p_{1},\dots,p_{\lfloor n/2\rfloor}\) whose degrees are larger than \(2n\). So according to his claim, \(p_{1}^{2}\) vanishes in \(H^{*}_{\operatorname{sm}}(\operatorname{Diff}_{+}(\mathbb{S}^{3});\mathbb{R})\) and as a consequence it would vanish also in \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{3});\mathbb{R})\). But in the appendix by Nils Prigge, we shall see that Haefliger's vanishing works when \(n\) is even, and using his method we shall see that for \(n=3\), in fact, the class \(p_{1}^{2}\) is not zero in the smooth group cohomology of \(\operatorname{Diff}_{+}(\mathbb{S}^{3})\) which is isomorphic to \(H^{*}(\mathcal{L}_{\mathbb{S}^{3}},\operatorname{so}(4))\).

Let us recall how the Pontryagin classes are defined in \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q})\). There is a natural map \[\eta\colon\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1})\to\operatorname{BDiff}_{+}(\mathbb{S}^{2n-1}),\] that is induced by the identity homomorphism \(\operatorname{Diff}_{+}^{\delta}(\mathbb{S}^{2n-1})\to\operatorname{Diff}_{+}(\mathbb{S}^{2n-1})\). The homotopy fiber of \(\eta\) is denoted by \(\overline{\operatorname{BDiff}_{+}(\mathbb{S}^{2n-1})}\). Now by coning the sphere to a disk and then restricting it to the interior of the disk, we obtain the following maps \[\operatorname{BDiff}_{+}(\mathbb{S}^{m-1})\to\operatorname{BHomeo}_{+}(\mathbb{D}^{m})\to\operatorname{BHomeo}_{+}(\mathbb{R}^{m}).\] There are topological Pontryagin classes for Euclidean fiber bundles that are defined rationally in \(H^{*}(\operatorname{BHomeo}_{+}(\mathbb{R}^{m});\mathbb{Q})\) (see [11]).
Galatius and Randal-Williams ([14]) proved a remarkable result that the map \[\mathbb{Q}[e,p_{1},p_{2},\dots]\to H^{*}(\operatorname{BHomeo}_{+}(\mathbb{R}^{2n});\mathbb{Q}),\] is injective for \(2n\geq 6\). And they also proved that the map induced by pulling back these classes to \(H^{*}(\operatorname{BDiff}_{+}(\mathbb{S}^{2n-1});\mathbb{Q})\) \[\mathbb{Q}[e,p_{1},p_{2},\dots]\to H^{*}(\operatorname{BDiff}_{+}(\mathbb{S}^{2n-1});\mathbb{Q}),\] is injective for \(2n-1\geq 9\). But for _flat_ odd-dimensional sphere bundles, they proved that for \(2n-1\geq 5\), the map \[\mathbb{Q}[e,p_{1},p_{2},\dots]\to H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Z})\otimes\mathbb{Q},\] is injective. However, their method does not say if the image of the composition \[\mathbb{Q}[e,p_{1},p_{2},\dots]\to H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Z})\otimes\mathbb{Q}\to H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q}),\] is non-trivial.

_Remark 1.5_.: In fact, since the groups \(H_{*}(\mathrm{BDiff}_{+}^{\delta}(M);\mathbb{Z})\) are not in general finitely generated, the map \[H^{*}(\mathrm{BDiff}_{+}^{\delta}(M);\mathbb{Z})\otimes\mathbb{Q}\to H^{*}(\mathrm{BDiff}_{+}^{\delta}(M);\mathbb{Q}),\] could have a large kernel (see [13, Theorem 0.9] and [11, Theorem 8.1]).

For flat \(\mathbb{S}^{2n}\)-bundles, we shall see that the Bott vanishing theorem implies the following vanishing result.

**Theorem 1.6**.: _The monomials in Pontryagin classes \(p_{1},\dots,p_{n}\) whose degrees are larger than \(4n\) vanish in \(H^{*}(\mathrm{BDiff}_{+}^{\delta}(\mathbb{S}^{2n});\mathbb{Q})\)._

Our method to prove our non-vanishing result, Theorem 1.2, is motivated by the following question in foliation theory. Suppose for an \(n\)-dimensional manifold \(M\), we have a smooth fiber bundle \(M\to E_{0}\to B_{0}\) whose fiber is \(M\) and whose base is a closed compact manifold. We want to see whether we can change the fiber bundle "up to bordism" to put a flat structure on its total space, meaning to put a codimension \(n\) foliation on the total space that is transverse to the fibers. More precisely, we want to find a bordism \(W\) whose boundary \(\partial W\) is the disjoint union \(B_{0}\coprod B_{1}\) and a fiber bundle \(M\to E\to W\) over the bordism such that its restriction to \(B_{0}\) is given by \(E_{0}\) and its restriction to \(B_{1}\) is a foliated bundle. We use the equivariant version of Mather-Thurston's theory ([13, Section 1.2.2] and [13, Section 5.1]) to answer this question in the following two cases.

**Theorem 1.7**.: _Let \(G\) be a finite-dimensional connected Lie group. Any principal \(G\)-bundle over a closed manifold is cobordant via \(G\)-bundles to a foliated \(G\)-bundle (not necessarily a flat principal \(G\)-bundle)._

We shall see that for the case \(G=\mathrm{SU}_{2}\), which is diffeomorphic to \(\mathbb{S}^{3}\), this implies that the powers of the Euler class and the first Pontryagin class should be non-trivial in \(H^{*}(\mathrm{BDiff}_{+}^{\delta}(\mathrm{SU}_{2});\mathbb{Q})\).

**Theorem 1.8**.: _Suppose \(M\) is a manifold with a free torus \(T\) action. Let \(M\to E\xrightarrow{\rho}B\) be an \(M\)-bundle over a closed manifold \(B\) that is classified by a map \(B\to\mathrm{BT}\). Then this \(M\)-bundle is cobordant to a foliated \(M\)-bundle._

We shall see this theorem implies the non-vanishing results in Theorem 1.2.
_Remark 1.9_.: Since the techniques also work for volume-preserving diffeomorphisms, in Theorem 1.7 and Theorem 1.8, we can arrange the foliated bundle at the other end of the bordism to have volume-preserving holonomies.

In the appendix, Nils Prigge in particular proves that \[H^{*}(\mathrm{BSO}(4);\mathbb{R})\to H^{*}(\mathcal{L}_{\mathbb{S}^{3}},\mathrm{so}(4)),\] is injective, which corrects the claim by Haefliger (see Remark 1.4). In the course of the proof, he found new relations between characteristic classes in the Gelfand-Fuks cohomology \(H^{*}(\mathcal{L}_{\mathbb{S}^{3}},\mathrm{so}(4))\) which is isomorphic to the smooth group cohomology \(H^{*}_{\mathrm{sm}}(\mathrm{Diff}_{+}(\mathbb{S}^{3});\mathbb{R})\). Morita observed that these relations combined with our non-vanishing result give a new type of relations between secondary characteristic classes and characteristic classes in \(H^{*}(\mathrm{BDiff}_{+}^{\delta}(\mathbb{S}^{3});\mathbb{R})\). To describe such a relation, recall that for a foliation \(\mathcal{F}\) on \(M\) with a trivial normal bundle, there are secondary classes in \(H^{*}(M)\) coming from Gelfand-Fuks cohomology (see [10]). For the universal flat trivial \(\mathbb{S}^{3}\)-bundle \[\pi\colon\mathbb{S}^{3}\times\overline{\mathrm{BDiff}_{+}(\mathbb{S}^{3})}\to\overline{\mathrm{BDiff}_{+}(\mathbb{S}^{3})},\] we have a codimension \(3\) foliation on the total space and there is a characteristic class \(h_{2}c_{2}\) in \(H^{7}(\mathbb{S}^{3}\times\overline{\mathrm{BDiff}_{+}(\mathbb{S}^{3})};\mathbb{R})\) (see the appendix for the definition of \(h_{2}c_{2}\)). If we integrate \(h_{2}c_{2}\) along the fiber, we obtain a class \(\int_{\pi}h_{2}\cdot c_{2}\) in \(H^{4}(\overline{\operatorname{BDiff}_{+}(\mathbb{S}^{3})};\mathbb{R})\). There is a certain class \(\bar{x}_{2}\) in \(H^{4}_{\operatorname{sm}}(\operatorname{Diff}_{+}(\mathbb{S}^{3});\mathbb{R})\) whose image in \(H^{4}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{3});\mathbb{R})\) extends the class \(\int_{\pi}h_{2}\cdot c_{2}\) to a class in \(H^{4}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{3});\mathbb{R})\). We denote this extension of \(\int_{\pi}h_{2}\cdot c_{2}\) also by \(\bar{x}_{2}\). Morita observed that Prigge's calculation implies that \[p_{1}^{2}=e\cdot\bar{x}_{2},\] in \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{3});\mathbb{R})\). Since we showed that \(p_{1}^{2}\) is nontrivial, the class \(\bar{x}_{2}\) is also nontrivial. This relation is interesting because the secondary class \(h_{2}c_{2}\) is a continuously varying class (see [10, Remark 2.9] where the notation is \(y_{2}c_{2}\)). So \(\bar{x}_{2}\) is intrinsically a real-valued class but \(p_{1}\) and \(e\) are defined over rational numbers. We hope to pursue finding such relations for higher dimensional spheres.

### Acknowledgments

I am first and foremost indebted to Shigeyuki Morita for his questions, comments, and corrections that led me to write this paper. I am grateful to Soren Galatius for his comments and suggestions to study the free torus action in the context of Mather-Thurston's theorem. I would like to thank Sander Kupers for sending me the reference [11]. The author was partially supported by NSF CAREER Grant DMS-2239106 and Simons Foundation Collaboration Grant (855209). NP would like to thank Sam Nariman for offering to write this appendix and asking about this very interesting question. This research was supported by the Knut and Alice Wallenberg Foundation through grant no. 2019.0519.
## 2. Equivariant Mather-Thurston's theorem

In this section, we recall from [14, Section 1.2.2] and [14, Section 5.1] the equivariant version of Mather-Thurston's theorem as the main tool in this paper. Mather-Thurston's theorem ([15, 16]) is an h-principle theorem in foliation theory that relates the homotopy type of the Haefliger classifying space to the group homology of diffeomorphism groups. Since we are interested in orientation-preserving diffeomorphisms in this paper, we recall Mather-Thurston's theorem in this context. Let \(M\) be an orientable smooth manifold and let \(\operatorname{Diff}_{+}^{r}(M)\) denote the group of \(C^{r}\) orientation-preserving diffeomorphisms of \(M\) with the \(C^{r}\)-Whitney topology. If we drop the regularity \(r\), we mean smooth diffeomorphisms. We decorate it with superscript \(\delta\) if we consider the same group with the discrete topology. The identity homomorphism \(\operatorname{Diff}_{+}^{r}(M)^{\delta}\to\operatorname{Diff}_{+}^{r}(M)\) induces a map \[\eta:\operatorname{BDiff}_{+}^{r}(M)^{\delta}\to\operatorname{BDiff}_{+}^{r}(M). \tag{2.1}\] Thurston in fact studied \(\overline{\operatorname{BDiff}_{+}^{r}(M)}\) which is the homotopy fiber of the map \(\eta\). This space classifies foliated trivial \(M\)-bundles. Consider a semi-simplicial model \(\overline{\operatorname{BDiff}_{+}^{r}(M)}_{\bullet}\) where the set of \(k\)-simplices is given by the set of foliations on the trivial bundle \(\Delta^{k}\times M\to\Delta^{k}\) that are transverse to the fibers and whose holonomies lie in \(\operatorname{Diff}_{+}^{r}(M)\). The (fat) realization \(\|\overline{\operatorname{BDiff}_{+}^{r}(M)}_{\bullet}\|\) is a model for \(\overline{\operatorname{BDiff}_{+}^{r}(M)}\). Note that the simplicial group \(\operatorname{Sing}_{\bullet}(\operatorname{Diff}_{+}^{r}(M))\), which is the singular complex of the topological group \(\operatorname{Diff}_{+}^{r}(M)\), acts levelwise on \(\overline{\operatorname{BDiff}_{+}^{r}(M)}_{\bullet}\). Milnor's theorem ([15]) implies that the group \(\|\operatorname{Sing}_{\bullet}(\operatorname{Diff}_{+}^{r}(M))\|\) is homotopy equivalent to \(\operatorname{Diff}_{+}^{r}(M)\) and given that \(\overline{\operatorname{BDiff}_{+}^{r}(M)}\) is the homotopy fiber of \(\eta\), the homotopy quotient \[\|\overline{\operatorname{BDiff}_{+}^{r}(M)}_{\bullet}\|/\!\!/\|\operatorname{Sing}_{\bullet}(\operatorname{Diff}_{+}^{r}(M))\| \tag{2.2}\] is weakly equivalent to \(\operatorname{BDiff}_{+}^{r}(M)^{\delta}\). The space \(\overline{\operatorname{BDiff}_{+}^{r}(M)}\) is the geometric part of Mather-Thurston's theorem. To recall the part that is more amenable to homotopy theoretic techniques, let \(\Gamma_{n}^{r}\) denote the topological groupoid whose space of objects is \(\mathbb{R}^{n}\) and space of morphisms is given by germs of \(C^{r}\) orientation-preserving diffeomorphisms between two points in \(\mathbb{R}^{n}\) with a sheaf topology (see [10]). There is a natural map \[\nu\colon\mathrm{B}\Gamma_{n}^{r}\to\operatorname{BGL}_{n}^{+}(\mathbb{R}),\] induced by the derivative of germs. Let \(\overline{\mathrm{B}\Gamma_{n}^{r}}\) denote the homotopy fiber of the map \(\nu\). Let \(\tau_{M}\colon M\to\operatorname{BGL}_{n}^{+}(\mathbb{R})\) be the map that classifies the tangent bundle and \(\tau_{M}^{*}(\nu)\) be the bundle over \(M\) induced by the pullback of the map \(\nu\) via \(\tau_{M}\). Let \(\operatorname{Sect}(\tau_{M}^{*}(\nu))\) be the space of sections of \(\tau_{M}^{*}(\nu)\).
One can find a model for \(\operatorname{Sect}(\tau_{M}^{*}(\nu))\) on which \(\operatorname{Diff}_{+}^{r}(M)\) acts as follows. A \(\nu\)-tangential-structure on the \(n\)-dimensional manifold \(M\) is a bundle map \(\operatorname{T}M\to\nu^{*}\gamma_{n}\) where \(\gamma_{n}\) is the tautological vector bundle over \(\operatorname{BGL}_{n}^{+}(\mathbb{R})\). We denote the space of \(\nu\)-tangential-structures on \(M\) by \(\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n})\) and equip it with the compact-open topology. Note that \(\operatorname{Diff}_{+}^{r}(M)\) acts on \(\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n})\) by precomposing with the differential of diffeomorphisms. In [11, Section 1.2.2], we showed the following version of Mather-Thurston's theorem.

**Theorem 2.3** (Equivariant Mather-Thurston).: _There is a semi-simplicial map_ \[\overline{\operatorname{BDiff}_{+}^{r}(M)}_{\bullet}\to\operatorname{Sing}_{\bullet}(\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n})),\] _which is equivariant with respect to the \(\operatorname{Sing}_{\bullet}(\operatorname{Diff}_{+}^{r}(M))\)-action and it induces an acyclic map between fat realizations._

_Remark 2.4_.: Mather and Thurston ([12]) first proved that this map is a homology isomorphism for compact manifolds and for compactly supported diffeomorphisms of open manifolds. Later McDuff proved ([13]) the non-compactly supported case for open manifolds.

By Milnor's theorem ([12]) and the fact that the fat realization and geometric realization of simplicial sets are weakly equivalent ([1, Section 1.2]), the natural map \(\|\operatorname{Sing}_{\bullet}(\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n}))\|\to\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n})\) is a weak equivalence. So after realization, we obtain a map \[\overline{\operatorname{BDiff}_{+}^{r}(M)}\to\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n}),\] that induces an acyclic map and is equivariant with respect to the map \(\|\operatorname{Sing}_{\bullet}(\operatorname{Diff}_{+}^{r}(M))\|\xrightarrow{\simeq}\operatorname{Diff}_{+}^{r}(M)\).

**Corollary 2.5**.: _The map \(\eta\) in 2.1 factors as follows_ \[\operatorname{BDiff}_{+}^{r}(M)^{\delta}\xrightarrow{\beta}\operatorname{Bun}(\operatorname{T}M,\nu^{*}\gamma_{n})/\operatorname{Diff}_{+}^{r}(M)\to\operatorname{BDiff}_{+}^{r}(M),\] _where the map \(\beta\) is an acyclic map._

McDuff also proved the volume-preserving case of Mather-Thurston's theorem ([13, 14, 15, 16]). Suppose that \(M\) is a compact manifold with a fixed volume form. Let \(\Gamma_{n}^{\operatorname{vol}}\) denote the topological groupoid whose space of objects is \(\mathbb{R}^{n}\) and space of morphisms is given by germs of volume-preserving diffeomorphisms of \(\mathbb{R}^{n}\). Now let \(\theta\colon\mathrm{B}\Gamma_{n}^{\operatorname{vol}}\to\operatorname{BSL}_{n}(\mathbb{R})\) be the natural map and also let \(\gamma_{n}\) be the tautological vector bundle over \(\operatorname{BSL}_{n}(\mathbb{R})\). Similarly we can define \(\overline{\operatorname{BDiff}_{\operatorname{vol}}(M)}\) and \(\operatorname{Bun}(\operatorname{T}M,\theta^{*}\gamma_{n})\).
Then the equivariant version of McDuff's theorem ([11, Section 2.2]) says that there is a semi-simplicial map \[\overline{\operatorname{BDiff}_{\operatorname{vol}}(M)}_{\bullet}\to\operatorname{Sing}_{\bullet}(\operatorname{Bun}(\operatorname{T}M,\theta^{*}\gamma_{n})),\] which is equivariant with respect to the \(\operatorname{Sing}_{\bullet}(\operatorname{Diff}_{\operatorname{vol}}^{r}(M))\)-action and induces an acyclic map from the fat realization to _the connected component_ that it hits.

## 3. Making a bundle flat up to bordism

In this section, we prove the main theorems 1.7 and 1.8 and then we show how Theorem 1.8 implies Theorem 1.2.

Proof of Theorem 1.7.: We first translate the problem into finding a "homological" section as follows. A principal \(G\)-bundle is classified by a map to the classifying space \(\operatorname{B}G\). Forgetting the principal structure and just considering it as a smooth \(G\)-fibration induces a map \[\alpha\colon\operatorname{B}G\to\operatorname{BDiff}_{+}(G).\] Here we assume that the map \(\alpha\) is induced by the inclusion \(G\to\operatorname{Diff}_{+}(G)\) that is given by the _left_ action of \(G\) on itself. To obtain a foliated \(G\)-bundle, we need to lift the map \(\alpha\) to \(\operatorname{BDiff}_{+}^{\delta}(G)\). This is not always possible but we show that it is possible to lift \(\alpha\) to the space \(\operatorname{Bun}(\operatorname{TG},\nu^{*}\gamma_{n})/\operatorname{Diff}_{+}(G)\) (3.1). Then, Corollary 2.5 would imply that we have a commutative diagram between oriented bordism groups which in turn implies the bordism statement in Theorem 1.7. Suppose that \(G\) is \(n\)-dimensional. Recall that the tangent bundle \(\operatorname{TG}\) is trivial. So \(\operatorname{Bun}(\operatorname{TG},\nu^{*}\gamma_{n})\) is homotopy equivalent to the space of maps \(\operatorname{Map}(G,\overline{\mathrm{B}\Gamma_{n}})\). This homotopy equivalence, however, is not \(\operatorname{Diff}_{+}(G)\)-equivariant. But note that the trivialization \(G\times\mathbb{R}^{n}\to TG\) is \(G\)-equivariant with respect to the natural action of \(G\) from the left on \(G\), on \(\operatorname{TG}\) by acting as a subgroup of \(\operatorname{Diff}(G)\) and the trivial action on \(\mathbb{R}^{n}\). Recall the trivialization sends \((g,v)\) to \((g,Dg(v))\) where \(Dg\) is the differential of the left action by \(g\) and it is easy to see that this map is \(G\)-equivariant. For example, consider \((h,w)\) in \(\operatorname{TG}\) and \(g\) in \(G\). Then \(g.(h,w)=(gh,Dg(w))\). Note that \((h,w)\) comes from \((h,D(h^{-1})(w))\) under the isomorphism \(G\times\mathbb{R}^{n}\to TG\). When \(g\) acts on \((h,D(h^{-1})(w))\) as an element in \(G\times\mathbb{R}^{n}\), we obtain \((gh,D(h^{-1})(w))\) which maps to \((gh,Dg(w))\) via the isomorphism \(G\times\mathbb{R}^{n}\to TG\). Hence, this isomorphism is \(G\)-equivariant. So the natural map \[f\colon\operatorname{Map}(G,\overline{\mathrm{B}\Gamma_{n}})\to\operatorname{Bun}(\operatorname{TG},\nu^{*}\gamma_{n})\] is equivariant with respect to the \(G\)-actions on both sides. Hence, we have a commuting diagram (3.2). Note that the action of \(G\) on the mapping space has fixed points (constant maps) so the left map to \(\mathrm{BG}\) has a section. Therefore, the map \(\alpha\) can be lifted to \(\mathrm{Bun}(\mathrm{TG},\nu^{*}\gamma_{n})/\mathrm{Diff}_{+}(G)\).

Already Theorem 1.7 implies Theorem 1.2 for \(\mathbb{S}^{3}\) as follows.
Note that \(G=\mathbb{S}^{3}\) is a Lie group and also by Hatcher's theorem \(\mathrm{BDiff}_{+}(\mathbb{S}^{3})\simeq\mathrm{BSO}(4)\). So the action of \(\mathbb{S}^{3}\) on itself from the left induces the map \(\alpha\) that on cohomology gives \[\alpha^{*}\colon H^{*}(\mathrm{BDiff}_{+}(\mathbb{S}^{3});\mathbb{Q})\cong\mathbb{Q}[e,p_{1}]\to H^{*}(\mathrm{BS}^{3};\mathbb{Q})\cong\mathbb{Q}[c_{2}],\] where \(e,p_{1},c_{2}\) are the universal Euler class, the first Pontryagin class, and the second Chern class respectively. So we have \(\alpha^{*}(e)=c_{2}\) and \(\alpha^{*}(p_{1})=-2c_{2}\). Given the diagram 3.1, the map \(\alpha^{*}\) factors through the group \(H^{*}(\mathrm{BDiff}_{0}^{\delta}(\mathbb{S}^{3});\mathbb{Q})\); hence the powers of the Euler class and the first Pontryagin class map non-trivially to \(H^{*}(\mathrm{BDiff}_{0}^{\delta}(\mathbb{S}^{3});\mathbb{Q})\).

Now using the same idea, we are ready to prove Theorem 1.8.

Proof of Theorem 1.8.: Suppose the dimension of \(M\) is \(n\). For the free torus \(T=(S^{1})^{r}\) action on \(M\), let \(Q\) be the quotient \(M/T\) and \(\pi\) be the natural map \(\pi\colon M\to Q\). Note that the vertical tangent bundle \(T_{\pi}M\) is trivial by differentiating the torus action. So it is isomorphic to \(M\times\mathfrak{t}\) where \(\mathfrak{t}\) is the Lie algebra of \(T\). And there is a natural isomorphism between \(TM\) and \(\pi^{*}(TQ)\oplus\epsilon^{r}\) where \(\epsilon\) is the trivial line bundle over \(M\) and this isomorphism is \(T\)-equivariant with respect to the trivial action on the vectors in the bundle \(\pi^{*}(TQ)\oplus\epsilon^{r}\). In general, when we have a free \(G\) action on \(M\), the vertical tangent bundle \(T_{\pi}M\) of the quotient map is isomorphic to the trivial bundle \(M\times\mathfrak{g}\) where \(\mathfrak{g}\) is the Lie algebra of \(G\). And this isomorphism is \(G\)-equivariant with respect to the adjoint action on \(\mathfrak{g}\). Therefore, for the free torus action, we obtain a \(T\)-equivariant isomorphism between \(TM\) and \(\pi^{*}(TQ)\oplus\epsilon^{r}\). Similar to the proof of Theorem 1.7, it is enough to show that the torus action on the space of bundle maps \(\mathrm{Bun}(TM,\nu^{*}\gamma_{n})\) has fixed points. We think of bundle maps as the space of lifts of the classifying map \(\tau_{M}\colon M\to\mathrm{BGL}_{n}^{+}(\mathbb{R})\) of the tangent bundle to \(\mathrm{B}\Gamma_{n}\). Note that the classifying map \(\tau_{M}\) factors as \(M\xrightarrow{\pi}Q\to\mathrm{BGL}_{n}^{+}(\mathbb{R})\). On the other hand, the map \(\nu\colon\mathrm{B}\Gamma_{n}\to\mathrm{BGL}_{n}^{+}(\mathbb{R})\) is at least \((n+2)\)-connected ([14, theorem 2]) and the dimension of \(Q\) is less than \(n\). Therefore, by obstruction theory, the map \(Q\to\mathrm{BGL}_{n}^{+}(\mathbb{R})\) lifts to \(\mathrm{B}\Gamma_{n}\). Hence, the composition \[M\to Q\to\mathrm{B}\Gamma_{n},\] gives a bundle map in \(\mathrm{Bun}(TM,\nu^{*}\gamma_{n})\). But since this map factors through \(Q\), it is fixed under the \(T\)-action.

_Remark 3.2_.: In the case of volume-preserving diffeomorphisms, it is a consequence of Moser's trick ([1, Corollary 1.5.4]) that the inclusion \(\mathrm{Diff}_{\mathrm{vol}}(M)\hookrightarrow\mathrm{Diff}_{+}(M)\) is a homotopy equivalence. So the induced map on classifying spaces \(\mathrm{BDiff}_{\mathrm{vol}}(M)\to\mathrm{BDiff}_{+}(M)\) is also a homotopy equivalence.
Given McDuff's version of Mather-Thurston's theorem for volume-preserving diffeomorphisms, we can do the above argument to obtain foliations whose holonomies preserve the volume form.

Let us now use Theorem 1.8 to prove Theorem 1.2.

Proof of Theorem 1.2.: Consider the sphere \(\mathbb{S}^{2n-1}\) as the quotient \(\mathrm{U}(n)/\mathrm{U}(n-1)\). We have a standard free \(\mathrm{U}(1)\)-action on \(\mathbb{S}^{2n-1}\) as follows \[\mathrm{U}(1)\xrightarrow{\Delta}\mathrm{U}(1)^{n}\to\mathrm{U}(n)\to\mathrm{SO}(2n)\to\mathrm{Diff}_{+}(\mathbb{S}^{2n-1}).\] Recall that the Euler class and Pontryagin classes for \(\mathbb{S}^{2n-1}\)-bundles are defined by taking the infinite cone of each fiber to obtain a topological \(\mathbb{R}^{2n}\)-bundle. So we have the following maps between classifying spaces \[\operatorname{BU}(1)\xrightarrow{\Delta}\operatorname{BU}(1)^{n}\to\operatorname{BSO}(2n)\to\operatorname{BDiff}_{+}(\mathbb{S}^{2n-1})\to\operatorname{BHomeo}_{+}(\mathbb{R}^{2n}).\] Hence, the pull-backs of the monomials in the classes \(e,p_{i}\) in \(H^{*}(\operatorname{BHomeo}_{+}(\mathbb{R}^{2n});\mathbb{Q})\) for \(i\leq n-1\) to \(H^{*}(\operatorname{BU}(1);\mathbb{Q})\) are non-trivial multiples of powers of the first Chern class \(c_{1}\in H^{2}(\operatorname{BU}(1);\mathbb{Q})\). On the other hand, similar to the diagram 3.1, Theorem 1.8 gives the following homotopy commutative diagram (3.3). Therefore, we know that the map \[H^{*}(\operatorname{BDiff}_{+}(\mathbb{S}^{2n-1});\mathbb{Q})\to H^{*}(\operatorname{BU}(1);\mathbb{Q}),\] factors through \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q})\) which implies that all the powers of the classes \(e,p_{i}\) for \(i\leq n-1\) are non-trivial in \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q})\).

_Remark 3.4_.: This argument, in fact, proves a slightly more general result that all elements in \(\mathbb{Q}[e,p_{1},...,p_{n-1}]\) that are not in the kernel of the map \[H^{*}(\operatorname{BSO}(2n);\mathbb{Q})\to H^{*}(\operatorname{BU}(1);\mathbb{Q}),\] are non-trivial in \(H^{*}(\operatorname{BDiff}_{+}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q})\).

_Remark 3.5_.: As we mentioned, Morita observed that the classes \(e^{k}\) and monomials of Pontryagin classes are linearly independent when \(k\) is odd. It would be interesting to determine whether \[\mathbb{Q}[e,p_{1},\cdots,p_{n-1}]\to H^{*}(\operatorname{BDiff}_{0}^{\delta}(\mathbb{S}^{2n-1});\mathbb{Q}),\] is injective.

_Remark 3.6_.: One interesting example on which a torus does not act freely is the higher dimensional analog of surfaces \(\operatorname{W}_{g}^{n}=\#_{g}\mathbb{S}^{n}\times\mathbb{S}^{n}\) which is the connected sum of \(g\) copies of \(\mathbb{S}^{n}\times\mathbb{S}^{n}\). Galatius, Grigoriev, and Randal-Williams proved in [1, Theorem 4.1 (ii)] that there exists an \(\operatorname{SO}(n)\times\operatorname{SO}(n)\)-action on \(\operatorname{W}_{g}^{n}\). They used this action to detect the non-vanishing of certain MMM-classes \(\kappa_{ep_{i}}\) for all \(i\in\{1,2,\ldots,n-1\}\) and \(g>1\). As we observed in [1, Theorem 6.3], one can use this action to show that the powers of the classes \(\kappa_{ep_{i}}\) for all \(i\in\{1,2,\ldots,n-1\}\) and \(g>1\) are non-trivial in \(H^{*}(\operatorname{BDiff}^{\delta}(\operatorname{W}_{g}^{n});\mathbb{Z})\). It would be interesting to see if the classes \(\kappa_{ep_{i}}\)'s are also non-trivial in \(H^{*}(\operatorname{BDiff}^{\delta}(\operatorname{W}_{g}^{n});\mathbb{Q})\).
For the case of \(n=1\) where \(\operatorname{W}_{g}^{1}\) is a closed genus \(g\) surface, the MMM-classes \(\kappa_{e^{i+1}}\) are denoted by \(\kappa_{i}\). Kotschick and Morita ([13]) showed that \(\kappa_{1}^{k}\) is non-trivial in \(H^{2k}(\operatorname{BDiff}^{\delta}(\operatorname{W}_{g}^{1});\mathbb{Q})\) for all positive integers \(k\) provided that \(g\geq 3k\). The class \(\kappa_{2}\) is not known to be non-trivial for flat surface bundles and by Bott vanishing ([13, Theorem 8.1]) the classes \(\kappa_{i}\) for \(i>2\) vanish in \(H^{*}(\operatorname{BDiff}^{\delta}(\operatorname{W}_{g}^{1});\mathbb{Q})\).

## 4. Even dimensional spheres and the Bott vanishing theorem

Here we use the Bott vanishing theorem to prove Theorem 1.6. First, let us recall the Bott vanishing theorem [1]. Let \(\mathcal{F}\) be a foliation on a closed manifold \(E\) of codimension \(q\). Then we have \[\operatorname{Pont}^{>2q}(\nu\mathcal{F})=0\] where \(\operatorname{Pont}^{>2q}(\nu\mathcal{F})\) is the ring generated by monomials of Pontryagin classes of the normal bundle of \(\mathcal{F}\) of degree larger than \(2q\). Recall from the introduction that to any smooth oriented sphere bundle \(\mathbb{S}^{2n}\to E\xrightarrow{\pi}B\), we assign Pontryagin classes \(p_{i}(\pi)\in H^{*}(B;\mathbb{Q})\) by taking the infinite cone of the fibers and considering the associated topological Euclidean fiber bundle. In order to prove the vanishing theorem 1.6 for a flat smooth oriented \(\mathbb{S}^{2n}\)-bundle \(E\xrightarrow{\pi}B\), we need to relate these Pontryagin classes in \(H^{*}(B;\mathbb{Q})\) to the Pontryagin classes of the normal bundle of the foliation on the total space in \(H^{*}(E;\mathbb{Q})\).

**Lemma 4.1**.: _Let \(\pi\colon E\to B\) be a smooth oriented \(\mathbb{S}^{m}\)-bundle and let \(c(\pi)\colon C(E)\to B\) be a topological \(\mathbb{R}^{m+1}\)-bundle obtained by taking infinite cones on each fiber. Let \(\operatorname{T}_{\pi}E\) be the vertical tangent bundle of the fiber bundle \(\pi\) and let \(\epsilon\) be the trivial \(\mathbb{R}\)-bundle over \(E\). Then we have an isomorphism of topological bundles_ \[\operatorname{T}_{\pi}E\oplus\epsilon\cong\pi^{*}(C(E)),\] _as topological \(\mathbb{R}^{m+1}\)-bundles._

Proof.: In the proof of [14, Corollary 1.4], Igusa showed that if \(p\colon L\to X\) is a smooth oriented sphere bundle over a compact manifold \(X\) and \(s\colon X\to L\) is a section, then \(C(L)\), the infinite cone of \(L\), is isomorphic to \(s^{*}(\operatorname{T}_{p}L)\oplus\epsilon\) as topological Euclidean space fiber bundles. Note that \(q\colon\pi^{*}(E)\to E\) has a canonical section that we denote by \(s\). And also \(\pi^{*}(C(E))\) is isomorphic to \(C(\pi^{*}(E))\). So by Igusa's theorem, the fiber bundle \(\pi^{*}(C(E))\) is isomorphic to \(s^{*}(\operatorname{T}_{q}\pi^{*}(E))\oplus\epsilon\). But \(s^{*}(\operatorname{T}_{q}\pi^{*}(E))\) is isomorphic to \(\operatorname{T}_{\pi}E\). Hence, his theorem implies that \[\operatorname{T}_{\pi}E\oplus\epsilon\cong\pi^{*}(C(E)),\] as topological \(\mathbb{R}^{m+1}\)-bundles.

Now suppose that \(\pi\colon E\to B\) is a smooth oriented flat \(\mathbb{S}^{2n}\)-bundle. The vertical tangent bundle \(\operatorname{T}_{\pi}E\) is the normal bundle of the foliation on \(E\). By definition, the Pontryagin classes \(p_{i}(\pi)\) are defined to be the Pontryagin classes of the infinite cone bundle \(C(E)\).
On the other hand, by Lemma 4.1, we have \[p_{i}(\operatorname{T}_{\pi}E)=\pi^{*}(p_{i}(C(E))).\] So for any monomial \(P\in\operatorname{Pont}^{*}(\operatorname{T}_{\pi}E)\) we have \(P(\operatorname{T}_{\pi}E)=\pi^{*}(P(\pi))\). If we fiber integrate \(e(\operatorname{T}_{\pi}E)\cdot P(\operatorname{T}_{\pi}E)\), which is the MMM-class \(\kappa_{eP}\), we obtain \[\pi_{!}(e(\operatorname{T}_{\pi}E)\cdot P(\operatorname{T}_{\pi}E))=\pi_{!}(e(\operatorname{T}_{\pi}E)\cdot\pi^{*}(P(\pi)))=2\cdot P(\pi),\] since \(\pi_{!}(e(\operatorname{T}_{\pi}E))=\chi(\mathbb{S}^{2n})=2\). By the Bott vanishing theorem, we know that \(\operatorname{Pont}^{>4n}(\operatorname{T}_{\pi}E)=0\). Hence, \(P(\pi)=0\) if \(P\) is a monomial of Pontryagin classes \(p_{1},\dots,p_{n}\) whose degrees are larger than \(4n\).

## Appendix A Haefliger's model, by Nils Prigge

We use Haefliger's method to study the kernel of the map (A.1) \[H^{*}(\operatorname{BSO}(n+1);\mathbb{R})\to H^{*}(\mathcal{L}_{\mathbb{S}^{n}},\operatorname{so}(n+1))\] and correct a mistake in [10] in the process. Haefliger's main result [10, Thm 1\({}^{\prime}\)] identifies \(H^{*}(\mathcal{L}_{\mathbb{S}^{n}},\operatorname{so}(n+1))\) with the \(\operatorname{SO}(n+1)\)-equivariant cohomology of the following section space: Let \(F_{n}\) be the restriction of the canonical \(\mathrm{U}(n)\)-bundle over \(\mathrm{BU}(n)\) to the \(2n\)-skeleton (with respect to the CW decomposition of the complex Grassmannian by Schubert cells). Given a manifold \(M\) with an action of a compact connected Lie group \(G\), the associated bundle \(E_{M}:=\mathrm{Fr}^{+}(M)\times_{\mathrm{SO}(n)}F_{n}\) is a \(G\)-equivariant fiber bundle over \(M\) with fiber \(F_{n}\). Denote by \(\Gamma_{M}\) the space of sections of this bundle, which has a \(G\)-action given by conjugation. The complex of continuous and \(G\)-basic Chevalley-Eilenberg cochains \(\mathcal{C}_{CE}(\mathcal{L}_{M},\mathfrak{g})\) is a cdga model for \(\Gamma_{M}/\!\!/G\) by [10, Thm 1\({}^{\prime}\)] (see loc. cit. for the definitions). Haefliger then uses tools from rational homotopy theory to determine a simple model for \(\Gamma_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\) which allows for a computation of the kernel of (A.1). We give an alternative proof that confirms his computation for \(n\) even but corrects a mistake for \(n\) odd which contradicts Theorem 1.2.

**Theorem A.2**.: _For even \(n\) the kernel of (A.1) consists of all polynomials in the Pontryagin classes \(p_{1},\ldots,p_{n/2}\) of degree \(>2n\). For \(n=3\) the map (A.1) is injective._

We determine a model for \(\Gamma_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\) from relative Sullivan models of \[E_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\to\mathbb{S}^{n}/\!\!/\mathrm{SO}(n+1)\] and a model of the evaluation map \[\mathrm{ev}:(\Gamma_{\mathbb{S}^{n}}\times\mathbb{S}^{n})/\!\!/\mathrm{SO}(n+1)\to E_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1).\] Let \(N\in\mathbb{S}^{n}\) denote the north pole with isotropy group \(\mathrm{SO}(n)\subset\mathrm{SO}(n+1)\) and identify \(F_{n}\) with the fibre over \(N\). Then the inclusion \(F_{n}\hookrightarrow E_{\mathbb{S}^{n}}\) is equivariant with respect to the standard inclusion \(\mathrm{SO}(n)\subset\mathrm{SO}(n+1)\) and thus induces a map \(F_{n}/\!\!/\mathrm{SO}(n)\stackrel{\simeq}{\to}E_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\) which is a homotopy equivalence as \(\mathbb{S}^{n}/\!\!/\mathrm{SO}(n+1)\simeq\mathrm{BSO}(n)\). Hence, it is sufficient to determine a relative Sullivan model for \(F_{n}/\!\!/\mathrm{SO}(n)\).
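As a reading aid for the computations that follow (these degree conventions are standard but not spelled out in the text): the generators of \(WU_{n}\) below have degrees \[|c_{i}|=2i,\qquad|h_{i}|=2i-1,\] so that \(d(h_{i})=c_{i}\) raises degree by \(1\), and in \(B_{n+1}=H^{*}(\mathrm{BSO}(n+1);\mathbb{Q})\) one has \(|p_{i}|=4i\) and \(|e|=n+1\). For instance, for \(n=3\) the lowest-degree cocycles of \(WU_{3}\), such as \(h_{3}c_{1}\) with \(|h_{3}c_{1}|=5+2=7\), sit in degree \(7=2n+1\), matching the \(2n\)-connectivity of \(F_{3}\).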
A cdga model of \(F_{n}\) is given by \[WU_{n}:=\left(\frac{\mathbb{Q}[c_{1},\ldots,c_{n}]}{(f,|f|>2n)}\otimes\Lambda(h_{1},\ldots,h_{n}),d(h_{i})=c_{i}\right)\] by [10] which has a trivial product structure by [11]. Hence, \(F_{n}\) has the rational homotopy type of a bouquet of spheres and we denote by \(L_{n}\) a minimal dg Lie model. For example, a straightforward computation of \(\overline{H}\left(WU_{3}\right)\) proves that \[F_{3}\simeq_{\mathbb{Q}}\bigvee^{4}S^{7}\vee S^{9}\vee\bigvee^{3}S^{10}\vee S^{11}\vee\bigvee^{4}S^{12}\vee S^{14}\vee\bigvee^{3}S^{15}\] so that \(L_{3}=(\mathbb{L}(y_{1},\ldots,y_{17}),d_{L}=0)\). We denote the Chevalley-Eilenberg complex by \(\mathcal{C}_{CE}(L_{n})=(\Lambda z_{i},d)\) where the generators \(z_{i}\) correspond to a basis of \((sL_{n})^{\vee}\). For \(n=3\), we denote the generators of low degrees corresponding to a dual basis of \(sL_{3}^{1}\) by \(x_{1},\ldots,x_{17}\) and by \(x_{i,j}\) the generators corresponding to a basis of \(sL_{3}^{2}\), where \(L_{n}^{k}\subset L_{n}\) denotes the grading by bracket length, so that (A.3) \[\mathcal{C}_{CE}(L_{3})=(\Lambda z_{i},d)=(\Lambda(x_{1},\ldots,x_{17},x_{i,j},\ldots),d).\] In the following, we compute a relative cdga model for \(F_{n}/\!\!/\mathrm{SO}(n)\). We denote \(B_{n}:=H^{*}(\mathrm{BSO}(n);\mathbb{Q})\) and let \[A_{n}:=\begin{cases}B_{n+1}[e]/(e^{2}-p_{n/2})&\text{if }n\equiv 0\,(2)\\ (B_{n+1}\otimes\Lambda(s),d(s)=e)&\text{if }n\equiv 1\,(2)\end{cases}\] be a model for \(\mathbb{S}^{n}/\!\!/\mathrm{SO}(n+1)\simeq\mathrm{BSO}(n)\to\mathrm{BSO}(n+1)\) over \(B_{n+1}\).

**Lemma A.4**.: _A relative cdga model of \(F_{n}/\!\!/\mathrm{SO}(n)\) is given by_ \[\left(B_{n}\otimes WU_{n},\tilde{d}(h_{i})=c_{i}-(-1)^{i/2}p_{i/2}\right).\]

Proof.: By definition, \(\mathrm{SO}(n)\) acts freely on \(F_{n}\) and hence \(F_{n}/\!\!/\mathrm{SO}(n)\simeq F_{n}/\mathrm{SO}(n)\). The projection \(F_{n}/\mathrm{SO}(n)\to F_{n}/\mathrm{U}(n)=\mathrm{sk}_{2n}\mathrm{BU}(n)\) is a \(\mathrm{U}(n)/\mathrm{SO}(n)\)-bundle and pulled back from \(\mathrm{U}(n)/\mathrm{SO}(n)\hookrightarrow\mathrm{BSO}(n)\to\mathrm{BU}(n)\) via the inclusion of the \(2n\)-skeleton. Rationally, both \(\mathrm{BSO}(n)\) and \(\mathrm{BU}(n)\) are products of Eilenberg-MacLane spaces whose minimal models coincide with their cohomology rings. Hence, a relative Sullivan model of \(i:\mathrm{BSO}(n)\to\mathrm{BU}(n)\) is determined from the induced map on cohomology and given by \(E_{n}:=(B_{n}\otimes H^{*}(\mathrm{BU}(n))\otimes\Lambda(h_{1},\ldots,h_{n}),d(h_{k})=c_{k}-(-1)^{k/2}p_{k/2})\) as \(i^{*}(c_{k})=(-1)^{k/2}p_{k/2}\) if \(k\) is even and zero if \(k\) is odd. A model for the pullback over the \(2n\)-skeleton, i.e. \(F_{n}/\mathrm{SO}(n)\), is determined by a cdga model of the inclusion \(\mathrm{sk}_{2n}\mathrm{BU}(n)\hookrightarrow\mathrm{BU}(n)\) via base change by [11, Prop. 15.8]. Since the inclusion is \((2n+1)\)-connected (as there are no cells of odd degree), a minimal model of \(\mathrm{sk}_{2n}\mathrm{BU}(n)\) has generators corresponding to the Chern classes and additional generators of degrees \(\geq 2n+1\) and therefore there is a quasi-isomorphism to \(H^{*}(\mathrm{sk}_{2n}\mathrm{BU}(n))\) by projection onto the Chern classes.
It follows that the inclusion of the \(2n\)-skeleton is formal and thus a model of \(F_{n}/\mathrm{SO}(n)\) is given by \[\frac{\mathbb{Q}[c_{1},\ldots,c_{n}]}{(f(c_{1},\ldots,c_{n}),\,|f|>2n)}\otimes_{H^{*}(\mathrm{BU}(n))}E_{n}\cong\left(B_{n}\otimes WU_{n},\bar{d}(h_{i})=c_{i}-(-1)^{i/2}p_{i/2}\right),\] which is isomorphic to the model that Haefliger gives in [10, Sect. 7].

**Corollary A.5**.: _The fibre bundle \(E_{\mathbb{S}^{n}}\to\mathbb{S}^{n}\) is fibrewise rationally equivalent to the trivial fibration \(\pi_{2}:F_{n}\times\mathbb{S}^{n}\to\mathbb{S}^{n}\)._

Proof.: By construction, \(E_{\mathbb{S}^{n}}\) is the pullback of \(F_{n}/\mathrm{SO}(n)\to\mathrm{BSO}(n)\) along the classifying map of the tangent bundle \(\tau_{M}:\mathbb{S}^{n}\to\mathrm{BSO}(n)\) which is a formal map. Hence, a relative Sullivan model of \(E_{\mathbb{S}^{n}}\to\mathbb{S}^{n}\) is given by \[H^{*}(\mathbb{S}^{n})\otimes_{B_{n}}\left(B_{n}\otimes WU_{n},\bar{d}(h_{i})=c_{i}-(-1)^{i/2}p_{i/2}\right)\] by Lemma A.4. This cdga is isomorphic to \(H^{*}(\mathbb{S}^{n})\otimes WU_{n}\) as the total Pontryagin class of \(\mathbb{S}^{n}\) is trivial, which proves the claim.

The computation of \(H^{*}(\Gamma_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1);\mathbb{Q})\) requires a relative Sullivan model of \(F_{n}/\!\!/\mathrm{SO}(n)\) which we only determine in low degrees for \(n=3\).

**Lemma A.6**.: _A relative Sullivan model of \(F_{3}/\!\!/\mathrm{SO}(3)\) of the form \((B_{3}\otimes\mathcal{C}_{CE}(L_{3}),D)\) extending the differential on \(\mathcal{C}_{CE}(L_{3})\) is given (in low degrees) by_ (A.7) \[\begin{array}{lll}D(x_{2})=-p_{1}^{2}&D(x_{6})=-p_{1}x_{1}&D(x_{7})=p_{1}x_{3}\\ D(x_{8})=p_{1}x_{4}&D(x_{10})=p_{1}x_{5}&D(x_{14})=p_{1}x_{9}\\ D(x_{15})=p_{1}x_{11}&D(x_{16})=p_{1}x_{12}&D(x_{17})=p_{1}x_{13}\\ D(x_{1,2})=x_{1}x_{2}+p_{1}x_{6}&&\end{array}\] _and \(D(x_{i})=0\) for \(i=1,3,4,5,9,11,12,13\). Moreover, denoting by \(\epsilon:\mathcal{C}_{CE}(L_{3})\to\mathbb{Q}\) the augmentation, one can choose \(D\) so that \(x_{2}\) is the only generator for which \(B_{3}\otimes\epsilon(D(z_{i}))\neq 0\)._

Proof.: There exists a relative Sullivan model \[\Phi:(B_{3}\otimes\mathcal{C}_{CE}(L_{3}),D)\xrightarrow{\simeq}(B_{3}\otimes WU_{3},\bar{d}(h_{i})=c_{i}-(-1)^{i/2}p_{i/2})\] that extends the differential of \(\mathcal{C}_{CE}(L_{3})\). Given a quasi-isomorphism \(\phi:\mathcal{C}_{CE}(L_{3})\to WU_{3}\), one can find \(\Phi\) and \(D\) inductively (with respect to the filtration on the indecomposables of \(\mathcal{C}_{CE}(L_{3})\) given by degree) as follows: For \(z_{i}\in\mathcal{C}_{CE}(L_{3})\), one can find \(a_{i}\in B_{3}\otimes WU_{3}\) and \(b_{i}\in B_{3}\otimes\Lambda(z_{j})_{|z_{j}|<|z_{i}|}\) so that \(\bar{d}(\phi(z_{i})+a_{i})=\Phi(d(z_{i})+b_{i})\) and then set \(\Phi(z_{i})=\phi(z_{i})+a_{i}\) and \(D(z_{i})=d(z_{i})+b_{i}\).
Using the notation \(\mathcal{C}_{CE}(L_{3})\) from (A.3), a possible choice for \(\phi\) is given by (A.8) \[\begin{split}\phi(x_{1})&=h_{3}c_{1}\qquad\phi(x_{2})=h_{2}c_{2}\qquad\phi(x_{3})=h_{1}c_{1}^{3}\\ \phi(x_{4})&=h_{1}c_{1}c_{2}\qquad\phi(x_{5})=h_{3}c_{2}\qquad\phi(x_{6})=h_{1}h_{3}c_{2}-c_{1}h_{2}h_{3}\\ \phi(x_{7})&=h_{1}h_{2}c_{1}^{3}\qquad\phi(x_{8})=h_{1}h_{2}c_{1}c_{2}\qquad\phi(x_{9})=h_{3}c_{3}\\ \phi(x_{10})&=h_{2}h_{3}c_{2}\qquad\phi(x_{11})=h_{1}h_{3}c_{1}^{3}\qquad\phi(x_{12})=h_{1}h_{3}c_{1}c_{2}\\ \phi(x_{13})&=h_{1}h_{3}c_{3}\qquad\phi(x_{14})=h_{2}h_{3}c_{3}\qquad\phi(x_{15})=h_{1}h_{2}h_{3}c_{1}^{3}\\ \phi(x_{16})&=h_{1}h_{2}h_{3}c_{1}c_{2}\quad\phi(x_{17})=h_{1}h_{2}h_{3}c_{3}\quad\phi(x_{1,2})=-h_{1}h_{2}h_{3}c_{2}\end{split}\] and where \(\phi\) vanishes on all other generators. We arrive at (A.7) carrying out the algorithm with this choice of \(\phi\) and we record only the differential as we do not need \(\Phi\) later on. For example, since \(\bar{d}(\phi(x_{2})-p_{1}h_{2})=p_{1}c_{2}-p_{1}(c_{2}+p_{1})=\Phi(-p_{1}^{2})\) we set \(\Phi(x_{2})=h_{2}c_{2}-p_{1}h_{2}\) and \(D(x_{2})=-p_{1}^{2}\). Lastly, if there is another generator with \(B_{3}\otimes\epsilon(D(z_{i}))=\lambda p_{1}^{k}\neq 0\in B_{3}\), then there is an algebra automorphism of \(B_{3}\otimes\mathcal{C}_{CE}(L_{3})\) defined by \(z_{i}\mapsto z_{i}+\lambda p_{1}^{k-2}x_{2}\) and which is the identity on the other generators, so that the differential obtained by conjugation satisfies \(B_{3}\otimes\epsilon(D(z_{i}))=0\). Hence, there exists a differential with the property that only \(B_{3}\otimes\epsilon(D(x_{2}))\neq 0\).

Before we give the proof of the main theorem, we need the following technical statement regarding relative Sullivan algebras.

**Lemma A.9**.: _Let \(\Psi_{1}:(B\otimes\Lambda V,D_{V})\to(B\otimes\Lambda W,D_{W})\) be a map of relative Sullivan algebras with \(B^{0}=\mathbb{Q}\). Suppose \(\psi_{1}:=\Psi_{1}\otimes_{B}\mathbb{Q}\) is homotopic to \(\psi_{2}:(\Lambda V,d_{V})\to(\Lambda W,d_{W})\), then \(\Psi_{1}\) is homotopic relative \(B\) to a map \(\Psi_{2}\) so that \(\Psi_{2}\otimes_{B}\mathbb{Q}=\psi_{2}\)._

Proof.: There exists a map \(h:\Lambda V\to\Lambda W\) of degree \(-1\) so that \(\psi_{2}-\psi_{1}=d_{W}h+hd_{V}\) by [11, Prop. 12.8]. Denote by \(h_{B}:B\otimes\Lambda V\to B\otimes\Lambda W\) its \(B\)-linear extension and define \(\Psi_{2}\) via its restriction by \(\Psi_{2}|_{V}=\Psi_{1}|_{V}+D_{W}h_{B}+h_{B}D_{V}\). Then \(\Psi_{2}\) is a map of \(B\)-algebras and \(\Psi_{2}\otimes_{B}\mathbb{Q}=\psi_{1}+d_{W}h+hd_{V}=\psi_{2}\). Moreover, define \(H:(B\otimes\Lambda V,D_{V})\to(B\otimes\Lambda W,D_{W})\otimes\Lambda(t,dt)\) by \[H(v)=\Psi_{1}(v)+(\Psi_{2}(v)-\Psi_{1}(v))t-(-1)^{|v|}h_{B}(v)dt,\] which is a chain map and a homotopy \(\Psi_{1}\sim\Psi_{2}\) relative \(B\).

As a last remark we observe that for a relative Sullivan algebra \((B\otimes\Lambda V,D)\), we have that \(D(B\otimes\Lambda^{\geq k}V)\subset B\otimes\Lambda^{\geq k-1}V\) and we can only decrease the product length in \(\Lambda V\) if there are generators \(v\in V\) with \(B\otimes\epsilon(D(v))\neq 0\in B\), where \(\epsilon:\Lambda V\to\mathbb{Q}\) denotes the augmentation. The natural map \(H(B)\to H(B\otimes\Lambda V,D)\) only has a kernel if there are such generators.
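Before turning to the proof of Theorem A.2, here is a consistency check on the basis (A.8) — a piece of degree bookkeeping we add for the reader, not part of the original argument. With \(|h_{1}|=1\), \(|h_{2}|=3\), \(|h_{3}|=5\) and \(|c_{i}|=2i\), the seventeen monomials \(\phi(x_{1}),\ldots,\phi(x_{17})\) sort into degrees \[\underbrace{7,7,7,7}_{x_{1},\ldots,x_{4}},\quad\underbrace{9}_{x_{5}},\quad\underbrace{10,10,10}_{x_{6},x_{7},x_{8}},\quad\underbrace{11}_{x_{9}},\quad\underbrace{12,12,12,12}_{x_{10},\ldots,x_{13}},\quad\underbrace{14}_{x_{14}},\quad\underbrace{15,15,15}_{x_{15},x_{16},x_{17}},\] matching the multiplicities \(4,1,3,1,4,1,3\) of the sphere summands \(S^{7},S^{9},S^{10},S^{11},S^{12},S^{14},S^{15}\) in the rational wedge decomposition of \(F_{3}\) above; for instance, \(|\phi(x_{16})|=|h_{1}h_{2}h_{3}c_{1}c_{2}|=1+3+5+2+4=15\).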
Proof of Theorem A.2.: By Corollary A.5, \(\Gamma_{\mathbb{S}^{n}}\simeq_{\mathbb{Q}}\operatorname{Map}(\mathbb{S}^{n},F_{n})\) which is \(n\)-connected and has a Lie model \(H^{*}(\mathbb{S}^{n})\otimes L_{n}\) by [12, Thm 1.5]. In the following, we denote \(H^{*}(\mathbb{S}^{n})=\Lambda(s)/(s^{2})\). There exists a relative Sullivan model for \(\Gamma_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\) of the form \((B_{n+1}\otimes\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{n})\otimes L_{n}),\overline{D})\) that extends the differential of \(\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{n})\otimes L_{n})\). The evaluation map \((\Gamma_{\mathbb{S}^{n}}\times\mathbb{S}^{n})/\!\!/\mathrm{SO}(n+1)\to E_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\) is over \(\mathbb{S}^{n}/\!\!/\mathrm{SO}(n+1)\) and hence modeled by a map (A.10) \[\Psi:(A_{n}\otimes\mathcal{C}_{CE}(L_{n}),D)\longrightarrow A_{n}\otimes_{B_{n+1}}(B_{n+1}\otimes\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{n})\otimes L_{n}),\overline{D})\] over \(A_{n}\), where

1. \(A_{n}\) is the relative Sullivan model of \(\mathbb{S}^{n}/\!\!/\mathrm{SO}(n+1)\) over \(B_{n+1}\) defined above;
2. \((A_{n}\otimes\mathcal{C}_{CE}(L_{n}),D)\) is a relative Sullivan model for \(E_{\mathbb{S}^{n}}/\!\!/\mathrm{SO}(n+1)\simeq F_{n}/\!\!/\mathrm{SO}(n)\) from Lemma A.4 (and an analogue of Corollary A.5 for general \(n\)) by base change along \(B_{n}\stackrel{\simeq}{\to}A_{n}\).

Since \(E_{\mathbb{S}^{n}}\) is rationally equivalent to a trivial fibration, the evaluation map is equivalent over \(\operatorname{BSO}(n+1)\) to \(\operatorname{ev}\times\pi_{2}:\operatorname{Map}(\mathbb{S}^{n},F_{n})\times\mathbb{S}^{n}\to F_{n}\times\mathbb{S}^{n}\). If we denote the generators of \(\mathcal{C}_{CE}(H(\mathbb{S}^{n})\otimes L_{n})\) by \(\{z_{i},\tilde{z}_{i}\}\) where \(|\tilde{z}_{i}|=|z_{i}|-n\), then a model of the evaluation map of mapping spaces is given by (A.11) \[\psi:\mathcal{C}_{CE}(L_{n})\to\mathcal{C}_{CE}(H(\mathbb{S}^{n})\otimes L_{n})\otimes H(\mathbb{S}^{n}),\quad z_{i}\longmapsto z_{i}\otimes 1+\tilde{z}_{i}\otimes s\] by [2, Thm 3.11]. Hence, we can assume by Lemma A.9 for \(n\) odd that (A.12) \[\Psi(z_{i})=1\otimes z_{i}+s\otimes\tilde{z}_{i}+1\otimes a_{i}+s\otimes b_{i}\] for some \(a_{i},b_{i}\in B_{n+1}^{+}\otimes\Lambda(z_{i},\tilde{z}_{i})\). The same is true for even \(n\) but in order to apply Lemma A.9 we have to use a relative Sullivan model \(A^{\prime}_{n}:=(B_{n+1}\otimes\Lambda(e,y),d(y)=e^{2}-p_{n/2})\) instead of \(A_{n}\). Using that the projection map \(A^{\prime}_{n}\to A_{n}\) is a quasi-isomorphism, one can see that we also obtain (A.12) in the case \(n\) is even. There is an algebra automorphism of \(B_{n+1}\otimes\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{n})\otimes L_{n})\) defined by \(f(z_{i})=z_{i}+a_{i}\) and \(f(\tilde{z}_{i})=\tilde{z}_{i}+b_{i}\) so that \((A_{n}\otimes f^{-1})\circ\Psi(z_{i})=1\otimes z_{i}+s\otimes\tilde{z}_{i}\).
Hence, we can find a model for the evaluation map (A.10) by post-composition with \(f^{-1}\) so that \(\Psi=A_{n}\otimes\psi\), which determines the differential \(\overline{D}\) (A.13) \[\overline{D}(z_{i})=\begin{cases}D(z_{i})-e\tilde{z}_{i}&n\text{ odd}\\ \Psi_{1}(D(z_{i}))&n\text{ even}\end{cases}\] where \(\Theta\) is a \(B_{n+1}\)-linear derivation of \(B_{n+1}\otimes\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{n})\otimes L_{n})\) defined by \(\Theta(z_{i})=\tilde{z}_{i}\), and for \(n\) even and \(x\in\mathcal{C}_{CE}(L_{n})\) we denote \(\Psi(x)=1\otimes\Psi_{1}(x)+s\otimes\Psi_{s}(x)\in A_{n}\otimes_{B_{n+1}} \mathcal{C}_{CE}(H^{*}(\mathbb{S}^{n})\otimes L_{n})\). We now prove the claim for \(n\) even. Observe that \(0=B_{n+1}\otimes\epsilon(\overline{D}(\tilde{z}_{i}))\in B_{n+1}\) for all \(\tilde{z}_{i}\). Hence, only \(\overline{D}(z_{i})\) can have a summand in \(B_{n+1}\otimes 1\). As \(F_{n}\) is \(2n\)-connected, \(|z_{i}|>2n\) and hence (A.1) is injective in degrees \(*\leq 2n\). An elementary argument that we give in Corollary A.17 then shows that (A.1) is the zero map in degrees \(>2n\). We now prove that (A.1) is injective for \(n=3\). Again, it follows from (A.13) that for \(n\) odd \(0=B_{n+1}\otimes\epsilon(\overline{D}(\tilde{z}_{i}))\in B_{n+1}\) for all \(\tilde{z}_{i}\). By Lemma A.6, we see that the only generator of \(\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{3})\otimes L_{3})\) with non-trivial contribution \(B_{4}\otimes\epsilon(\overline{D}(z_{i}))\in B_{4}\) is given by \(\overline{D}(x_{2})=-p_{1}^{2}-e\tilde{x}_{2}\). Hence, given an element \(f=f(p_{1},e)\in B_{4}\) in the kernel of (A.1), i.e., \(\overline{D}(x)=f\) for some \(x\in B_{4}\otimes\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{3})\otimes L_{3})\), then \(x=-f/p_{1}^{2}\cdot x_{2}+y\) for some \(y\in B_{4}\otimes\mathcal{C}_{CE}(H^{*}(\mathbb{S}^{3})\otimes L_{3})\) with \(\overline{D}(y)=-ef/p_{1}^{2}\cdot\tilde{x}_{2}\) and which has no summand in the \(B_{4}\)-span of \(x_{2}\). By inspection of (A.13), we see that there are two ways in which \(\overline{D}(y)\) can have a summand in the \(B_{4}\)-span of \(\tilde{x}_{2}\). First, if \(z_{i}\) is a generator of \(\mathcal{C}_{CE}(L_{3})\) with \(D(z_{i})=\lambda p_{1}^{k}x_{2}+z\) for \(\lambda\neq 0\in\mathbb{Q}\), then \(\overline{D}(\tilde{z}_{i})=\lambda p_{1}^{k}\tilde{x}_{2}+\Theta(z)\). But this implies that \(0=D^{2}(z_{i})=-\lambda p_{1}^{k+2}+D(z)\), which contradicts Lemma A.6 and is thus not possible. The only other option is to use \(x_{2}\) as the only generator with \(B_{4}\otimes\epsilon(\overline{D}(x_{2}))\neq 0\), i.e., observe that \(\overline{D}(x_{2}\tilde{x}_{2})=-p_{1}^{2}\tilde{x}_{2}-e\tilde{x}_{2}^{2}\). Hence, if \(p_{1}^{2}|f/p_{1}^{2}\) then \(y=ef/p_{1}^{4}\cdot x_{2}\tilde{x}_{2}+y^{\prime}\) where \(y^{\prime}\) contains no summand in the \(B_{4}\)-span of \(x_{2}\tilde{x}_{2}\) so that \(\overline{D}(y^{\prime})=e^{2}f/p_{1}^{4}\tilde{x}_{2}^{2}\). We can iterate this argument if \(f/p_{1}^{4}\) is again divisible by \(p_{1}^{2}\). Eventually we obtain an element \(y\) with no summand in \(B_{4}\otimes\Lambda(x_{2},\tilde{x}_{2})\) satisfying \(\overline{D}(y)=\pm(ef/p_{1}^{2})^{k}\tilde{x}_{2}^{k}\). This again is only possible if there is a generator of \(\mathcal{C}_{CE}(L_{3})\) with \(D(z_{i})=\lambda p_{1}^{k}x_{2}+z\), which we concluded above cannot exist by Lemma A.6. 
Hence, there exists no \(y\in B_{4}\otimes\mathcal{C}_{CE}(H^{*}(\mathbf{S}^{3})\otimes L_{3})\) so that \(\overline{D}(y)=-ef/p_{1}^{2}\cdot\tilde{x}_{2}\) and therefore (A.1) is injective. _Remark A.14_.: The argument in the proof shows more generally that the map \(\varphi:\mathbb{R}[e,p_{1},\tilde{x}_{2}]/(p_{1}^{2}+e\tilde{x}_{2})\to H^{*}( \Gamma_{\mathbf{S}^{3}}/\!\!/\mathrm{SO}(4);\mathbb{R})\cong H^{*}(\mathcal{L }_{\mathbf{S}^{3}}/\!\!/\mathrm{SO}(4))\) is injective, which is an interesting observation in itself. _Remark A.15_.: We have recorded in Lemma A.6 the computation of the differential in low degrees even though we didn't need it in the end for the proof of Theorem A.2. It is interesting because one can compute from it \(H^{*}(\mathcal{L}_{\mathbf{S}^{3}}/\!\!/\mathrm{SO}(4))\) in low degrees and also easily confirm that \(\varphi:\mathbb{R}[e,p_{1},\tilde{x}_{2}]/(p_{1}^{2}+e\tilde{x}_{2})\to H^{*}( \Gamma_{\mathbf{S}^{3}}/\!\!/\mathrm{SO}(4);\mathbb{R})\) is injective in degrees \(\leq 8\). In particular, it follows that \(p_{1}^{2}\neq 0\), which already contradicts Haefliger's statement (see Remark 1.4) and was the main motivation for this appendix. We finish with a completely elementary proof that for \(n\) even all polynomials in the Pontryagin classes of degree \(>2n\) vanish in \(H^{*}(\Gamma_{\mathbf{S}^{n}}/\!\!/\mathrm{SO}(n+1))\). **Lemma A.16**.: _Any polynomial in the Pontryagin classes \(p_{1},\dots,p_{\lfloor n/2\rfloor}\) of degree \(>2n\) vanishes in \(H^{*}((\Gamma_{\mathbf{S}^{n}}\times\mathbf{S}^{n})/\!\!/\mathrm{SO}(n+1))\)._ Proof.: We have seen that \(F_{n}/\!\!/\mathrm{SO}(n)\stackrel{\simeq}{\to}E_{\mathbf{S}^{n}}/\!\!/\mathrm{SO}(n+1)\). Since \(\mathrm{SO}(n)\) acts freely on \(F_{n}\), the homotopy quotient is equivalent to \(F_{n}/\mathrm{SO}(n)\) which admits a map to \(F_{n}/\mathrm{U}(n)=\mathrm{sk}_{2n}\mathrm{BU}(n)\) with cohomology ring \(H^{*}(\mathrm{sk}_{2n}\mathrm{BU}(n))=\mathbb{Q}[c_{1},\dots,c_{n}]/(f,|f|>2n)\). Because the Pontryagin classes are pulled back from \(\mathrm{BU}(n)\), every polynomial in the Pontryagin classes of degree \(>2n\) vanishes in \(F_{n}/\mathrm{SO}(n)\simeq E_{\mathbf{S}^{n}}/\!\!/\mathrm{SO}(n+1)\). The evaluation map \(\Gamma_{M}\times M\to E_{M}\) is equivariant (with respect to the diagonal action on the domain) and induces a map on homotopy quotients. As polynomials in the Pontryagin classes of degree \(>2n\) vanish in \(H^{*}(E_{\mathbf{S}^{n}}/\!\!/\mathrm{SO}(n+1))\), so do their images in \(H^{*}((\Gamma_{\mathbf{S}^{n}}\times\mathbf{S}^{n})/\!\!/\mathrm{SO}(n+1))\). **Corollary A.17**.: _Let \(n\) be even; then the kernel of (A.1) contains the ideal generated by the monomials in Pontryagin classes \(p_{1},\dots,p_{n/2}\) whose degrees are larger than \(2n\)._ Proof.: The projection \(\pi_{1}:(\Gamma_{\mathbf{S}^{n}}\times\mathbf{S}^{n})/\!\!/\mathrm{SO}(n+1) \to\Gamma_{\mathbf{S}^{n}}/\!\!/\mathrm{SO}(n+1)\) is a fibration with fiber \(\mathbf{S}^{n}\) which satisfies the assumption of the Leray-Hirsch theorem as the Euler class in \(\mathbf{S}^{n}/\!\!/\mathrm{SO}(n+1)\simeq\mathrm{BSO}(n)\) pulls back to a class which restricts to a generator of \(H^{n}(\mathbf{S}^{n})\). Hence, \(\pi_{1}\) induces an injection on cohomology. As the Pontryagin classes are pulled back along \(\pi_{1}\), polynomials in the Pontryagin classes of degree \(>2n\) vanish already in \(\Gamma_{\mathbf{S}^{n}}/\!\!/\mathrm{SO}(n+1)\). 
_Remark A.18_.: The idea of the proof of Theorem A.2 as presented here is contained in [1], although we couldn't find an argument in Haefliger's papers for the simple model of the evaluation map that is crucial for determining the relative Sullivan model of \(\Gamma_{M}/\!\!/G\). The proof of Theorem A.2 above provides this argument. However, this appendix does not rely on Haefliger's work and instead uses more recent results about mapping spaces and evaluation maps. Haefliger offers no detailed computation for the kernel of (A.1), and following the steps outlined in [1, Sect. 7] for \(n=3\) leads to a contradiction with his result on page 154, so we do not know the origin of the mistake in [1].
2305.01848
Photo-Voltaic Panel Power Production Estimation with an Artificial Neural Network using Environmental and Electrical Measurements
Weather is one of the main problems in implementing forecasts for photovoltaic panel systems, since it is the main source of disturbances and interruptions in electrical energy. It is therefore necessary to choose a reliable forecasting model for better energy use. A measurement prototype was constructed in this work, which collects in-situ voltage and current measurements and the environmental factors of radiation, temperature, and humidity. Subsequently, a correlation analysis of the variables was carried out and artificial neural networks were implemented to produce the system forecast. The best estimate was the one made with three variables (lighting, temperature, and humidity), obtaining an error of 0.255. These results show that it is possible to make a good estimate for a photovoltaic panel system.
Antony Morales-Cervantes, Oscar Lobato-Nostroza, Gerardo Marx Chávez-Campos, Yvo Marcelo Chiaradia-Masselli, Rafael Lara-Hernández
2023-05-03T01:28:53Z
http://arxiv.org/abs/2305.01848v1
Photo-Voltaic Panel Power Production Estimation with an Artificial Neural Network using Environmental and Electrical Measurements ###### Abstract Weather is one of the main problems in implementing forecasts for photovoltaic panel systems, since it is the main source of disturbances and interruptions in electrical energy. It is therefore necessary to choose a reliable forecasting model for better energy use. A measurement prototype was constructed in this work, which collects in-situ voltage and current measurements and the environmental factors of radiation, temperature, and humidity. Subsequently, a correlation analysis of the variables was carried out and artificial neural networks were implemented to produce the system forecast. The best estimate was the one made with three variables (lighting, temperature, and humidity), obtaining an error of 0.255. These results show that it is possible to make a good estimate for a photovoltaic panel system. Photovoltaic generation systems; Energy storage systems; Radiation Forecasting; Artificial Neural Networks. ## I Introduction Nowadays, energy consumption increases significantly each year. As a result, pollution increases due to the use of fossil fuels in the production of complementary energy by energy companies [1]. Companies have incorporated alternative energy sources to reduce this impact and meet energy demand [2]. One of the most significant sources is solar energy, which has become the most popular alternative in the world [3]. Solar energy has been used to provide electricity for many years [4] by means of Photo-Voltaic (PV) panels. The amount of current and power generated by a PV cell depends on external factors such as the environment and internal factors typical of the photo-voltaic system. Specifically, the weather creates disturbances and interruptions in PV cells' electrical power [5, 6, 7, 8]. Due to this variability, reliable forecasts are necessary when deploying these PV systems to avoid penalties resulting from the differences between the scheduled and produced energy [9]. Different forecasting models can be chosen based on the parameters to be analyzed, depending on particular needs [10]. The use of artificial intelligence (AI) has increased considerably in recent years, due to its ability to model and solve complicated computational tasks [11]. In fact, AI algorithms in data collection systems help to improve the profitability of the measurement equipment [12, 13]. Forecasting models are also good indicators for detecting the right moment to perform maintenance on the photovoltaic system and for planning its distribution. There are different methods of forecasting PV energy. The very short-term method (from a few seconds to a few minutes) is used for the control and management of PV systems in the electricity market over microgrids. The short-term method (48-72 hrs) is used for control of the energy system's operations, economic dispatch, and unit commitment, among others. The medium-term (a few days to a week) and long-term (a few months to a year or more) methods are used to plan PV systems [14]. Hence, several models and methods have been implemented to estimate the generated energy of PV systems. The advanced methods include diverse artificial intelligence and machine learning techniques, such as artificial neural networks (ANN), k-nearest neighbors (kNN), extreme learning machine (ELM), and support vector machine (SVM), to mention the most used [15]. 
ML algorithms can be classified into three main groups: 1. Supervised learning, where the algorithm creates relationships between input and output characteristics. 2. Unsupervised learning, in which the algorithm looks for patterns and rules to better describe the data. 3. Reinforcement learning, which is used mainly with extensive data and reduces them for visualization or analysis purposes [16]. The forecasting of power in PV systems is mainly based on ANNs due to the complexity of the parameters involved. ANNs are techniques that seek to emulate the human brain's behavior and generate responses for decision making [17]. The fundamental part of each ANN is its processing element, the neuron. Neural networks combine these processing elements in different ways to meet different numerical needs [18]. Recently reported forecasting models are based on ANNs. One type of model uses the ANN to estimate solar radiation (in specific cities) based on past weather data (temperature, humidity, and rain probability) together with radiation; with the estimated radiation, the models then estimate the PV's output energy [19, 20, 21]. Other models use the ANN to estimate the produced energy directly using different inputs like: (i) humidity [22]; (ii) temperature, humidity, and rainfall [23]; (iii) solar radiation and temperature [24]; (iv) solar power and weather data [25]. Eventually, an improved model considers data correlation and ANNs to select the most critical inputs [26]. However, most models are based on databases available online or on meteorological measurements not taken at the same place as the photovoltaic system. Thus, the information from a given zone or region is insufficient, and in some cases no continuous measurements are available. On the other hand, it is well known that semiconductors are extremely sensitive to high temperatures. Therefore, the present paper proposes an IoT device to measure and log _in-situ_ data about the PV system. The solar radiation, the solar panel's temperature and humidity, and the panel's electrical power (voltage and current) have been collected during 120 days with a sampling period of 5 min, collecting more than 32,200 measurements. Then, measurements were used to train diverse ANN topologies and compare them with a Multiple Linear Regressor model. The best topology has an error level of 0.255326464, which indicates a reliable data model. ## II Materials and Methods The present research methodology consists of acquiring and validating data from a photovoltaic system to perform the data analysis and compute electrical power forecasting values by artificial neural networks (see Figure 1). Therefore, a measurement prototype has been designed to collect _in-situ_ voltage and current measurements and environmental factors such as radiation levels, temperature, and humidity. The system's behavior is then obtained by analyzing the prototype's readings, performing a validation process, and correlating the variables. Then, the regression model is obtained with a training set, and finally the ANN is implemented. The obtained estimations were evaluated against a test set using the Root-Mean Square Error (RMSE) as the primary measure, applying equation 1. \[RMSE=\sqrt{\frac{\sum\left(x_{T}-x_{i}\right)^{2}}{N}} \tag{1}\] where \(x_{T}\) is the estimated value, \(x_{i}\) is the actual value and \(N\) is the total number of measurements. 
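For illustration, Eq. 1 translates directly into a few lines of Python (a minimal sketch of ours; the array names are placeholders, not from the paper):

```python
import numpy as np

def rmse(estimated, actual):
    """Root-Mean Square Error between estimated and actual values (Eq. 1)."""
    estimated = np.asarray(estimated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.sqrt(np.mean((estimated - actual) ** 2))
```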
### _Experimental setup_ Data has been collected using a stand-alone IoT system embedded with three sensors with \(I^{2}C\) communication. The IoT system also embeds a web server to configure and manage the logged data [27]. Figure 2 shows the experimental system, which comprises three main sections. Fig. 1: Experimental setup for data gathering and analysis. The left-most section includes the environmental variable sensors, the OPT3001 and HDC2080 integrated circuits. The OPT3001 is a light sensor with a \(0.01\,\mathrm{lx}\) resolution and an upper limit of \(128\,\mathrm{klx}\). However, an attenuating glass has been used to extend the device limit by 55%; a calibration procedure was conducted using the commercial digital luxmeter MASTECH MS6612. The HDC2080 sensor measures relative humidity (RH) and temperature. For temperature, the sensor has a \(\pm 2\,^{\circ}\mathrm{C}\) resolution over the range \(-40\,^{\circ}\mathrm{C}\) to \(85\,^{\circ}\mathrm{C}\); for RH, the sensor gives measurements with a resolution of \(\pm 0.2\%\). ### _Logged data analysis_ Prior to the forecasting process, logged data have been compared with similar commercial measurement systems to evaluate performance. According to the central limit theorem, our data is normal, so we measured the linear dependence between the measured variables with the Pearson correlation method to determine their importance for forecasting. The behavior comparison is made against data from the meteorological station of the Technological Institute of Morelia. Both data-sets are made up of solar radiation, humidity, and temperature, which correspond to the period 12th of November 2018 to the 6th of December 2018. Since the measurements originate from different equipment, scales, and locations (5 meters apart), they were normalized using Eq. 2: \[x=\frac{x_{i}-x_{min}}{x_{max}-x_{min}} \tag{2}\] where \(x_{i}\) is the actual value to be normalized, \(x_{min}\) is the minimum value of the entire data-set, and \(x_{max}\) is the maximum value. Fig. 3 shows the measurements collected by the IoT system and the meteorological station only for the radiation variable. The small variations between data-sets can be attributed to the location of each system; however, the general behavior is closely related. Fig. 4 shows three correlation plots: (a) luxes, temperature and power; (b) humidity, luxes and power; and (c) temperature, humidity and power. All three plots consider the power variable as the output. The third plot shows the lowest correlation, meaning that humidity will have a lower impact on the final model. ## III Results The ANN performance analysis has considered cases with two and three input variables: (i) lighting, temperature, and humidity, (ii) lighting and temperature, (iii) lighting and humidity, and (iv) humidity and temperature; each option has been tested with different topologies. The training process was performed with random data from the 14th of January to the 10th of February 2019. Table I shows only the best-performing topologies based on the resulting RMSE obtained during cross-validation. On the table, the first column indicates the input variables. The second represents the topology used; the third and fourth ones are the maximum training cycles and error levels, respectively. The last column is the RMSE value. 
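The paper trained its networks with the Easy-NN software; the scikit-learn sketch below is our substitution (with illustrative names, not the authors' code) showing how a topology such as 3:3:1 could be trained and scored in the same spirit:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def train_topology(X, y, hidden=(3,), cycles=5000, seed=0):
    """Train an ANN with the given hidden layout (e.g. (3,) for 3:3:1)
    and return the network and its RMSE on a held-out test set.

    X: columns are the normalized input variables (e.g. lighting,
    temperature, humidity); y: the normalized measured power.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.1, random_state=seed)
    net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=cycles,
                       random_state=seed)
    net.fit(X_train, y_train)
    error = np.sqrt(np.mean((net.predict(X_test) - y_test) ** 2))  # RMSE
    return net, error
```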
Notice that the maximum number of training cycles was defined based on the RMSE's performance during various training experiments. The first topology (Table I) has three computation elements in the input layer, three elements in the hidden layer, and one single output (3:3:1). This topology was evaluated with data that were randomly selected, and then the Easy-NN software was used to import the training set (700 measurements), validation (200), and test (100). Fig. 5(a) shows the comparison of estimated data with topology 3:3:1 vs. the test data-set. Fig. 3: Data comparison between the meteorological station and the IoT system. Fig. 2: Experimental setup system for measurements. The computed error level for this topology was 0.255326464. Another good estimator has been implemented using only lighting and temperature, with a topology 2:8:1, resulting in an error level of 0.273086254; see Fig. 5b. In the case of the estimator based on illumination and humidity, the best topology was 2:3:1, with an error level of 0.26061261; see Fig. 5c. Finally, for the estimation with the variables temperature and humidity, the best-performing topology was 2:7:1, with an error level of 1.522621379; see Fig. 5d. Notice that the best network uses the three variables. However, the alternative networks can be used when some data is missing or corrupted. For this research, the humidity sensor was the most problematic due to the weather variability, specifically in Morelia city, resulting in saturated measurements during the early morning periods. Nevertheless, when all variables are available, the ANN can produce better estimations. On the other hand, a Multiple Linear Regressor (MLR) has been developed with the same data-set with three variables. Figure 6 shows a comparison for one-day estimations of ANN, MLR, and real measurements. Notice that the ANN estimations are closely related to the photovoltaic system's real behavior, compared to the estimation made using the MLR model. According to the central limit theorem, in large samples, the sampling distribution tends to be normal, regardless of the data [28]. Therefore, an ANOVA analysis is carried out. The ANOVA of the MLR analysis was performed on lighting, temperature, and humidity to determine their importance; the results are shown in Table II. The ANOVA analysis gave a regression model with a 95% confidence level. The model captures 91.77% of the real phenomenon. This process is carried out to obtain learning with the right level of trust, which enables a suitable prediction of the power levels in the solar panels. Fig. 4: Correlation variables plots for (a) luxes, temperature and power; (b) humidity, luxes and power; and (c) temperature, humidity and power. ## IV Discussion and Conclusions During this study, it was observed that the most suitable neural network topology changes according to the input variables, as judged by the lowest RMSE value. Experiments were conducted with different training cycles, input information, numbers of neurons, and hidden layers to assess their performance and then choose the most appropriate configuration for each set of data. The prototype used has certain limitations, such as the resolution and ranges of the measurements and the flow of the readings. A comparison was made to verify the prototype's values against a commercial station located at the Technological Institute of Morelia, Mexico. The RMSE value of 0.19309 indicates that the prototype data is reliable for estimation. 
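The validation step just described, normalizing both series with Eq. 2 and comparing them via Eq. 1, can be sketched as follows (our illustrative code, not the original processing scripts):

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization of a data series to [0, 1] (Eq. 2)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def validation_rmse(prototype_series, station_series):
    """RMSE between normalized prototype and station measurements (Eq. 1)."""
    a = min_max_normalize(prototype_series)
    b = min_max_normalize(station_series)
    return np.sqrt(np.mean((a - b) ** 2))
```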
\begin{table} \begin{tabular}{c c c c c c} **Source** & **DF** & **Adj SS** & **Adj MS** & **F-Value** & **P-Value** \\ \hline Regression & 3 & 10159.50 & 3386.51 & 5895.20 & 0.000 \\ Illumination & 1 & 2761.10 & 2761.10 & 4806.50 & 0.000 \\ Temperature & 1 & 6.80 & 6.78 & 11.81 & 0.001 \\ Humidity & 1 & 47.00 & 47.01 & 81.83 & 0.000 \\ Error & 1586 & 911.10 & 0.57 & & \\ Total & 1589 & 11070.60 & & & \\ \end{tabular} \end{table} TABLE II: ANOVA Analysis of the \(x\) variables (independent variables) and the \(y\) variable (dependent variable). Fig. 5: Graphical comparison of estimated data against actual data for the best topology of two and three input variables. The discrimination process and creation of the data sets were performed in MATLAB®. The most suitable configuration was determined to carry out both instantaneous estimates and short-term forecasts. The most suitable topology for neural networks and their parameters (the number of computational elements, the number of hidden layers, the number of inputs and outputs, and training cycles) has been identified. The determination of each parameter starts from a proposed random topology, thus reaching the 3:3:1 configuration with 5,000 training cycles. This configuration makes it possible to know instantly how the solar panel will behave under normal conditions. The results obtained in this work show that the best option for the estimation is the 3:3:1 topology of the neural network, which uses three variables (lighting, temperature, and humidity) and allows estimating how much power can be obtained from a panel. As for forecasts, the best configuration is a 9:4:3:1 network. Even though using two variables yields a larger estimation error than using three, this error (0.30320) is still acceptable for estimation. This article introduces a solar forecasting algorithm based on the artificial neural network (ANN) model. The proposed model has a 3:3:1 topology with 5,000 training cycles. The clear sky model and meteorological data from the prototype are used to train the model. The prototype and the meteorological station of the Technological Institute of Morelia, located in Morelia, Mexico, were compared. The RMSE value confirms that it is possible to make sensible estimates using the lighting, temperature, and humidity data. Forthcoming work will focus on developing a more comprehensive multi-layer ANN model that takes into account rainfall and time of day, as well as using a larger data set to train the ANN model to achieve greater forecast accuracy. Also, the system's accuracy must be improved. ## V Acknowledgments The authors would like to acknowledge the "Consejo Nacional de Ciencia y Tecnologia" (CONACYT) for the support received for developing this project by supporting student 625015, also to The "Tecnologico Nacional de Mexico" (TecNM) that supports the project 6127.17-P, and the "National Laboratory SEDEAM" by helping the development of the electronic prototypes. ## VI Conflict of interest The authors declare that there is no conflict of interest.
2307.06786
Notes for Neighborly Partitions
A proof of the first Rogers-Ramanujan identity is given using admissible neighborly partitions. This completes a program initiated by Mohsen and Mourtada. The admissible neighborly partitions involve an unusual mod 3 condition on the parts.
Kathleen O'Hara, Dennis Stanton
2023-07-13T14:57:35Z
http://arxiv.org/abs/2307.06786v1
# Notes for neighborly partitions ###### Abstract. A proof of the first Rogers-Ramanujan identity is given using admissible neighborly partitions. This completes a program initiated by Mohsen and Mourtada. The admissible neighborly partitions involve an unusual mod 3 condition on the parts. ## 1. Introduction Using commutative algebra, Mohsen and Mourtada [4] gave combinatorial interpretations of the numerator infinite products of the Rogers-Ramanujan identities [1, p. 104] \[\sum_{k=0}^{\infty}\frac{q^{k^{2}}}{(q;q)_{k}}=\frac{(q^{2},q^{3},q^{5};q^{5} )_{\infty}}{(q;q)_{\infty}},\quad\sum_{k=0}^{\infty}\frac{q^{k^{2}+k}}{(q;q)_{ k}}=\frac{(q^{1},q^{4},q^{5};q^{5})_{\infty}}{(q;q)_{\infty}}.\] To do so they defined a set of integer partitions \(\lambda\), called _neighborly_, a related set of graphs \(H_{\lambda}\), and a _signature_ for each graph \(G\in H_{\lambda}\). **Theorem 1.1**.: _[_4_]_ _Assuming the first Rogers-Ramanujan identity, the numerator infinite product is_ \[\sum_{\lambda\in Neighborly}q^{|\lambda|}\sum_{G\in H_{\lambda}}signature(G) =(q^{2},q^{3},q^{5};q^{5})_{\infty}=1+\sum_{k=1}^{\infty}(-1)^{k}q^{(5k^{2}-k)/2}(1+q^{k}).\] They ask [4, p. 3] for a proof of Theorem 1.1 without assuming the Rogers-Ramanujan identities. The purpose of this note is twofold: 1. to provide such a proof (see Theorem 4.3), 2. to simplify the double sum in Theorem 1.1 to a single sum of signed admissible neighborly partitions (see Proposition 3.4). Along the way we give a combinatorial interpretation for the classical generalization (Theorem 4.3) of Theorem 1.1. We use the standard notation for \(q\)-series found in [1] and [3], and write the parts of an integer partition in increasing order. ## 2. Neighborly partitions **Definition 2.1**.: A **neighborly** _partition \(\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{s})\) has all multiplicities at most 2, and for any part \(\lambda_{i}\), there is a part \(\lambda_{j}\), \(j\neq i,\) such that \(|\lambda_{i}-\lambda_{j}|\leq 1.\)_ A neighborly partition \(\lambda\) can be considered as an ordered pair of partitions: \(\lambda=(\mu_{1},\mu_{2}),\) a distinct partition \(\mu_{1}\) and another distinct partition \(\mu_{2}\) whose parts are a subset of the parts of \(\mu_{1}.\) **Example 2.2**.: _If the neighborly partition is \(\lambda=(1,2,3,3,6,6,8,8,9,9,14,14),\) then_ \[\lambda=((1,2,3,6,8,9,14),(3,6,8,9,14))=(\mu_{1},\mu_{2}).\] The partition \(\mu_{1}\) consists of some runs, with singletons possible. 
In the example \[\mu_{1}=(1,2,3,6,8,9,14),\] the runs are \(1\leftrightarrow 2\leftrightarrow 3,6,8\leftrightarrow 9,\text{ and }14.\) Note that if \(x\) is a singleton in \(\mu_{1},\) then \(x\) must appear in \(\mu_{2}.\) Mohsen and Mourtada defined a _signature_ on a graph \(G_{\lambda}\) defined by a neighborly partition \(\lambda.\) **Definition 2.3**.: _The graph \(G_{\lambda}\) of a neighborly partition \(\lambda\) has vertices which are the parts of \(\lambda\), and edges from the consecutive parts in runs of \(\mu_{1}\), called the_ **backbone**_, along with edges between equal parts, called_ **hanging edges.**__ **Example 2.4**.: _If \(\lambda=((1,2,3,6,8,9,14),(3,6,8,9,14))\) the backbone of \(G_{\lambda}\) is_ \[1\leftrightarrow 2\leftrightarrow 3\quad 6\quad 8\leftrightarrow 9\quad 14\] _with hanging edges_ \[\begin{array}{ccccccc}1\leftrightarrow&2\leftrightarrow&3&6&8\leftrightarrow&9&14\\ &&\updownarrow&\updownarrow&\updownarrow&\updownarrow&\updownarrow\\ &&3&6&8&9&14\end{array}\] **Example 2.9**.: _If \(n=5\), let_ \[e_{1}=1\leftrightarrow 2,\ e_{2}=2\leftrightarrow 3,\ e_{3}=3\leftrightarrow 4,\ e_{4}=4 \leftrightarrow 5,\ e_{5}=5\leftrightarrow 6.\] _The vertex spanning subgraphs are_ \[\{e_{1},e_{2},e_{3},e_{4},e_{5}\},\{e_{1},e_{3},e_{4},e_{5}\},\{e_{1},e_{2},e_ {4},e_{5}\},\{e_{1},e_{2},e_{3},e_{5}\},\{e_{1},e_{3},e_{5}\},\] _so \(B_{5}=1.\)_ Proof.: Let \(B_{n}(x)\) be the generating function for vertex spanning forests \(H\) of \(G_{\lambda_{n}}\) according to the number of edges, \[B_{n}(x)=\sum_{H\in VS(G_{\lambda_{n}})}x^{\#\ \rm{edges\ in\ H}},\] By counting the number of edges in connected components from left to right, the coefficient of \(x^{n-k}\) in \(B_{n}(x)\) is the number of compositions of \(n-k\) into \(k+1\) parts. So \[B_{n}(x)=\sum_{k=0}^{\lfloor(n-1)/2\rfloor}{n-k-1\choose k}x^{n-k}.\] The generating function of \(B_{n}(x)\) is \[\sum_{n=1}^{\infty}B_{n}(x)t^{n}=xt/(1-xt-xt^{2}), \tag{2.1}\] so \[\sum_{n=1}^{\infty}B_{n}(-1)t^{n}=-t/(1+t+t^{2})=-t\frac{1-t}{1-t^{3}},\] which proves the mod 3 behavior of \(B_{n}.\) **Remark 2.10**.: _One can also see the generating function as compositions of 1's and 2's (Fibonacci numbers), by counting the number of new vertices each successive edge gives. So one would see (2.1) almost immediately._ We now consider the case when \(G_{\lambda}\) of a neighborly partition \(\lambda\) has one connected component. **Proposition 2.11**.: _Suppose \(\lambda=(\mu_{1},\mu_{2})\) where \(\mu_{1}=(1,2,...,n),\) and \(\mu_{2}=(a_{1},a_{2},...,a_{s}),\)\(s\geq 1.\) Then_ \[signature(G_{\lambda})=(-1)^{s}B_{a_{1}}B_{n-a_{s}+1}\prod_{k=2}^{s}B_{a_{k}-a_{ k-1}+2}.\] _Thus the signature of any connected component of any \(G_{\lambda}\) is +1, -1 or 0._ Proof.: The hanging edges must be in any vertex spanning forest \(H\). Thus we need spanning forests for the chains \[\begin{array}{l}1\leftrightarrow 2\leftrightarrow\cdots\leftrightarrow a_{1} \leftrightarrow a_{1},\\ a_{1}\leftrightarrow a_{1}\leftrightarrow a_{1}+1\leftrightarrow\cdots \leftrightarrow a_{2}\leftrightarrow a_{2},\cdots\\ a_{s}\leftrightarrow a_{s}\leftrightarrow a_{s}+1\cdots\leftrightarrow n, \end{array}\] which have respectively \[a_{1},a_{2}-a_{1}+2,a_{3}-a_{2}+2,\cdots,n-a_{s}+1\quad\rm{edges}.\] The choices for spanning forests in these smaller chains may be done independently. Each hanging edge has been used twice, so the factor \((-1)^{s}\) compensates. 
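The closed form for \(B_{n}(x)\) and the period-3 pattern of \(B_{n}=B_{n}(-1)\) are easy to verify computationally; a small Python check (our verification aid, not part of the paper):

```python
from math import comb

def B(n, x=-1):
    """B_n(x) = sum_k binom(n-k-1, k) x^{n-k}; B_n = B_n(-1) as in the proof."""
    return sum(comb(n - k - 1, k) * x ** (n - k)
               for k in range((n - 1) // 2 + 1))

# Period-3 pattern -1, 1, 0 predicted by -t(1 - t)/(1 - t^3):
assert [B(n) for n in range(1, 10)] == [-1, 1, 0, -1, 1, 0, -1, 1, 0]
assert B(5) == 1  # agrees with Example 2.9
```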
**Example 2.12**.: _If \(\lambda=((1,2,3,4,5,6,7),(3,6,7)),\)\(signature(G_{\lambda})=(-1)^{3}B_{3}B_{5}B_{3}B_{1}=0.\)_ Finally we need to keep track of the signatures of the connected components of \(G_{\lambda}.\) **Definition 2.13**.: _Let \(\lambda\) be a neighborly partition. The_ **signature multiset**_\(SIG(c)\) of a connected component_ \[c=((k,k+1,k+2,\cdots,n),(a_{1},a_{2},...,a_{s})),\quad s\geq 1\] _of \(G_{\lambda}\) is the multiset_ \[SIG(c)=\{a_{1}-k+1,a_{2}-a_{1}+2,a_{3}-a_{2}+2,\cdots,a_{s}-a_{s-1}+2,n-a_{s}+1\}.\] _If \(s=0\) then_ \[SIG(c)=\{n-k\}.\] _The signature multiset \(SIG(G_{\lambda})\) for a general neighborly partition \(\lambda\) is the multiset union over all connected components of the individual signature multisets._ **Example 2.14**.: _If \(\lambda=((2,4,5,6,7,10,12,13,14),(2,4,6,10,14)),\)_ \[G_{\lambda}=\begin{array}{ccccccccc}2&4\leftrightarrow&5\leftrightarrow&6 \leftrightarrow&7&10&12\leftrightarrow&13\leftrightarrow&14\\ \updownarrow&\updownarrow&&\updownarrow&&\updownarrow&&&\updownarrow\\ 2&4&&6&&10&&&14\end{array}\] _the connected components are_ \[2\leftrightarrow 2,\quad 4\leftrightarrow 4\leftrightarrow 5\leftrightarrow 6 \leftrightarrow 7,\quad 10\leftrightarrow 10,\quad 12\leftrightarrow 13 \leftrightarrow 14\leftrightarrow 14.\] _Because the signature is independent of labels, Proposition 2.11 can be applied to each connected component._ \[SIG(G_{\lambda})=\{1,1,1,4,2,1,1,3,1\}=\{1,1\}\cup\{1,4,2\}\cup\{1,1\}\cup\{3,1\}.\] **Remark 2.15**.: _One may find the signature multiset by counting the edges in the chains that the parts of \(\mu_{2}\) cut in the runs of \(\mu_{1}\)._ The signature of any neighborly partition is always \(0,1,\) or \(-1.\) **Theorem 2.16**.: _Let \(\lambda=(\mu_{1},\mu_{2})\) be a neighborly partition. Then \(signature(G_{\lambda})=0\) exactly when \(SIG(G_{\lambda})\) contains an element \(x\equiv 0\mod 3.\) Otherwise,_ \[signature(G_{\lambda})=(-1)^{t+s}\] _where \(t\) is the number of elements \(x\in SIG(G_{\lambda})\) such that \(x\equiv 1\mod 3\), and \(s\) is the number of parts of \(\mu_{2}\)._ **Remark 2.17**.: _If \(\lambda=(\mu_{1},\mu_{2})\) is neighborly and \(signature(G_{\lambda})\neq 0\), then \(\mu_{2}\) does not contain consecutive parts, and thus \(\mu_{2}\) is a difference 2 partition._ ## 3. Admissible neighborly partitions Theorem 2.16 shows that signature(\(G_{\lambda}\)) is \(\pm 1\) or 0 for any neighborly partition. Thus we can eliminate the inner sum in Theorem 1.1, and replace the set of neighborly partitions by the smaller set of partitions when signature(\(G_{\lambda}\)) \(\neq 0.\) These are admissible neighborly partitions. **Definition 3.1**.: _A neighborly partition \(\lambda=(\mu_{1},\mu_{2})\) is_ **admissible** _if \(SIG(G_{\lambda})\) contains no elements which are congruent to 0 modulo 3._ **Example 3.2**.: _The neighborly partition \(\lambda\) in Example 2.14 is not admissible. A chain with \(n\) edges is admissible if \(3\) does not divide \(n\)._ Since admissible neighborly partitions have signature \(\pm 1\), we may rename the signature by the sign. **Definition 3.3**.: _The_ **sign** _of an admissible neighborly partition \(\lambda=(\mu_{1},\mu_{2})\) is_ \[sign(\lambda)=(-1)^{t+s}\] _where \(t\) is the number of elements \(x\in SIG(G_{\lambda})\) such that \(x\equiv 1\mod 3\), and \(s\) is the number of parts of \(\mu_{2}.\)_ Then Theorem 1.1 is equivalent to the following propositions. 
**Proposition 3.4**.: _The generating function for all signed admissible neighborly partitions \(\lambda\) is_ \[\sum_{\lambda\in AdmNeighborly}sign(\lambda)q^{|\lambda|}=\prod_{k=0}^{ \infty}(1-q^{5k+2})(1-q^{5k+3})(1-q^{5k+5})=\sum_{k=-\infty}^{\infty}(-1)^{k} q^{k(5k+1)/2}.\] **Example 3.5**.: _There are 4 admissible partitions of \(n=8\), two positive and two negative, so the coefficient of \(q^{8}\) in Proposition 3.4 is 0._ \[positive: \lambda=((2,3),(3)),\quad SIG(\lambda)=\{2,1\},\quad\lambda=((1,3 ),(1,3)),\quad SIG(\lambda)=\{1,1,1,1\},\] \[negative: \lambda=((4),(4)),\quad SIG(\lambda)=\{1,1\},\quad\lambda=((1,2,3 ),(2)),\quad SIG(\lambda)=\{2,2\}.\] ## 4. Generating functions In this section we use generating functions to prove the main theorem, Theorem 4.3. It accomplishes goal (1) of the Introduction by choosing \(x=1.\) **Definition 4.1**.: _Let \(GF_{n}(q)\) denote the generating function for all signed admissible neighborly partitions with exactly \(n\) parts,_ \[GF_{n}(q)=\sum_{\lambda\in Admissible\text{ Neighborly with }n\text{ parts}}sign(\lambda)q^{|\lambda|}.\] We shall later prove the following recurrence. **Proposition 4.2**.: _The generating function \(GF_{n}(q)\) satisfies the recurrence_ \[(1-q^{n})GF_{n}(q)=-(q^{2n-2}+q^{3n-3})GF_{n-2}(q)+(q^{2n-2}+q^{3n-4}+q^{3n-3}) GF_{n-3}(q)-q^{3n-4}GF_{n-4}(q).\] Our main result is the generating function for admissible neighborly partitions according to number of parts and the sum of the parts. **Theorem 4.3**.: _The generating function for all signed admissible neighborly partitions is_ \[GF(x,q)=\sum_{n=0}^{\infty}GF_{n}(q)x^{n}=1+\sum_{k=1}^{\infty}\frac{(-1)^{k}x^{2 k}}{(q;q)_{k}}q^{(5k^{2}-k)/2}(xq;q)_{k-1}(1-xq^{2k}).\] Proof.: Let \(H(x)\) be the right side in Theorem 4.3. Then \(H(x)\) has a well-known functional equation [5], \[\frac{H(x)}{(xq;q)_{\infty}}-\frac{H(xq)}{(xq^{2};q)_{\infty}}=qx\frac{H(xq^{2} )}{(xq^{3};q)_{\infty}}.\] This implies that \(H_{n}\), the coefficient of \(x^{n}\) in \(H(x)\), satisfies \[(1-q^{n})H_{n}=-q^{n}(1-q^{n-1})H_{n-1}-(q^{2n-2}+q^{2n-1})H_{n-2}+q^{2n-2}H_{n- 3}. \tag{4.1}\] Iterating (4.1) on \((1-q^{n-1})H_{n-1}\) gives the same recurrence as in Proposition 4.2, so \(H_{n}=GF_{n}(q)\). ## 5. Another realization of \(sign(\lambda)\) In order to prove Proposition 4.2, we need to simplify the graphs \(G_{\lambda}\), keeping the same vertices and labels, but defining a new sign. This will be done by deleting edges in the chains cut out by the hanging edges to obtain a new graph \(G^{\prime}_{\lambda}\), so that \(sign(\lambda)\) is now just \((-1)^{\#edges(G^{\prime}_{\lambda})}\). Via Theorem 2.16, an admissible neighborly partition \(\lambda\) has \(SIG(G_{\lambda})\) with no elements that are multiples of \(3\). The elements of \(SIG(G_{\lambda})\) are the lengths of chains cut out by the hanging edges. We will delete edges in \(G_{\lambda}\) on each subchain by the following rule, always preserving the hanging edges. If a chain has \(n\) edges \(\{e_{1},e_{2},\cdots,e_{n}\}\) from left to right, 1. delete edges \(\{e_{3},e_{6},\cdots,e_{n-2}\}\) if \(n\equiv 2\mod 3\), 2. delete edges \(\{e_{3},e_{6},\cdots,e_{3m}\}\cup\{e_{3m+2},\cdots,e_{6m-1}\}\) if \(n=6m+1\), 3. 
delete edges \(\{e_{3},e_{6},\cdots,e_{3m}\}\cup\{e_{3m+2},\cdots,e_{6m+2}\}\) if \(n=6m+4\), **Definition 5.1**.: _Let \(G^{\prime}_{\lambda}\) denote the graph \(G_{\lambda}\) with these edges deleted._ **Example 5.2**.: _If \(\lambda=((1,2,3,4,5,6,7),(1,3,6)),\)_ \[G_{\lambda}=\begin{array}{ccccccc}1\leftrightarrow&2\leftrightarrow&3\leftrightarrow&4\leftrightarrow&5\leftrightarrow&6\leftrightarrow&7\\ \updownarrow&&\updownarrow&&&\updownarrow&\\ 1&&3&&&6&\end{array}\] _In the chain \(1\leftrightarrow 1\leftrightarrow 2\leftrightarrow 3\leftrightarrow 3\) we delete the third edge \(2\leftrightarrow 3\) to obtain \(1\leftrightarrow 1\leftrightarrow 2\quad 3\leftrightarrow 3\). In the chain \(3\leftrightarrow 3\leftrightarrow 4\leftrightarrow 5\leftrightarrow 6\leftrightarrow 6\) we delete the third edge \(4\leftrightarrow 5\) to obtain \(3\leftrightarrow 3\leftrightarrow 4\quad 5\leftrightarrow 6\leftrightarrow 6\), so_ \[G^{\prime}_{\lambda}=\begin{array}{ccccccc}1\leftrightarrow&2&3\leftrightarrow&4&5\leftrightarrow&6\leftrightarrow&7\\ \updownarrow&&\updownarrow&&&\updownarrow&\\ 1&&3&&&6&\end{array}\] Note that every third edge is deleted, except for a shift in the middle, and the initial and final edges are preserved. Thus all hanging edges and vertex labels are preserved. We need to see how the sign can be preserved. **Proposition 5.3**.: _For any admissible neighborly partition \(\lambda\)_ \[sign(\lambda)=(-1)^{\#edges(G_{\lambda}^{\prime})}.\] Proof.: Let's first check that any chain in \(G_{\lambda}\) with \(3m+1\) edges has an odd number of edges in \(G_{\lambda}^{\prime}\), while chains in \(G_{\lambda}\) with \(3m+2\) edges have an even number of edges in \(G_{\lambda}^{\prime}\). In the second case, if \(n=3m+2\), the sign is \(+1\) and the number of edges is \(n-(n-2)/3=2m+2\) which is even. For the first case, if \(n=6m+1\), the sign is \(-1\) and the number of edges is \(n-2m=4m+1\) which is odd. Still in the first case, if \(n=6m+4\), the sign is \(-1\) and the number of edges is \(n-2m-1=4m+3\) which is odd. Finally, \(sign(\lambda)\) in Theorem 2.16 also includes a factor of \((-1)^{s}\), where \(s\) is the number of hanging edges. Each hanging edge occurs in \(2\) chains, so this factor compensates for double counting these edges. Since we are deleting every third edge from \(G_{\lambda}\) to obtain \(G_{\lambda}^{\prime}\), the connected components of \(G_{\lambda}^{\prime}\) are small and limited. **Proposition 5.4**.: _For any admissible neighborly partition \(\lambda\), the connected components of \(G_{\lambda}^{\prime}\) are one of six types_ \[a\leftrightarrow a,\ \ a\leftrightarrow a+1,\ \ \ a \leftrightarrow a\leftrightarrow a+1,\ \ \ a\leftrightarrow a+1\leftrightarrow a+1,\] \[a\leftrightarrow a+1\leftrightarrow a+2,\ \ \ a\leftrightarrow a+1 \leftrightarrow a+1\leftrightarrow a+2.\] Finally we use these six possible connected components to prove Proposition 4.2. Proof of Proposition 4.2.: Since \(q^{n}GF_{n}(q)\) is the signed generating function with \(n\) parts and no \(1\), \((1-q^{n})GF_{n}(q)\) is the generating function for signed admissible neighborly partitions with \(n\) parts that include a part of size \(1\). The first connected component in any \(G_{\lambda}^{\prime}\) must contain a \(1\) and be one of the six graphs in Proposition 5.4. 1. If the first component is \(1\leftrightarrow 1\), the remaining \(n-2\) vertices have labels at least \(3\), and the signed generating function is \(-q^{2}q^{2(n-2)}GF_{n-2}(q)\). 2. 
If the first component is \(1\leftrightarrow 2\), the remaining \(n-2\) vertices have labels at least \(4\), and the signed generating function is \(-q^{3}q^{3(n-2)}GF_{n-2}(q)\). 3. If the first component is \(1\leftrightarrow 1\leftrightarrow 2\), the remaining \(n-3\) vertices have labels at least \(3\), and the signed generating function is \(q^{4}q^{2(n-3)}GF_{n-3}(q).\) This is because deleting \(1\leftrightarrow 1\leftrightarrow 2\) removes \(3\) vertices, possibly from the first chain, so its mod \(3\) value is unchanged and admissibility is preserved. 4. If the first component is \(1\leftrightarrow 2\leftrightarrow 2\), the remaining \(n-3\) vertices have labels at least \(4\), and the signed generating function is \(q^{5}q^{3(n-3)}GF_{n-3}(q)\). 5. If the first component is \(1\leftrightarrow 2\leftrightarrow 3\), the remaining \(n-3\) vertices have labels at least \(4\), and the signed generating function is \(q^{6}q^{3(n-3)}GF_{n-3}(q).\) As before we are deleting \(3\) vertices, so admissibility is preserved. 6. If the first component is \(1\leftrightarrow 2\leftrightarrow 2\leftrightarrow 3\), the remaining \(n-4\) vertices have labels at least \(4\), and the signed generating function is \(-q^{8}q^{3(n-4)}GF_{n-4}(q).\) These are the six terms in Proposition 4.2. ## 6. Remarks A topological explanation of Proposition 2.8 via an Euler characteristic is given in [2, Cor. 6.3]. The second Rogers-Ramanujan identity has a similar interpretation. **Proposition 6.1**.: _The signed generating function for all admissible neighborly partitions \(\gamma\) without a part of size 1 is_ \[\sum_{\gamma}sign(\gamma)q^{|\gamma|}= \prod_{k=0}^{\infty}(1-q^{5k+4})(1-q^{5k+5})(1-q^{5k+6})\] \[= \sum_{k=0}^{\infty}(-1)^{k}q^{k(5k+3)/2}(1+q+\cdots+q^{2k}).\] One may use a version of Proposition 4.2 which counts edges in \(G^{\prime}_{\lambda}\) to prove the next proposition. **Proposition 6.2**.: _The generating function for signed admissible partitions \(\lambda\) such that \(G^{\prime}_{\lambda}\)_ 1. _has_ \(2n\) _vertices and_ \(n+j\) _edges is_ \[(-1)^{n+j}\frac{(-q;q^{2})_{n-j}(q^{2n-2j-1};q^{-2})_{j}}{(q^{2};q^{2})_{2j}(q ^{2};q^{2})_{n-2j}}q^{2(n-j)^{2}+4j^{2}+2j},\] 2. _has_ \(2n+1\) _vertices and_ \(n+j+1\) _edges is_ \[(-1)^{n+j}\frac{(-q;q^{2})_{n-j}(q^{2n-2j-1};q^{-2})_{j}}{(q^{2};q^{2})_{2j+1} (q^{2};q^{2})_{n-2j-1}}q^{2(n-j)^{2}+4j^{2}+6j+2}.\] We do not know a proof of Theorem 4.3 using Proposition 6.2. It is classically known [5] that \[GF(x,q)=(xq;q)_{\infty}\sum_{k=0}^{\infty}\frac{q^{k^{2}}}{(q;q)_{k}}x^{k}=1+ \sum_{k=1}^{\infty}\frac{(-1)^{k}x^{2k}}{(q;q)_{k}}q^{(5k^{2}-k)/2}(xq;q)_{k-1 }(1-xq^{2k}) \tag{6.1}\] also satisfies Proposition 4.2.
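Both product-sum identities (Propositions 3.4 and 6.1) can be confirmed numerically to any fixed order in \(q\); a short Python sketch of ours that compares truncated coefficient lists:

```python
N = 60  # compare coefficients of q^0 .. q^N

def product(exponents):
    """Truncated coefficients of prod_{e in exponents} (1 - q^e)."""
    p = [1] + [0] * N
    for e in exponents:
        for i in range(N - e, -1, -1):  # multiply by (1 - q^e) in place
            p[i + e] -= p[i]
    return p

# Proposition 3.4: (q^2, q^3, q^5; q^5)_infty = sum over all integers k.
lhs = product([e for k in range(N)
               for e in (5*k + 2, 5*k + 3, 5*k + 5) if e <= N])
rhs = [0] * (N + 1)
k = 0
while k * (5 * k - 1) // 2 <= N:
    for kk in {k, -k}:
        e = kk * (5 * kk + 1) // 2
        if e <= N:
            rhs[e] += -1 if kk % 2 else 1
    k += 1
assert lhs == rhs

# Proposition 6.1: (q^4, q^5, q^6; q^5)_infty vs. the one-sided sum.
lhs = product([e for k in range(N)
               for e in (5*k + 4, 5*k + 5, 5*k + 6) if e <= N])
rhs = [0] * (N + 1)
k = 0
while k * (5 * k + 3) // 2 <= N:
    base = k * (5 * k + 3) // 2
    for j in range(2 * k + 1):  # the factor (1 + q + ... + q^{2k})
        if base + j <= N:
            rhs[base + j] += -1 if k % 2 else 1
    k += 1
assert lhs == rhs
```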
2310.05898
Lion Secretly Solves Constrained Optimization: As Lyapunov Predicts
Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It performs comparably or favorably to AdamW but with greater memory efficiency. As we can expect from the results of a random search program, Lion incorporates elements from several existing algorithms, including signed momentum, decoupled weight decay, Polyak, and Nesterov momentum, but does not fit into any existing category of theoretically grounded optimizers. Thus, even though Lion appears to perform well as a general-purpose optimizer for a wide range of tasks, its theoretical basis remains uncertain. This lack of theoretical clarity limits opportunities to further enhance and expand Lion's efficacy. This work aims to demystify Lion. Based on both continuous-time and discrete-time analysis, we demonstrate that Lion is a theoretically novel and principled approach for minimizing a general loss function $f(x)$ while enforcing a bound constraint $\|x\|_\infty \leq 1/\lambda$. Lion achieves this through the incorporation of decoupled weight decay, where $\lambda$ represents the weight decay coefficient. Our analysis is made possible by the development of a new Lyapunov function for the Lion updates. It applies to a broader family of Lion-$\kappa$ algorithms, where the $\text{sign}(\cdot)$ operator in Lion is replaced by the subgradient of a convex function $\kappa$, leading to the solution of a general composite optimization problem of $\min_x f(x) + \kappa^*(x)$. Our findings provide valuable insights into the dynamics of Lion and pave the way for further improvements and extensions of Lion-related algorithms.
Lizhang Chen, Bo Liu, Kaizhao Liang, Qiang Liu
2023-10-09T17:41:29Z
http://arxiv.org/abs/2310.05898v5
# Lion Secretly Solves Constrained Optimization, As Lyapunov Predicts ###### Abstract Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It performs comparably or favorably to AdamW but with greater memory efficiency. As we can expect from the results of a random search program, Lion incorporates elements from several existing algorithms, including signed momentum, decoupled weight decay, Polyak, and Nesterov momentum, but does not fit into any existing category of theoretically grounded optimizers. Thus, even though Lion appears to perform well as a general-purpose optimizer for a wide range of tasks, its theoretical basis remains uncertain. This lack of theoretical clarity limits opportunities to further enhance and expand Lion's efficacy. This work aims to demystify Lion. Based on both continuous-time and discrete-time analysis, we demonstrate that Lion is a theoretically novel and principled approach for minimizing a general loss function \(f(x)\) while enforcing a bound constraint \(\left\|x\right\|_{\infty}\leq 1/\lambda\). Lion achieves this through the incorporation of decoupled weight decay, where \(\lambda\) represents the weight decay coefficient. Our analysis is made possible by the development of a new Lyapunov function for the Lion updates. It applies to a broader family of Lion-\(\mathcal{K}\) algorithms, where the \(\text{sign}(\cdot)\) operator in Lion is replaced by the subgradient of a convex function \(\mathcal{K}\), leading to the solution of a general composite optimization problem of \(\min_{x}f(x)+\mathcal{K}^{*}(x)\). Our findings provide valuable insights into the dynamics of Lion and pave the way for further improvements and extensions of Lion-related algorithms. ## 1 Introduction Optimization serves as the cornerstone in training contemporary AI models. Given the immense computational demands associated with training large AI models, the design of an effective optimizer emerges as a paramount endeavor. Traditionally, efficient optimizers are devised by machine learning experts based on theoretical insights [4, 16, 21, 12]. Adam [15] and its variant AdamW [21] remain the most widely employed methods in deep learning. Recently, however, a new optimizer named Lion (Evolved Sign Momentum) [7] was discovered by an evolutionary search algorithm [33] applied to a symbolically represented program space [3]. Lion has been shown to achieve at least comparable performance to AdamW on a wide range of tasks while reducing memory cost and training time [7]. However, as the outcome of a stochastic search algorithm, Lion does not have an _a priori theoretical guarantee by design_. It is still uncertain whether Lion can be regarded as a reliable and legitimate general-purpose optimization algorithm, despite the reported positive results on a large, yet finite, set of tasks [7]. The lack of theoretical understanding also significantly restricts the potential for improving and extending Lion to obtain better new optimizers. In this work, we demonstrate that Lion, along with a broader family of Lion-\(\mathcal{K}\) algorithms, can be established as a theoretically novel and intriguing approach for solving optimization problems with convex regularization or constraints. This is surprising because Lion was discovered in a search space that includes arbitrary symbolic operations and was not designed with any theoretical guarantees. 
This discovery opens up promising opportunities for developing improved optimizers by leveraging the existing success of Lion. **Lion: Evolved Sign Momentum.** The update rule of Lion for minimizing a loss \(f(x)\) on \(\mathbb{R}^{d}\) is \[\begin{array}{ll}\text{Lion:}&m_{t+1}=\beta_{2}m_{t}-(1-\beta_{2})\nabla f(x_{ t}),\\ &x_{t+1}=x_{t}+\epsilon(\operatorname{sign}(\beta_{1}m_{t}-(1-\beta_{1}) \nabla f(x_{t}))-\lambda x_{t}),\end{array} \tag{1}\] where \(m_{t}\in\mathbb{R}^{d}\) is the momentum, \(\epsilon>0\) is the learning rate, \(\beta_{1},\beta_{2}\in[0,1]\) are two momentum-related coefficients, and \(\lambda\geq 0\) is a weight decay coefficient. A default value of \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\) was suggested in Chen et al. [7], with which the Lion update rule can be written directly as \[x_{t+1}\leftarrow(1-\epsilon\lambda)x_{t}-\epsilon\operatorname{sign}\left( (10+1)g_{t}+0.99g_{t-1}+0.99^{2}g_{t-2}+\cdots 0.99^{k}g_{t-k}+\cdots\right),\] where \(g_{t}=\nabla f(x_{t})\). Here the update of \(x_{t}\) combines a weight decay term with coefficient \((1-\epsilon\lambda)\), and the sign of a weighted average of the trajectory gradients. Notably, the weight of the current gradient \(g_{t}\) is increased by \((\beta_{2}-\beta_{1})/((1-\beta_{2})\beta_{1})\approx 10\) times compared with the typical exponential moving average of gradients as used in classical Polyak momentum [31]. One can think of Lion as made by "splicing" together elements of many existing algorithms, which is exactly what an efficient search program can do when given a proper search space [30, 7, 3]. The update of the momentum \(m_{t}\) is common to the Polyak momentum-based algorithms and yields the exponential moving average part of the update. What sets it apart is the unique update of \(x_{t}\), which uses the combination of three key elements: i) **[Sign Reshaper]** The use of the \(\operatorname{sign}(\cdot)\) function for the update, similar to signed gradient descent and signed momentum [5, 8], can be viewed as an extreme way of normalizing the magnitude of the coordinate-wise updates. It is closely related to normalized gradient [20, 26] and adaptive gradient methods such as Adam [15] and RMSprop [37]. Note that Adam can be viewed as signed momentum with an adaptive variance-based step size [2], which might be the key factor explaining the gap between Adam and SGD [19]. ii) **[Gradient Enhancement]** When using \(\beta_{2}>\beta_{1}\), the importance of the current gradient \(g_{t}\) is increased compared to the exponential moving average in the standard Polyak momentum update. It can be shown that Polyak momentum with this gradient enhancement results in Nesterov momentum, and leads to the well-known acceleration phenomenon [e.g., 36]. iii) **[Decoupled Weight Decay]** The weight decay term \(\lambda x_{t}\) is applied outside of the gradient and the \(\operatorname{sign}(\cdot)\). This idea of _decoupled_ weight decay is what makes AdamW [22] significantly outperform the vanilla Adam in training large AI models. As demonstrated by the empirical findings of Chen et al. [7] and subsequent research, the combination of these elements has been shown to make Lion perform well on a wide range of problems, including image classification, language models, and diffusion models [7]. However, it remains unclear whether the combination of these elements yields a theoretically valid and convergent general-purpose optimizer. 
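For concreteness, one Lion update of Eq. (1) fits in a few lines of NumPy (our illustrative sketch, not the authors' released implementation):

```python
import numpy as np

def lion_step(x, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.1):
    """One Lion update of Eq. (1) for parameters x with momentum m."""
    direction = np.sign(beta1 * m - (1 - beta1) * grad)  # sign reshaper
    x_new = x + lr * (direction - weight_decay * x)      # decoupled decay
    m_new = beta2 * m - (1 - beta2) * grad               # momentum update
    return x_new, m_new
```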
Furthermore, the use of decoupled weight decay adds to the uncertainty regarding what optimization problem Lion aims to solve: due to its interaction with other parts of the algorithm, decoupled weight decay is not always equivalent to simply introducing \(\ell_{2}\) regularization [21]. **"Lion King Meets Mr. Lyapunov."** We propose and analyze a general family of Lion-\(\mathcal{K}\) algorithms, in which we replace the \(\operatorname{sign}(\cdot)\) function in Lion with a subgradient \(\nabla\mathcal{K}\) of a general convex function \(\mathcal{K}\colon\mathbb{R}^{d}\to\mathbb{R}\): \[\begin{array}{ll}\text{Lion-$\mathcal{K}$}:&m_{t+1}=\beta_{2}m_{t}-(1-\beta_ {2})\nabla f(x_{t}),\\ &x_{t+1}=x_{t}+\epsilon(\nabla\mathcal{K}(\beta_{1} m_{t}-(1-\beta_{1})\nabla f(x_{t}))-\lambda x_{t}).\end{array} \tag{2}\] Lion is recovered when \(\mathcal{K}(x)=\left\|x\right\|_{1}\) and \(\nabla\mathcal{K}(x)=\operatorname{sign}(x)\). Taking the continuous time limit of (2), we obtain the following ordinary differential equation (ODE): \[\begin{array}{ll}\text{Lion-$\mathcal{K}$ (ODE):}&\dot{m}_{t}=-\alpha\nabla f(x_{t})-\gamma m_{t}\\ &\dot{x}_{t}=\nabla\mathcal{K}(m_{t}-\varepsilon(\alpha\nabla f(x_{t})+\gamma m _{t}))-\lambda x_{t},\end{array} \tag{3}\] Eq. (2) is the Euler discretization of Eq. (3) with step size \(\epsilon\) in the case of \(\alpha=\gamma\), with \(\beta_{1}=1-\varepsilon\gamma\), and \(\beta_{2}=1-\epsilon\gamma\). Lion-\(\mathcal{K}\) includes a broad set of algorithms as special cases, as shown in Table 1. To avoid the complexities associated with regularity conditions, we can assume that \(\mathcal{K}\) is continuously differentiable when discussing the ODE. But parallel results hold for the time-discrete algorithm (2) for general non-differentiable convex functions \(\mathcal{K}\). The crux of this work is to show that, when \(\varepsilon\gamma\leq 1\), the Lion-\(\mathcal{K}\) ODE solves the following optimization: \[\min_{x\in\mathbb{R}^{d}}F(x)\coloneqq\alpha f(x)+\frac{\gamma}{\lambda} \mathcal{K}^{*}(\lambda x), \tag{4}\] where \(\mathcal{K}^{*}(x)\coloneqq\sup_{z}(x^{\top}z-\mathcal{K}(z))\) is the conjugate function of \(\mathcal{K}\). Because we may have \(\mathcal{K}^{*}(x)=+\infty\) for some \(x\), solving (4) requires enforcing a constraint of \(\lambda x\in\mathrm{dom}\mathcal{K}^{*}\), where \(\mathrm{dom}\mathcal{K}^{*}\coloneqq\left\{x\colon\mathcal{K}^{*}(x)<+\infty\right\}\) is the effective domain of \(\mathcal{K}^{*}\). In the case of Lion, we have \(\mathcal{K}(x)=\left\|x\right\|_{1}\) and hence \(\mathcal{K}^{*}(x)=\delta(\left\|x\right\|_{\infty}\leq 1)\), where \(\delta\) is the \(\infty\)-indicator function with \(\delta(\texttt{True})=0\), \(\delta(\texttt{False})=+\infty\). Hence, Lion solves the following bound-constrained optimization problem: \[\min_{x\in\mathbb{R}^{d}}f(x)\quad s.t.\quad\left\|x\right\|_{\infty}\leq 1/\lambda, \tag{5}\] where the bound \(1/\lambda\) is solely decided by the weight decay coefficient \(\lambda\). 
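For the Lion case \(\mathcal{K}(x)=\left\|x\right\|_{1}\), the conjugate can be computed coordinate-wise; a short derivation of the claim above: \[\mathcal{K}^{*}(x)=\sup_{z}\big(x^{\top}z-\left\|z\right\|_{1}\big)=\sum_{i}\sup_{z_{i}}\big(x_{i}z_{i}-|z_{i}|\big)=\begin{cases}0&\text{if }\left\|x\right\|_{\infty}\leq 1,\\ +\infty&\text{otherwise},\end{cases}\] since \(x_{i}z_{i}-|z_{i}|\) is bounded above (by \(0\)) exactly when \(|x_{i}|\leq 1\). Substituting \(\mathcal{K}^{*}(\lambda x)\) into (4) then yields the bound constraint in (5).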
\begin{table} \begin{tabular}{c|c} \hline Polyak Momentum [31] & \(\mathcal{K}(x)=\left\|x\right\|_{2}^{2}/2\), \(\gamma\lambda=0\), \(\varepsilon=0\) \\ \hline Nesterov Momentum [28] & \(\mathcal{K}(x)=\left\|x\right\|_{2}^{2}/2\), \(\gamma\lambda=0\) \\ \hline Signed Momentum [5] & \(\mathcal{K}(x)=\left\|x\right\|_{1}\), \(\varepsilon=0\), \(\lambda=0\) \\ \hline Hamiltonian Descent [23] & \(\varepsilon=0\), \(\lambda=0\) \\ \hline Hamiltonian Descent for Composite Objectives [23] & \(\varepsilon=0\), \(\lambda>0\) \\ \hline Dual Space Preconditioning [24], Mirror Descent [27] & \(\varepsilon\gamma=1\), \(\lambda=0\) \\ \hline Signed Gradient Descent [5] & \(\mathcal{K}(x)=\left\|x\right\|_{1}\), \(\varepsilon\gamma=1\), \(\lambda=0\) \\ \hline Accelerated Mirror Descent [17] & \(\gamma=0\), \(\varepsilon=0\), \(\lambda>0\) \\ \hline Frank–Wolfe [11] & \(\varepsilon\gamma=1\), \(\lambda>0\) \\ \hline \end{tabular} \end{table} Table 1: Lion-\(\mathcal{K}\) includes a large family of algorithms as special cases. See Section 3.1. Figure 1: (a)-(c) Trajectories of Lion on the 2D function \(f(x)=(x_{1}-1.5)^{2}+x_{2}^{2}\), with \(\lambda=1.5\) and \(\lambda=0.5\). The boxes in (a) represent the constraint set: the blue box is for \(\left\|x\right\|_{\infty}\leq 1/\lambda\) with \(\lambda=0.5\), the green box for \(\lambda=1.5\). (d) \(\lambda\) vs. the converged loss. We can see that the converged loss starts to increase only when \(\lambda\) exceeds a threshold (\(\lambda\geq 0.6\)) large enough to exclude the unconstrained minimum from the constraint set. Figure 2: Histograms of the network parameters of ResNet-18 on CIFAR-10 trained by Lion with \(\lambda=10\). The constraint of \(\left\|x\right\|_{\infty}\leq 1/\lambda\) (indicated by the red vertical lines) is satisfied within only \(\sim\)200 steps. Our proof shows that the Lion-\(\mathcal{K}\) dynamics consists of two phases: 1) **[Phase 1]** When \(\lambda x\not\in\mathrm{dom}\mathcal{K}^{*}\), it exponentially decays the distance from \(\lambda x_{t}\) to the set \(\mathrm{dom}\mathcal{K}^{*}\): \[\mathrm{dist}(\lambda x_{t},\mathrm{dom}\mathcal{K}^{*})\leq\exp(-\lambda(t-s) )\,\mathrm{dist}(\lambda x_{s},\mathrm{dom}\mathcal{K}^{*}),\ \ \ \forall s\leq t.\] Hence, \(\lambda x_{t}\) converges to \(\mathrm{dom}\mathcal{K}^{*}\) rapidly and stays within \(\mathrm{dom}\mathcal{K}^{*}\) once it arrives. 2) **[Phase 2]** After \(\lambda x_{t}\) enters \(\mathrm{dom}\mathcal{K}^{*}\), the dynamics minimizes the finite-valued objective \(F(x)\). This is proved by showing that the Lion-\(\mathcal{K}\) dynamics minimizes the following Lyapunov function: \[H(x,m)=\alpha f(x)+\frac{\gamma}{\lambda}\mathcal{K}^{*}(\lambda x)+\frac{1- \varepsilon\gamma}{1+\varepsilon\lambda}(\mathcal{K}^{*}(\lambda x)+\mathcal{ K}(m)-\lambda m^{\top}x). \tag{6}\] We show that, whenever \(H(x_{t},m_{t})\) is finite, it is decreased monotonically (i.e., \(\frac{\mathrm{d}}{\mathrm{d}t}H(x_{t},m_{t})\leq 0\)) along trajectories of (3) until a local minimum point of \(H(x,m)\) is reached. Furthermore, we have \(F(x)=\min_{m}H(x,m)\), and hence minimizing \(H(x,m)\) is equivalent to minimizing \(F(x)\); this is because the minimum of the last term in (6) equals zero, \(\min_{m}\mathcal{K}^{*}(\lambda x)+\mathcal{K}(m)-\lambda m^{\top}x=0\), for any fixed \(x\), by the Fenchel-Young inequality. The discovery of this Lyapunov function is a new and non-trivial mathematical result. But intuitively, one can easily see the connection between (3) and (4) by comparing their fixed points. 
Assume \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are differentiable; then a fixed point of (3) implies a stationary point of (4): \[\underbrace{\alpha\nabla f(x)+\gamma m=0,\quad\nabla\mathcal{K}(m)=\lambda x}_{\text{fixed point of (3)}}\qquad\implies\qquad\underbrace{\alpha\nabla f(x)+\gamma\nabla\mathcal{K}^{*}(\lambda x)=0}_{\text{stationary point of (4)}},\] since \(\nabla\mathcal{K}(m)=\lambda x\) is equivalent to \(m=\nabla\mathcal{K}^{*}(\lambda x)\) when \(\nabla\mathcal{K}\) and \(\nabla\mathcal{K}^{*}\) are inverse maps.
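As a quick numerical sanity check of this picture, and of the Phase-1 behavior illustrated in Figures 1 and 2, the following sketch (reusing the `lion_k_step` helper defined in the earlier sketch) runs Lion on the 2D toy objective from Figure 1 and tracks the constraint violation \(\max(\lambda\|x\|_{\infty}-1,0)\), which should decay rapidly to zero and stay there while the loss settles at the constrained optimum.

```python
import torch

def f(x):                              # toy objective from Figure 1
    return (x[0] - 1.5) ** 2 + x[1] ** 2

x, m, lam = torch.tensor([4.0, -4.0]), torch.zeros(2), 1.5
for t in range(2001):
    xg = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(f(xg), xg)
    x, m = lion_k_step(x, m, grad, eps=1e-2, lam=lam)
    if t % 500 == 0:
        violation = max(lam * x.abs().max().item() - 1.0, 0.0)
        print(t, round(f(x).item(), 4), round(violation, 4))
```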
**Going Beyond Lion** Different choices of \(\mathcal{K}\) yield optimization problems with different convex constraints and/or regularizations. For example, using the \(\ell_{p}\) norm \(\mathcal{K}(x)=\left\|x\right\|_{p}\) yields a constraint on the dual norm, \(\left\|x\right\|_{q}\leq 1/\lambda\) with \(1/p+1/q=1\) (Table 2, Line 2); zeroing out the coordinates with small magnitude corresponds to introducing an \(\ell_{1}\) regularization (Line 3) or an \(\ell_{1}\) constraint (Line 4), which is useful for sparse learning; replacing \(\nabla\mathcal{K}(x)=\mathrm{sign}(x)\) with a continuous function introduces an extra regularization term on the loss (e.g., Line 5). This work focuses on building the basic theoretical framework, and leaves the vast opportunities of practical applications as future directions. **Outline** The rest of the paper is organized as follows. Section 2 introduces preliminaries on convex functions. Section 3 analyzes the continuous-time Lion-\(\mathcal{K}\) dynamics and discusses connections with existing algorithms. Section 4 presents the discrete-time analysis. Section 5 presents experiments that study and verify the behavior of using different \(\mathcal{K}\)s.

## 2 Preliminaries on Convex Functions

Assume \(\mathcal{K}\colon\mathbb{R}^{d}\to\mathbb{R}\) is convex. A vector \(u\in\mathbb{R}^{d}\) is said to be a subgradient of \(\mathcal{K}\) at \(x\), denoted as \(u\in\partial\mathcal{K}(x)\), if \[\mathcal{K}(y)-\mathcal{K}(x)\geq u^{\top}(y-x),\quad\forall y\in\mathbb{R}^{d}.\] With an abuse of notation, we use \(\nabla\mathcal{K}(x)\) to denote a subgradient of \(\mathcal{K}\), that is, \(\nabla\mathcal{K}(x)\in\partial\mathcal{K}(x)\). When \(\mathcal{K}\) is differentiable at \(x\), there is a unique subgradient \(\nabla\mathcal{K}(x)\), which coincides with the regular derivative.
The conjugate function \(\mathcal{K}^{*}\) of \(\mathcal{K}\) is defined as \[\mathcal{K}^{*}(x)=\sup_{z\in\mathbb{R}^{d}}(x^{\top}z-\mathcal{K}(z)).\] \begin{table} \begin{tabular}{c|c|c|c} \hline Line ID & \(\mathcal{K}(x)\) & \(\nabla\mathcal{K}(x)\) & \(\min_{x}f(x)+\mathcal{K}^{*}(x)\) \\ \hline 1 & \(\left\|x\right\|_{1}\) & \(\mathrm{sign}(x)\) & \(\min f(x)\) \(s.t.\) \(\left\|x\right\|_{\infty}\leq 1\) \\ \hline 2 & \(\left\|x\right\|_{p}\) & \(\frac{\mathrm{sign}(x)\left|x\right|^{p-1}}{\left\|x\right\|_{p}^{p-1}}\) & \(\min f(x)\) \(s.t.\) \(\left\|x\right\|_{q}\leq 1\) \\ \hline 3 & \(\sum_{i}\max(\left|x_{i}\right|-e,0)\) & \(\mathrm{sign}(x)\mathbb{I}(\left|x\right|>e)\) & \(\min f(x)+e\left\|x\right\|_{1}\) \(s.t.\) \(\left\|x\right\|_{\infty}\leq 1\) \\ \hline 4 & \(\sum_{i\leq i^{cut}}\left|x_{(i)}\right|\) & \(\mathrm{sign}(x)\mathbb{I}(\left|x\right|>\left|x_{(i^{cut})}\right|)\) & \(\min f(x)\) \(s.t.\) \(\left\|x\right\|_{1}\leq i^{cut},\) \(\left\|x\right\|_{\infty}\leq 1\) \\ \hline 5 & \(\sum_{i}\mathrm{huber}_{e}(x_{i})\) & \(\mathrm{clip}(x,-e,e)/e\) & \(\min f(x)+\frac{e}{2}\left\|x\right\|_{2}^{2}\) \(s.t.\) \(\left\|x\right\|_{\infty}\leq 1\) \\ \hline \end{tabular} \end{table} Table 2: Examples of \(\mathcal{K}\) and \(\nabla\mathcal{K}\), and the optimization problems they solve (we set \(\gamma=\lambda=1\) for simplicity). We assume \(x=[x_{1},\ldots,x_{d}]\in\mathbb{R}^{d}\), \(\left|x_{(1)}\right|\geq\left|x_{(2)}\right|\geq\cdots\) is a monotonic sorting of the elements of \(x\), and \(i^{cut}\) is an integer in \(\{1,\ldots,d\}\). The Huber loss is \(\mathrm{huber}_{e}(x_{i})=\mathbb{I}(\left|x_{i}\right|\geq e)(\left|x_{i}\right|-\frac{e}{2})+\mathbb{I}(\left|x_{i}\right|<e)\frac{1}{2e}x_{i}^{2}\), \(e>0\). See Appendix A for more examples. Figure 4: Analysis of weight decay on CIFAR-10 using Lion. a) The converged loss vs. weight decay in Lion. We can see that the loss starts to increase only when \(\lambda\) exceeds a threshold, which is expected from the constrained optimization view. b) The loss curves vs. epochs with different weight decays. Larger weight decay \(\lambda\) yields faster convergence (due to a stronger Phase 1), but may yield a larger final loss when it is too large. Hence, by definition, we have the following Fenchel-Young inequality: \[\mathcal{K}(x)+\mathcal{K}^{*}(y)\geq x^{\top}y,\ \ \ \forall x,y. \tag{7}\] The conjugate function \(\mathcal{K}^{*}\) can take values in the extended real set \(\overline{\mathbb{R}}=\mathbb{R}\cup\{\pm\infty\}\), and \(\mathcal{K}^{*}\) is always closed and convex, even when \(\mathcal{K}\) is not. Recall that a function \(f\) is said to be closed if for each \(b\in\mathbb{R}\), its sublevel set \(\{x\colon f(x)\leq b\}\) is closed. If \(\mathcal{K}\) is closed and convex, we have \(\mathcal{K}^{**}=\mathcal{K}\), and \[y\in\partial\mathcal{K}(x)\qquad\iff\qquad x\in\partial\mathcal{K}^{*}(y)\qquad\iff\qquad\mathcal{K}(x)+\mathcal{K}^{*}(y)=x^{\top}y. \tag{8}\] When \(\mathcal{K}\) and \(\mathcal{K}^{*}\) are differentiable, (8) implies that \(\nabla\mathcal{K}\) and \(\nabla\mathcal{K}^{*}\) are a pair of inverse maps: \(\nabla\mathcal{K}(\nabla\mathcal{K}^{*}(x))=x\). Combining (7) and (8), we get \(\min_{m}\mathcal{K}(m)+\mathcal{K}^{*}(x)-x^{\top}m=0\), which yields \(F(x)=\min_{m}H(x,m)\). We refer to Rockafellar [34] for a systematic introduction to convex functions.
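To ground Table 2, here is a minimal sketch (in PyTorch, with function names of our own choosing) of the subgradient maps \(\nabla\mathcal{K}\) listed there; each can be passed as `nabla_K` to the `lion_k_step` helper sketched earlier.

```python
import torch

def grad_l1(x):                          # Line 1 (Lion): sign(x)
    return torch.sign(x)

def grad_lp(x, p=1.5):                   # Line 2: subgradient of ||x||_p
    n = x.norm(p)
    if n == 0:
        return torch.zeros_like(x)
    return torch.sign(x) * x.abs() ** (p - 1) / n ** (p - 1)

def grad_soft_threshold(x, e=0.1):       # Line 3: sign(x) * 1(|x| > e)
    return torch.sign(x) * (x.abs() > e).to(x.dtype)

def grad_topk(x, k=1):                   # Line 4: keep the k largest |x_i|
    thresh = x.abs().flatten().kthvalue(x.numel() - k + 1).values
    return torch.sign(x) * (x.abs() >= thresh).to(x.dtype)

def grad_huber(x, e=0.1):                # Line 5: clip(x, -e, e) / e
    return torch.clamp(x, -e, e) / e
```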
A key property of the subgradients \(\nabla\mathcal{K}\) and \(\nabla\mathcal{K}^{*}\) is that they are monotonic maps, which plays a crucial role in our results. **Lemma 2.1**.: _Assume \(\mathcal{K},\mathcal{K}^{*}\) are a closed convex conjugate pair and \(\nabla\mathcal{K}\), \(\nabla\mathcal{K}^{*}\) are their subgradients. Then we have_ \[(\nabla\mathcal{K}(x)-\nabla\mathcal{K}(y))^{\top}(x-y)\geq 0,\qquad\qquad(\nabla\mathcal{K}(x)-y)^{\top}(x-\nabla\mathcal{K}^{*}(y))\geq 0. \tag{9}\] See Appendix B.1 for the proof. These two inequalities are crucial because they allow us to identify vectors that have a non-negative inner product with a given direction, which is what we need to achieve monotonic descent in optimization. **Example 2.2**.: _In the case of Lion, we take \(\mathcal{K}(x)=\left\|x\right\|_{1}\) with \(\nabla\mathcal{K}(x)=\operatorname{sign}(x)\), and_ \[\mathcal{K}^{*}(y)=\begin{cases}0&\text{if }\|y\|_{\infty}\leq 1\\ +\infty&\text{if }\|y\|_{\infty}>1\end{cases},\qquad\qquad[\nabla\mathcal{K}^{*}(y)]_{i}=\begin{cases}0&\text{if }|y_{i}|\leq 1\\ +\infty&y_{i}>1\\ -\infty&y_{i}<-1.\end{cases}\] _One can verify that the inequalities in (9) hold (even though the values on the left side can be \(+\infty\)). The Lyapunov function in (6) becomes_ \[H(x,m)=\begin{cases}f(x)+\frac{1-\varepsilon\gamma}{1+\varepsilon\lambda}(\|m\|_{1}-\lambda x^{\top}m)&\text{if }\|x\|_{\infty}\leq 1/\lambda\\ +\infty&\text{if }\|x\|_{\infty}>1/\lambda.\end{cases}\]

## 3 Main Result: Continuous-Time

We study the continuous-time Lion-\(\mathcal{K}\) dynamics (3), and discuss its connection to the existing algorithms listed in Table 1. We defer the detailed proofs to Appendix B.7, but outline a novel _implicit Hamiltonian + descent decomposition_ that underpins the construction of the Lyapunov function \(H(x,m)\). **Theorem 3.1**.: _Let \((x_{t},m_{t})\) be a continuously differentiable trajectory of the Lion-\(\mathcal{K}\) ODE (3), where \(\mathcal{K}\) is differentiable convex with conjugate \(\mathcal{K}^{*}\). Assume \(\alpha,\gamma,\lambda,\varepsilon>0\) and \(\varepsilon\gamma\leq 1\)._ _1) **[Phase 1]** Define \(\operatorname{dist}(\lambda x_{t},\operatorname{dom}\mathcal{K}^{*})=\inf_{z\in\operatorname{dom}\mathcal{K}^{*}}\left\|z-\lambda x_{t}\right\|\) w.r.t. any norm \(\left\|\cdot\right\|\). We have_ \[\operatorname{dist}(\lambda x_{t},\operatorname{dom}\mathcal{K}^{*})\leq\exp(\lambda(s-t))\operatorname{dist}(\lambda x_{s},\operatorname{dom}\mathcal{K}^{*}),\quad\forall 0\leq s\leq t.\] _Hence, \(\lambda x_{t}\) converges linearly to the set \(\operatorname{dom}\mathcal{K}^{*}\) and stays within \(\operatorname{dom}\mathcal{K}^{*}\) once it enters it._ _2) **[Phase 2]** When \(H(x,m)\) in (6) is finite and continuously differentiable, it is decreased monotonically along the trajectory:_ \[-\frac{\mathrm{d}}{\mathrm{d}t}H(x_{t},m_{t})=\Delta(x_{t},m_{t}):=\frac{\lambda+\gamma}{1+\varepsilon\lambda}\Delta_{1}(x_{t},\tilde{m}_{t})+\frac{1-\varepsilon\gamma}{1+\varepsilon\lambda}\Delta_{2}(m_{t},\tilde{m}_{t})\geq 0,\] _where we define \(\tilde{m}_{t}=m_{t}-\varepsilon(\alpha\nabla f(x_{t})+\gamma m_{t})\), and_ \[\Delta_{1}(x_{t},\tilde{m}_{t})=(\tilde{m}_{t}-\nabla\mathcal{K}^{*}(\lambda x_{t}))^{\top}(\nabla\mathcal{K}(\tilde{m}_{t})-\lambda x_{t})\geq 0, \tag{10}\] \[\Delta_{2}(m_{t},\tilde{m}_{t})=\frac{1}{\varepsilon}(\tilde{m}_{t}-m_{t})^{\top}(\nabla\mathcal{K}(\tilde{m}_{t})-\nabla\mathcal{K}(m_{t}))\geq 0.\] _3) **[Stationarity]** Assume \(\nabla\mathcal{K}^{*}\) is strictly monotonic._
_All the accumulation points of \((x_{t},m_{t})\) as \(t\to+\infty\) are stationary points of the objective function \(F(x)=\alpha f(x)+\frac{\gamma}{\lambda}\mathcal{K}^{*}(\lambda x)\), and satisfy \(\lambda x\in\mathrm{dom}\mathcal{K}^{*}\)._ \(\Delta(x_{t},m_{t})\) can be viewed as an indication of the stationarity of the system. If \(H(x_{0},m_{0})\) is finite and \(H_{b}=\inf_{x,m}H(x,m)>-\infty\), we have \(\frac{1}{T}\int_{0}^{T}\Delta(x_{t},m_{t})\mathrm{d}t\leq\frac{H(x_{0},m_{0})-H_{b}}{T}\to 0\) as \(T\to+\infty\). Proof Sketch.: See Appendix B.7 for the full proof. The original discovery of the Lyapunov function was made possible by starting from the inequalities in (10), as guaranteed by Lemma 2.1, and working backwards with some guesswork. The following is a simplified proof that highlights the essential mathematical structure that makes \(H(x,m)\) a Lyapunov function. Define \[\dot{x}=V_{x}(x,m)\coloneqq\nabla\mathcal{K}(\tilde{m})-\lambda x,\qquad\dot{m}=V_{m}(x,m)\coloneqq-\alpha\nabla f(x)-\gamma m=\frac{\tilde{m}-m}{\varepsilon}\] and the related \[\hat{V}_{x}(x,m)=\tilde{m}-\nabla\mathcal{K}^{*}(\lambda x),\qquad\qquad\hat{V}_{m}(x,m)=\nabla\mathcal{K}(\tilde{m})-\nabla\mathcal{K}(m).\] The fields \(\hat{V}_{x}\) and \(\hat{V}_{m}\) have two critical properties: 1) By Lemma 2.1, \(\hat{V}_{x}\) and \(\hat{V}_{m}\) have non-negative inner products with \(V_{x},V_{m}\), respectively: \[\hat{V}_{x}(x,m)^{\top}V_{x}(x,m)\geq 0,\qquad\qquad\hat{V}_{m}(x,m)^{\top}V_{m}(x,m)\geq 0,\qquad\forall x,m.\] 2) By Lemma B.5 in Appendix B.7, the gradients of \(H\) can be decomposed as follows: \[\begin{split}\nabla_{x}H(x,m)&=-\eta^{\prime}\hat{V}_{x}-\eta V_{m}\qquad\text{(\bf Implicit Hamiltonian + Descent)}\\ \nabla_{m}H(x,m)&=-\eta\hat{V}_{m}+\eta V_{x},\end{split} \tag{11}\] where \(\eta=\frac{1-\varepsilon\gamma}{1+\varepsilon\lambda}\) and \(\eta^{\prime}=\frac{\gamma+\lambda}{1+\varepsilon\lambda}\). We call (11) an _"implicit" Hamiltonian + descent_ decomposition, in connection with the Hamiltonian + descent decomposition we introduce in the sequel. Then we have \[\frac{\mathrm{d}}{\mathrm{d}t}H(x_{t},m_{t})=\nabla_{x}H^{\top}V_{x}+\nabla_{m}H^{\top}V_{m}=(-\eta^{\prime}\hat{V}_{x}-\eta V_{m})^{\top}V_{x}+(-\eta\hat{V}_{m}+\eta V_{x})^{\top}V_{m}=-(\eta^{\prime}\hat{V}_{x}^{\top}V_{x}+\eta\hat{V}_{m}^{\top}V_{m})\leq 0.\] The key here is that the cross term \(\eta V_{x}^{\top}V_{m}\) is canceled, leaving only the non-positive terms. The convergence property uses LaSalle's invariance principle; see Appendix B.7 for details.
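As an illustrative check of this descent property, the following sketch (our own construction) integrates the Lion-\(\mathcal{K}\) ODE (3) by forward Euler with a small step for the smooth choice \(\mathcal{K}(x)=\frac{1}{2}\|x\|_{2}^{2}\) (so \(\mathcal{K}^{*}(y)=\frac{1}{2}\|y\|_{2}^{2}\) and \(\nabla\mathcal{K}(x)=x\)), and verifies that \(H(x_{t},m_{t})\) from (6) is numerically non-increasing; the constants are arbitrary choices satisfying \(\varepsilon\gamma\leq 1\).

```python
import numpy as np

alpha, gamma, lam, eps = 1.0, 1.0, 0.5, 0.1     # eps * gamma <= 1
x_star = np.array([1.5, 0.0])
f = lambda x: np.sum((x - x_star) ** 2)
grad_f = lambda x: 2 * (x - x_star)

def H(x, m):  # Lyapunov function (6) with K(x) = K*(x) = 0.5 ||x||^2
    eta = (1 - eps * gamma) / (1 + eps * lam)
    Ks = 0.5 * np.sum((lam * x) ** 2)           # K*(lam * x)
    return alpha * f(x) + (gamma / lam) * Ks + eta * (
        Ks + 0.5 * np.sum(m ** 2) - lam * np.dot(m, x))

x, m, dt = np.array([3.0, -2.0]), np.zeros(2), 1e-3
Hs = [H(x, m)]
for _ in range(20000):
    gx = grad_f(x)
    mt = m - eps * (alpha * gx + gamma * m)     # tilde m in (3)
    x, m = x + dt * (mt - lam * x), m + dt * (-alpha * gx - gamma * m)
    Hs.append(H(x, m))
print(max(np.diff(Hs)))  # non-positive up to O(dt^2) discretization error
```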
**Hamiltonian + Descent Decomposition** The decomposition structure (11) is a key characterization of the Lion-\(\mathcal{K}\) ODE. An interesting remark is that \(H(x,m)\) is also Lyapunov if we have the following _Hamiltonian + descent_ structure [23, 29], in which the roles of \([\nabla_{x}H,\nabla_{m}H]\) and \([V_{x},V_{m}]\) in (11) are switched: \[\begin{split}V_{x}&=-\hat{H}_{x}-\eta\nabla_{m}H\qquad\text{(\bf Hamiltonian + Descent)}\\ V_{m}&=-\hat{H}_{m}+\eta\nabla_{x}H,\end{split} \tag{12}\] where \(\hat{H}_{x},\hat{H}_{m}\) are two vector fields satisfying \(\hat{H}_{x}^{\top}(\nabla_{x}H)\geq 0\) and \(\hat{H}_{m}^{\top}(\nabla_{m}H)\geq 0\); then \[\frac{\mathrm{d}}{\mathrm{d}t}H(x_{t},m_{t})=\nabla_{x}H^{\top}V_{x}+\nabla_{m}H^{\top}V_{m}=\nabla_{x}H^{\top}(-\hat{H}_{x}-\eta\nabla_{m}H)+\nabla_{m}H^{\top}(-\hat{H}_{m}+\eta\nabla_{x}H)=-(\hat{H}_{x}^{\top}(\nabla_{x}H)+\hat{H}_{m}^{\top}(\nabla_{m}H))\leq 0.\] The structure in (12) can be intuitively viewed as a generalized damped Hamiltonian system with \(H(x,m)\) as the total energy, where \([-\hat{H}_{x},-\hat{H}_{m}]\) serves as a damping force that monotonically decreases the total energy, and \([-\nabla_{m}H,\nabla_{x}H]\) is the Hamiltonian vector field, which preserves the energy but introduces an inertia-like effect into the system. One can easily verify (12) on classical Polyak momentum. The more general idea is explored in the Hamiltonian descent methods of [23, 29], which consider systems of structure (12) for separable Hamiltonians of the form \(H(x,m)=f(x)+\mathcal{K}(m)\) with \(\hat{H}_{x}=0\). In contrast, (11) does not seem to have a clear physical interpretation, yet it provides a handy tool for understanding the general Lion-\(\mathcal{K}\) dynamics. Some special cases of Lion-\(\mathcal{K}\), such as when \(\lambda=0\) or \(\varepsilon=0\), can also be alternatively viewed from the Hamiltonian + descent structure, as shown in Section 3.1.

### 3.1 Connection with Existing Algorithms

What makes Lion-\(\mathcal{K}\) unique is the combination of the gradient enhancement (\(\varepsilon>0\)), the decoupled weight decay (\(\lambda>0\)), the momentum damping (\(\gamma>0\)), and the use of the reshaping function \(\nabla\mathcal{K}(\cdot)\). We discuss the effects of these elements in connection to the existing algorithms shown in Table 1. **Lion-\(\mathcal{K}\) Without Weight Decay** When \(\lambda=0\) and \(\nabla\mathcal{K}^{*}(0)=0\), we have \(\lim_{\lambda\to 0}\frac{1}{\lambda}\mathcal{K}^{*}(\lambda x)=\nabla\mathcal{K}^{*}(0)^{\top}x=0\), and the Lyapunov function can be defined as \[H(x,m)=\alpha f(x)+(1-\varepsilon\gamma)\mathcal{K}(m),\] for which we have \[-\frac{\mathrm{d}}{\mathrm{d}t}H(x_{t},m_{t})=\gamma\nabla\mathcal{K}(\tilde{m}_{t})^{\top}\tilde{m}_{t}+\frac{(1-\varepsilon\gamma)}{\varepsilon}(\tilde{m}_{t}-m_{t})^{\top}(\nabla\mathcal{K}(\tilde{m}_{t})-\nabla\mathcal{K}(m_{t}))\geq 0.\] In this case, the algorithm solves \(\min_{x}f(x)\), without the regularization term \(\mathcal{K}^{*}(\lambda x)\).
Interestingly, in this case (\(\lambda=0\) and \(1-\varepsilon\gamma>0\)), there exists a second Lyapunov function: \[\tilde{H}(x,m)=\alpha f(x)+\frac{1}{1-\varepsilon\gamma}\mathcal{K}((1-\varepsilon\gamma)m), \tag{13}\] with which the Lion-\(\mathcal{K}\) ODE (\(\lambda=0\)) can be decomposed in the form of (12), as a sum of a Hamiltonian vector field and a descent direction: \[\begin{bmatrix}\dot{x}_{t}\\ \dot{m}_{t}\end{bmatrix}=\underbrace{\begin{bmatrix}+\nabla_{m}\tilde{H}(x_{t},m_{t})\\ -\nabla_{x}\tilde{H}(x_{t},m_{t})\end{bmatrix}}_{\text{Hamiltonian}}-\underbrace{\begin{bmatrix}\nabla\mathcal{K}(\tilde{m}_{t}^{0})-\nabla\mathcal{K}(\tilde{m}_{t})\\ \gamma m_{t}\end{bmatrix}}_{\text{Descent}},\] where \(\tilde{m}_{t}^{0}=(1-\varepsilon\gamma)m_{t}\) and hence \(\tilde{m}_{t}^{0}-\tilde{m}_{t}=\varepsilon\alpha\nabla f(x_{t})\). If \(m=0\) is a minimum of \(\mathcal{K}(m)\), one can show that the second component above is a descent direction of \(\tilde{H}(x,m)\) in (13), with \[-\frac{\mathrm{d}}{\mathrm{d}t}\tilde{H}(x_{t},m_{t})=\gamma\nabla\mathcal{K}(\tilde{m}_{t}^{0})^{\top}m_{t}+\frac{1}{\varepsilon}(\tilde{m}_{t}^{0}-\tilde{m}_{t})^{\top}(\nabla\mathcal{K}(\tilde{m}_{t}^{0})-\nabla\mathcal{K}(\tilde{m}_{t}))\geq 0.\] See Appendix B.6 for details. **Lion-\(\mathcal{K}\) Without Momentum Damping** When \(\gamma=0\), we have \[H(x,m)=\alpha f(x)+\frac{1}{1+\varepsilon\lambda}(\mathcal{K}^{*}(\lambda x)+\mathcal{K}(m)-\lambda x^{\top}m).\] Because \(\min_{m}(\mathcal{K}^{*}(\lambda x)+\mathcal{K}(m)-\lambda x^{\top}m)=0\), the algorithm again corresponds to solving \(\min_{x}f(x)\) without the regularization \(\mathcal{K}^{*}(\lambda x)\). It is interesting to see that the weight decay and the momentum damping play a somewhat symmetric role, because turning off either one of them turns off the regularization term \(\mathcal{K}^{*}(\lambda x)\). In particular, if \(\mathcal{K}(x)=\left\|x\right\|_{2}^{2}/2\), the Lion-\(\mathcal{K}\) ODE can be rewritten into a second-order ODE: \[\ddot{x}_{t}+(\lambda+\gamma)\dot{x}_{t}+\varepsilon\alpha\nabla^{2}f(x_{t})\dot{x}_{t}+\gamma\lambda x_{t}+\alpha\nabla f(x_{t})=0, \tag{14}\] in which the roles of \(\gamma\) and \(\lambda\) are symmetric. Equation (14) coincides with the high-resolution ODE in [36] for minimizing \(F(x)=\alpha f(x)+\gamma\lambda\left\|x\right\|_{2}^{2}/2\), which is a high-resolution continuous-time limit of Nesterov momentum. The Hessian-based damping term \(\nabla^{2}f(x_{t})\dot{x}_{t}\) plays a key role in the acceleration phenomenon [see e.g., 36, 1]. When we turn off the gradient enhancement (\(\varepsilon=0\)), we recover the ODE of Polyak momentum. Interestingly, if we set \(\lambda=\gamma=0\) but \(\varepsilon>0\), ODE (14) still serves to minimize \(f(x)\), due to the Hessian damping term. **Lion-\(\mathcal{K}\) Without Gradient Enhancement** When \(\varepsilon=0\), we have \[H(x,m)=\alpha f(x)+\frac{\gamma}{\lambda}\mathcal{K}^{*}(\lambda x)+(\mathcal{K}^{*}(\lambda x)+\mathcal{K}(m)-\lambda m^{\top}x),\] and \(\Delta_{2}(m,\tilde{m})=0\), \[\Delta(x,m)=(\lambda+\gamma)\Delta_{1}(x,m)=(\lambda+\gamma)(m-\nabla\mathcal{K}^{*}(\lambda x))^{\top}(\nabla\mathcal{K}(m)-\lambda x).\] In this case, minimizing \(H(x,m)\) still yields the minimization of \(F(x)\). Hence, the choice of \(\varepsilon\) does not alter the objective function.
Moreover, with \(\varepsilon=0\), one can conveniently decompose the velocity field in the form of (12), as a sum of a Hamiltonian vector field and a mirror descent direction: \[\begin{bmatrix}\dot{x}_{t}\\ \dot{m}_{t}\end{bmatrix}=\underbrace{\begin{bmatrix}+\nabla_{m}H(x_{t},m_{t})\\ -\nabla_{x}H(x_{t},m_{t})\end{bmatrix}}_{\text{Hamiltonian}}-\underbrace{\begin{bmatrix}0\\ (\gamma+\lambda)(m_{t}-\nabla\mathcal{K}^{*}(\lambda x_{t}))\end{bmatrix}}_{\text{Descent}}.\] This system can be shown to be equivalent to the Hamiltonian descent system for composite objectives of [29]. Further, if \(\lambda=0\), it reduces to the conformal Hamiltonian system [e.g., 23, 25]. **Mirror Descent and Frank-Wolfe** If \(\varepsilon\gamma=1\), Lion-\(\mathcal{K}\) reduces to \[\dot{x}_{t}=\nabla\mathcal{K}(-\varepsilon\alpha\nabla f(x_{t}))-\lambda x_{t},\] which can be shown to be equivalent to the Frank-Wolfe algorithm for minimizing \(F(x)=\alpha f(x)+\frac{\gamma}{\lambda}\mathcal{K}^{*}(\lambda x)\). When \(\varepsilon\gamma=1\) and \(\lambda=0\), with \(\nabla\mathcal{K}(x)=0\) iff \(x=0\), Lion-\(\mathcal{K}\) reduces to \(\dot{x}_{t}=\nabla\mathcal{K}(-\varepsilon\alpha\nabla f(x_{t}))\), which is dual space preconditioning [24], or a variant of mirror descent for \(\min_{x}f(x)\). See Appendix B.4 for more discussion. **Accelerated Mirror Descent** The accelerated mirror descent of Krichene et al. [17] is \[\dot{x}_{t}=\lambda_{t}(\nabla\mathcal{K}(m_{t})-x_{t}),\qquad\dot{m}_{t}=-\alpha_{t}\nabla f(x_{t}),\] which is shown to exhibit an acceleration behavior for minimizing a convex \(f\) (without the \(\mathcal{K}^{*}\) regularization) when \(\alpha_{t}=t/r\), \(\lambda_{t}=r/t\), and \(r\geq 2\). This can be viewed as the Lion-\(\mathcal{K}\) ODE with \(\gamma=0\) and \(\varepsilon=0\), but with special time-dependent coefficients.

## 4 Discrete Time Analysis

We now present a result on the discrete-time Lion-\(\mathcal{K}\), parallel to the continuous-time results in Theorem 3.1, which works for non-differentiable convex functions \(\mathcal{K}\). We analyze a slight reformulation of (2): \[\begin{split}m_{t+1}&=\beta_{2}m_{t}-(1-\beta_{2})\nabla f(x_{t})\\ \tilde{m}_{t+1}&=\beta_{1}m_{t}-(1-\beta_{1})\nabla f(x_{t})\\ x_{t+1}&=x_{t}+\epsilon(\nabla\mathcal{K}(\tilde{m}_{t+1})-\lambda x_{t+1}),\end{split} \tag{15}\] in which we use an implicit scheme for the \(x_{t}\)-update, replacing \(\lambda x_{t}\) with \(\lambda x_{t+1}\). It is equivalent to the explicit scheme in (2) with \(\epsilon\) replaced by \(\epsilon^{\prime}=\frac{\epsilon}{1+\epsilon\lambda}\). **Theorem 4.1**.: _Assume \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) is \(L\)-smooth, \(\mathcal{K}\colon\mathbb{R}^{d}\to\mathbb{R}\) is closed and convex, and \(\nabla\mathcal{K}\) is a subgradient of \(\mathcal{K}\)._
_Assume \(\beta_{1},\beta_{2}\in(0,1)\), \(\beta_{2}>\beta_{1}\), and \(\epsilon,\lambda>0\)._ _1) For any two non-negative integers \(s\leq t\), we have_ \[\operatorname{dist}(\lambda x_{t},\operatorname{dom}\mathcal{K}^{*})\leq\left(\frac{1}{1+\epsilon\lambda}\right)^{t-s}\operatorname{dist}(\lambda x_{s},\operatorname{dom}\mathcal{K}^{*}).\] _2) Define the following Lyapunov function:_ \[H(x,m)=f(x)+\frac{1}{\lambda}\mathcal{K}^{*}(\lambda x)+\frac{\beta_{1}}{\epsilon\lambda(1-\beta_{1})+(1-\beta_{2})}(\mathcal{K}^{*}(\lambda x)+\mathcal{K}(m)-\lambda x^{\top}m)\] _and the quantities_ \[\Delta_{t}^{1} =(\nabla\mathcal{K}(\tilde{m}_{t+1})-\lambda x_{t+1})^{\top}(\tilde{m}_{t+1}-\nabla\mathcal{K}^{*}(\lambda x_{t+1}))\geq 0,\] \[\Delta_{t}^{2} =(\nabla\mathcal{K}(\tilde{m}_{t+1})-\nabla\mathcal{K}(m_{t+1}))^{\top}(\tilde{m}_{t+1}-m_{t+1})\geq 0,\] _where \(\nabla\mathcal{K}^{*}\) is a subgradient of \(\mathcal{K}^{*}\). Then we have_ \[H(x_{t+1},m_{t+1})-H(x_{t},m_{t})\leq-\epsilon\Delta_{t}+\frac{L\epsilon^{2}}{2}\left\|\nabla\mathcal{K}(\tilde{m}_{t+1})-\lambda x_{t+1}\right\|_{2}^{2},\] _where \(\Delta_{t}=a\Delta_{t}^{1}+b\Delta_{t}^{2}\), with_ \[a=\frac{\beta_{1}}{\epsilon\lambda(1-\beta_{1})+(1-\beta_{2})}+1\geq 0,\ \ \ \ \ b=\frac{\beta_{1}(1-\beta_{2})}{\epsilon\lambda(\beta_{2}-\beta_{1})(\epsilon\lambda(1-\beta_{1})+(1-\beta_{2}))}\geq 0.\] _Hence, a telescoping sum yields_ \[\frac{1}{T}\sum_{t=0}^{T-1}\Delta_{t}\leq\frac{H(x_{0},m_{0})-H(x_{T},m_{T})}{\epsilon T}+\frac{L\epsilon}{2}B_{T},\] _where \(B_{T}=\frac{1}{T}\sum_{t=1}^{T}\left\|\nabla\mathcal{K}(\tilde{m}_{t+1})-\lambda x_{t+1}\right\|_{2}^{2}\)._ The result above shows that \(\frac{1}{T}\sum_{t=0}^{T-1}\Delta_{t}\) decays at an \(O(\frac{1}{\epsilon T}+\epsilon)\) rate, provided \(B_{T}\) is bounded. This reduces to the continuous-time result of \(\frac{1}{t}\int_{0}^{t}\Delta(x_{s},m_{s})\mathrm{d}s=O\left(\frac{1}{t}\right)\) when the step size \(\epsilon\) converges to zero. If \(\mathcal{K}\) is smooth, it is possible to improve the discrete-time rate to \(O\left(\frac{1}{\epsilon T}\right)\) with standard arguments based on the proof of Theorem 4.1. Hence, the non-differentiability of \(\mathcal{K}\) contributes the \(O(\epsilon)\) term, which suggests that the algorithm converges up to an \(\epsilon\) accuracy. This is a typical phenomenon in optimization with non-smooth objectives (as in subgradient descent) or non-smooth updates (as in signed gradient descent). Because in practice the step size is small or decaying, the \(O(\epsilon)\) term may not have a substantial impact on practical performance.

## 5 Experiments on Different \(\mathcal{K}\)

This section provides a preliminary investigation of the behavior of Lion-\(\mathcal{K}\) with different choices of \(\mathcal{K}\). We experiment with the \(\mathcal{K}\)s listed in Table 2 on the toy example shown in Figure 1 to confirm that the behavior follows exactly what the theory predicts. We then focus on the Lion-\(\ell_{p}\) optimizer with general \(p\in[1,2]\), since it is the most straightforward extension of the original Lion (with \(p=1\)).

### Lion-\(\mathcal{K}\)s on the Toy Example

In the following, we plot the behavior of different Lion-\(\mathcal{K}\)s on the toy example shown in Figure 1. For each \(\mathcal{K}\), we draw the optimization trajectory using the corresponding optimizer, the loss \(f(x)\), and the corresponding constraint (e.g., the norm of \(x\)) vs. iteration. The results are shown in Figure 5.
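A minimal driver for such a toy study, reusing the `lion_k_step` and \(\nabla\mathcal{K}\) helpers sketched earlier (hyperparameters are again our own choices), could look as follows; it prints the converged point and its \(\ell_{\infty}\) norm for several choices of \(\mathcal{K}\).

```python
import torch

def run(nabla_K, steps=3000, lam=1.0, eps=1e-2):
    x, m = torch.tensor([4.0, -4.0]), torch.zeros(2)
    for _ in range(steps):
        xg = x.clone().requires_grad_(True)
        loss = (xg[0] - 1.5) ** 2 + xg[1] ** 2      # f from Figure 1
        grad, = torch.autograd.grad(loss, xg)
        x, m = lion_k_step(x, m, grad, nabla_K, eps=eps, lam=lam)
    return x

for name, nk in [("l1", grad_l1), ("soft-thresh", grad_soft_threshold),
                 ("huber", grad_huber)]:
    x = run(nk)
    print(name, [round(v, 3) for v in x.tolist()],
          "||x||_inf =", round(x.abs().max().item(), 3))
```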
**Observation** From Figure 5, one can observe that for \(\mathcal{K}(x)=\left\|x\right\|_{2}\), the constraint set is a circle. For \(\mathcal{K}(x)=\sum_{i}\max(\left|x_{i}\right|-e,0)\), an additional \(\ell_{1}\) regularization is introduced on top of the \(\ell_{\infty}\) constraint, which encourages sparse solutions. When \(\mathcal{K}(x)=\sum_{i\leq i^{\text{cut}}}\left|x_{(i)}\right|\), it enforces an \(\ell_{1}\) constraint (rather than a regularization) in addition to the \(\ell_{\infty}\) constraint. The choice \(\mathcal{K}(x)=\sum_{i}\text{huber}_{e}(x_{i})\) introduces an \(\ell_{2}\) regularization effect in addition to the \(\ell_{\infty}\) constraint. All optimization trajectories closely match what the theory predicts.

### Lion-\(\ell_{p}\) for ImageNet and Language Modeling

Lion-\(\ell_{p}\) corresponds to \(\mathcal{K}(x)=\left\|x\right\|_{p}\), \(p\geq 1\), and amounts to solving \(\min_{x}f(x)\ s.t.\ \left\|x\right\|_{q}\leq 1/\lambda\), where \(1/p+1/q=1\). In Figure 6, we plot how the parameter norms (e.g., \(\left\|\cdot\right\|_{\infty}\) when \(p=1\) and \(\left\|\cdot\right\|_{2}\) when \(p=2\)) change over training iterations. In Figure 7, we compare the performance of Lion-\(\ell_{p}\) with different \(p\) on ImageNet [35] and language modeling tasks, using ResNet-50, Vision Transformer (ViT) [10], and the GPT-2 model [32]. **Experiment Setting** For the ImageNet training, we follow the standard PyTorch ImageNet training code.1 We train the ResNet-50 and the ViT-B/16 models using batch size 1024 and a cosine learning rate schedule. For GPT-2 training, we follow the HuggingFace code2 and train on OpenWebText3 with a cosine learning rate schedule. Footnote 1: [https://github.com/pytorch/examples/blob/main/imagenet/main.py](https://github.com/pytorch/examples/blob/main/imagenet/main.py). Footnote 2: [https://huggingface.co/gpt2](https://huggingface.co/gpt2) Footnote 3: [https://huggingface.co/datasets/Skylion007/openwebtext](https://huggingface.co/datasets/Skylion007/openwebtext) **Observation** From Figure 6, we observe that even on deep neural networks like ViT [10], ResNet [14], and GPT-2 [32], the behavior of the Lion-\(\mathcal{K}\) optimizers strictly follows what the theory predicts. From Figure 7, we observe that Lion-\(\ell_{1}\) (the original Lion optimizer) performs better than Lion-\(\ell_{p}\) with other \(p\) on ImageNet when ViT is used, and on language modeling with the GPT-2 model. The plots indicate a trend that smaller \(p\) (within the studied range \([1,2]\)) results in better training efficiency. However, the trend is reversed when ResNet-50 [14] is used on ImageNet. This indicates that the choice of \(\mathcal{K}\) might depend on the underlying neural architecture. Based on the empirical observations, we conjecture that Lion-\(\ell_{1}\) performs particularly well among all Lion-\(\ell_{p}\) variants on the transformer architecture, which is consistent with the fact that Lion-\(\ell_{1}\) was found by an evolutionary search using the transformer architecture [6].

## 6 Discussion

As demonstrated in the analysis of the Lyapunov function in Theorem 3.1, the Lion-\(\mathcal{K}\) dynamics exhibit a distinct nature when compared to typical momentum-based methods like Polyak momentum, Nesterov momentum, and Hamiltonian descent, all of which can be conveniently understood as certain generalized dissipative Hamiltonian systems. Figure 5: The behavior of Lion-\(\mathcal{K}\) with different \(\mathcal{K}\)s from Table 2. The blue trajectory always reaches the optimum, as the optimum is included in the constraint set; the green trajectory converges to the boundary of the constraint set.
While the Lyapunov function provides a powerful characterization of the dynamical behavior, our intuitive understanding of the Lion-\(\mathcal{K}\) dynamics remains obscured, because we lack a "physical intuition" or a constructive derivation of the kind available for standard optimization algorithms. This invites further study and understanding in future work. The connection of Lion-\(\mathcal{K}\) to Nesterov momentum and accelerated mirror descent suggests the possibility of acceleration phenomena in variants of Lion-\(\mathcal{K}\), which opens an exciting avenue for future exploration and research. It might be possible to find novel accelerated algorithms based on the Lion-\(\mathcal{K}\) family. It is surprising and compelling that an algorithm found by a random search program has such a rich and intriguing theoretical basis. The reasons for this remain elusive: whether it is a coincidence or due to some inherent necessity is unclear. For instance, the design of the search space in Chen et al. [6] may in some way entail a high likelihood of discovering theoretically sound algorithms with random search. Understanding the underlying logic here could lead to future advancements in automatic machine-based algorithm discovery. Regarding applications, since Lion-\(\mathcal{K}\) offers a broader family than Lion, it is possible to find within the Lion-\(\mathcal{K}\) family new algorithms that outperform Lion in various tasks and metrics. Additionally, by using different choices of \(\mathcal{K}\), Lion-\(\mathcal{K}\) can be utilized to address different types of constrained optimization problems.
2303.03309
Simulating bistable current-induced switching of metallic atomic contacts by electron-vibration scattering
We present a microscopic model describing current-driven switching in metallic atomic-size contacts. Applying a high current through an atomic-size contact creates a strong electronic nonequilibrium that excites vibrational modes by virtue of the electron-vibration coupling. Using density functional theory (DFT) in combination with the Landauer-B\"uttiker theory for phase-coherent transport, expressed in terms of nonequilibrium Green's functions (NEGFs), we study the current-induced forces arising from this nonequilibrium and determine those vibrational modes which couple most strongly to the electronic system. For single-atom lead (Pb) contacts we show specific candidates for bistable switches, consisting of two similar atomic configurations with differing electric conductance. We identify vibrational modes that induce a transition between these configurations. Our results reveal a possible origin of bistable switching in atomic-size contacts through excitation of vibrations by inelastic electron scattering and underline the power of the combined DFT-NEGF approach and statistical mechanics analysis of a Langevin equation to overcome the time-scale gap between atomic motion and rare switching events, allowing for an efficient exploration of the contacts' configurational phase space.
Markus Ring, Fabian Pauly, Peter Nielaba, Elke Scheer
2023-03-06T17:28:59Z
http://arxiv.org/abs/2303.03309v1
# Simulating bistable current-induced switching of metallic atomic contacts by electron-vibration scattering ###### Abstract We present a microscopic model describing current-driven switching in metallic atomic-size contacts. Applying a high current through an atomic-size contact creates a strong electronic nonequilibrium that excites vibrational modes by virtue of the electron-vibration coupling. Using density functional theory (DFT) in combination with the Landauer-Büttiker theory for phase-coherent transport, expressed in terms of nonequilibrium Green's functions (NEGFs), we study the current-induced forces arising from this nonequilibrium and determine those vibrational modes which couple most strongly to the electronic system. For single-atom lead (Pb) contacts we show specific candidates for bistable switches, consisting of two similar atomic configurations with differing electric conductance. We identify vibrational modes that induce a transition between these configurations. Our results reveal a possible origin of bistable switching in atomic-size contacts through excitation of vibrations by inelastic electron scattering and underline the power of the combined DFT-NEGF approach and statistical mechanics analysis of a Langevin equation to overcome the time-scale gap between atomic motion and rare switching events, allowing for an efficient exploration of the contacts' configurational phase space. ## I Introduction Bistable atomic-scale conductance switches are considered as possible building blocks for nanoelectronic circuits [1]. In a two-terminal configuration and activated by controlled electromigration, they are ultimately miniaturized [2]. The term electromigration denotes the rearrangement of atoms inside a conductor in response to an applied bias voltage or flowing charge current. Electromigration in macroscopic conductors is reported to be a process thermally driven by the dissipated Joule heat [3]. While atomic-size switches are straightforward to realize experimentally, the microscopic theory is involved. Electromigration requires the description of the coupled electronic and atomic motion, which is typically separated along the lines of the Born-Oppenheimer approximation due to the large mass difference between electrons and atoms. The treatment of current-induced atomic rearrangements in a junction hence requires, in principle, complex dynamics simulations bridging electronic and atomic time scales that differ by several orders of magnitude. Metals can sustain high current densities, and electromigration is a relevant mechanism for atomic rearrangements [4]. Different models of electromigration on the atomic scale have been suggested, including the excitation of local vibrational modes due to inelastic scattering of electrons [5; 6; 7]. These inelastic scattering events cause forces that act on the atoms [8; 9]. Although the microscopic processes are in principle clear, their implementation in molecular dynamics approaches proves difficult, since the forces are nonconservative [10]. Recent theoretical work addressed this problem with ab-initio molecular dynamics, including heating through nonconservative forces and identifying hot spots and vibrational modes especially excited by the electronic nonequilibrium [9]. The study of switching is challenging because of the large difference in the time scales of the dynamics of interest. The typical time scale for atomic thermalization is picoseconds, while electronic relaxation happens much faster, within femtoseconds.
Even on the picosecond time scale, however, major atomic relocations causing electrical switching events are rare. They typically happen in the microsecond range, as determined by the measurement resolution of experimental setups. This difference in time scales of some 9 orders of magnitude, from the femtosecond necessary to resolve the electronic subsystem to the microsecond relevant for switching events, is the central obstacle in simulating electromigration of metallic atomic contacts. In the present work we bridge the time-scale gap by first integrating the electronic dynamics into effective forces on the atomic scale and then investigating the long-term limit of the atomic dynamics. Our current-induced-forces approach identifies those vibrational modes that couple most strongly to the electronic nonequilibrium. Utilizing these vibrations to evolve the contact configuration can lead to different local minima in the configurational phase space. For a contact with \(N\) flexible atoms, this strategy reduces the dimensionality of the search space for other stable configurations from \(3N\) to \(O(1)\). Consequently, possible stable contact geometries are found in a computationally efficient way. We use the established formalisms of DFT and Landauer-Büttiker scattering theory to describe the phase-coherent electron transport, expressing the transport in terms of NEGFs. The inelastic scattering of electrons by vibrations of the system is taken into account in a time-averaged fashion through current-induced forces in a Langevin equation for the atoms, with a nonconservative friction kernel taking into account the nonequilibrium electron bath. The Langevin equation for the displacements \(\vec{x}\) of all atoms from their equilibrium positions has the form \[\mathbf{m}\cdot\ddot{\vec{x}}+\mathbf{\eta}(V)\cdot\dot{\vec{x}}+\mathbf{D}(V)\cdot\vec{x}=\vec{f}(V), \tag{1}\] where \(\mathbf{m}\) is the diagonal matrix of all atomic masses. The dynamical matrix \(\mathbf{D}(V)\), the friction matrix \(\mathbf{\eta}(V)\) and the random force \(\vec{f}(V)\) are perturbed by the electronic nonequilibrium, as represented by the indicated dependence on the voltage \(V\) [5]. This perturbation adds antisymmetric contributions to the voltage-dependent matrices in Eq. (1), which lead to nonconservative forces. Analysis of the Langevin equation (1) allows one to determine threshold voltages at which specific excited vibrations become effectively undamped [5]. The collective motion of the atoms along these modes is a potential mechanism for a switching process, since the undamped vibrations can lead to a mechanical instability of the contact configuration. In a previous work [11] we computed threshold voltages for metallic atomic junctions of four different elements and compared them in a statistical analysis to experimentally extracted switching voltages. The good agreement between both corroborates vibrational pumping as a possible switching mechanism. The present work is devoted to identifying bistable, i.e., reversible bivalued, switching processes as well as the underlying collective atomic motion based on this mechanism of electronic-vibrational excitations. ## II Computational procedures In order to describe a bistable electrical switching process, it is necessary to first identify the stable geometries of the switch and then a mechanism to transition between them. The simplest description of the switching is given by a reaction coordinate connecting these two states over an energy barrier in between.
Here we use simulations to determine all of these aspects: We identify two states, find a process to transition between them, and determine energy barriers. The simulation approach is summarized in Fig. 1. In this work, we study atomic-size metallic contacts of Pb. Extended central clusters [12], containing the central narrowest constriction and part of the electrodes, consist of around 60 atoms, out of which 20 can move freely between two slabs of Pb atoms, fixed in a crystalline structure at a predefined distance, see Fig. 1(a). The distance \(d\) between the first fixed electrode layers on both sides of the contact is set to values between 15 and 20 Å in 15 steps of about \(\Delta=0.3\) Å, see Fig. 1(a) and 1(b). The movable atoms between the two fixed crystalline layers are then relaxed to their energetic minimum. We calculate electronic and vibrational structures as well as the electron-vibration coupling with the quantum chemistry software package TURBOMOLE [13; 14; 15]. In the calculations presented here in the main text, we use the def-SV(P) basis set [16; 17]. Results for the def-TZVP basis set [17; 18] are discussed in the Supplemental Material. The properties are then used in the NEGF framework to calculate the energy-dependent electronic transmission function \(\tau(E)\) and all the matrices needed in the Langevin equation (1) [5]. Conductance values are determined in the phase-coherent elastic approximation in the low-temperature limit as \(G=G_{0}\tau(E_{\text{F}})\), with the conductance quantum \(G_{0}=2e^{2}/h\) and the Fermi energy \(E_{\text{F}}\). In the charge transport calculations we use \(32\times 32\) transverse \(k\)-points. We have extended a code to calculate inelastic electron tunneling spectra [19; 15; 20] to include current-induced forces, following the approach of Lu _et al._ [5]. We Fourier transform Eq. (1) to compute vibrational eigenvalues and eigenmodes for different voltages at a specific \(d\), see Fig. 1(c); a minimal numerical sketch of this mode-stability analysis is given below. Above a certain voltage, some modes reveal a sign change of the damping from negative to positive, see Fig. 1(d), indicating that they become undamped and are enhanced in amplitude instead of decaying over time. We term these vibrational modes "runaway modes" and the respective voltages "threshold voltages". We suggest that these undamped vibrations trigger atomic rearrangements, see Fig. 1(e). To realize a bistable atomic switch, we are interested in pairs of contact configurations that give rise to different electronic conductances for the same distance \(d\) between the electrodes. To find such pairs of geometries, we mechanically manipulate a contact by compressing or stretching, see Fig. 1(a). The corresponding conductance-distance trace exhibits features known from experiment, like conductance plateaus and abrupt jumps in between at atomic rearrangements [21; 22; 11; 2], see Fig. 1(b). At the distances at which jumps in conductance occur, a hysteresis with respect to reversing the direction of the distance change can be expected, and hence two different metastable configurations for the same \(d\). Since the conductance-distance trace in Fig. 1(b) actually arises from a compression, we took the contact geometry at a subsequent reduced distance step \(d-\Delta\), stretched it by \(\Delta\) and optimized the atomic positions again. Starting from the initial blue points, shown in Figs. 1(b) and 2(a), this mechanical cycle of \(\mp\Delta\) generates the red points in Fig. 2(a).
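The following is the minimal sketch referenced above (our own construction, not the authors' code): it linearizes the damped system of Eq. (1), \(\mathbf{m}\ddot{\vec{x}}+\mathbf{\eta}\dot{\vec{x}}+\mathbf{D}\vec{x}=0\), into first-order form and inspects the real parts of its eigenvalues, which give the growth rates of the modes; a mode whose damping changes sign at some voltage appears here as an eigenvalue whose real part crosses zero.

```python
import numpy as np

def mode_stability(M, eta, D):
    """Eigenmodes of M x'' + eta x' + D x = 0 via companion linearization.

    With x(t) ~ v exp(l t), modes with Re(l) > 0 grow in time
    ("runaway modes"); -Re(l) plays the role of the damping rate.
    """
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ D, -Minv @ eta]])
    lam, vecs = np.linalg.eig(A)
    return lam, vecs[:n], lam.real > 0

# Hypothetical 2-mode toy example: one negative effective friction entry
# mimics current-induced pumping at a bias above the threshold voltage.
M, D = np.eye(2), np.diag([1.0, 2.0])
eta = np.diag([0.05, -0.02])
lam, modes, runaway = mode_stability(M, eta, D)
print(lam[runaway])   # growing (undamped) modes, if any
```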
With this mechanical manipulation approach, we identify pairs of configurations, called configurations 1 and 2, see Fig. 1(e), with different conductance for the same distance. Points of bistability, marked in Fig. 2(a) by gray bars, identify candidates for bistable atomic switches, which may be operated either mechanically, by stretching and compressing, or by current-induced forces [2]. Let us now discuss whether the transition between configurations 1 and 2 at a certain \(d\) can be mediated by a runaway mode, and what the energy barrier for the transition is, see Fig. 1(e). For candidate structures to act as reversible bivalued switches, several additional conditions must be met. First, the identified configurations 1 and 2 need to be separated by an energy barrier. A barrier is necessary to prevent random switching, which would be observed in experiment either as telegraph oscillations of the conductance or as a weighted average conductance if the switching time is faster than the experimental measurement resolution. We use a linear interpolation of all atomic coordinates between the initial and final configurations 1 and 2 as the reaction path. The reaction coordinate \(r\) is thus defined by \(\vec{x}_{r}=\vec{x}_{1}+r\cdot(\vec{x}_{2}-\vec{x}_{1})\) with \(r\in[0,1]\), where \(\vec{x}_{1}\) and \(\vec{x}_{2}\) denote initial and final positions, respectively; a minimal sketch of this construction is given after the caption of Fig. 1 below. DFT calculations for different \(r\) then show the presence or absence of a reaction barrier and quantify its size. We note that the linear interpolation yields an upper bound for the energy barrier of the transition between the initial and final states. The calculations allow us to decide whether the structures are sufficiently stable with regard to switching over a certain time at a given temperature, see Fig. 1(e). Finally, concerning the transition, we compute the runaway modes of configurations 1 and 2, see Fig. 1(c) and 1(d). At or above the threshold voltage, atomic motion along these undamped modes requires vanishing energy cost. Figure 1: Scheme of the simulation process for describing current-induced atomic rearrangements based on electron-vibration coupling. (a) Contact geometries at various electrode separations \(d\). Fixed and relaxed atoms are separated by dashed lines. (b) Conductance as a function of the electrode separation \(d\). The contact at \(d=17.5\) Å, which is studied further in panels (c-e), is marked in gray. (c) Vibrational frequencies as a function of the applied bias voltage for the contact at \(d=17.5\) Å. (d) Damping of the vibrations as a function of the bias voltage \(V\) for the contact at \(d=17.5\) Å. (e) Starting configuration 1 of the atomic contact (left, blue) at \(d=17.5\) Å; displacement of its atoms by the mode with the lowest threshold voltage (middle), requiring \(V>0.4\) V to become undamped; contact configuration 2 (right, red) after a corresponding relaxation, i.e. energy optimization of atomic positions. The resulting junction configurations 1 and 2 exhibit different conductances of \(2.5G_{0}\) and \(3.2G_{0}\), as indicated in the figure. The motion of atoms in the central relaxed junction part for one period of the unstable vibration is shown by snapshots in green and orange. The connection to the calculated bias-dependent damping in panel (d) is indicated by a green arrow. In the middle panel, the energy barrier between the two configurations is shown as a function of the reaction coordinate \(r\).
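As referenced above, a minimal sketch of the linear-interpolation reaction path and the resulting barrier estimate; `dft_energy` is a hypothetical placeholder for a single-point DFT energy evaluation (e.g., with TURBOMOLE), not an actual API.

```python
import numpy as np

def reaction_path(x1, x2, n_images=10):
    """Geometries x_r = x1 + r (x2 - x1) for r in [0, 1]."""
    return [(r, x1 + r * (x2 - x1))
            for r in np.linspace(0.0, 1.0, n_images + 1)]

def barrier_upper_bound(x1, x2, dft_energy, n_images=10):
    """Upper bound for the barrier, seen from configuration 1."""
    energies = [dft_energy(x) for _, x in reaction_path(x1, x2, n_images)]
    return max(energies) - energies[0]
```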
Accordingly, we vary the contacts by moving the atoms along these modes, assuming amplitudes of \(\pm 1\), \(\pm 2\), \(\pm 4\) or \(\pm 8\) times the normalized vibrational eigenvector. The contact geometries obtained by this displacement procedure from configuration 1 are relaxed again to find a local energetic minimum. If this new configuration agrees with configuration 2, we have identified a current-induced mechanism for a vibrational transition between states 1 and 2, as illustrated in Fig. 1(e). We attempt the same for the transition from 2 to 1. If runaway modes are found that establish transitions in both directions, a reversible bistable switch has been detected. ## III Results and Discussion From the extended list of requirements, namely to find two contact geometries at a specific \(d\) with largely different conductance, current-induced vibrational transitions and a sufficiently high energy barrier between them, it becomes clear that only a fraction of the simulations returns current-driven bistable switches. In our present study we found three candidates out of 30 contact structures obtained by mechanical manipulation at different \(d\). In the following we describe a mechanical compression curve that contains the three candidates, out of which only one resulted in a vibrationally driven switch. Figure 2 shows the electrical conductances, energies and threshold voltages of a compression process, which started at the largest distance \(d=20\) Å (see the Supplemental Material). The conductance increases rather linearly with decreasing distance from around 1.5 \(G_{0}\) up to 3.5 \(G_{0}\) over a length of about 4 Å. These findings are consistent with earlier calculations [23; 24] and measurements [22; 25] for atomic-size Pb wires that exhibit _sp_-orbital conduction with three main transmission eigenchannels at the Fermi energy in a single-atom contact. The conductance curve exhibits three discrete jumps, when considering the initial configurations 1, indicating regions to search for bistable behavior. We subsequently generate the configurations 2 for all electrode separations by the mechanical manipulation cycle of \(\mp\Delta\), as explained in Sec. II. Electrode separations where we find bistable conductance behavior are marked by gray bars in Fig. 2. The DFT total energy curve shows a local minimum at around 17 Å and a rather linear slope for larger distances, while the behavior is more complex for shorter distances due to major atomic rearrangements. Threshold voltages show a larger spread from 0 to some 1.4 V, with a trend towards decreasing threshold voltages for larger electrode separations. Figure 3(a) compares configurations 1 and 2 at \(d=16\) Å. We have constructed a linear interpolation between those structures with 10 steps to analyze the transition. The total DFT energy of the intermediate geometries is shown in Fig. 3(b) and features an energy difference between the initial and final structures of around 100 meV. Depending on the starting point (configuration 1 or 2), the barrier between the structures amounts to around 250-350 meV. To change these two configurations into each other, it appears that a rotation of a large part of the atoms in the central region is necessary. Unfortunately, we have found no pumped vibrational mode that would enable such a switching. Figure 2: Conductance (a), total DFT energy (b) and threshold voltage (c) as a function of the electrode separation distance \(d\), respectively.
Blue points visualize results obtained during an initial compression process. Red points are obtained through a mechanical cycle, by separating the electrodes of the corresponding relaxed junction at the subsequent distance step \(d-\Delta\) by one distance step \(\Delta=0.3\) Å to reach the electrode separation \(d\) and relaxing the contact geometry again. The three gray bars indicate points of bistability in the conductance, as observed in panel (a). An intuitive explanation for this negative result is that the required rotation does not couple well to the electric current. The charge carriers would need to be scattered nearly orthogonally to their direction of motion, which is unlikely in a two-particle process under momentum conservation. The switching candidate shown in Fig. 4 is also displayed in Fig. 1(c)-(e). Configurations 1 and 2 are separated by an energy barrier of less than 160 meV. The blue configuration 1 has a conductance of 2.5 \(G_{0}\) and appears to be somewhat more disordered than the red configuration 2, which exhibits a conductance of 3.2 \(G_{0}\). Configuration 1 exhibits a higher total energy than configuration 2 by some 50 meV. As shown in Fig. 1(e), the threshold voltage for pumping vibrational modes amounts to 0.4 V, and we can switch to configuration 2 by displacing the atoms of configuration 1 along the runaway mode with an amplitude of twice the eigenvector and then relaxing the structure again. In contrast, the threshold voltage in configuration 2 is as high as 1.4 V, see also Fig. 2(c). This indicates a significantly increased stability of that configuration or a significantly reduced electron-vibration coupling. Unfortunately, we did not find a runaway mode that transforms configuration 2 into configuration 1, and hence this switch is monodirectional. Let us finally study the switching candidate at \(d=20\) Å. The atomic configurations of the two states, shown in Fig. 5, look very similar. The most pronounced relocations are found for the two atoms in the center that form a dimer. The barrier between the configurations has a height of around 40 meV when starting from configuration 1, and 20 meV when starting from configuration 2. At sufficiently low temperatures, a crossing of the barrier by thermal excitation alone would be strongly suppressed, while nonconservative current-induced processes should be able to surmount it [7; 9]. For configurations 1 and 2, different runaway vibrational modes could be detected whose excitation enables the transition into the respective other configuration. As indicated in Fig. 5, the excitation of the mode needs more than 0.8 V in configuration 1, while it needs more than 0.4 V in configuration 2. Interestingly, the threshold voltages when transitioning from 1 to 2 instead of 2 to 1 differ by a factor of two, resembling the difference in the barrier heights of 40 meV and 20 meV, respectively. This example of a successful identification of a bistable switch shows the potential of the vibration-mediated switching mechanism. Theoretical methods to simulate the switching of atomic contacts by current-induced forces are a timely research theme. The direct molecular dynamics approach poses several challenges, most prevalently the problem of the huge separation in time scales between the current-induced atomic motion and the rare switching events. The approach presented here circumvents that problem by sampling the configurational phase space in the direction of current-induced forces. The computational costs of the presented procedure are still significant.
The accurate determination of vibrational modes and electron-vibration couplings requires structural relaxations to a high level of convergence. For the switching, many contact configurations are generated and energetically optimized. The high computational demands limit the number of atoms in the extended central cluster of our junction models and of the candidate structures for switches that we could study. For details on the computational demands, we refer to the Supplemental Material. Figure 3: (a) Contact configurations 1 (blue) and 2 (red) for the electrode separation \(d=16\) Å. (b) Total DFT energy as a function of the reaction coordinate \(r\). Figure 4: Same as Fig. 3 but for \(d=17.5\) Å. ## Conclusions In conclusion, we presented a microscopic approach to simulate current-induced switching processes. We used the developed computational method to identify bistable metallic atomic switches, showing two stable atomic configurations with different conductance. The switching is achieved by excitation of a vibrational mode, which becomes amplified by current-induced pumping. Displacing the atoms of the contact along this so-called runaway mode and optimizing the atomic positions results in a transition from one contact configuration to the other. Both geometric configurations must be stable over sufficiently long time scales despite the excess energy in the electronic and phononic systems due to the applied bias voltage. This is possible if the switching process dissipates the excess energy of the respective pumped vibrational modes efficiently, i.e., if the excited mode is not a runaway mode for the new configuration and is thus sufficiently damped. We applied our scheme to study the current-induced reversible switching of Pb nanowires with smallest cross sections containing only one or a few atoms. Combined with our previous finding that computed threshold voltages for a related theoretical approach are of the same size as measured switching voltages [11], our results indicate that the experimentally observed current-induced switching in Al atomic-size contacts [2] might also be dominated by electron-vibration scattering. In the future, the presented approach could be used to analyze bistable switching in different metals. Furthermore, by repeatedly displacing atoms along runaway modes, it may be possible to study the long-term evolution of contacts towards a higher stability under applied bias. In this way, the microscopic mechanism of electronic hardening may be revealed. ## Acknowledgments We thank D. Weber, M. Strohmeier and J. C. Cuevas for inspiring discussions. We gratefully acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project number 262725753, and the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at the Jülich Supercomputing Centre (JSC) [26]. We acknowledge further computing time provided by the state of Baden-Württemberg through bwHPC and the DFG through project number 236232410 (JUSTUS computing cluster). Figure 5: (a) Contact configurations 1 (blue) and 2 (red) for \(d=20\) Å. (b) The configurations are separated by a reaction barrier of 20-40 meV, depending on the starting point. (c) The contact geometries in panel (a) realize a bistable switch between configuration 1 with a conductance of 1.3 \(G_{0}\) and configuration 2 with 1.7 \(G_{0}\).
Displacements along the pumped vibrational mode are shown in orange, and, according to the stability analysis, require applied bias voltages larger than 0.8 or 0.4 V, respectively.
2308.06103
Composable Function-preserving Expansions for Transformer Architectures
Training state-of-the-art neural networks requires a high cost in terms of compute and time. Model scale is recognized to be a critical factor to achieve and improve the state-of-the-art. Increasing the scale of a neural network normally requires restarting from scratch by randomly initializing all the parameters of the model, as this implies a change of the architecture's parameters that does not allow for a straightforward transfer of knowledge from smaller-size models. In this work, we propose six composable transformations to incrementally increase the size of transformer-based neural networks while preserving functionality, allowing the capacity of the model to be expanded as needed. We provide proof of exact function preservation under minimal initialization constraints for each transformation. The proposed methods may enable efficient training pipelines for larger and more powerful models by progressively expanding the architecture throughout training.
Andrea Gesmundo, Kaitlin Maile
2023-08-11T12:27:22Z
http://arxiv.org/abs/2308.06103v1
# Composable Function-preserving Expansions for Transformer Architectures

###### Abstract

Training state-of-the-art neural networks requires a high cost in terms of compute and time. Model scale is recognized to be a critical factor to achieve and improve the state-of-the-art. Increasing the scale of a neural network normally requires restarting from scratch by randomly initializing all the parameters of the model, as this implies a change of the architecture's parameters that does not allow for a straightforward transfer of knowledge from smaller-size models. In this work, we propose six composable transformations to incrementally increase the size of transformer-based neural networks while preserving functionality, allowing the capacity of the model to be expanded as needed. We provide proof of exact function preservation under minimal initialization constraints for each transformation. The proposed methods may enable efficient training pipelines for larger and more powerful models by progressively expanding the architecture throughout training. 1

Footnote 1: Implementation of the proposed transformations and empirical tests of the function preservation property are available at: [http://goo.gle/TransformerExpansions](http://goo.gle/TransformerExpansions).

## 1 Introduction

Transformer-based neural networks have gained widespread attention in recent years due to their impressive performance. The Transformer architecture, introduced by Vaswani et al. (2017), has become the standard for many natural language processing (NLP) tasks, including machine translation, text generation, and question answering. The success of transformer-based models is not limited to NLP: they have also been applied to various other domains, including computer vision, speech recognition, and recommendation systems. The largest and most performant of these models, large language models (LLMs) and vision and multimodal foundation models, are reaching billions to trillions of parameters (Dehghani et al., 2023; Touvron et al., 2023; Rae et al., 2021; Raffel et al., 2020). However, each new model is generally trained from scratch, without reusing the capabilities acquired by previously trained smaller models. Furthermore, the size of the model is constant throughout training. The computational cost of training scales quadratically with model size due to the necessary increase in the amount of training data (Hoffmann et al., 2022; Google, 2023; Kaplan et al., 2020). The ability to reuse parameters of a pretrained model or dynamically increase a model's size during training could thus reduce the overall cost of training, but how to accomplish parameter reuse effectively without losing training progress is not straightforward. To address these limitations, we propose parameter expansion transformations for transformer-based models that are exactly function preserving. These transformations increase the model size and thus the potential capacity of the model without changing its functionality, permitting continued training. These composable transformations operate on independent dimensions of the architecture, allowing for fine-grained architectural expansion. Some previous works have also proposed function preserving parameter expansion transformations for transformer-based models (Chen et al., 2022; Shen et al., 2022; Wang et al., 2023; Mazzawi et al., 2023), extending from techniques for smaller convolutional and dense models (Chen et al., 2016; Evci et al., 2022).
Our framework is so far the most comprehensive and composable set of function preserving transformations. The contributions of this paper are six composable function preserving transformations applicable to Transformer architectures: 1) size of MLP internal representation, 2) number of attention heads, 3) size of the attention heads output representation, 4) size of the attention input representation, 5) size of the transformer layers input/output representations, 6) number of layers, summarized in Table 1. For each transformation, we provide proof of how the _exactly function preserving_ property is achieved with a minimal set of constraints on the initialization of the added parameters.

## 2 Transformer architecture formalization

This presentation is based on a particular instantiation of the transformer architecture: applications to variants (e.g. Encoder+Decoder, different normalization placement) can be obtained with simple extensions. Figure 1 represents the standard Transformer architecture (Vaswani et al., 2017).

Figure 1: Representation of a standard Neural Network based on the Transformer architecture.

The _Input Embedding_ module maps the arbitrary input modality (e.g. image, text) into a bidimensional tensor \(\underset{s\times h}{\mathrm{I}}\), where \(s\) is the sequence dimension and \(h\) is the hidden dimension. The \(\mathrm{TransformerArchitecture}(\cdot)\) is defined as a function that maps: \(\underset{s\times h}{\mathrm{I}}\rightarrow\underset{s\times o}{\mathrm{O}}\), where \(o\) is the hidden dimension of the output representation. The _Head_ component represents the output modality specific logic that maps \(\underset{s\times o}{\mathrm{O}}\) into a specific output (e.g. a distribution over classes or text tokens). \(\mathrm{TransformerArchitecture}(\cdot)\) is defined as:

\[\mathrm{TransformerArchitecture}(\underset{s\times h}{\mathrm{I}})=\mathrm{TransformerLayer}^{\circ N}(\underset{s\times h}{\mathrm{I}}+\underset{s\times h}{\mathrm{P}})\times\underset{h\times o}{\mathrm{W}}^{out}, \tag{1}\]

where \(\underset{h\times o}{\mathrm{W}}^{out}\) are the parameters of the final linear projection, \(\underset{s\times h}{\mathrm{P}}\) are the positional embedding parameters, and \(\mathrm{TransformerLayer}^{\circ N}(\cdot)\) represents the recursive application of \(N\) transformer layers. The \(n^{\text{th}}\) transformer layer is defined as:

\[\mathrm{TransformerLayer}_{n}(\underset{s\times h}{\mathrm{I}_{n}})=\underset{s\times h}{\mathrm{I}^{\prime}_{n}}+\mathrm{MLP}_{n}(\mathrm{Norm}^{\mathrm{MLP}}_{n}(\underset{s\times h}{\mathrm{I}^{\prime}_{n}})),\qquad\underset{s\times h}{\mathrm{I}^{\prime}_{n}}=\underset{s\times h}{\mathrm{I}_{n}}+\mathrm{MHA}_{n}(\mathrm{Norm}^{\mathrm{MHA}}_{n}(\underset{s\times h}{\mathrm{I}_{n}})). \tag{2}\]

The MLP component, with internal dimension \(p\), is defined as:

\[\mathrm{MLP}_{n}(\underset{s\times h}{\mathrm{X}})=\mathrm{ReLU}(\underset{s\times h}{\mathrm{X}}\times\underset{h\times p}{\mathbf{W}^{l1}_{n}}+\underset{1\times p}{\mathbf{b}^{l1}_{n}})\times\underset{p\times h}{\mathbf{W}^{l2}_{n}}+\underset{1\times h}{\mathbf{b}^{l2}_{n}}, \tag{3}\]

where the bias vectors are broadcast across the sequence dimension. The multi-head attention component, with \(E\) heads, key/query dimension \(k\), and value dimension \(v\), is defined as:

\[\mathrm{MHA}_{n}(\underset{s\times h}{\mathrm{X}})=\begin{bmatrix}\mathrm{H}_{1}&\cdots&\mathrm{H}_{E}\end{bmatrix}\times\underset{(E\cdot v)\times h}{\mathbf{W}^{O}_{n}},\qquad\underset{s\times v}{\mathrm{H}_{e}}=\mathrm{Attention}(\underset{s\times h}{\mathrm{X}}\times\underset{h\times k}{\mathbf{W}^{Q}_{n,e}},\;\underset{s\times h}{\mathrm{X}}\times\underset{h\times k}{\mathbf{W}^{K}_{n,e}},\;\underset{s\times h}{\mathrm{X}}\times\underset{h\times v}{\mathbf{W}^{V}_{n,e}}), \tag{4}\]

where \(\mathrm{Attention}(\mathrm{Q},\mathrm{K},\mathrm{V})=\mathrm{softmax}(\mathrm{Q}\times\mathrm{K}^{\top}/\sqrt{k})\times\mathrm{V}\). Finally, \(\mathrm{Norm}^{c}_{n}(\cdot)\), for \(c\in\{\mathrm{MHA},\mathrm{MLP}\}\), denotes an RMSNorm-style normalization that rescales each row of its input to unit root mean square and multiplies it elementwise by a learned scaling vector \(\underset{1\times h}{\mathbf{g}^{c}_{n}}\):

\[\mathrm{Norm}^{c}_{n}(\mathbf{x})=\frac{\mathbf{x}}{\sqrt{\tfrac{1}{h}\sum_{i=1}^{h}x_{i}^{2}}}\odot\mathbf{g}^{c}_{n}. \tag{5}\]

## 3 Function preserving transformations

For each transformation, we define how the existing parameters must be expanded and propose a set of minimal initialization constraints to obtain the function preserving property with proof. The presented transformations can be combined to allow the joint extension of multiple dimensions of the transformer architecture. Furthermore, different subsets of such transformations can be applied incrementally, interleaving training iterations, as well as independently to different parts of the architecture. Symbols denoting parameters, representations, and functions resulting from the application of the transformation discussed in each of the following subsections are indicated with the "hat" symbol: \(\hat{\cdot}\).
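Before turning to the individual transformations, the following minimal NumPy sketch of Equations 1-5 may help fix the shapes. It is our illustration, not the authors' released implementation, and the tiny sizes of \(s,h,o,p,E,k,v,N\) are arbitrary:

```python
# Illustrative NumPy sketch of Equations 1-5 (not the authors' released code).
import numpy as np

rng = np.random.default_rng(0)
s, h, o, p, E, k, v, N = 5, 8, 4, 16, 2, 4, 4, 2  # arbitrary small sizes

def norm(X, g):
    # Equation 5: rescale each row to unit root mean square, then scale by g.
    return X / np.sqrt((X ** 2).mean(axis=-1, keepdims=True)) * g

def attention(Q, K, V):
    A = Q @ K.T / np.sqrt(K.shape[-1])               # s x s attention weights
    A = np.exp(A - A.max(axis=-1, keepdims=True))    # stable softmax
    return (A / A.sum(axis=-1, keepdims=True)) @ V   # s x v head output

def mha(X, L):
    # Equation 4: E heads concatenated, then the output projection.
    H = np.concatenate([attention(X @ L["WQ"][e], X @ L["WK"][e], X @ L["WV"][e])
                        for e in range(E)], axis=1)
    return H @ L["WO"]

def mlp(X, L):
    # Equation 3: two-layer MLP with internal dimension p.
    return np.maximum(X @ L["W1"] + L["b1"], 0) @ L["W2"] + L["b2"]

def transformer_layer(I_n, L):
    # Equation 2: two residual branches with pre-normalization.
    I_prime = I_n + mha(norm(I_n, L["g_mha"]), L)
    return I_prime + mlp(norm(I_prime, L["g_mlp"]), L)

layers = [dict(WQ=rng.normal(size=(E, h, k)), WK=rng.normal(size=(E, h, k)),
               WV=rng.normal(size=(E, h, v)), WO=rng.normal(size=(E * v, h)),
               W1=rng.normal(size=(h, p)), b1=rng.normal(size=p),
               W2=rng.normal(size=(p, h)), b2=rng.normal(size=h),
               g_mha=np.ones(h), g_mlp=np.ones(h)) for _ in range(N)]

I, P, W_out = rng.normal(size=(s, h)), rng.normal(size=(s, h)), rng.normal(size=(h, o))
X = I + P
for L in layers:                                     # Equation 1
    X = transformer_layer(X, L)
O = X @ W_out
print(O.shape)  # (s, o)
```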
### MLP expansion

The _MLP expansion_ transformation can be applied to expand the scale of the MLP by expanding the dimension of its internal representation. This scaling dimension is controlled by the hyper-parameter \(p\) introduced in Equation 3.

**Definition 3.1** (MLP expansion). Given a Transformer model as defined in Section 2, the internal dimension of \(\mathrm{MLP}_{n}\) \(\forall n\in[1,N]\) can be increased from \(p\) to \(\hat{p}\) by applying the following parameter-matrix transformations:

\[\mathbf{W}_{n}^{l1}\mapsto\underset{h\times\hat{p}}{\mathbf{\hat{W}}_{n}^{l1}}:=\begin{bmatrix}\underset{h\times p}{\mathbf{W}_{n}^{l1}}&\underset{h\times(\hat{p}-p)}{\mathbf{M}_{n}^{Wl1}}\end{bmatrix}, \tag{6}\]

\[\mathbf{b}_{n}^{l1}\mapsto\underset{1\times\hat{p}}{\mathbf{\hat{b}}_{n}^{l1}}:=\begin{bmatrix}\underset{1\times p}{\mathbf{b}_{n}^{l1}}&\underset{1\times(\hat{p}-p)}{\mathbf{m}_{n}^{bl1}}\end{bmatrix}, \tag{7}\]

\[\mathbf{W}_{n}^{l2}\mapsto\underset{\hat{p}\times h}{\mathbf{\hat{W}}_{n}^{l2}}:=\begin{bmatrix}\underset{p\times h}{\mathbf{W}_{n}^{l2}}\\ \underset{(\hat{p}-p)\times h}{\mathbf{M}_{n}^{Wl2}}\end{bmatrix}, \tag{8}\]

where \(\mathbf{M}_{n}^{Wl1}\), \(\mathbf{m}_{n}^{bl1}\), and \(\mathbf{M}_{n}^{Wl2}\) are matrices of the specified shape. For the purpose of defining the MLP expansion transformation, the values of these matrices can be assumed to be arbitrary. Constraints on their _initializer functions_ are introduced below to achieve the function preserving property.

Table 1: Summary of proposed function preserving transformations.

| Name | Transformation | Function preserving constraint |
| --- | --- | --- |
| Sec. 3.1: MLP expansion | Def. 3.1: to increase the MLP internal dimension \(p\) to \(\hat{p}\), add \(\hat{p}-p\) columns to the first MLP weight matrix and bias vector and add \(\hat{p}-p\) rows to the second MLP weight matrix. | Thm. 3.1: zero initialize the new \(\hat{p}-p\) rows of the second MLP weight matrix. |
| Sec. 3.2: Head addition | Def. 3.2: to increase the number of attention heads \(E\), per head added, add new query/key/value projection matrices and add \(v\) rows to the MHA output weight matrix. | Thm. 3.2: zero initialize the new \(v\) rows of the MHA output weight matrix. |
| Sec. 3.3: Heads expansion | Def. 3.3: to increase the attention head representation dimension \(v\) to \(\hat{v}\), add \(\hat{v}-v\) columns to the value weight matrix and insert \(\hat{v}-v\) rows into each of the \(E\) splits of the MHA output weight matrix. | Thm. 3.3: zero initialize the \(\hat{v}-v\) rows inserted into each split of the MHA output weight matrix. |
| Sec. 3.4: Attention expansion | Def. 3.4: to increase the key/query representation dimension \(k\) to \(\hat{k}\), add \(\hat{k}-k\) columns to the key/query weight matrices and scale the key weight matrix by \(\sqrt{\hat{k}}/\sqrt{k}\). | Thm. 3.4: zero initialize the new \(\hat{k}-k\) columns of the key weight matrix. |
| Sec. 3.5: Hidden dimension expansion | Def. 3.5: to increase the transformer hidden dimension \(h\) to \(\hat{h}\), add \(\hat{h}-h\) columns to the positional encoding matrix, norm scaling vector, second MLP weight matrix and bias vector, MHA output weight matrix, and input representation matrix; add \(\hat{h}-h\) rows to the transformer output weight matrix, first MLP weight matrix, and key/query/value weight matrices; scale the norm scaling vector by \(\sqrt{h}/\sqrt{\hat{h}}\). | Thm. 3.5: zero initialize the new \(\hat{h}-h\) columns of the positional encoding matrix, second MLP weight matrix and bias vector, MHA output weight matrix, and input representation matrix. |
| Sec. 3.6: Layer addition | Def. 3.6: to increase the number of layers \(N\) to \(\hat{N}\), per layer added, insert the new layer at position \(n\) and increment the index of all following layers. | Thm. 3.6: zero initialize the output projection matrices of the new layer's MHA and MLP components, and its second MLP bias vector. |
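The following NumPy sketch (ours, not the paper's released code) instantiates Equations 6-8 and checks the constraint from Table 1, which is formalized as Theorem 3.1 below: only the new rows of the second weight matrix need to be zero, while the other new parameters may be arbitrary:

```python
# MLP expansion (Definition 3.1) with the Theorem 3.1 zero-initialization.
import numpy as np

rng = np.random.default_rng(1)
s, h, p, p_hat = 4, 8, 16, 24

def mlp(X, W1, b1, W2, b2):
    return np.maximum(X @ W1 + b1, 0) @ W2 + b2      # Equation 3

W1, b1 = rng.normal(size=(h, p)), rng.normal(size=p)
W2, b2 = rng.normal(size=(p, h)), rng.normal(size=h)

W1_hat = np.concatenate([W1, rng.normal(size=(h, p_hat - p))], axis=1)  # Eq. 6
b1_hat = np.concatenate([b1, rng.normal(size=p_hat - p)])               # Eq. 7
W2_hat = np.concatenate([W2, np.zeros((p_hat - p, h))], axis=0)         # Eq. 8 + Thm. 3.1

X = rng.normal(size=(s, h))
assert np.allclose(mlp(X, W1, b1, W2, b2), mlp(X, W1_hat, b1_hat, W2_hat, b2))
print("MLP expansion is exactly function preserving")
```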
No other modifications to the Transformer architecture are required, since the \(\mathrm{MLP}_{n}(\cdot)\) function (Equation 3) still inputs and outputs matrices of shape \(s\times h\) after the transformation.

**Theorem 3.1** (Function preserving MLP expansion).

\[\underset{(\hat{p}-p)\times h}{\mathbf{M}_{n}^{Wl2}}:=\underset{(\hat{p}-p)\times h}{\mathbf{0}} \tag{9}\]

\(\Longrightarrow\)

\[\mathrm{ReLU}(\underset{s\times h}{\mathrm{X}}\times\mathbf{W}^{l1}_{n}+\mathbf{b}^{l1}_{n})\times\mathbf{W}^{l2}_{n}+\mathbf{b}^{l2}_{n}=\mathrm{ReLU}(\underset{s\times h}{\mathrm{X}}\times\mathbf{\hat{W}}^{l1}_{n}+\mathbf{\hat{b}}^{l1}_{n})\times\mathbf{\hat{W}}^{l2}_{n}+\mathbf{b}^{l2}_{n} \tag{10}\]

Informally: zero initializing \(\mathbf{M}_{n}^{Wl2}\) implies the _function preservation_ property for the MLP expansion transformation. See Appendix A.1 for proof.

The MLP expansion transformation can be applied to all the MLP blocks to maintain the MLP internal dimension uniformly across all the layers. However, it can also be applied to only a subset of the layers independently, to allow experimenting with different capacity at different depths.

### Head addition

The _Head addition_ transformation can be applied to add new heads in an MHA component. This scaling dimension is controlled by the hyper-parameter \(E\) introduced in Equation 4.

**Definition 3.2** (Head addition). Given a Transformer model as defined in Section 2, a new head can be added to \(\mathrm{MHA}_{n}(\cdot)\) \(\forall n\in[1,N]\) by introducing new input projection matrices \(\mathbf{W}_{n,E+1}^{Q},\mathbf{W}_{n,E+1}^{K},\mathbf{W}_{n,E+1}^{V}\) and applying the following parameter-matrix transformation to the output projection matrix:

\[\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}\mapsto\underset{((E+1)\cdot v)\times h}{\mathbf{\hat{W}}_{n}^{O}}:=\begin{bmatrix}\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}\\ \underset{v\times h}{\mathbf{M}_{n}^{WO}}\end{bmatrix}. \tag{11}\]

No other modifications to the Transformer architecture are required, since the \(\mathrm{MHA}_{n}(\cdot)\) function (Equation 4) still inputs and outputs matrices of shape \(s\times h\) after the transformation. The _Head addition_ transformation is defined to add one new head; it can be applied multiple times to add an arbitrary number of new heads.

**Theorem 3.2** (Function preserving head addition).

\[\underset{v\times h}{\mathbf{M}_{n}^{WO}}:=\underset{v\times h}{\mathbf{0}}\implies\begin{bmatrix}\mathrm{H}_{1}&\cdots&\mathrm{H}_{E}\end{bmatrix}\times\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}=\begin{bmatrix}\mathrm{H}_{1}&\cdots&\mathrm{H}_{E+1}\end{bmatrix}\times\underset{((E+1)\cdot v)\times h}{\mathbf{\hat{W}}_{n}^{O}} \tag{12}\]

Informally: zero initializing \(\mathbf{M}_{n}^{WO}\) implies the _function preservation_ property for the head addition transformation. See Appendix A.2 for proof.

The head addition transformation can be applied to all the MHA blocks to maintain the number of MHA heads uniformly across all the layers. However, it can also be applied to only a subset of the layers independently, to allow experimenting with different capacity at different depths.
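A matching sketch for head addition (ours): the new head's input projections in Definition 3.2 are random, and only its \(v\) new rows of the output projection are zero-initialized, as required by Theorem 3.2:

```python
# Head addition (Definition 3.2) with the Theorem 3.2 zero-initialization.
import numpy as np

rng = np.random.default_rng(2)
s, h, k, v, E = 4, 8, 4, 4, 2

def attention(Q, K, V):
    A = Q @ K.T / np.sqrt(K.shape[-1])
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

def mha(X, WQ, WK, WV, WO):
    H = np.concatenate([attention(X @ q, X @ kk, X @ vv)
                        for q, kk, vv in zip(WQ, WK, WV)], axis=1)
    return H @ WO

WQ = [rng.normal(size=(h, k)) for _ in range(E)]
WK = [rng.normal(size=(h, k)) for _ in range(E)]
WV = [rng.normal(size=(h, v)) for _ in range(E)]
WO = rng.normal(size=(E * v, h))

# The new head's input projections are unconstrained ...
WQ2, WK2, WV2 = (WQ + [rng.normal(size=(h, k))],
                 WK + [rng.normal(size=(h, k))],
                 WV + [rng.normal(size=(h, v))])
# ... but its v rows of the output projection start at zero (Equation 11).
WO2 = np.concatenate([WO, np.zeros((v, h))], axis=0)

X = rng.normal(size=(s, h))
assert np.allclose(mha(X, WQ, WK, WV, WO), mha(X, WQ2, WK2, WV2, WO2))
print("head addition is exactly function preserving")
```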
### Heads expansion

The _Heads expansion_ transformation can be applied to expand the dimension of the representation generated by each attention head. This scaling dimension is controlled by the hyper-parameter \(v\) introduced in Equation 4.

**Definition 3.3** (Heads expansion). Given a Transformer model as defined in Section 2, the dimension of the representation generated by the attention heads, \(\underset{s\times v}{\mathrm{H}_{e}}\) \(\forall e\in[1,E]\), of \(\mathrm{MHA}_{n}\) \(\forall n\in[1,N]\) can be increased from \(v\) to \(\hat{v}\) by applying the following parameter-matrix transformations:

\[\mathbf{W}_{n,e}^{V}\mapsto\underset{h\times\hat{v}}{\mathbf{\hat{W}}_{n,e}^{V}}:=\begin{bmatrix}\underset{h\times v}{\mathbf{W}_{n,e}^{V}}&\underset{h\times(\hat{v}-v)}{\mathbf{M}_{n,e}^{WV}}\end{bmatrix}\quad\forall\;e\in[1,E], \tag{13}\]

\[\mathbf{W}_{n,e}^{O}\mapsto\underset{\hat{v}\times h}{\mathbf{\hat{W}}_{n,e}^{O}}:=\begin{bmatrix}\underset{v\times h}{\mathbf{W}_{n,e}^{O}}\\ \underset{(\hat{v}-v)\times h}{\mathbf{M}_{n,e}^{WO}}\end{bmatrix}\quad\forall\;e\in[1,E], \tag{14}\]

where \(\underset{v\times h}{\mathbf{W}_{n,e}^{O}}\) is the \(e^{\text{th}}\) "split" of \(\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}\) along the \((E\cdot v)\) dimension:

\[\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}:=\begin{bmatrix}\vdots\\ \underset{v\times h}{\mathbf{W}_{n,e}^{O}}\\ \vdots\end{bmatrix}\quad e\in[1,E]. \tag{15}\]

No other modifications to the Transformer architecture are required, since the \(\mathrm{MHA}_{n}(\cdot)\) function (Equation 4) still inputs and outputs matrices of shape \(s\times h\) after the transformation.

**Theorem 3.3** (Function preserving heads expansion).

\[\underset{(\hat{v}-v)\times h}{\mathbf{M}_{n,e}^{WO}}:=\underset{(\hat{v}-v)\times h}{\mathbf{0}}\implies\begin{bmatrix}\mathrm{H}_{1}&\cdots&\mathrm{H}_{E}\end{bmatrix}\times\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}=\begin{bmatrix}\mathrm{\hat{H}}_{1}&\cdots&\mathrm{\hat{H}}_{E}\end{bmatrix}\times\underset{(E\cdot\hat{v})\times h}{\mathbf{\hat{W}}_{n}^{O}} \tag{16}\]

where:

\[\underset{s\times\hat{v}}{\mathrm{\hat{H}}_{e}}=\mathrm{Attention}(\underset{s\times h}{\mathrm{X}}\times\underset{h\times k}{\mathbf{W}_{n,e}^{Q}},\;\underset{s\times h}{\mathrm{X}}\times\underset{h\times k}{\mathbf{W}_{n,e}^{K}},\;\underset{s\times h}{\mathrm{X}}\times\underset{h\times\hat{v}}{\mathbf{\hat{W}}_{n,e}^{V}}) \tag{17}\]

Informally: zero initializing \(\underset{(\hat{v}-v)\times h}{\mathbf{M}_{n,e}^{WO}}\) implies the _function preservation_ property for the heads expansion transformation. See Appendix A.3 for proof.

The heads expansion transformation can be applied to all heads of all the MHA blocks to maintain the attention head representation dimension uniformly across all the layers. However, it can also be applied to only a subset of the layers, or even a subset of attention heads, independently to allow experimenting with different capacity at different parts of the architecture.
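The same style of check for heads expansion (ours): the new value columns of Equation 13 are random, while the rows inserted into each per-head split of the output projection (Equations 14-15) are zero, per Theorem 3.3:

```python
# Heads expansion (Definition 3.3) with the Theorem 3.3 zero-initialization.
import numpy as np

rng = np.random.default_rng(3)
s, h, k, v, v_hat, E = 4, 8, 4, 4, 6, 2

def attention(Q, K, V):
    A = Q @ K.T / np.sqrt(K.shape[-1])
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

def mha(X, WQ, WK, WV, WO):
    H = np.concatenate([attention(X @ WQ[e], X @ WK[e], X @ WV[e])
                        for e in range(E)], axis=1)
    return H @ WO

WQ, WK = rng.normal(size=(E, h, k)), rng.normal(size=(E, h, k))
WV, WO = rng.normal(size=(E, h, v)), rng.normal(size=(E * v, h))

WV_hat = np.concatenate([WV, rng.normal(size=(E, h, v_hat - v))], axis=2)
WO_hat = np.concatenate(
    [np.concatenate([w, np.zeros((v_hat - v, h))], axis=0)  # zero rows per split
     for w in np.split(WO, E, axis=0)], axis=0)

X = rng.normal(size=(s, h))
assert np.allclose(mha(X, WQ, WK, WV, WO), mha(X, WQ, WK, WV_hat, WO_hat))
print("heads expansion is exactly function preserving")
```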
### Attention expansion

The _Attention expansion_ transformation can be applied to expand the _key_ and _query_ representations whose inner product produces the attention weights matrix. This scaling dimension is controlled by the hyper-parameter \(k\) introduced in Equation 4.

**Definition 3.4** (Attention expansion). Given a Transformer model as defined in Section 2, the dimension of the representations generating the attention weights of \(\mathrm{MHA}_{n}\) \(\forall n\in[1,N]\) can be increased from \(k\) to \(\hat{k}\) by applying the following parameter-matrix transformations:

\[\mathbf{W}^{Q}_{n,e}\mapsto\underset{h\times\hat{k}}{\mathbf{\hat{W}}^{Q}_{n,e}}:=\begin{bmatrix}\underset{h\times k}{\mathbf{W}^{Q}_{n,e}}&\underset{h\times(\hat{k}-k)}{\mathbf{M}^{WQ}_{n,e}}\end{bmatrix}\quad\forall\;e\in[1,E], \tag{18}\]

\[\mathbf{W}^{K}_{n,e}\mapsto\underset{h\times\hat{k}}{\mathbf{\hat{W}}^{K}_{n,e}}:=\begin{bmatrix}\frac{\sqrt{\hat{k}}}{\sqrt{k}}\cdot\underset{h\times k}{\mathbf{W}^{K}_{n,e}}&\underset{h\times(\hat{k}-k)}{\mathbf{M}^{WK}_{n,e}}\end{bmatrix}\quad\forall\;e\in[1,E]. \tag{19}\]

**Theorem 3.4** (Function preserving attention expansion).

\[\underset{h\times(\hat{k}-k)}{\mathbf{M}^{WK}_{n,e}}:=\underset{h\times(\hat{k}-k)}{\mathbf{0}} \tag{20}\]

\(\Longrightarrow\)

\[\mathrm{Attention}(\underset{s\times h}{\mathrm{X}}\times\mathbf{W}^{Q}_{n,e},\;\underset{s\times h}{\mathrm{X}}\times\mathbf{W}^{K}_{n,e},\;\underset{s\times h}{\mathrm{X}}\times\mathbf{W}^{V}_{n,e})=\mathrm{Attention}(\underset{s\times h}{\mathrm{X}}\times\mathbf{\hat{W}}^{Q}_{n,e},\;\underset{s\times h}{\mathrm{X}}\times\mathbf{\hat{W}}^{K}_{n,e},\;\underset{s\times h}{\mathrm{X}}\times\mathbf{W}^{V}_{n,e}) \tag{21}\]

Informally: zero initializing \(\underset{h\times(\hat{k}-k)}{\mathbf{M}^{WK}_{n,e}}\) implies the _function preservation_ property for the attention expansion transformation, since the rescaling of the key matrix in Equation 19 compensates for the changed normalization factor \(\sqrt{\hat{k}}\) in the attention function. See Appendix A.4 for proof.

In most transformer implementations, \(k=v\). In such cases, the attention expansion may be performed jointly with the heads expansion.

The attention expansion transformation can be applied to all heads of all the MHA blocks to maintain the key/query representation dimension uniformly across all the layers. However, it can also be applied to only a subset of the layers, or even a subset of attention heads, independently to allow experimenting with different capacity at different parts of the architecture.
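A focused check of the attention expansion (ours): the new query columns of Equation 18 are random, the new key columns are zero (Equation 20), and the \(\sqrt{\hat{k}}/\sqrt{k}\) rescaling of the old key block in Equation 19 cancels the changed \(\sqrt{\hat{k}}\) normalization inside the attention function:

```python
# Attention expansion (Definition 3.4) with the Theorem 3.4 constraints.
import numpy as np

rng = np.random.default_rng(4)
s, h, k, k_hat = 4, 8, 4, 7

def attention(Q, K, V):
    A = Q @ K.T / np.sqrt(Q.shape[-1])  # normalization tracks the key/query width
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

WQ, WK, WV = (rng.normal(size=(h, k)), rng.normal(size=(h, k)),
              rng.normal(size=(h, k)))

WQ_hat = np.concatenate([WQ, rng.normal(size=(h, k_hat - k))], axis=1)   # Eq. 18
WK_hat = np.concatenate([np.sqrt(k_hat / k) * WK,
                         np.zeros((h, k_hat - k))], axis=1)              # Eqs. 19-20

X = rng.normal(size=(s, h))
assert np.allclose(attention(X @ WQ, X @ WK, X @ WV),
                   attention(X @ WQ_hat, X @ WK_hat, X @ WV))
print("attention expansion is exactly function preserving")
```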
### Hidden dimension expansion

The _Hidden dimension expansion_ transformation can be applied to expand the dimension of the representation produced by the transformer layers. This scaling dimension is controlled by the hyper-parameter \(h\) introduced in Equation 1.

**Definition 3.5** (Hidden dimension expansion). Given a Transformer model as defined in Section 2, the dimension of the transformer layers' input/output representation can be increased from \(h\) to \(\hat{h}\) by applying the following parameter-matrix transformations:

\[\underset{s\times h}{\mathbf{P}}\mapsto\underset{s\times\hat{h}}{\mathbf{\hat{P}}}:=\begin{bmatrix}\underset{s\times h}{\mathbf{P}}&\underset{s\times(\hat{h}-h)}{\mathbf{M}^{P}}\end{bmatrix}, \tag{22}\]

\[\underset{h\times o}{\mathbf{W}^{out}}\mapsto\underset{\hat{h}\times o}{\mathbf{\hat{W}}^{out}}:=\begin{bmatrix}\underset{h\times o}{\mathbf{W}^{out}}\\ \underset{(\hat{h}-h)\times o}{\mathbf{M}^{Wout}}\end{bmatrix}, \tag{23}\]

\[\mathbf{g}_{n}^{c}\mapsto\underset{1\times\hat{h}}{\mathbf{\hat{g}}_{n}^{c}}:=\begin{bmatrix}\frac{\sqrt{h}}{\sqrt{\hat{h}}}\cdot\underset{1\times h}{\mathbf{g}_{n}^{c}}&\underset{1\times(\hat{h}-h)}{\mathbf{m}_{n}^{g,c}}\end{bmatrix}\quad\forall n\in[1,N]\wedge c\in\{\text{MHA},\text{MLP}\}, \tag{24}\]

\[\mathbf{W}_{n}^{l1}\mapsto\underset{\hat{h}\times p}{\mathbf{\hat{W}}_{n}^{l1}}:=\begin{bmatrix}\underset{h\times p}{\mathbf{W}_{n}^{l1}}\\ \underset{(\hat{h}-h)\times p}{\mathbf{M}_{n}^{Wl1}}\end{bmatrix}\quad\forall n\in[1,N], \tag{25}\]

\[\mathbf{W}_{n}^{l2}\mapsto\underset{p\times\hat{h}}{\mathbf{\hat{W}}_{n}^{l2}}:=\begin{bmatrix}\underset{p\times h}{\mathbf{W}_{n}^{l2}}&\underset{p\times(\hat{h}-h)}{\mathbf{M}_{n}^{Wl2}}\end{bmatrix}\quad\forall n\in[1,N], \tag{26}\]

\[\mathbf{b}_{n}^{l2}\mapsto\underset{1\times\hat{h}}{\mathbf{\hat{b}}_{n}^{l2}}:=\begin{bmatrix}\underset{1\times h}{\mathbf{b}_{n}^{l2}}&\underset{1\times(\hat{h}-h)}{\mathbf{m}_{n}^{bl2}}\end{bmatrix}\quad\forall n\in[1,N], \tag{27}\]

\[\mathbf{W}_{n,e}^{Q}\mapsto\underset{\hat{h}\times k}{\mathbf{\hat{W}}_{n,e}^{Q}}:=\begin{bmatrix}\underset{h\times k}{\mathbf{W}_{n,e}^{Q}}\\ \underset{(\hat{h}-h)\times k}{\mathbf{M}_{n,e}^{WQ}}\end{bmatrix}\quad\forall n\in[1,N]\wedge e\in[1,E], \tag{28}\]

\[\mathbf{W}_{n,e}^{K}\mapsto\underset{\hat{h}\times k}{\mathbf{\hat{W}}_{n,e}^{K}}:=\begin{bmatrix}\underset{h\times k}{\mathbf{W}_{n,e}^{K}}\\ \underset{(\hat{h}-h)\times k}{\mathbf{M}_{n,e}^{WK}}\end{bmatrix}\quad\forall n\in[1,N]\wedge e\in[1,E], \tag{29}\]

\[\mathbf{W}_{n,e}^{V}\mapsto\underset{\hat{h}\times v}{\mathbf{\hat{W}}_{n,e}^{V}}:=\begin{bmatrix}\underset{h\times v}{\mathbf{W}_{n,e}^{V}}\\ \underset{(\hat{h}-h)\times v}{\mathbf{M}_{n,e}^{WV}}\end{bmatrix}\quad\forall n\in[1,N]\wedge e\in[1,E], \tag{30}\]

\[\mathbf{W}_{n}^{O}\mapsto\underset{(E\cdot v)\times\hat{h}}{\mathbf{\hat{W}}_{n}^{O}}:=\begin{bmatrix}\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}&\underset{(E\cdot v)\times(\hat{h}-h)}{\mathbf{M}_{n}^{WO}}\end{bmatrix}\quad\forall n\in[1,N], \tag{31}\]

and modifying the embedding function to produce an extended input representation:

\[\underset{s\times\hat{h}}{\mathbf{\hat{I}}}:=\begin{bmatrix}\underset{s\times h}{\mathrm{I}}&\underset{s\times(\hat{h}-h)}{\mathbf{M}^{I}}\end{bmatrix}. \tag{32}\]

For example, a token embedding table can be expanded by adding \((\hat{h}-h)\) randomly initialized columns, mapping the same vocabulary into an extended embedding.
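Most of Definition 3.5 appends zero rows or columns, but the rescaling of the norm scaling vector in Equation 24 is more subtle. The following check (ours) isolates it: padding a row with zeros from \(h\) to \(\hat{h}\) shrinks its root mean square by \(\sqrt{h}/\sqrt{\hat{h}}\), and scaling \(\mathbf{g}\) by the same factor restores the original normalized output on the first \(h\) coordinates:

```python
# Isolated check of the Equation 24 rescaling in the hidden dimension expansion.
import numpy as np

rng = np.random.default_rng(5)
h, h_hat = 8, 12
x, g = rng.normal(size=h), rng.normal(size=h)

def rms_norm(x, g):
    return x / np.sqrt((x ** 2).mean()) * g

x_pad = np.concatenate([x, np.zeros(h_hat - h)])          # zero-padded row
g_hat = np.concatenate([np.sqrt(h / h_hat) * g,           # rescaled old block
                        rng.normal(size=h_hat - h)])      # arbitrary new entries

assert np.allclose(rms_norm(x, g), rms_norm(x_pad, g_hat)[:h])
print("the norm rescaling compensates for the zero padding")
```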
**Theorem 3.5** (Function preserving hidden dimension expansion).

\[\underset{s\times(\hat{h}-h)}{\mathbf{M}^{P}}:=\underset{s\times(\hat{h}-h)}{\mathbf{0}} \tag{33}\]

\[\underset{p\times(\hat{h}-h)}{\mathbf{M}_{n}^{Wl2}}:=\underset{p\times(\hat{h}-h)}{\mathbf{0}}\quad\forall n\in[1,N] \tag{34}\]

\[\underset{1\times(\hat{h}-h)}{\mathbf{m}_{n}^{bl2}}:=\underset{1\times(\hat{h}-h)}{\mathbf{0}}\quad\forall n\in[1,N] \tag{35}\]

\[\underset{(E\cdot v)\times(\hat{h}-h)}{\mathbf{M}_{n}^{WO}}:=\underset{(E\cdot v)\times(\hat{h}-h)}{\mathbf{0}}\quad\forall n\in[1,N] \tag{36}\]

\[\underset{s\times(\hat{h}-h)}{\mathbf{M}^{I}}:=\underset{s\times(\hat{h}-h)}{\mathbf{0}} \tag{37}\]

\(\Longrightarrow\)

\[\underset{s\times\hat{h}}{\mathbf{\hat{I}}_{n}}=\begin{bmatrix}\underset{s\times h}{\mathrm{I}_{n}}&\underset{s\times(\hat{h}-h)}{\mathbf{0}}\end{bmatrix}\quad\forall n\in[1,N+1] \tag{38}\]

\(\Longrightarrow\)

\[\mathrm{TransformerLayer}^{\circ N}(\underset{s\times h}{\mathrm{I}}+\underset{s\times h}{\mathbf{P}})\times\underset{h\times o}{\mathbf{W}^{out}}=\mathrm{TransformerLayer}^{\circ N}(\underset{s\times\hat{h}}{\mathbf{\hat{I}}}+\underset{s\times\hat{h}}{\mathbf{\hat{P}}})\times\underset{\hat{h}\times o}{\mathbf{\hat{W}}^{out}} \tag{39}\]

where \(\mathrm{I}_{N+1}\) refers to the representation outputted by the last transformer layer, and \(\underset{s\times h}{\mathrm{I}_{n}}\) \(\forall n\in[1,N]\) refers to the representation inputted by the \(n^{\text{th}}\) transformer layer. Symbols denoting parameters, representations, and functions resulting from the application of the transformation discussed in this section are indicated with the "hat" symbol.

Informally: zero initializing the specified matrices implies the _function preservation_ property for the hidden dimension expansion transformation. See Appendix A.5 for proof.

The hidden dimension expansion transformation must be applied to all layers jointly to maintain the hidden dimension uniformly across the architecture, due to the skip connections used throughout.

### Layer addition

The _Layer addition_ transformation can be applied to insert a new layer at any depth of the current Transformer architecture. This scaling dimension is controlled by the hyper-parameter \(N\) introduced in Equation 1.

**Definition 3.6** (Layer addition). A new \(\mathrm{TransformerLayer}(\cdot)\) whose parameters allow it to input and output matrices of shape \(s\times h\) can be inserted in the sequence of the pre-existing \(N\) layers. The new transformer layer can be inserted at any position \(n\in[1,N+1]\). The index of the downstream layers is incremented by one.

**Theorem 3.6** (Function preserving layer addition). With \(n\) being the index of the added layer:

\[\underset{(E\cdot v)\times h}{\mathbf{W}_{n}^{O}}:=\underset{(E\cdot v)\times h}{\mathbf{0}}\qquad\underset{p\times h}{\mathbf{W}_{n}^{l2}}:=\underset{p\times h}{\mathbf{0}}\qquad\underset{1\times h}{\mathbf{b}_{n}^{l2}}:=\underset{1\times h}{\mathbf{0}}\implies\mathrm{TransformerLayer}_{n}(\underset{s\times h}{\mathrm{I}_{n}})=\underset{s\times h}{\mathrm{I}_{n}} \tag{40}\]

Informally: zero initializing the parameters of the output projections of the MLP and MHA implies that the added transformer layer's output is equivalent to its input. See Appendix A.6 for proof.
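A final sketch (ours) for layer addition: with the output projections of its MHA and MLP components and its second bias zero-initialized per Theorem 3.6, a freshly inserted layer reduces to the identity, since both residual branches contribute nothing:

```python
# Layer addition (Definition 3.6): the Theorem 3.6 initialization gives identity.
import numpy as np

rng = np.random.default_rng(6)
s, h, p, E, k, v = 4, 8, 16, 2, 4, 4

def norm(X, g):
    return X / np.sqrt((X ** 2).mean(axis=-1, keepdims=True)) * g

def attention(Q, K, V):
    A = Q @ K.T / np.sqrt(K.shape[-1])
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

def transformer_layer(I_n, L):
    X = norm(I_n, L["g_mha"])
    H = np.concatenate([attention(X @ L["WQ"][e], X @ L["WK"][e], X @ L["WV"][e])
                        for e in range(E)], axis=1)
    I_prime = I_n + H @ L["WO"]
    Y = norm(I_prime, L["g_mlp"])
    return I_prime + np.maximum(Y @ L["W1"] + L["b1"], 0) @ L["W2"] + L["b2"]

new_layer = dict(WQ=rng.normal(size=(E, h, k)), WK=rng.normal(size=(E, h, k)),
                 WV=rng.normal(size=(E, h, v)),
                 WO=np.zeros((E * v, h)),               # zero MHA output projection
                 W1=rng.normal(size=(h, p)), b1=rng.normal(size=p),
                 W2=np.zeros((p, h)), b2=np.zeros(h),   # zero MLP output projection
                 g_mha=np.ones(h), g_mlp=np.ones(h))

I_n = rng.normal(size=(s, h))
assert np.allclose(transformer_layer(I_n, new_layer), I_n)
print("the inserted layer acts as the identity")
```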
## 4 Related work

Some existing works have proposed function preserving transformer expansion operators, but none cover all six dimensions as proposed in this work. Bert2BERT (Chen et al., 2022) proposes function preserving width expansions of the MLP internal dimension, hidden dimension, and number of attention heads. Shen et al. (2022) achieve function preserving width expansion, although constrained to a doubling of all matrix and vector dimensions, and depth expansion via zero initialization of LayerNorm and bias parameters. Yao et al. (2023) use masking on new hidden MLP neurons, attention heads, and layers to achieve function preservation. Wang et al. (2023) use an inner optimization to learn a linear mapping for parameter expansion in depth and width, but without constraints for function preservation. Notably, our transformations form a function preserving subspace of their learnable space. Deep Fusion (Mazzawi et al., 2023) extends the concept of expansion to multiple source models, where the special case of self-fusion achieves function preserving width expansion. Of these works, some methods are nearly function preserving but admit gaps due to LayerNorm discrepancies (Chen et al., 2022; Mazzawi et al., 2023). No known works consider scaling factors, as we address in Equations 19 and 24, nor RMSNorm.

## 5 Conclusion

We have defined six transformations that can be applied to a transformer model to increase the scale of all the different aspects of the architecture: 1) size of MLP internal representation, 2) number of attention heads, 3) size of the attention heads output representation, 4) size of the attention input representation, 5) size of the transformer layers input/output representations, 6) number of layers. For each of these transformations, we have provided a proof of exact function preservation given a minimal set of constraints on the initialization of the added parameters. These six transformations are composable, permitting many different ways to scale a transformer-based model while preserving its function. We note that there exist alternative definitions of such transformations that achieve function preservation without requiring zero initialization. However, the form of the proposed transformations is intended to be simple yet minimally constraining. The space of possible initialization strategies may be explored with the aim of optimizing training in an empirical context. In future work, these transformations may be applied in the training of a new large model by initializing a smaller model, training it under reduced data and computational complexity requirements, and incrementally scaling it to larger sizes throughout training to the desired final size. They may also be used to generate a family of models that are trained for the same task but at different sizes: all models within the family can begin from the same checkpoint from training the smallest model, then each successively sized model can be branched and finetuned at its final size. Finally, neural architecture search (NAS) techniques could be applied to determine optimal transformation scheduling and architectural progression for a given task and compute budget.

## 6 Acknowledgements

We would like to thank Jeffrey Pennington and Utku Evci for their input to this work.
2301.05101
Folding interpretations
We study the polyregular string-to-string functions, which are certain functions of polynomial output size that can be described using automata and logic. We describe a system of combinators that generates exactly these functions. Unlike previous systems, the present system includes an iteration mechanism, namely fold. Although unrestricted fold can define all primitive recursive functions, we identify a type system (inspired by linear logic) that restricts fold so that it defines exactly the polyregular functions. We also present related systems, for quantifier-free functions as well as for linear regular functions on both strings and trees.
Mikołaj Bojańczyk
2023-01-12T15:58:49Z
http://arxiv.org/abs/2301.05101v2
# Folding interpretations

###### Abstract

We study the polyregular string-to-string functions, which are certain functions of polynomial output size that can be described using automata and logic. We describe a system of combinators that generates exactly these functions. Unlike previous systems, the present system includes an iteration mechanism, namely fold. Although unrestricted fold can define all primitive recursive functions, we identify a type system (inspired by linear logic) that restricts fold so that it defines exactly the polyregular functions. We also present related systems, for quantifier-free functions as well as for linear regular functions on both strings and trees.

A central construction in this paper is the fold combinator. This combinator can be written as a rule

\[\frac{1\to\Gamma\quad\Gamma\times\Sigma\to\Gamma}{\Sigma^{*}\to\Gamma}\qquad\text{fold.}\]

The assumption of this rule can be seen as a deterministic automaton with input alphabet \(\Sigma\) and state space \(\Gamma\), given by its initial state and transition function. In the conclusion of the rule, we have the function that maps an input string to the last state of the run of the automaton. The input alphabet and the state space need not be finite, e.g. the state space \(\Gamma\) could be the set \(1^{*}\), which represents the natural numbers. Folding is a fundamental construction in functional programming languages. For example, the fold combinator arises canonically from the inductive definition of the list type (Han and Stanley, 1996; Section 3). Unfortunately, there is a price to pay for the power and elegance of the fold combinator: one can use it to derive all primitive recursive functions (Han and Stanley, 1996, Section 4.1). Therefore, without any further restrictions, the fold combinator falls outside the scope of automata techniques, or any other techniques that can be used to decide semantic properties of programs, such as the halting problem.

This paper is devoted to identifying restrictions on the fold combinator that tame its expressive power. These restrictions are presented as a typing system, which ensures that applications of fold will stay in the class of polyregular functions. In particular, the resulting class of functions shares the decidability properties of the polyregular functions, e.g. one can decide if a function produces a nonempty output for at least one input. There are two main contributions in the paper.

**Quantifier-free interpretations.** The first contribution is to identify the quantifier-free interpretations as an important class of functions in the context of fold. These are functions on structures in which the universe of the output is a subset of the universe of the input (in particular, the output size is linear), and all relations in the output structure are defined using quantifier-free formulas. In Theorem 3.2 we show that applying the fold combinator to a quantifier-free interpretation yields a function that, although not necessarily quantifier-free, is at least linear regular. This result subsumes several existing results, in particular those about mso definability of streaming transducers (Boh and Stanley, 1996; Boh and Stanley, 1996). Although quantifier-free interpretations are rather weak, they can describe most natural transformations that are used as primes in the calculi from (Boh and Stanley, 1996; Boh and Stanley, 1996); the remaining primes can then be derived using fold.
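As a minimal illustration (ours, not from the paper), the fold rule can be read directly as Python's `functools.reduce`: the initial state plays the role of \(1\to\Gamma\), the transition function the role of \(\Gamma\times\Sigma\to\Gamma\), and the folded function returns the last state of the run:

```python
# A minimal sketch of the fold rule using functools.reduce (illustration only).
from functools import reduce

def fold(initial, delta):
    """Turn an automaton (initial state, transition function) into the
    function Sigma* -> Gamma that returns the last state of its run."""
    return lambda word: reduce(delta, word, initial)

# Folding a two-state automaton: the parity of the number of "a"s in the input.
parity_of_as = fold(0, lambda q, letter: q ^ (letter == "a"))
assert parity_of_as("abab") == 0   # two "a"s: even
assert parity_of_as("abaab") == 1  # three "a"s: odd
```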
Having identified the importance of quantifier-free functions, in Theorem 4.1 we present a system of prime functions and combinators that derives exactly the quantifier-free functions. The completeness proof of the system is the longest proof in the paper. The quantifier-free system does not allow fold; fold is used in the next part of the paper, about polyregular functions.

**Safe fold.** The second main contribution is a type system that tames the power of fold. This system uses a type constructor \(!\) and bears certain similarities to the parsimonious calculus of Mazza (Mazza, 2000, Section 2.2). The latter is part of a field called _implicit computational complexity_, which seeks to describe complexity classes using type systems. An influential example of this kind is a system of Bellantoni and Cook (Bellantoni and Cook, 2000), which characterizes polynomial time. The present paper can be seen as part of implicit computational complexity, which targets regular languages instead of Turing machine models, such as logarithmic space or polynomial time. For a more detailed discussion of the connections between regular languages and \(\lambda\)-calculus, including a pioneering application of linear types, see (Mazza, 2001). The usual application of \(!\) is to restrict duplication, and this paper is no exception, as in the following example:

\[\underbrace{x\mapsto(x,x)}_{\text{not allowed}}\qquad\underbrace{!x\mapsto(!x,x)}_{\text{allowed}}.\]

However, apart from restricting duplication, \(!\) is also used in this paper to restrict another, more mysterious, resource, namely quantifiers. The idea is that our system uses \(!\) to describe functions that are not necessarily quantifier-free, but are similar enough to quantifier-free functions so that the fold combinator can be applied to them. The second main contribution of this paper is Theorem 5.3, which characterizes the polyregular functions using certain prime functions and combinators, in which the types involve \(!\) and one of the combinators is fold. In Theorem 6.1 we also show that if we further restrict duplication

\[\underbrace{!x\mapsto(!x,x)}_{\text{not allowed}}\qquad\underbrace{!x\mapsto(x,x)}_{\text{allowed}},\]

then the resulting system derives exactly the linear regular functions. Finally, we also show that the results about the linear case can be extended from strings to trees without much difficulty.

## 2. Interpretations

In this section, we describe the polyregular functions. Among several equivalent definitions of the polyregular functions, our point of departure in this paper will be a definition that uses mso interpretations (Boh and Stanley, 1996, Section 2).

### Definition of mso interpretations

We assume that the reader is familiar with basic notions of monadic second-order logic mso, see (Han and Stanley, 1996) for an introduction. We only describe the notation that we use. A _vocabulary_ consists of a finite set of relation names, each one with an associated arity in \(\{0,1,\ldots\}\). Note that we allow nullary relations, i.e. relations of arity zero; such a relation takes no arguments and is "true" or "false" in each structure. A _structure_ over such a vocabulary consists of a finite nonempty set, called the _universe_ of the structure, and an interpretation of the vocabulary, which associates to each relation name in the vocabulary a relation over the universe of matching arity.
The syntax and semantics of first-order logic and mso are defined in the usual way. Whenever we speak of a _class of structures_, all structures in the class must be over the same vocabulary, and the class must be closed under isomorphism. The structures considered in this paper will be used to describe finite strings and similar objects, such as pairs of strings, or strings of pairs of strings.

_Intuitive description._ We begin with an intuitive description of string-to-string mso interpretations. Following the classical Büchi-Elgot-Trakhtenbrot correspondence of automata and mso logic, we view strings as structures.

**Definition 2.1**. _A string in \(\Sigma^{*}\) is viewed as a structure whose universe is the string positions, equipped with the relations_

\[\underbrace{x\leq y}_{\text{order on positions}}\qquad\underbrace{a(x)}_{x\text{ has label }a\in\Sigma}.\]

A string-to-string mso interpretation transforms strings using the above representation, such that the positions of the output string are represented by \(k\)-tuples of positions in the input string, for some \(k\in\{0,1,\dots\}\). The order\({}^{2}\) on output positions is defined by a formula

\[\varphi(\underbrace{x_{1},\dots,x_{k}}_{\text{first output}},\underbrace{y_{1},\dots,y_{k}}_{\text{second output}})\]

with \(2k\) free variables, while the labels of the output positions are defined by formulas with \(k\) free variables, one for each letter in the output alphabet.

Footnote 2: For reasons described in [9, Theorem 4], the string positions are equipped with a linear order \(x\leq y\) instead of the successor \(x=y+1\).

Finally, not all \(k\)-tuples of input positions need to participate in the output string; there is a formula with \(k\) free variables, called the _universe_ formula, which selects those that do. All of these formulas need to be consistent: every \(k\)-tuple of positions in the input string that satisfies the universe formula must satisfy exactly one of the label formulas, and these \(k\)-tuples need to be linearly ordered by the order formula. Consistency is decidable, since it boils down to checking if some mso formula is true in all strings, which in turn boils down to checking if an automaton is nonempty, by the equivalence of mso and regular languages.

_Formal definition._ We now give a formal definition of mso interpretations. The formal definition generalizes the above intuitive description in two ways of minor importance. First, the definition is presented not just for strings, but for general classes of structures; we intend to apply it to mild generalizations of strings, such as pairs of strings or strings of strings. Second, instead of the universe being \(k\)-tuples of some fixed dimension, it is created using a _polynomial functor_, which is an operation on sets of the form

\[F(A)=A^{k_{1}}+\dots+A^{k_{n}}. \tag{1}\]

Typical polynomial functors include the identity functor \(A\), or the functor \(A^{2}+A^{2}\) that produces two copies of the square of the input set. We use the following terminology for polynomial functors: each \(A^{k_{i}}\) is called a _component_ of the polynomial functor, and \(k_{i}\in\{0,1,\dots\}\) is called the _dimension_ of this component. This extra generality of polynomial functors\({}^{3}\) makes the definition more robust; it will be useful in a more refined analysis of mso interpretations that will appear in Section 5.3. In the case of linear functors (where all components have dimension at most one), the components correspond to the _copies_ in an mso transduction [13, p. 230].
Footnote 3: One can reduce the polynomial functor in an mso interpretation to a single component \(A^{k}\), at the cost of increasing the dimension \(k\). This works for input structures with at least two elements. For this reason, [9] uses interpretations with just one component.

In an mso interpretation, the polynomial functor is used to define the universe of the output structure; if \(A\) is an input structure, then elements of \(F(A)\) are called _output candidates_. A subset of the output candidates will be the universe of the output structure. This subset is defined using an mso _query of type \(F\)_, which is a family of mso formulas, with one formula for each component of the functor, such that the number of free variables in each formula is the dimension of the corresponding component. Here are some examples:

\[\underbrace{A^{0}=1}_{\begin{subarray}{c}\text{a query of this type}\\ \text{is a formula without}\\ \text{free variables}\end{subarray}}\qquad\underbrace{A^{4}}_{\begin{subarray}{c}\text{a query of this type}\\ \text{is a formula with}\\ \text{four free variables}\end{subarray}}\qquad\underbrace{A^{2}+A^{2}}_{\begin{subarray}{c}\text{a query of this type}\\ \text{is two formulas with}\\ \text{two free variables each}\end{subarray}}\]

The relations in the output structure are also defined using mso queries, with a relation of arity \(m\) defined using a query of type

\[F^{m}(A)\stackrel{{\text{def}}}{{=}}\underbrace{F(A)\times\dots\times F(A)}_{m\text{ times}}.\]

The above type is also a polynomial functor, since polynomial functors are closed under taking products, e.g. the product of \(A^{2}\) and \(A+1\) is \(A^{3}+A^{2}\). The discussion above is summarized in the following definition.

**Definition 2.2** (mso interpretation). _A function \(f:\Sigma\to\Gamma\) between two classes of structures is called an mso interpretation if:_

1. _Universe._ There is a polynomial functor \(F\) and an mso query of type \(F\) such that for every input structure \(A\in\Sigma\), the universe of the output structure is the subset of the output candidates \(F(A)\) defined by this query; and
2. _Relations._ For every relation name \(R\) in the vocabulary of the output class, of arity \(m\), there is an mso query of type \(F^{m}\), which defines the interpretation of \(R\) in every output structure.

A _string-to-string mso interpretation_ is the special case of the above definition where the input type is \(\Sigma^{*}\) for some finite alphabet \(\Sigma\), and the output type is \(\Gamma^{*}\) for some finite alphabet \(\Gamma\).

**Example 1.** Consider the squaring operation on strings

\[[1,2,3]\mapsto[1,2,3,1,2,3].\]

Suppose that the input alphabet is \(\Sigma\). This function is defined by an mso interpretation as follows. The functor \(F\) is \(A^{2}\), and the universe formula is "true", which means that the positions of the output string are all pairs of positions in the input string. The order formula describes the lexicographic order on \(A^{2}\). Finally, the label of an output position is inherited from the input position on the second coordinate.

### String types

We are ultimately interested in functions that input and output strings over a finite alphabet. However, to create such functions using primes and combinators, it will be convenient to have more structured types for the simpler functions, such as pairs of strings.
The idea to use such structured types comes from (Bohr, 2017); in particular, we use the same types, as described in the following definition.

**Definition 2.3** (List types). A _list type_ is any type constructed using the constructors

\[\underbrace{1}_{\begin{subarray}{c}\text{a type with}\\ \text{one element}\end{subarray}}\qquad\underbrace{\Sigma_{1}\times\Sigma_{2}}_{\text{pairs}}\qquad\underbrace{\Sigma_{1}+\Sigma_{2}}_{\text{co-pairs, i.e. disjoint unions}}\qquad\underbrace{\Sigma^{*}}_{\text{lists}}.\]

An example of a list type is

\[(1+1+1)^{*}.\]

This type can be seen as the type of strings over a three letter alphabet; in this way the list types generalize strings over finite alphabets. The generalization is minor, since elements of a list type can be seen as strings over a finite alphabet, which uses brackets and commas as in the following example: the element \([[1,2],[3]]\) of the list type \(((1+1+1)^{*})^{*}\) is written as a string over an alphabet consisting of the three letters, the brackets, and the comma.

_Structures for list types_. We will be interested in mso interpretations that transform one list type into another. We could simply represent list types as strings over a finite alphabet in the way described above, and then use mso interpretations on strings over a finite alphabet. The resulting definition would be equivalent to the one that we will use in the paper. However, we choose to use a direct representation of list types as structures, without passing through a string encoding. The reason is that quantifiers would be needed to go between list types and their string encodings, and in this paper, we will be particularly interested in quantifier-free interpretations.

**Definition 2.4**. _To each list type we associate a class of structures, which is defined by induction as follows._

1. The class \(1\) contains only one structure; this structure has one element in its universe and no relations.
2. The vocabulary of the class \(\Sigma_{1}+\Sigma_{2}\) is the disjoint union of the vocabularies of the classes \(\Sigma_{1}\) and \(\Sigma_{2}\), plus one new nullary relation name (i.e. arity zero). A structure in this class is obtained by taking a structure in either of the classes \(\Sigma_{1}\) or \(\Sigma_{2}\), extending the vocabulary to the vocabulary of the other class by using empty sets, and interpreting the new nullary relation as "true" or "false" depending on whether the structure is from \(\Sigma_{1}\) or \(\Sigma_{2}\).
3. The vocabulary of the class \(\Sigma_{1}\times\Sigma_{2}\) is the disjoint union of the vocabularies of the classes \(\Sigma_{1}\) and \(\Sigma_{2}\), plus one new unary relation name (i.e. arity one).
A structure in this class is obtained by taking the disjoint union (defined in the natural way) of two structures, one from \(\Sigma_{1}\) and one from \(\Sigma_{2}\), and using the new unary relation name to select the elements from the first structure.

4. The general idea is that a structure in the class \(\Sigma^{*}\) is obtained by taking a list \([A_{1},\ldots,A_{n}]\) of nonempty\({}^{4}\) structures in \(\Sigma\), creating a new structure using disjoint union (with a shared vocabulary), and adding a new binary relation \(x\leq y\) which holds whenever the structure containing \(x\) appears earlier in the list (or in the same place) than the structure containing \(y\). The problem with this construction is that it would mix nullary relations that come from different structures in the list. To fix this problem, each nullary relation name \(R()\) in the vocabulary of \(\Sigma\) is changed into a unary relation name \(R(x)\) that selects elements \(x\) such that the corresponding structure satisfies \(R()\).

Footnote 4: A structure is nonempty if its universe is nonempty.

This leads to the following subtle point, which arises when considering lists of lists, and related structures. Since a list can be empty, it follows that we do not allow lists of empty lists such as \([[\,],[\,]]\). If we apply the above representation to a list type

\[\Big(\underbrace{1+\cdots+1}_{n\text{ times}}\Big)^{*}\]

then we get the representation of strings as ordered structures from Definition 2.1, with the exception that the empty string has a universe with one element. Therefore, it is not important if we use Definition 2.1 or 2.4 for representing strings.

**Definition 2.5**. A _polyregular function_ is a function

\[f:\Sigma\rightarrow\Gamma\]

between list types that can be defined by an mso interpretation, assuming that list types are viewed as classes of structures according to Definition 2.4.

The original definition of polyregular functions (Bartos and Kastel, 2010) did not use mso interpretations; however, mso interpretations were shown equivalent to the original definition in (Bartos and Kastel, 2010, Theorem 7). Since the original definition was closed under composition, it follows that mso interpretations are closed under composition (as long as the input and output classes are list types).

## 3. The fold combinator

In this section, we discuss the dangers of the fold combinator

\[\frac{1\rightarrow\Gamma\quad\Gamma\times\Sigma\rightarrow\Gamma}{\Sigma^{*}\rightarrow\Gamma}\qquad\text{fold}\]

We also explain how some of the dangers can be avoided by using quantifier-free interpretations. We begin this section with several examples illustrating the usefulness of fold.

**Example 2.** Consider a finite automaton with \(n\) states and an input alphabet of \(m\) letters.
Assuming some order on the states and alphabet, the transition function can be seen as a function between finite string types

\[\underbrace{(1+\cdots+1)}_{n\text{ times}}\times\underbrace{(1+\cdots+1)}_{m\text{ times}}\rightarrow\underbrace{1+\cdots+1}_{n\text{ times}}.\]

If we apply fold to this automaton, under some chosen initial state, then we get the function that inputs a string, and returns the last state in the run. A special case of this construction is when both the states and input letters of the automaton are elements of some finite group \(G\), the initial state is the group identity, and the transition function is the group operation. By folding this transition function, we get the _group multiplication_ function of type \(G^{*}\to G\), which is one of the (less appealing) prime functions in the combinatory calculus from (Bartos and Kastel, 2010).

**Example 3.** There are two symmetric list constructors

\[\underbrace{1+\Sigma^{*}\times\Sigma\rightarrow\Sigma^{*}}_{\begin{subarray}{c}\text{lists are constructed by adding}\\ \text{letters to the right of the list}\end{subarray}}\qquad\underbrace{1+\Sigma\times\Sigma^{*}\rightarrow\Sigma^{*}}_{\begin{subarray}{c}\text{lists are constructed by adding}\\ \text{letters to the left of the list}\end{subarray}}.\]

If we apply fold to the two corresponding automata, then we get the reverse and identity functions on lists, respectively. The fold combinator corresponds in a canonical way to the first list constructor, which is why it is sometimes called _fold right_.

### On the dangers of folding

We now present two examples which show how the fold combinator, without any further restrictions, can define functions that are not polyregular. More generally, one can use fold to derive any primitive recursive function (Kastel, 2010, Section 4.1).

**Example 4** (Iterating duplication). Consider an automaton where the input alphabet is \(1\), and the states are \(1^{*}\). We view the states as natural numbers, with the list \(1^{n}\) of length \(n\) representing the number \(n\). The initial state in this automaton is \(1\), and the transition function is

\[(1^{n},1)\in 1^{*}\times 1\quad\mapsto\quad 1^{2n}\in 1^{*}.\]

This is an example of a polyregular function, in fact it is a linear regular function. However, if we apply fold to it, then we get the function

\[1^{n}\in 1^{*}\quad\mapsto\quad 1^{2^{n}}\in 1^{*},\]

which is not polyregular because of exponential growth.

**Example 5** (Subtraction). As illustrated in Example 4, we run into trouble if we iterate duplication. But we can also run into trouble when the transition function does not create any new elements. Consider an automaton where the input alphabet is \(1+1\), and the state space is the integers, represented as the list type

\[\underbrace{1^{*}}_{\begin{subarray}{c}\text{represents}\\ \{-1,-2,\ldots\}\end{subarray}}+\underbrace{1^{*}}_{\begin{subarray}{c}\text{represents}\\ \{0,1,\ldots\}\end{subarray}}\]

The initial state is zero, and the transition function increments or decrements the state depending on which of the two input letters from \(1+1\) it gets. This transition function is easily seen to be polyregular, and it has the property that the output size is at most the input size, assuming that the input letter contributes to the input size. However, by folding this automaton, we get a function that subsumes integer subtraction and is therefore not polyregular. Using similar ideas, one could simulate two-counter machines.
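Both failure modes are easy to reproduce concretely. The following sketch (ours, not from the paper) folds the transition functions of Examples 4 and 5, with Python numbers standing in for the list representations \(1^{n}\):

```python
# Illustration of Examples 4 and 5 (numbers stand in for the lists 1^n).
from functools import reduce

def fold(initial, delta):
    return lambda word: reduce(delta, word, initial)

# Example 4: each input letter doubles the state, so folding computes n |-> 2^n.
duplicate = fold(1, lambda n, _letter: 2 * n)
assert duplicate([1] * 10) == 1024           # exponential growth: not polyregular

# Example 5: letters from 1 + 1 increment or decrement an integer state.
counter = fold(0, lambda n, letter: n + 1 if letter == "inc" else n - 1)
assert counter(["inc", "dec", "dec"]) == -1  # subsumes integer subtraction
```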
### Quantifier-free interpretations and their folding

As the two above examples show, we have to be careful when applying fold. Clearly we must avoid duplication (Example 4). This can be done by requiring the polynomial functor in the interpretation to be the identity, thus ensuring that the output is no larger than the input. It is less clear how to avoid the problem with Example 5. Our solution is to use quantifier-free interpretations, as defined below.

**Definition 3.1**. A _quantifier-free interpretation_ is the special case of mso interpretations where the polynomial functor is the identity \(F(A)=A\) and all formulas are quantifier-free.

One could consider interpretations in which the formulas are quantifier-free, but the functor is not necessarily the identity; such interpretations will not be useful in this paper. The transition function in Example 5 is not quantifier-free, since decrementing a number, which corresponds to removing a list element, is not a quantifier-free operation. The following theorem is the first main contribution of this paper: fold can be safely applied to quantifier-free interpretations.

**Theorem 3.2**. _Let \(\Sigma\) and \(\Gamma\) be any classes of structures, not necessarily list types. If the transition function_

\[\delta:\Gamma\times\Sigma\rightarrow\Gamma\]

_in the assumption of the fold combinator is a quantifier-free interpretation, then the function in the conclusion is a linear mso interpretation._

**Proof.** Consider an automaton as in the assumption of the theorem. For an input to this automaton \([A_{1},\ldots,A_{n}]\), and \(i\in\{0,\ldots,n\}\), we write \(B_{i}\in\Gamma\) for the state of the automaton after reading the first \(i\) input letters. The state \(B_{0}\) is the initial state, which is given by the assumption to the fold combinator, and the state \(B_{n}\) is the last state, which is the output of the function in the conclusion of the fold combinator. Our goal is to compute the last state using a linear mso interpretation. Since the functor in \(\delta\) is the identity, the output candidates are simply the elements of the input structure. Therefore, the universe of \(B_{n}\) is contained in the disjoint union of the universe of \(B_{n-1}\) and the universe of \(A_{n}\). By unfolding the induction, the universe of \(B_{n}\) is contained in the universe of the first state \(B_{0}\) and the input structure \(A=[A_{1},\ldots,A_{n}]\). Therefore, to prove that the fold is an mso interpretation, it will be enough to show that an mso formula can tell us: (a) which elements of \(B_{0}+A\) belong to the output structure; and (b) which relations of the output structure are satisfied by which tuples from \(B_{0}+A\). The answers to these questions will be contained in the quantifier-free theory of the tuple, as defined below.

**Definition 3.3**. Let \(A\) be a structure and let \(\bar{a}\) be a list of distinguished elements, which need not belong to the universe of \(A\). The _quantifier-free theory_ of \(\bar{a}\) in \(A\) is the following information: which distinguished elements are in the universe, and which quantifier-free formulas are satisfied by those distinguished elements that are in the universe.

Using the above terminology, to prove that the fold is definable in mso, we need to show that for each tuple in \(B_{0}+A\), we can define in mso the corresponding quantifier-free theory in the output structure \(B_{n}\). This will be done in the following claim.
The key property used by the claim is the following _continuity property_ of quantifier-free interpretations: the quantifier-free theory of a tuple of output candidates in the output structure is uniquely determined by the quantifier-free theory of the same tuple in the input structure. In the following claim, we consider a function which inputs structures with tuples of \(k\) distinguished elements, and has finitely many possible output values (quantifier-free theories, in the case of the claim). Such a function is called mso definable if for every possible output value, there is an mso formula with \(k\) free variables that selects exactly those inputs which give this output value. Claim 3.4: _For every \(k\in\{1,2,\ldots\}\) and every tuple \(\bar{b}\) of elements in \(B_{0}\), the following function is mso definable:_ * **Input.**_A structure_ \(A\in\Sigma^{*}\) _with elements_ \(\bar{a}\in A^{k}\)_._ * **Output.**_The quantifier-free theory of_ \(\bar{a}\bar{b}\) _in_ \(B_{n}\)_._ **Proof** By the continuity property mentioned earlier in this proof, the quantifier-free theory of \(\bar{a}\bar{b}\) in \(B_{n}\) is uniquely determined by the quantifier-free theory of \(\bar{a}\bar{b}\) in the structure \((B_{n-1},A_{n})\), which in turn is uniquely determined (by compositionality) by the quantifier-free theories of \(\bar{a}\bar{b}\) in the two individual structures \(B_{n-1}\) and \(A_{n}\). Therefore, we can think of these quantifier-free theories as being computed by a finite automaton, where the initial state is the quantifier-free theory of \(\bar{b}\) in \(B_{0}\), and the input string is \[[\text{qf theory of }\bar{a}\text{ in }A_{1},\ \ldots,\ \text{qf theory of }\bar{a}\text{ in }A_{n}].\] By the continuity property, one can design a transition function for this automaton, which does not depend on the input structure \(A\) or the tuple \(\bar{a}\), such that its state after reading the first \(i\) letters is the quantifier-free theory of \(\bar{a}\bar{b}\) in \(B_{i}\). The state space of this automaton is finite, since there are finitely many quantifier-free theories once the vocabulary and number of arguments have been fixed. Since finite automata can be simulated in mso, it follows that the last state in the run of this automaton, which is the theory in the conclusion of the claim, can be defined in mso. \(\Box\) We now use the claim to complete the proof of the theorem. The output candidates of the mso interpretation are defined by the polynomial functor \[F(A)=A+\underbrace{1+\cdots+1}_{\text{size of initial state $B_{0}$}}.\] In other words, the output candidates are elements of the input list and the initial state. By the above claim, the quantifier-free theory of a single output candidate in the output structure can be defined in mso, and since this theory tells us if the output candidate is present in the universe of the output structure, we can use it to define the universe. Similarly, if we want to know if a tuple of output candidates satisfies some relation from the output vocabulary, then we can find this information using mso as in the above claim. \(\Box\) On its own, the theorem above does not solve all of the problems with fold. One issue is that the theorem only supports one application of fold, since the folded function is no longer quantifier-free and cannot be folded again. Another issue is that applying the theorem stays within the class of functions that do not increase the output size, while we will also be interested in folding functions that increase the size.
These problems will be addressed later in the paper, by developing a suitable type system. Before continuing, we give some applications of the theorem. **Example 6**.: Consider a transition function of a finite automaton as in Example 2. In a list type of the form \(1+\cdots+1\), the component of the disjoint union that is used can be accessed by a quantifier-free formula without free variables, since it is represented using nullary relations. Therefore, the transition function is a quantifier-free interpretation, and so we can apply Theorem 3.2 to conclude that the fold is an mso transduction. This corresponds to the inclusion \[\text{regular languages}\quad\subseteq\quad\text{mso}.\] Applying Theorem 3.2 to prove this inclusion is not the right way to prove it, since the inclusion itself is used in the proof of the theorem. In Example 6, we applied the fold combinator to a finite automaton. In the following example, we give a more interesting application, where the state space is infinite. **Example 7**.: [Streaming string transducers] Define a _simple streaming string transducer_, simple sst for short, as follows. It has two finite alphabets \(\Sigma\) and \(\Gamma\), called the _input_ and _output_ alphabets. It has a _configuration space_, which is a list type of the form \[\Delta=(\Gamma^{*})^{k_{1}}+\cdots+(\Gamma^{*})^{k_{m}}.\] In other words, the set of configurations is obtained by applying some polynomial functor to the set of strings over the output alphabet. The idea is that a configuration consists of a state, which is one of the \(m\) components, and a register valuation which is a tuple of strings over the output alphabet. The configurations of the transducer are updated according to the following three functions, which are required to be quantifier-free, according to the representation of the input and output alphabets that was used in Example 6: \[\underbrace{1\rightarrow\Delta}_{\text{initial}}\quad\underbrace{\Delta \times\Sigma\rightarrow\Delta}_{\text{transition function}}\quad\underbrace{\Delta \rightarrow\Gamma^{*}}_{\text{final}}.\] The semantics of the transducer is the function of type \(\Sigma^{*}\rightarrow\Gamma^{*}\) that is obtained by folding the first two functions, and post-composing with the final function. By Theorem 3.2, this function is an mso transduction. The model described above subsumes (and in fact, is equivalent to) the classical model of sst [1, Section 3], with the only difference (which is why we call our model simple) being that our model allows the input letter to be used only once (as opposed to a constant number of times) in the registers. This is because string concatenation, which is the operation used to update registers in an sst, is a quantifier-free operation. Therefore, Theorem 3.2 can be seen as subsuming the implication \[\text{copyless sst}\subseteq\text{deterministic mso transductions}\] proved in (1, Theorem 3). The same idea will work for trees, as we will see in Section 6.1. **Example 8**.: [Graphs] As mentioned in Theorem 3.2, the folded automaton need not operate on classes that are list types. For instance, we could adapt Example 7 to transducers in which the registers, instead of storing strings, store graphs with \(k\) distinguished vertices, as in Courcelle's algebras for treewidth (12, Section 1.4). We could still apply Theorem 3.2, since the corresponding operations on graphs are quantifier-free. Similar ideas would also work for cliquewidth.
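Before moving on, here is a minimal Python sketch of the simple sst from Example 7, instantiated for list reversal; the encoding and the helper names are our own. One register holds a string over the output alphabet, the update uses each input letter exactly once, and the semantics is fold followed by the final function:

```python
from functools import reduce

def simple_sst(initial, transition, final):
    # Semantics of a simple sst: fold the transition function over the
    # input word starting from the initial configuration, then apply
    # the final function to the last configuration.
    return lambda word: final(reduce(transition, word, initial))

# The update r := a + r uses the input letter once, mirroring the
# quantifier-free restriction; the classical copyless example is reversal.
reverse = simple_sst(
    initial="",
    transition=lambda r, a: a + r,
    final=lambda r: r,
)
assert reverse("abc") == "cba"
```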
## 4. Deriving quantifier-free functions As we have shown in Theorem 3.2, the fold combinator can be safely applied to quantifier-free interpretations. Before returning to the fold combinator, we take a minor detour in this section, and present a complete system for the quantifier-free interpretations. _A few examples._ We begin with examples and non-examples of quantifier-free interpretations operating on list types. **Example 9**.: [Commutativity of product] Consider the function of type \[\Sigma_{1}\times\Sigma_{2}\rightarrow\Sigma_{2}\times\Sigma_{1},\] which swaps the order in a pair. Like all examples in this section, this is actually an infinite family of functions, one for every choice of \(\Sigma_{1}\) and \(\Sigma_{2}\). The function is a quantifier-free interpretation. The only change between the input and output concerns the unary relation from the definition of the product class \(\Sigma_{1}\times\Sigma_{2}\) which tells us if an element is from the first coordinate; this relation needs to be complemented. **Example 10**.: [List reverse and concatenation] Consider the list reverse function of type \(\Sigma^{*}\rightarrow\Sigma^{*}\). This is clearly a quantifier-free interpretation: it is enough to replace the order \(x\leq y\) with its reverse \(y\leq x\). A similar idea works for the list concatenation function of type \(\Sigma^{**}\rightarrow\Sigma^{*}\) which concatenates a list of lists into a list. In the input structure, there are two linear orders, corresponding to the inner and outer lists. To get the output structure, we use the lexicographic product of these two orders, which can be defined in a quantifier-free way. **Example 11**.: [List constructor and destructor] Consider the (left) list constructor \[1+\Sigma\times\Sigma^{*}\rightarrow\Sigma^{*},\] that was discussed in Example 3. This is a quantifier-free interpretation. If the input is from 1, which can be tested in a quantifier-free way using the nullary relation from the co-product, then the output list is created in the natural way. Otherwise, if the input is a pair from \(\Sigma\times\Sigma^{*}\), then the order on the concatenated list can easily be defined by using the unary predicate that identifies the first argument of a pair. The list constructor is bijective, and therefore it has a corresponding inverse of type \[\Sigma^{*}\to 1+\Sigma\times\Sigma^{*},\] which we call the _list destructor_. The list destructor is not a quantifier-free interpretation. The reason is that if the input is a nonempty list, then we would need to isolate in a quantifier-free way the elements from the head, i.e. from the first list element, which cannot be done. \(\Box\) **Example 12**.: [Diagonal] Another non-example is \(x\mapsto(x,x)\). This is not a quantifier-free interpretation, since the output size is bigger than the input size. \(\Box\) _A complete system._ We now present a complete characterization of quantifier-free interpretations on list types. The system will be used as a basis for the system in the next section, which will describe general mso interpretations. **Theorem 4.1**.: _The quantifier-free interpretations between list types are exactly those that can be derived from the prime functions in Figure 1 by applying the combinators from Figure 2._ The proof of the above theorem, with completeness being the non-trivial part, is in the appendix. ### String diagrams We conclude this section with several example derivations of quantifier-free functions using the system from Theorem 4.1.
To present these derivations, we use string diagrams\({}^{5}\) based on [11, Chapter 3], as depicted in Figure 3. Footnote 5: This is a name clash: the word “string” relates to the shape of the diagrams, and not to the fact that they manipulate types that represent strings. We also use string diagrams with a yellow background, where parallel wires represent co-products. For example, one such diagram represents the prime function from Figure 1 that describes commutativity of \(+\). There are also string diagrams which use dead ends, and which represent projections and co-projections. **Example 13**.: Recall the representation of finite sets as list types \(1+\cdots+1\) used in Examples 2 and 6. Under this representation, every function between finite sets is derivable using the prime functions and combinators of Theorem 4.1. This is easily seen using string diagrams. Figure 1. The prime quantifier-free functions. Figure 2. The quantifier-free combinators. Figure 3. A string diagram that derives the binary operation of type \(\Sigma^{*}\times\Sigma^{*}\to\Sigma^{*}\) for list concatenation. The representation of finite sets as co-products is important here. For example, the diagonal function \(1\to 1\times 1\) is not derivable, as explained in Example 12. ## 5. Deriving polyregular functions We now move beyond quantifier-free functions and present the main contribution of this paper, which is a system that derives exactly the polyregular functions. As explained in Example 5, we cannot simply add the fold combinator to the system from Theorem 4.1. Another idea would be to have two kinds of functions: quantifier-free functions, and general polyregular functions, with the fold combinator used to go from one kind to the other. In such a system, the only contribution of fold would be to define linear regular functions, since such are the functions in the conclusion of Theorem 3.2. We are more ambitious, and we want the fold combinator to be useful also for non-linear functions. To define a system with fold, we add a new unary type constructor. This type constructor is denoted by \(!\) and it is written on the left. The general idea is that an element \(!x\) is essentially the same element as \(x\), except that it is harder to obtain. The type constructor is not idempotent, and so \(!!x\) is even harder to obtain than \(!x\). The goal of this type constructor is to restrict the application of fold in a way that avoids the problems discussed in Section 3.1. This is done by using the following _safe fold_ combinator: \[\frac{!^{k}1\rightarrow\Gamma\qquad\Gamma\times\Sigma\rightarrow\Gamma}{!^{k}(\Sigma^{*})\rightarrow\Gamma}\qquad\text{safe fold}\] In the combinator, \(!^{k}\) refers to \(k\)-fold application of \(!\). When applying the combinator, the number \(k\in\{0,1,\ldots\}\) must be strictly bigger than the grade of \(\Gamma\), which is defined to be the maximal nesting of \(!\), as in the following examples: \[\underbrace{1^{*}}_{\text{grade zero}}\qquad\underbrace{1+!(1+!1)}_{\text{grade two}}.\] For example, when \(\Gamma\) has grade zero, i.e. it does not use \(!\), then safe fold can be used in the form \[\frac{!1\rightarrow\Gamma\qquad\Gamma\times\Sigma\rightarrow\Gamma}{!(\Sigma^{*})\rightarrow\Gamma}\qquad\text{safe fold when $\Gamma$ is without $!$}\] The general idea is that the annotation with \(!\) will disallow certain kinds of repeated applications of fold that would lead to functions that are not polyregular. Before giving a formal description of the system, we begin with an example.
**Example 14**.: [List destructor] In this example, we use safe fold to derive a variant of the list destructor \[\Sigma^{*}\to 1+\Sigma^{*}\times\Sigma\] that was discussed in Example 11. Consider an automaton where the state space is the output type of the list destructor, the initial state is \(1\), and the transition function is \[(1+\Sigma^{*}\times\Sigma)\times\Sigma\to 1+\Sigma^{*}\times\Sigma.\] By applying the safe fold to this automaton, we get the list destructor in a weaker type, namely \[!(\Sigma^{*})\to 1+\Sigma^{*}\times\Sigma.\] The weaker type avoids the issues from Example 5, since the input and output will have different numbers of \(!\), and therefore we will be unable to apply fold again. ### Graded types and their derivable functions We now give a formal description of the system. The type system is the same as previously, except that we have one more type constructor for \(!\). **Definition 5.1**.: _A graded list type is any type that is constructed using the following type constructors_ \[\underbrace{1}_{\begin{subarray}{c}\text{a type with}\\ \text{one element}\end{subarray}}\qquad\underbrace{\Sigma_{1}\times\Sigma_{2}}_{\text{pairs}}\qquad\underbrace{\Sigma_{1}+\Sigma_{2}}_{\begin{subarray}{c}\text{co-pairs, i.e.}\\ \text{disjoint unions}\end{subarray}}\qquad\underbrace{\Sigma^{*}}_{\text{lists}}\qquad!\Sigma.\] The general idea is that \(!\) does not change the underlying set, but only introduces some type annotation that controls the way fold and duplication can be applied. Apart from safe fold, the main way of dealing with \(!\) is the duplicating operation \[!\Sigma\rightarrow\,!\Sigma\times\Sigma\qquad\qquad\text{absorption},\] which is named after the same rule in the parsimonious calculus of Mazza (20, p.1). There are also prime functions for commuting \(!\) with the remaining type constructors, for example \(![x,y,z]\) and \([!x,!y,!z]\) are going to be equivalent in our system; for this reason we can write \(!\Sigma^{*}\) without specifying the order in which the two constructors are applied. **Definition 5.2**.: _There are two kinds of derivability for functions between graded list types._
1. _Strongly derivable._ A function is called _strongly derivable_ if it can be derived using the quantifier-free prime functions and combinators from Figures 1 and 2, extended to graded list types that can use \(!\), together with the safe fold combinator and four new prime functions \[!(\Gamma+\Sigma)\leftrightarrow\,!\Gamma+\,!\Sigma\qquad!(\Gamma\times\Sigma)\leftrightarrow\,!\Gamma\times\,!\Sigma\qquad!(\Sigma^{*})\leftrightarrow(!\Sigma)^{*}\qquad\underbrace{!\Gamma\rightarrow\,!\Gamma\times\Gamma}_{\text{absorption}}.\] 2. _Weakly derivable._ A function \(f:\Sigma\rightarrow\Gamma\) is called _weakly derivable_ if for some \(k\in\{0,1,\ldots\}\) the function of type \(!^{k}\Sigma\rightarrow\Gamma\), obtained from \(f\) by upgrading the input type, is strongly derivable. The main result of this section is that weak derivability captures exactly the polyregular functions. Theorem 5.3: _A function between list types without \(!\) is polyregular if and only if it is weakly derivable._ To make these notions formal, we need to explain the semantics of graded list types. The idea is that the grade of an element is the number of times that \(!\) has been applied, as in the following example \[(\underbrace{1}_{\begin{subarray}{c}\text{grade}\\ \text{zero}\end{subarray}},\underbrace{![[1,1,1]]}_{\begin{subarray}{c}\text{grade}\\ \text{one}\end{subarray}}).\] A graded list type can be seen as describing a class of graded structures, with the constructor \(!\) incrementing the grade of all elements, and the remaining constructors treated in the same way as in Definition 2.4.
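To illustrate the grading discipline, the following Python sketch (with our own encoding of graded list types as nested tuples) computes the grade of a type, i.e. the maximal nesting of \(!\), and checks the side condition of the safe fold combinator:

```python
# Graded list types encoded as nested tuples (our own encoding):
# ("one",), ("bang", t), ("prod", t1, t2), ("plus", t1, t2), ("list", t).
def grade(t):
    # the grade of a type is the maximal nesting of the "!" constructor
    if t[0] == "one":
        return 0
    if t[0] == "bang":
        return 1 + grade(t[1])
    return max(grade(s) for s in t[1:])

def safe_fold_allowed(k, gamma):
    # safe fold requires k to be strictly bigger than the grade of Gamma
    return k > grade(gamma)

# 1 + !(1 + !1) has grade two, as in the examples in the text.
ty = ("plus", ("one",), ("bang", ("plus", ("one",), ("bang", ("one",)))))
assert grade(ty) == 2
assert safe_fold_allowed(3, ty) and not safe_fold_allowed(2, ty)
```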
If \(A\) is a graded structure, we write \(A|\ell\) for the structure that is obtained from \(A\) by restricting its universe to elements that have grade at least \(\ell\). In the definition of a graded mso interpretation, we use the grades to control how an mso interpretation \(f\) uses quantifiers. The general idea is that \(f(A)|\ell\) depends on \(A|\ell\) in a quantifier-free way, and on \(A|\ell+1\) in an mso definable way. Before presenting the formal definition, we introduce some notation, in which a polynomial functor \(F\) is applied to a tuple of elements \(\bar{a}\), yielding a new (typically longer) tuple of elements \(F(\bar{a})\). If an input set \(A\) for a polynomial functor \(F\) is equipped with some linear order, then this linear order can be extended to a linear order on the output set \(F(A)\), by using some fixed order on the components, and ordering tuples lexicographically. This way we can think of a polynomial functor as transforming linearly ordered sets, i.e. lists. We will care about lists of fixed length, which we call tuples. For example if the polynomial functor is \(A+A^{2}\), then applying it to the tuple \((1,2)\) gives the tuple \[(1,2,(1,1),(1,2),(2,1),(2,2))\in F(\{1,2\})^{6}.\] In the definition below, we will care about the theories of tuples of the form \(F(\bar{a})\), with the theories defined as in Definition 3.3, but extended to mso formulas of given quantifier rank (the quantifier rank of an mso formula is the nesting depth of the quantifiers, with first-order and second-order quantifiers counted in the same way). Recall that these theories allow for distinguished elements that are not part of the universe in a structure. Equipped with this notation, we are ready to define the graded version of mso interpretations. **Definition 5.7**.: _A function \(f:\Sigma\to\Gamma\) is called a graded mso interpretation if there is some polynomial functor_ \[F(A) =\underbrace{A}_{\begin{subarray}{c}\text{this is called the}\\ \text{quantifier-free}\\ \text{component}\end{subarray}}+\underbrace{F_{0}(A)+\cdots+F_{m}(A)}_{ \begin{subarray}{c}\text{components from this part}\\ \text{of the functor are called the}\\ \text{downgrading components}\end{subarray}}\] _such that the following conditions hold:_ 1. **Universe and grades.** _The universe of the output structure is contained in_ \[A+F_{0}(A|1)+F_{1}(A|2)+\cdots+F_{m}(A|m+1).\] _The grades in the output structure are defined as follows: elements from_ \(F_{\ell}\) _have grade_ \(\ell\)_, and elements from the quantifier-free component inherit their grade from_ \(A\)_._ 2. **Continuity.** _For every_ \(k,\ell\in\{0,1,\ldots\}\) _there is some quantifier rank_ \(r\in\{0,1,\ldots\}\) _such that for every input structure_ \(A\) _and distinguished elements_ \(\bar{a}\in A^{k}\)_, the quantifier-free theory of the tuple_ \(F(\bar{a})\) _in_ \(f(A)|\ell\) _is uniquely determined by the following two theories:_ 1. _the quantifier-free theory of_ \(\bar{a}\) _in_ \(A|\ell\)_;_ 2. _the rank_ \(r\)__mso _theory of_ \(\bar{a}\) _in_ \(A|\ell+1\)_._ If we ignore the grades, then a graded mso interpretation is a special case of an mso interpretation. This is because the quantifier-free theory mentioned in the continuity condition will tell us which output candidates from \(F(A)\) are in the universe of the output structure, and how the relations of the output structure are defined on them.
Therefore, the continuity condition tells us that the output not only can be defined in mso, but it can be defined in a way that respects the grades. In particular, in the special case when all input elements have nonzero grade, and all output elements have zero grade, the continuity condition collapses to the usual condition in an mso interpretation. In this way, graded mso interpretations generalize ungraded mso interpretations. Graded mso interpretations also generalize quantifier-free interpretations: this happens in the case when all elements in the input and output structures have grade zero. In this case, only the quantifier-free component is useful, and all formulas are quantifier-free. In the appendix, we show that all strongly derivable prime functions are graded mso interpretations. This will imply that all weakly derivable functions are ungraded mso interpretations, since the continuity condition becomes vacuous when the input type is sufficiently upgraded. The proof is an induction on the size of a strong derivation, with the most interesting cases being composition and safe fold. Composition is a corollary of composition closure for mso interpretations on string types (Grover and Leskovec, 2010, Corollary 8), while safe fold is treated in the same way as in Theorem 3.2. ## 6. Linear regular functions The last group of results from this paper concerns the linear regular functions, i.e. polyregular functions of linear growth. We show that a small change to the system from Theorem 5.3 will give exactly the linear regular functions. As we will see, superlinear growth in the system from Theorem 5.3 is not created by the fold combinator, with the culprit instead being \[!\Gamma\to\,!\Gamma\times\Gamma\qquad\qquad\text{absorption}.\] This function allows us to create an unbounded number of copies of an element of \(\Gamma\), as witnessed in the proof of Claim 5.4. If we simply remove this function, then the system will become too weak, since all other prime functions and combinators preserve the property that the universe of the output structure is contained in the universe of the input structure. The solution is to add a weaker form of absorption \[!\Gamma\to\Gamma\times\Gamma\qquad\qquad\text{linear absorption}.\] In other words, removing _all_ occurrences of \(!\) is the price paid for copying. The corresponding system describes exactly the linear regular functions, as stated in the following theorem. Theorem 6.1: _A function \(f:\Sigma\to\Gamma\) between string types is linear regular if and only if it can be weakly derived in a system that is obtained from the one\({}^{6}\) in Theorem 5.3 by replacing absorption with linear absorption._ Footnote 6: One can also start with the smaller system from Theorem 5.5. The proof for the above theorem, which is in the appendix, is based on Example 7 about streaming string transducers. The idea is that linear absorption together with fold is enough to simulate streaming string transducers, which are expressively complete for the linear regular functions. ### Tree types It turns out that the system for linear regular functions from Theorem 6.1 can be generalized without much further difficulty to trees. This is in contrast to a prior combinator system for trees (Kurur and D'Antoni, 2006, Theorem 7.1), which had an involved proof using approximately fifty prime functions. We believe that this is evidence for the usefulness of the fold combinator.
Consider a type \(\T\Sigma\) for trees over an alphabet \(\Sigma\), defined inductively by \[\T\Sigma=1+\T\Sigma\times\Sigma\times\T\Sigma,\] i.e. a tree is either empty, or consists of a left subtree, a root label, and a right subtree. A _tree type_ is a type that is constructed using the types from Definition 2, together with the tree type. Tree types can be seen as structures, using the same construction as for lists in Definition 2, except that instead of one linear order, we have two orders: the _descendant order_ (which is not a linear order) and the _document order_ given by \[\text{left subtree}\quad<\quad\text{root}\quad<\quad\text{right subtree}.\] Define a _linear regular tree function_ to be a function between tree types that is defined using linear mso transductions. Following Wilke (Wilke, 1992), we view trees as an algebra. In this algebra, there is an additional type constructor \(\C\Sigma\), which describes _contexts_. A context is a tree with a distinguished leaf (called the _hole_) where other trees can be inserted. This is not a primitive type constructor, only syntactic sugar for a certain combination of the list and tree type constructors: \[\C\Sigma\stackrel{{\text{def}}}{{=}}\big{(}\underbrace{(\T\Sigma\times\Sigma)}_{\begin{subarray}{c}\text{the hole is in}\\ \text{the right subtree}\end{subarray}}+\underbrace{(\Sigma\times\T\Sigma)}_{\begin{subarray}{c}\text{the hole is in}\\ \text{the left subtree}\end{subarray}}\big{)}^{*}.\] To operate on trees and contexts, we use the following operations, called _Wilke's operations_, see (Wilke, 1992, Figure 1): \[1+\T\Sigma\times\Sigma\times\T\Sigma \to\T\Sigma\qquad\qquad\text{tree constructor}\] \[\C\Sigma\times\T\Sigma \to\T\Sigma\quad\qquad\text{replace hole by a tree}\] \[\C\Sigma\times\C\Sigma \to\C\Sigma\quad\qquad\text{context composition}\] \[1+(\T\Sigma\times\Sigma)+(\Sigma\times\T\Sigma) \to\C\Sigma\qquad\qquad\text{context creation}\] All of these operations are quantifier-free interpretations, and we will use them as primes. The last two operations need not be explicitly added, since they can be derived using the system from Theorem 4.1. Theorem 6.2: _A function \(f:\Sigma\to\Gamma\) between tree types is linear regular if and only if it can be derived in a system that is obtained from the system in Theorem 6.1 by adding the tree type, Wilke's operations, the prime function_ \[!\,\T\Sigma\leftrightarrow\T\,!\Sigma\qquad\qquad!\text{ commutes with }\T\] _and the following combinator_ \[\frac{!^{k}1\to\Gamma\qquad\Gamma\times\Sigma\times\Gamma\to\Gamma}{!^{k}(\T\Sigma)\to\Gamma}\qquad\qquad\text{safe tree fold},\] _which can be applied whenever \(\Gamma\) has grade \(<k\)._ **Proof** [Sketch] As in Theorem 6.1. We use the same soundness proof, except that tree automata are used instead of string automata. For completeness, we use a result of Alur and D'Antoni, which says that every linear mso interpretation is computed by a streaming tree transducer (Alur and D'Antoni, 2006, Theorem 4.6). Adjusting for notation, a streaming tree transducer is defined in the same way as in Example 7, except that instead of lists, registers store trees and contexts. The registers in the transducer are manipulated using Wilke's operations; and thus for the same reason as in Example 7, the corresponding tree function is weakly derivable. This completeness proof takes into account only functions of type \(\T\Sigma\to\T\Gamma\) where \(\Sigma\) and \(\Gamma\) are finite alphabets, but the extension to other tree types is easily accomplished by encoding tree types into such trees. \(\Box\) _Tree polyregular functions._ It is natural to ask about a polyregular system for trees. We conjecture that if we add absorption to the system from Theorem 6.2, and possibly a few extra prime functions, then the system will define exactly the mso interpretations on tree types.
This conjecture would imply that tree-to-tree mso interpretations are closed under composition, which is an open problem. ## 7. Perspectives We finish the paper with some directions for future work. In our proofs, we are careless about the number of times that \(!\) is applied. Maybe a more refined approach can give a better understanding of the correspondence between the nesting of \(!\) and the resources involved, such as quantifiers or copying. Alternatively, one could try to do away with \(!\) entirely, and use some proof system where the safety of fold is captured by a structural property of the proof. One idea in this direction is to look at cyclic proofs [10]. Another idea would be to capture the structural property using the visual language of string diagrams. Another question that concerns string diagrams is about the equivalence problem. Decidability of the equivalence problem for polyregular functions is an open problem, but in the case of linear functions the problem is known to be decidable [16, Theorem 1]. Maybe one can express the decision procedure in terms of string diagrams, by designing equivalences on string diagrams which identify exactly those diagrams that describe the same function. The system in this paper is based on combinators. A more powerful system would also allow for variables, \(\lambda\), and higher-order types. Such a system exists without fold [6, Section 4], and it is tempting to see if it can be extended with fold. The result would be an expressive functional programming language that can only define regular functions.
2310.13022
Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding
The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios. To remedy this dilemma, we study self-training as one of the predominant semi-supervised learning (SSL) approaches, which utilizes large-scale unlabeled data to generate synthetic examples. However, too many noisy labels will hurt the model performance, and the self-training procedure requires multiple training iterations making it more expensive if all the model parameters of the PLM are updated. This paper presents UPET, a novel Uncertainty-aware Parameter-Efficient self-Training framework to effectively and efficiently address the labeled data scarcity issue. Specifically, we incorporate Monte Carlo (MC) dropout in Bayesian neural network (BNN) to perform uncertainty estimation for the teacher model and then judiciously select reliable pseudo-labeled examples based on confidence and certainty. During the student training, we introduce multiple parameter-efficient learning (PEL) paradigms that allow the optimization of only a small percentage of parameters. We also propose a novel Easy-Hard Contrastive Tuning to enhance the robustness and generalization. Extensive experiments over multiple downstream tasks demonstrate that UPET achieves a substantial improvement in terms of performance and efficiency. Our codes and data are released at https://github.com/wjn1996/UPET.
Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li
2023-10-19T02:18:29Z
http://arxiv.org/abs/2310.13022v1
# Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding ###### Abstract The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios. To remedy this dilemma, we study self-training as one of the predominant semi-supervised learning (SSL) approaches, which utilizes large-scale unlabeled data to generate synthetic examples. However, too many noisy labels will hurt the model performance, and the self-training procedure requires multiple training iterations, making it more expensive if all the model parameters of the PLM are updated. This paper presents UPET, a novel **U**ncertainty-aware **P**arameter-**E**fficient self-**T**raining framework to effectively and efficiently address the labeled data scarcity issue. Specifically, we incorporate Monte Carlo (MC) dropout in Bayesian neural network (BNN) to perform uncertainty estimation for the teacher model and then judiciously select reliable pseudo-labeled examples based on confidence and certainty. During the student training, we introduce multiple parameter-efficient learning (PEL) paradigms that allow the optimization of only a small percentage of parameters. We also propose a novel Easy-Hard Contrastive Tuning to enhance the robustness and generalization. Extensive experiments over multiple downstream tasks demonstrate that UPET achieves a substantial improvement in terms of performance and efficiency. Our codes and data are released at [https://github.com/wjn1996/UPET](https://github.com/wjn1996/UPET). ## 1 Introduction Pre-trained language models (PLMs) have become the imperative infrastructure in a series of downstream natural language understanding (NLU) tasks Devlin et al. (2019); Liu et al. (2019); Yang et al. (2019), aiming at capturing prior knowledge by pre-training over large-scale unsupervised corpora and fine-tuning on the target tasks. However, the conventional fine-tuning approaches heavily depend on the time-consuming and labor-intensive process of data annotation, which could be even more bothersome in some real-world scenarios and typically produces inferior performance in few-shot settings Liu et al. (2021); Kojima et al. (2022). Recently, self-training Chawla and Karakoulas (2005); Amini et al. (2022) has been presented to address the labeled data scarcity issue by leveraging large-scale unlabeled data in addition to labeled data, and is one of the mature paradigms in semi-supervised learning Qi and Luo (2022); Yang et al. (2021); Chawla and Karakoulas (2005); van Engelen and Hoos (2020). A _teacher_ model is fine-tuned on the few-shot labeled data, and then a pseudo label is generated for each unlabeled example. After that, a _student_ model can learn the knowledge derived from the large-scale pseudo-labeled data, leading to performance close to that of fully-supervised learning. Previous works typically use self-training in conjunction with large PLMs to endow the model with the ability of few-shot learning. Despite the big success, we observe that there are still two challenges. 1) The pseudo-labeled data contains too many noisy labels, which inevitably degrade the model performance due to confirmation bias Wang et al. (2021). 2) The procedure of self-training is too expensive when updating all parameters of the large PLM1 Wang et al. (2022).
Footnote 1: Generally, the amount of pseudo-labeled training data for the student model is much larger than that of labeled data. Fortunately, parameter-efficient learning (PEL) opens up the possibility of attaining near state-of-the-art performance, whilst adding only a few parameters per task Mao et al. (2022); Ding et al. (2023); Zhang et al. (2023). Notable PEL-based methods include Ptuning Liu et al. (2021), Prefix tuning Li and Liang (2021), Adapter Houlsby et al. (2019), BitFit Zaken et al. (2022), LoRA Hu et al. (2022), etc. Yet, it is unclear how these PEL-based methods can be applied to self-training. In this paper, we develop a novel **U**ncertainty-aware **P**arameter-**E**fficient self-**T**raining framework (UPET) for improving self-training from two perspectives, i.e., effectiveness and efficiency. To reach these goals, we respectively present two novel techniques, namely _Reliable Example Sampling_ (RES) and _Efficient Robust Tuning_ (ERT). The goal of RES is to explicitly mitigate the effect of label noise. Concretely, we obtain the prediction probability distribution over all unlabeled data derived from the teacher model. Then, we utilize the Monte Carlo (MC) dropout technique from Bayesian neural networks (BNNs) Gal and Ghahramani (2016); Wang and Yeung (2016) to estimate the uncertainty of each unlabeled example. In this way, the examples with higher confidence and certainty are judiciously selected as the reliable pseudo-labeled data. In ERT, we aim to leverage PEL paradigms to train a robust student model over the reliable pseudo-labeled data. We design multiple PEL-based model architectures for the student model that only need to update a small set of tunable parameters in the PLM during iterative self-training. Additionally, we introduce Easy-Hard Contrastive Tuning to improve the robustness of the parameter-efficient model, which can be viewed as a regularization in the semantic space that keeps the noisy labels away from the reliable examples. We conduct extensive experiments over multiple NLU tasks. Results show that UPET outperforms strong baselines in terms of both effectiveness and efficiency. The improvement is consistent across different settings, with different PEL methods and numbers of labeled examples. Our key contributions to this field are summarized as follows: 1) We use parameter-efficient learning of PLMs in conjunction with uncertainty estimation to form an efficient and effective self-training framework. 2) To further improve the robustness of the parameter-efficient model, we introduce Easy-Hard Contrastive Tuning. 3) Extensive experiments across a wide range of tasks demonstrate that our proposed framework outperforms prevailing strong baselines. ## 2 Related Work **Semi-supervised Learning and Self-training.** SSL aims to effectively utilize unlabeled data in addition to labeled data, and has been widely used in the NLP community Yang et al. (2017); Gururangan et al. (2019); Xie et al. (2020); Chen et al. (2020). For instance, Yang et al. (2017); Gururangan et al. (2019) utilize variational autoencoders (VAEs) for sequence classification and labeling. Chen et al. (2020) proposes MixText to mix labeled, unlabeled, and augmented data, and performs consistency training similar to UDA Xie et al. (2020). Self-training is one of the mature SSL approaches that use a _teacher-student_ architecture to augment data Hu and Khan (2021); Mukherjee and Awadallah (2020); Amini et al. (2022); Wang et al. (2021); Tsai et al. (2022).
For example, Hu and Khan (2021) presents uncertainty estimation for denoising self-training. Tsai et al. (2022) introduces graph-based contrastive learning to preserve consistency regularization. Wang et al. (2021) incorporates self-training into sequence labeling tasks via an automatic weighting strategy. **Parameter-Efficient Learning.** PEL optimizes a small portion of parameters while keeping the model backbone frozen, aiming to improve training efficiency while preserving the model's effectiveness He et al. (2022). Houlsby et al. (2019) integrates task-specific neural modules called _adapters_ into PLMs, and only these _adapters_ are updated during fine-tuning. Ptuning Liu et al. (2021) and Prefix-Tuning Li and Liang (2021) respectively introduce a lightweight prefix module into the input layer and each transformer layer, enabling efficient training over these prefix modules. Notable PEL-based models also include BitFit Zaken et al. (2022), LoRA, etc. This paper integrates PEL into self-training to improve its efficiency. ## 3 UPET: The Proposed Method Given a labeled set \(\mathcal{D}_{l}=\{(X_{i},Y_{i})\}_{i=1}^{N_{l}}\) and an unlabeled set \(\mathcal{D}_{u}=\{\widetilde{X}_{i}\}_{i=1}^{N_{u}}\), where \(N_{l}\) and \(N_{u}\) respectively denote the sizes of the labeled set and the unlabeled set (\(N_{l}\ll N_{u}\)), \(X_{i},\widetilde{X}_{i}\in\mathcal{X}\) denote the input sentences in the labeled set and unlabeled set, respectively, and \(Y_{i}\in\mathcal{Y}\) is the corresponding label of \(X_{i}\). The task is to train a neural model \(f^{W}\) and generate a pseudo label for each unlabeled example \(\widetilde{X}_{i}\), where \(f^{W}\): \(\mathcal{X}\rightarrow\mathcal{Y}\) is a function with parameters \(W\) that maps the input space \(\mathcal{X}\) to the label space \(\mathcal{Y}\). We aim to answer the following research questions: * **RQ1**: How can we mitigate the problem of noisy pseudo labels by judiciously selecting reliable examples? * **RQ2**: How can the model parameters be efficiently updated during the iterative self-training process, while preserving the model's robustness and performance? We thus propose the UPET framework, which consists of two novel techniques, i.e., _Reliable Example Sampling_ (RES) and _Efficient Robust Tuning_ (ERT). The framework overview is illustrated in Figure 1 and the detailed algorithm is shown in Appendix B. ### Fine-Tuning and Pseudo Annotation We start with a fine-tuning stage over the few-shot labeled data \(\mathcal{D}_{l}\) to form a _teacher_ model \(f_{tea}^{W}\). After that, the pseudo label \(\widetilde{Y}_{i}\) of each unlabeled example \(\widetilde{X}_{i}\) can be generated by the teacher model: \[\widetilde{Y}_{i}=\arg\max_{c}p(y=c|f_{tea}^{W}(\widetilde{X}_{i})), \tag{1}\] where \(p(\cdot)\) is the probability distribution. However, the generated labels may be wrong due to the model confirmation bias problem. That means we need to explicitly reduce the noise by designing a suitable sample selection strategy. ### Reliable Example Sampling To reach this goal, we follow Tsai et al. (2022); Mukherjee and Awadallah (2020); Hu and Khan (2021) and leverage uncertainty estimation from BNN to determine which unlabeled examples are _reliable_ enough to be selected for training. Specifically, we follow Houlsby et al. (2011); Gal et al. (2017); Tsai et al. (2022) and use the _information gain_ of the model parameters to quantify how certain the model is about the pseudo-labeled examples w.r.t. the true labels2.
Typically, the information gain can be defined as: Footnote 2: The model certainty can be used to estimate the reliability of the unlabeled example, even though the label is unknown. \[\mathbb{B}(\widetilde{Y}_{i},W|\widetilde{X}_{i},\mathcal{D}_{u})=\mathbb{H}(\widetilde{Y}_{i}|\widetilde{X}_{i},\mathcal{D}_{u})-\mathbb{E}_{p(W|\mathcal{D}_{u})}[\mathbb{H}(\widetilde{Y}_{i}|\widetilde{X}_{i},W)], \tag{2}\] where \(W\) denotes the parameters of the teacher. \(\mathbb{B}(\widetilde{Y}_{i},W|\widetilde{X}_{i},\mathcal{D}_{u})\) denotes the information gain, which is the difference between \(\mathbb{H}(\widetilde{Y}_{i}|\widetilde{X}_{i},\mathcal{D}_{u})\) (the final entropy after seeing all examples from the unlabeled sentences) and \(\mathbb{E}_{p(W|\mathcal{D}_{u})}[\mathbb{H}(\widetilde{Y}_{i}|\widetilde{X}_{i},W)]\) (the expected current entropy for the example \(\widetilde{X}_{i}\)), and \(p(W|\mathcal{D}_{u})\) is the posterior distribution. As the calculation of Eq. 2 is intractable, we utilize MC dropout in BNN to perform approximation. Specifically, we assume that the posterior distribution \(p(W|\mathcal{D}_{u})\) can be replaced with the dropout distribution \(q_{\theta}(W)\). Thus, we can sample \(T\) masked model weights \(\{\widehat{W}_{t}\}_{t=1}^{T}\sim q_{\theta}(W)\), and calculate the approximation as: \[\hat{\mathbb{B}}(\widetilde{Y}_{i},W|\widetilde{X}_{i},\mathcal{D}_{u})=-\sum_{c\in\mathcal{Y}}\Big{(}\frac{1}{T}\sum_{t=1}^{T}\hat{p}_{c}^{t}\Big{)}\log\Big{(}\frac{1}{T}\sum_{t=1}^{T}\hat{p}_{c}^{t}\Big{)}+\frac{1}{T}\sum_{t=1}^{T}\sum_{c\in\mathcal{Y}}\hat{p}_{c}^{t}\log(\hat{p}_{c}^{t}), \tag{3}\] where \(\hat{p}_{c}^{t}=p(y=c|f_{tea}^{\widehat{W}_{t}}(\widetilde{X}_{i}))\) is the predicted probability of \(\widetilde{X}_{i}\) derived from the \(t\)-th masked model \(f_{tea}^{\widehat{W}_{t}}\). Figure 1: The overview of the UPET framework. We first fine-tune a teacher model over few-shot labeled data. Then, we aim to judiciously choose suitable pseudo-labeled data by uncertainty estimation. During student learning, we leverage the parameter-efficient method with robust PHCE loss and contrastive regularization to train the student model on pseudo-labeled data. At last, the student model can be used for the next iteration. (Best viewed in color.) Thus, a lower \(\hat{\mathbb{B}}(\widetilde{Y}_{i},W|\widetilde{X}_{i},\mathcal{D}_{u})\) value means that the model is more certain about the prediction, as higher certainty corresponds to lower information gain (Tsai et al., 2022)3. Formally, we can define a certainty score for each example as: Footnote 3: Intuitively, if the model is always certain about some examples, these examples might be too easy to contribute any additional information. \[s_{i}^{ct}=1-\hat{\mathbb{B}}(\widetilde{Y}_{i},W|\widetilde{X}_{i},\mathcal{D}_{u}). \tag{4}\] To this end, we can obtain the final sampling weight for each example by considering both model confidence and certainty: \[s_{i}=\frac{\alpha\times s_{i}^{cf}+(1-\alpha)\times s_{i}^{ct}}{\sum_{\widetilde{X}_{j}\in\mathcal{D}_{u}}\alpha\times s_{j}^{cf}+(1-\alpha)\times s_{j}^{ct}}, \tag{5}\] where \(s_{i}^{cf}=\frac{1}{T}\sum_{t=1}^{T}p(y=\widetilde{Y}_{i}|f_{tea}^{\widehat{W}_{t}}(\widetilde{X}_{i}))\) is the model confidence derived from the average approximate posterior of the \(T\) masked models w.r.t. the pseudo label \(\widetilde{Y}_{i}\), and \(\alpha\) (\(0\leq\alpha\leq 1\)) denotes the balancing factor. Hence, a number \(N_{r}\) of reliable examples can be sampled according to these weights to form a new subset \(\mathcal{D}_{r}\subset\mathcal{D}_{u}\).
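As a concrete illustration of the selection step in Eqs. (3)-(5), here is a minimal NumPy sketch; the function and variable names are our own (not from the released code), and the balancing factor 0.4 is chosen only for illustration. `all_probs` collects, for each unlabeled example, one softmax distribution per stochastic forward pass with dropout enabled:

```python
import numpy as np

def information_gain(probs, eps=1e-12):
    # Approximation of Eq. (3): probs has shape (T, C), one predictive
    # distribution per MC-dropout forward pass of the teacher model.
    mean = probs.mean(axis=0)                                  # (1/T) * sum_t p_c^t
    predictive_entropy = -np.sum(mean * np.log(mean + eps))
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return predictive_entropy - expected_entropy               # lower = more certain

def sampling_weights(all_probs, pseudo_labels, alpha=0.4):
    # Eqs. (4)-(5): combine certainty and confidence, then normalize.
    s_ct = np.array([1.0 - information_gain(p) for p in all_probs])
    s_cf = np.array([p[:, y].mean() for p, y in zip(all_probs, pseudo_labels)])
    s = alpha * s_cf + (1 - alpha) * s_ct
    return s / s.sum()

# Two unlabeled examples, T=3 dropout passes each, C=2 classes.
all_probs = [np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]]),
             np.array([[0.55, 0.45], [0.40, 0.60], [0.70, 0.30]])]
weights = sampling_weights(all_probs, pseudo_labels=[0, 0])
assert weights[0] > weights[1]  # the consistent, confident example wins
```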
### Efficient Robust Tuning #### 3.3.1 Parameter-Efficient Tuning After the annotation and selection of unlabeled examples, we need to train a student model to elicit knowledge from the teacher. Yet, the training process of the self-training paradigm is inefficient. To remedy this dilemma, we introduce PEL into self-training. We initialize a student model \(f_{stu}^{W}\) in which only a few designated parameters \(W^{\star}\) are tuned, enabling efficiency when training on large amounts of pseudo-labeled data. To meet our desiderata, we introduce two prediction paradigms with three PEL methods. The architecture is shown in Figure 2. Head-Tuning. Head-Tuning leverages the CLS head to generate the probability distribution of the given example. Formally, we have: \[p_{W^{\star}}(y|\widetilde{X}_{i})=\mathcal{H}_{cls}(\mathcal{F}_{W^{\star}}(\widetilde{X}_{i})), \tag{6}\] where \(\mathcal{F}_{W^{\star}}(\cdot)\) denotes the output representation of the student model \(f_{stu}^{W^{\star}}\), and \(\mathcal{H}_{cls}(\cdot)\) denotes a CLS head with a softmax classification layer4. Prompt-Tuning. Prompt-Tuning aims at reusing the Masked Language Modeling (MLM) head to make predictions. Specifically, a well-designed template \(\mathcal{T}\) with a masked token ("[MASK]") is concatenated with the original input sentence. In addition, we need to define a verbalizer \(\mathcal{V}\) that maps the probability distribution over the vocabulary to the label set \(\mathcal{Y}\). The probability can be calculated as: Footnote 4: It can be viewed as a feed-forward network (FFN) with randomly initialized parameters. \[p_{W^{\star}}(y|\widetilde{X}_{i})=\mathcal{V}_{y}(\mathcal{H}_{mlm}(\mathcal{F}_{W^{\star}}(\mathcal{T}||\widetilde{X}_{i}))), \tag{7}\] where \(\mathcal{H}_{mlm}\) denotes the MLM head derived from the PLM, \(\cdot||\cdot\) is the concatenation operation, and \(\mathcal{V}_{y}(\cdot)\) maps the label word's probability at the masked position to the corresponding class \(y\). Hence, we can integrate Ptuning (Liu et al., 2021), Prefix-tuning (Li and Liang, 2021) and Adapter-tuning (Houlsby et al., 2019) to unify PEL with arbitrary PLMs and prediction paradigms, yielding Head-Ptuning, Head-Prefix, Head-Adapter, Prompt-Ptuning, Prompt-Prefix, and Prompt-Adapter. More details are shown in Appendix A.1. Figure 2: Overview of different PEL paradigms. (a)-(c) represent **Head-Tuning**, which uses the CLS head for prediction. (d)-(f) denote **Prompt-Tuning**, which makes predictions via a well-designed template and verbalizer. We unify three classic PEL methods for both Head-Tuning and Prompt-Tuning. The blocks in light yellow and blue denote the trainable and frozen parameters, respectively. The block with sketches denotes the adapter module. (Best viewed in color.) During the optimization, we compute the following cross-entropy objective: \[l(\mathcal{D}_{r},f^{W^{\star}}_{stu})=\frac{1}{N_{r}}\sum_{(\widetilde{X}_{i},\widetilde{Y}_{i})\in\mathcal{D}_{r}}\log p_{W^{\star}}(y=\widetilde{Y}_{i}|\widetilde{X}_{i}). \tag{8}\] Yet, it is still possible that the subset \(\mathcal{D}_{r}\) contains some wrong labels. During the parameter-efficient training stage, since the scale of trainable parameters in \(W^{\star}\) is small, the student model is fragile and its robustness may not be preserved due to the negative effect of these noises during backpropagation. Thus, we follow Tsai et al.
(2022) to utilize the partially huberised cross-entropy loss (PHCE loss), a variant of cross-entropy with a gradient clipping technique. Hence, the loss function in Eq. 8 can be modified as: \[l(\mathcal{D}_{r},f^{W^{\star}}_{stu})=\frac{1}{N_{r}}\sum_{(\widetilde{X}_{i},\widetilde{Y}_{i})\in\mathcal{D}_{r}}\phi_{\tau}(y=\widetilde{Y}_{i}|\widetilde{X}_{i}), \tag{9}\] where \(\phi_{\tau}(y|x)\) is the PHCE loss function with a hyper-parameter \(\tau\) (\(\tau>1\)). The detail of the PHCE loss function is shown in Appendix A.3. #### 3.3.2 Easy-Hard Contrastive Tuning As mentioned above, the examples selected into \(\mathcal{D}_{r}\) have higher model certainty and might be too _easy_ to contribute any additional information. Nonetheless, this inevitably leads to the student model _over-fitting_ on these frequently selected samples Mukherjee and Awadallah (2020). Intuitively, an example not selected into \(\mathcal{D}_{r}\) is more likely to be noisy and to cause semantic drift. Thus, a natural idea is to exploit some _hard_ examples (which are not selected into \(\mathcal{D}_{r}\)) as negatives to keep them away from the _easy_ (reliable) examples, which can be viewed as a regularization in the semantic space. To reach this goal, we present Easy-Hard Contrastive Tuning. We denote \(\mathcal{D}_{h}\) as the difference between \(\mathcal{D}_{u}\) and \(\mathcal{D}_{r}\), so the examples in \(\mathcal{D}_{h}\) represent the _hard_ ones. During the optimization of the student model, given one example \((\widetilde{X}_{i},\widetilde{Y}_{i})\in\mathcal{D}_{r}\), we choose another example \((\widetilde{X}_{i}^{+},\widetilde{Y}_{i}^{+})\) from \(\mathcal{D}_{r}\) as the positive and some negative examples \(\{(\widetilde{X}_{ik}^{-},\widetilde{Y}_{ik}^{-})\}_{k=1}^{N_{n}}\) from \(\mathcal{D}_{h}\), where \(N_{n}\) is the number of negatives and \(\widetilde{Y}_{i}=\widetilde{Y}_{i}^{+}=\widetilde{Y}_{ik}^{-}\) have the same class5. Hence, the contrastive regularization term can be computed as: Footnote 5: The pseudo label of a hard example may be wrong, so if the sampled hard example has the same label as \((\widetilde{X}_{i},\widetilde{Y}_{i})\), it can be viewed as a negative in terms of the class \(\widetilde{Y}_{i}\). \[R(f^{W^{\star}}_{stu})=\frac{1}{N_{r}}\sum_{c\in\mathcal{Y}}\sum_{(\widetilde{X}_{i},\widetilde{Y}_{i})\in\mathcal{D}_{r},\widetilde{Y}_{i}=c}\Bigg{[}\frac{\exp{(g(\widetilde{X}_{i},\widetilde{X}_{i}^{+}))}}{\exp{(g(\widetilde{X}_{i},\widetilde{X}_{i}^{+}))}+\frac{1}{N_{n}}\sum_{k=1}^{N_{n}}\exp{(g(\widetilde{X}_{i},\widetilde{X}_{ik}^{-}))}}\Bigg{]}, \tag{10}\] where \(g(\cdot,\cdot)\) is the score function that measures the similarity of two examples in the semantic space. Finally, the whole training objective is designed as: \[\mathcal{L}(\mathcal{D}_{r},f^{W^{\star}}_{stu})=l(\mathcal{D}_{r},f^{W^{\star}}_{stu})+\lambda R(f^{W^{\star}}_{stu}), \tag{11}\] where \(\lambda>0\) is the hyper-parameter. ## 4 Experiments ### Dataset and Implementation Details We perform extensive experiments over seven language understanding tasks to evaluate our UPET framework. We choose a series of tasks from the GLUE benchmark Wang et al. (2018), including SST-2 Socher et al. (2013) for sentiment analysis, MNLI Williams et al. (2018) for language inference, QNLI Rajpurkar et al. (2016) for question answering, MRPC Dolan and Brockett (2005) for semantic paraphrasing and RTE Dagan et al. (2005) for textual entailment. We also choose CB De Marneffe et al.
(2019) from SuperGLUE Wang et al. (2019) for linguistic entailment and AGNews Zhang et al. (2015) for topic classification. For each dataset, the number of labeled examples per class is set as \(N_{l}\in\{16,32,64\}\). We repeatedly sample few-shot labeled instances five times with different seeds from \(\{12,21,42,87,100\}\) and report the average performance with standard deviation. For the implementation details, we choose RoBERTa-large Liu et al. (2019) from HuggingFace6 as the default backbone for both the teacher and student model. The number of self-training iterations is set as 5. We train models with the AdamW algorithm with \(\beta_{1}=0.9,\beta_{2}=0.98\) on 4 NVIDIA V100-32G GPUs. For each task, we use grid search to select the best hyper-parameters (Appendix D). By default, the training epochs of the teacher and student are 100. Footnote 6: [https://huggingface.co/transformers/index.html](https://huggingface.co/transformers/index.html). ### Baselines We consider some strong baselines for comparison, including **UST** Mukherjee and Awadallah (2020), **CEST** Tsai et al. (2022) and **LiST** Wang et al. (2022). UST and CEST leverage uncertainty estimation for self-training. LiST integrates Adapter-tuning Houlsby et al. (2019) into prompt-based learning for parameter-efficient self-training, which is similar to our Prompt-Adapter paradigm. In addition, we also design two semi-supervised learning baselines: 1) **Head ST** uses classic fine-tuning with the CLS head and augments unlabeled data through standard self-training. 2) **Prompt ST** reuses the MLM head with a well-designed task-specific template and verbalizer to perform pseudo-labeling in standard self-training. We also choose **Head FT** and **Prompt FT**, which fine-tune over few-shot or full training data. ### Main Results Table 1 illustrates the main results over seven NLU tasks with different settings. RoBERTa-large trained on fully labeled examples provides the ceiling performance for the few-shot and semi-supervised settings. We make the following observations. 1) According to the overall results, all the methods with self-training outperform conventional few-shot learning (i.e., Head FT and Prompt FT). This demonstrates the impact of self-training with unlabeled data. 2) We obtain the best overall performance of 78.2% with the fewest tunable parameters (i.e., Prompt-Ptuning) and improve over Head ST, Prompt ST, UST, CEST, and LiST by 7.0%, 3.6%, 6.1%, 5.6%, and 2.0% respectively over seven tasks, which indicates that UPET outperforms the state-of-the-art in terms of both effectiveness and efficiency. 3) Compared to the strong baseline Prompt ST, our PEL-based approach obtains a 3.6% absolute improvement, demonstrating the substantial contributions of the well-designed reliable example selection and contrastive regularization. 4) We also list the performance of all 6 PEL paradigms of UPET. We observe that the performance of Prompt-Tuning is higher than Head-Tuning, indicating that reusing the pre-training objective MLM with the task-oriented template and verbalizer is more effective for self-training. In addition, more tunable parameters may enhance the student model's ability to learn semantic knowledge derived from the teacher.
### Further Analysis

**Impact of Self-training Iterations.** To validate the effectiveness of self-training, we choose MNLI and RTE and draw curves showing the performance of the different PEL paradigms at each iteration in Figure 3. From the figure, we find that the performance increases as the framework continues training, up to the 4-th iteration, indicating the convergence of our framework.

\begin{table}
\begin{tabular}{l c c c c c c c c c}
\hline \hline
**Baselines** & **\#Tunable** & **SST-2** & **MNLI** & **QNLI** & **MRPC** & **RTE** & **CB** & **AGNews** & **Avg.** \\
 & **Params.** & (acc) & (acc) & (f1) & (acc) & (acc) & (acc) & (acc) & \\
\hline
_Full Data_ & & & & & & & & & \\
Head FT & 355M & 95.2 & 89.8 & 93.3 & 91.4 & 83.0 & 90.5 & 94.7 & 91.1 \\
Prompt FT & 355M & 95.9 & 90.2 & 93.0 & 90.9 & 88.4 & 91.1 & 94.0 & 91.9 \\
\hline
_Few Labeled Data (16-shot)_ & & & & & & & & & \\
Head FT & 355M & 81.4\(\pm\)3.8 & 45.8\(\pm\)6.4 & 60.2\(\pm\)6.5 & 75.9\(\pm\)2.9 & 54.4\(\pm\)3.9 & 74.5\(\pm\)2.6 & 88.9\(\pm\)2.7 & 68.7 \\
Prompt FT & 355M & 90.6\(\pm\)1.1 & 53.7\(\pm\)2.3 & 64.5\(\pm\)4.0 & 74.4\(\pm\)3.0 & 59.1\(\pm\)3.6 & 77.0\(\pm\)3.3 & 88.6\(\pm\)1.2 & 72.6 \\
\hline
_Few Labeled Data (16-shot) + Unlabeled Data_ & & & & & & & & & \\
Head ST & 355M & 87.9\(\pm\)3.0 & 51.9\(\pm\)2.8 & 64.0\(\pm\)2.8 & 79.4\(\pm\)2.5 & 53.2\(\pm\)2.9 & 75.9\(\pm\)1.5 & 86.4\(\pm\)3.0 & 71.2 \\
Prompt ST & 355M & 91.0\(\pm\)3.1 & 57.7\(\pm\)2.9 & 67.8\(\pm\)3.2 & 81.0\(\pm\)2.4 & 57.9\(\pm\)3.3 & 77.7\(\pm\)2.9 & 88.8\(\pm\)3.5 & 74.6 \\
UST & 355M & 84.0\(\pm\)4.0 & 53.9\(\pm\)2.9 & 65.9\(\pm\)3.3 & 79.9\(\pm\)2.0 & 55.6\(\pm\)2.6 & 76.0\(\pm\)3.1 & 89.3\(\pm\)3.5 & 72.1 \\
CEST & 355M & 86.4\(\pm\)3.8 & 52.2\(\pm\)2.9 & 65.0\(\pm\)2.4 & 80.8\(\pm\)3.5 & 57.0\(\pm\)1.9 & 78.1\(\pm\)2.7 & 88.5\(\pm\)2.2 & 72.6 \\
LiST & 14M & 91.0\(\pm\)3.0 & 62.0\(\pm\)3.9 & 67.4\(\pm\)2.5 & 82.0\(\pm\)3.3 & 60.8\(\pm\)2.5 & 79.7\(\pm\)2.9 & 90.3\(\pm\)2.5 & 76.2 \\
\hline
**UPET** & & & & & & & & & \\
- Head-Ptuning & 14M & 90.8\(\pm\)3.2 & 53.2\(\pm\)2.9 & 64.8\(\pm\)2.8 & 82.6\(\pm\)2.8 & 59.3\(\pm\)3.7 & 76.8\(\pm\)2.6 & 90.8\(\pm\)1.8 & 74.0 \\
- Head-Prefix & 14M & 87.5\(\pm\)2.0 & 56.7\(\pm\)2.7 & 69.2\(\pm\)3.1 & 82.3\(\pm\)2.2 & 58.7\(\pm\)2.5 & 79.6\(\pm\)1.5 & 90.9\(\pm\)1.8 & 74.6 \\
- Head-Adapter & 14M & 89.3\(\pm\)1.0 & 60.1\(\pm\)2.6 & 68.5\(\pm\)1.4 & **85.5\(\pm\)2.5** & 59.2\(\pm\)3.5 & 79.0\(\pm\)1.5 & 90.3\(\pm\)2.6 & 76.0 \\
- Prompt-Ptuning & 14M & 91.7\(\pm\)2.8 & **69.5\(\pm\)1.9** & **71.9\(\pm\)2.8** & 83.7\(\pm\)3.3 & 60.8\(\pm\)1.5 & 80.4\(\pm\)1.4 & 89.6\(\pm\)2.2 & **78.2** \\
- Prompt-Prefix & 14M & **92.3\(\pm\)2.0** & 64.2\(\pm\)2.9 & 66.1\(\pm\)3.0 & 83.0\(\pm\)1.8 & **61.5\(\pm\)1.6** & **80.8\(\pm\)2.1** & 90.5\(\pm\)3.1 & 76.9 \\
- Prompt-Adapter & 14M & 91.9\(\pm\)1.9 & 66.1\(\pm\)4.9 & 66.8\(\pm\)1.8 & 84.2\(\pm\)1.4 & 61.0\(\pm\)1.6 & 80.4\(\pm\)2.0 & **91.0\(\pm\)2.0** & 77.3 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The performance comparison of accuracy or F1 scores (%) with standard deviations on seven tasks. All methods (except fine-tuning with full data) are trained with 16-shot labeled samples for each class, and overall results are aggregated over five runs with different random seeds. In UPET, the first three variants belong to the Head-Tuning paradigm, while the others are Prompt-Tuning.
Additionally, the student model with Prompt-Tuning (including Prompt-Ptuning, Prompt-Prefix, and Prompt-Adapter) consistently outperforms Head-Tuning (including Head-Ptuning, Head-Prefix, and Head-Adapter). This shows that prompt-based methods can better utilize PEL to make self-training both effective and efficient.

**Labeled Data Efficiency.** To investigate the influence of the number of labeled examples, we vary the number of examples per class over 16, 32, and 64. We choose LiST as the strong baseline. To make a fair comparison, the PEL paradigm we select is Prompt-Adapter, which is the same as LiST and only tunes the adapter module in the PLM. Results in Table 3 illustrate that the performance gradually improves as the number of labeled data increases, as expected. In addition, we also find that our UPET outperforms LiST over most of the tasks regardless of the number of labeled training examples.

**Combination of Different Parameter-Efficient Learning Paradigms in Self-training.** We aim to explore how PEL performs in the self-training procedure. We integrate the PEL paradigm into the teacher or the student model to show the performance of the different combinations of PEL. As shown in Table 2, we choose Head-Adapter and Prompt-Adapter. We find that the setting in which all parameters of both the teacher and the student are updated achieves the best average performance, indicating the ceiling performance of each paradigm. Yet, it costs about 11 hours, which makes the self-training procedure inefficient. In addition, the effect on training time of whether the teacher model uses PEL is smaller than for the student, because the teacher model is trained only once while the student model needs to be updated for 100 epochs in each self-training iteration. Correspondingly, this motivated us to leverage PEL in the student model to improve the efficiency of self-training while preserving its effectiveness.

\begin{table}
\begin{tabular}{l c}
\hline \hline
**Selection Strategy** & **Avg. Results** \\
\hline
None & 76.0 \\
\(\alpha=0\) (w/o. Confidence) & 77.2 \\
\(\alpha=0.2\) & 77.9 \\
\(\alpha=0.4\) & 78.2 \\
\(\alpha=0.6\) & 77.6 \\
\(\alpha=0.8\) & 77.3 \\
\(\alpha=1.0\) (w/o. Certainty) & 76.8 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The average performance (%) of UPET (Prompt-Ptuning) with different selection strategies (varying by \(\alpha\)). “None” equals Prompt ST, which trains the student model on all pseudo-labeled data.

\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
**Teacher** & **Student** & **\#Tunable** & **Avg.** & **Avg.** \\
**Use PEL** & **Use PEL** & **Params.** & **Result** & **Time** \\
\hline
\multicolumn{5}{l}{_Head-Adapter_} \\
\(\bigtimes\) & \(\bigtimes\) & 355M+355M & 76.6 & 11.3h \\
\(\bigtimes\) & \(\checkmark\) & 355M+14M & 76.0 & 4.1h \\
\(\checkmark\) & \(\bigtimes\) & 14M+355M & 75.2 & 10.7h \\
\(\checkmark\) & \(\checkmark\) & 14M+14M & 75.0 & 3.8h \\
\hline
\multicolumn{5}{l}{_Prompt-Adapter_} \\
\(\bigtimes\) & \(\bigtimes\) & 355M+355M & 77.6 & 11.0h \\
\(\bigtimes\) & \(\checkmark\) & 355M+14M & 77.2 & 4.0h \\
\(\checkmark\) & \(\bigtimes\) & 14M+355M & 76.4 & 10.7h \\
\(\checkmark\) & \(\checkmark\) & 14M+14M & 75.8 & 3.9h \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The average performance (%) over all tasks with different combinations of PEL paradigms.

Figure 3: The performance (%) of different self-training iterations over MNLI and RTE.
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{**LiST**} & \multicolumn{3}{c}{**UPET**} \\
**\#-shot\(\longrightarrow\)** & **16** & **32** & **64** & **16** & **32** & **64** \\
\hline
SST-2 & 91.0 & 91.8 & 92.7 & **91.9** & **93.0** & **93.6** \\
MNLI & 62.0 & 65.7 & 69.7 & **66.1** & **69.2** & **72.3** \\
QNLI & **67.4** & **71.5** & 74.4 & 66.8 & 71.1 & **75.0** \\
MRPC & 82.0 & 84.2 & **85.8** & **84.2** & **85.1** & 85.7 \\
RTE & 60.8 & 64.2 & 67.9 & **61.0** & **66.0** & **68.9** \\
CB & 79.7 & 83.1 & 85.7 & **80.4** & **84.3** & **86.2** \\
AGNews & 90.3 & 90.8 & 91.3 & **91.0** & **91.4** & **91.9** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The performance (%) with different numbers (16/32/64 examples per class) of labeled data. The parameter-efficient paradigm is Prompt-Adapter.

**Effectiveness of Reliable Example Sampling.** To validate the effectiveness of RES, we investigate the effect of the balance factor \(\alpha\) in Eq. 5 on the average performance. Table 4 shows that sample selection is necessary to obtain cleaner data. The results also illustrate that both model confidence and certainty contribute substantially to the performance. We find that the best value of \(\alpha\) lies in the low range (around 0.2-0.4), which means certainty plays an important role in the selection (a schematic sketch of this scoring rule is given after the ablation study below).

**Visualization of the Contrastive Regularization.** To investigate how the proposed Easy-Hard Contrastive Tuning contributes to the final performance, in Figure 4 we use the t-SNE (Van der Maaten and Hinton, 2008) tool and select the AGNews task for validation. Specifically, we randomly sample 1k testing examples and draw their representations in the semantic space. The results demonstrate that the model trained with contrastive regularization yields a clearer boundary between every two classes, corroborating our conclusion that it avoids the over-fitting problem and yields better generalization.

### Ablation Study

In this section, we conduct an ablation study to demonstrate the impact of different variants of UPET that remove the designed techniques. From Table 5, we make the following summarization. 1) We find that the performance of w/o. Reliable Example Sampling (RES) decreases a lot (by more than 2%). In addition, we also find that a sampling weight that considers both certainty and confidence makes consistent contributions in RES. These phenomena demonstrate the effectiveness of the de-noising approach that considers both model confidence and certainty. 2) Removing the PHCE loss from UPET results in a 1.1% performance drop in terms of average results, which indicates the importance of the PHCE loss in robust student training. 3) Comparing UPET with UPET w/o. Easy-Hard Contrastive Tuning, the average performance of the student model is improved by about 1.6%, demonstrating the effectiveness of the contrastive regularization design.
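Eq. 5 itself appears earlier in the paper and is not repeated here; as a hedged sketch, one plausible reading consistent with the endpoints in Table 4 (\(\alpha=0\): w/o. Confidence, \(\alpha=1\): w/o. Certainty) is a convex combination of teacher confidence and MC-dropout certainty. The certainty normalisation and all names below are assumptions, not the paper's exact definition:

```python
import torch

def res_scores(probs_mc, alpha=0.4):
    # probs_mc: (T, N, C) teacher softmax outputs from T stochastic
    # (MC-dropout) forward passes over N unlabeled examples.
    mean_p = probs_mc.mean(dim=0)            # (N, C) averaged prediction
    conf, pseudo = mean_p.max(dim=-1)        # confidence and pseudo label
    # certainty: one minus the normalised variance of the winning-class
    # probability across the stochastic passes
    idx = pseudo.view(1, -1, 1).expand(probs_mc.size(0), -1, 1)
    var = probs_mc.gather(-1, idx).squeeze(-1).var(dim=0)       # (N,)
    certainty = 1.0 - var / (var.max() + 1e-12)                 # high = stable
    return alpha * conf + (1.0 - alpha) * certainty, pseudo

# D_r = the top-N_r examples by score; the remainder D_h supplies the
# hard negatives for Easy-Hard Contrastive Tuning.
```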
### Comparison to Non-BERT Approaches

We end this section with an additional comparison between UPET and non-BERT semi-supervised learning approaches that use a different number of labeled examples for tuning the teacher model. Table 6 shows that our framework achieves a large performance gain with only 64 labeled examples, especially for UPET (best), with a margin of at least 7%.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
**Methods** & **\#Example** & **Accuracy** \\
\hline
Variational Pre-training & 200 & 83.9 \\
Reinforcement + Adv. Training & 100 & 81.7 \\
SeqSSL + Self-training & 100 & 78.5 \\
SeqSSL & 100 & 76.2 \\
SeqSSL + Adv. Training & 100 & 76.0 \\
\hline
UPET (worst) & 64 & 89.6 \\
UPET (best) & **64** & **91.0** \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Performance comparison over the AGNews task with non-BERT-based SSL approaches (Li and Ye, 2018; Gururangan et al., 2019; Dai and Le, 2015; Li and Sethy, 2020) (RL: Reinforcement Learning, Adv.: Adversarial, Temp. Ens.: Temporal Ensemble, Layer Part.: Layer Partitioning). UPET (worst) and UPET (best) denote the performance of Prompt-Ptuning and Prompt-Adapter.

Figure 4: The AGNews t-SNE visualization of UPET w/o. Easy-Hard Contrastive Tuning (left) and w/ Easy-Hard Contrastive Tuning (right).

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
**Methods** & **SST-2** & **MNLI** & **QNLI** & **MRPC** & **RTE** & **CB** & **AGNews** & **Avg.** \\
 & (acc) & (acc) & (f1) & (acc) & (acc) & (acc) & (acc) & (acc) \\
\hline \hline
_Prompt-Ptuning_ & & & & & & & & \\
Prompt ST & 91.0 & 57.7 & 67.8 & 81.0 & 57.9 & 77.7 & 88.8 & 74.6 \\
UPET & **91.7** & **69.5** & **71.9** & **83.7** & **60.8** & **80.4** & **89.6** & **78.2** \\
w/o. Reliable Example Sampling & 91.3 & 63.0 & 69.8 & 82.2 & 58.3 & 78.3 & 89.2 & 76.0 \\
w/o. certainty & 91.4 & 65.8 & 70.4 & 82.8 & 59.0 & 78.6 & 89.5 & 76.8 \\
w/o. confidence & 91.6 & 66.3 & 71.0 & 83.3 & 59.7 & 78.8 & 89.5 & 77.2 \\
w/o. PHCE loss & 91.3 & 67.2 & 69.3 & 83.0 & 59.9 & 79.7 & 89.3 & 77.1 \\
w/o. Easy-Hard Contrastive Tuning & 91.5 & 65.8 & 68.5 & 82.8 & 58.9 & 79.1 & 89.6 & 76.6 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The 16-shot performance (%) of different variants of UPET with Prompt-Ptuning.

## 5 Conclusion

In this paper, we introduce a novel uncertainty-aware parameter-efficient self-training framework (UPET) to improve the effectiveness and efficiency of self-training. In UPET, we use uncertainty estimation to judiciously select reliable pseudo-labeled examples, explicitly alleviating the noisy-label problem. To make self-training more efficient, we integrate multiple parameter-efficient paradigms into self-training. To further improve the performance, we also present Easy-Hard Contrastive Tuning to enhance robustness and reduce the over-fitting problem. In the future, we will extend our framework to other complex tasks, such as sequence labeling, question answering, etc.

## Limitations

Our limitations are listed below:

* We only focus on sequence classification-style NLU tasks. However, we think our framework can be extended to other tasks easily, such as sequence labeling, question answering, etc.
* Our work focuses on PLMs without Transformer decoders. We think it is possible to extend our method to natural language generation (NLG) tasks. We will leave this as future work.

## Ethical Considerations

Our contribution in this work is fully methodological, namely uncertainty-aware parameter-efficient self-training (UPET) to improve effectiveness and efficiency based on PLMs. However, transformer-based models may have some negative impacts, such as gender and social bias. Our work would unavoidably suffer from these issues. We suggest that users carefully address potential risks when UPET models are deployed online.
## Acknowledgements This work has been supported by the National Natural Science Foundation of China under Grant No.U1911203, and the National Natural Science Foundation of China under Grant No.62377012.
2304.11972
Epitaxial monolayers of magnetic 2D semiconductor FeBr$_{2}$ grown on Au(111)
Magnetic two-dimensional (2D) semiconductors have attracted a lot of attention because modern preparation techniques are capable of providing single crystal films of these materials with precise control of thickness down to the single-layer limit. It opens up a way to study rich variety of electronic and magnetic phenomena with promising routes towards potential applications. We have investigated the initial stages of epitaxial growth of the magnetic van der Waals semiconductor FeBr\textsubscript{2} on a single-crystal Au(111) substrate by means of low-temperature scanning tunneling microscopy, low-energy electron diffraction, x-ray photoemission spectroscopy, low-energy electron emission microscopy and x-ray photoemission electron microscopy. Magnetic properties of the one- and two-layer thick films were measured via x-ray absorption spectroscopy/x-ray magnetic circular dichroism. Our findings show a striking difference in the magnetic behaviour of the single layer of FeBr\textsubscript{2} and its bulk counterpart, which can be attributed to the modifications in the crystal structure due to the interaction with the substrate.
S. E. Hadjadj, C. González-Orellana, J. Lawrence, D. Bikaljević, M. Peña-Díaz, P. Gargiani, L. Aballe, J. Naumann, M. Á. Niño, M. Foerster, S. Ruiz-Gómez, S. Thakur, I. Kumberg, J. Taylor, J. Hayes, J. Torres, C. Luo, F. Radu, D. G. de Oteyza, W. Kuch, J. I. Pascual, C. Rogero, M. Ilyn
2023-04-24T10:12:26Z
http://arxiv.org/abs/2304.11972v2
# Epitaxial monolayers of magnetic 2D semiconductor FeBr\({}_{2}\) grown on Au(111)

###### Abstract

Magnetic two-dimensional (2D) semiconductors have attracted a lot of attention because modern preparation techniques are capable of providing single-crystal films of these materials with precise control of thickness down to the single-layer limit. This opens up a way to study a rich variety of electronic and magnetic phenomena with promising routes towards potential applications. We have investigated the initial stages of epitaxial growth of the magnetic van der Waals semiconductor FeBr\({}_{2}\) on a single-crystal Au(111) substrate by means of low-temperature scanning tunneling microscopy, low-energy electron diffraction, x-ray photoemission spectroscopy, low-energy electron microscopy and x-ray photoemission electron microscopy. Magnetic properties of the one- and two-layer-thick films were measured via x-ray absorption spectroscopy/x-ray magnetic circular dichroism. Our findings show a striking difference between the magnetic behaviour of the single layer of FeBr\({}_{2}\) and its bulk counterpart, which can be attributed to the modifications in the crystal structure due to the interaction with the substrate.

## I Introduction

Integration of two-dimensional (2D) materials in technologically relevant applications requires atomic-scale control of the growth of single-crystalline, monolayer-thick films. While many semiconducting 2D materials like graphene, h-BN or MoS\({}_{2}\) are routinely grown on the wafer scale [1], the preparation of magnetic 2D materials is still limited in most cases to micromechanical exfoliation [2; 3; 4; 5; 6; 7; 8; 9; 10]. Prominent exceptions to this trend are the magnetic transition-metal tri- and dihalides, for which single-layer growth was demonstrated recently via molecular beam epitaxy (MBE) [11; 12]. In contrast to the well-studied trihalides, particularly CrI\({}_{3}\) and CrBr\({}_{3}\) [2; 13; 14; 15], experimental investigation of the 2D dihalides is less advanced, although their bulk magnetic properties were thoroughly studied [16].

Bulk FeBr\({}_{2}\) is a layered crystal that consists of covalently bonded layers stacked via van der Waals (vdW) interactions in the CdI\({}_{2}\)-type structure (P\(\bar{3}\)m1 space group). The layers consist of triangular lattices of cations in edge-sharing octahedral coordination, the 1T (or D\({}_{3d}\)) MX\({}_{2}\) structure, forming one transition-metal layer sandwiched between two halide layers [16; 17]. The lateral lattice constant was found to be 3.776 Å [18; 19]. Indirect Fe-Fe exchange interaction gives rise to collinear intralayer ferromagnetic order below \(T_{N}=14.2\) K with out-of-plane (OOP) anisotropy, while the interlayer exchange is antiferromagnetic. Application of an external magnetic field of 3.15 T triggers a metamagnetic phase transition [16; 20]. The six 3d electrons of the Fe\({}^{2+}\) ions are distributed between two groups of orbitals, t\({}_{2g}\) (d\({}_{xy}\), d\({}_{xz}\) and d\({}_{yz}\)) and e\({}_{g}\) (d\({}_{x^{2}-y^{2}}\) and d\({}_{z^{2}}\)), giving rise to a magnetic moment of 4.4 \(\pm\) 0.7 \(\mu_{B}\)/Fe atom [21], which exceeds the value of 4.0 \(\mu_{B}\)/Fe atom predicted by Hund's rule [22; 23]. Various DFT calculations yield comparable values of the magnetic moments and provide useful insights on the details of the band structure [24; 25; 26].
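For reference, the spin-only value quoted above follows directly from the high-spin \(3d^{6}\) configuration of Fe\({}^{2+}\):

\[t_{2g}^{4}e_{g}^{2}:\quad S=2,\qquad m_{\mathrm{spin}}=g_{s}\,S\,\mu_{B}\approx 4.0\,\mu_{B},\]

so the measured \(4.4\pm 0.7\,\mu_{B}\) is commonly read as this spin part plus an unquenched orbital contribution of a few tenths of \(\mu_{B}\).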
In this work we use sublimation of the stoichiometric powder to grow epitaxial films of the magnetic semiconductor FeBr\({}_{2}\), which belongs to the family of transition metal dihalides (TMDH) [16], on single-crystal Au(111). The feasibility of growth of TMDH films via Chemical Vapor Deposition (CVD) has been demonstrated recently [27]. In contrast to CVD, MBE does not require heating of the substrate above room temperature, which makes it compatible with resist-based nanofabrication and opens up a way to the integration of TMDH thin films in scalable manufacturing processes. We focus our investigation on the properties of the one- and two-slab-thick films, employing spectroscopic and microscopic characterisation, including synchrotron-based techniques. In particular, we demonstrate the modification of the magnetic properties of the stoichiometric FeBr\({}_{2}\) due to a peculiar reconstruction in the first slab.

## II Experiment and Methods

FeBr\({}_{2}\) layers with variable thicknesses, ranging from sub-monolayer (sub-ML) to more than one monolayer, were grown on Au(111) using FeBr\({}_{2}\) powder from Sigma Aldrich with a purity of 98% and a Knudsen cell evaporator. The sublimation temperature for FeBr\({}_{2}\) was around 400 \({}^{\circ}\)C in ultra-high vacuum (UHV) (with an evaporation pressure of \(10^{-8}\) mbar to \(10^{-9}\) mbar). The substrate was kept at room temperature during sublimation. A quartz microbalance was used to measure the nominal amount of deposited material, while the calibration of the absolute thickness was done via cross-correlation of scanning tunneling microscopy (STM) images with low-energy electron diffraction (LEED) data. This calibration was translated to the integral of the non-polarized soft X-ray absorption at the Fe L\({}_{3,2}\) edges for comparison to samples prepared at different synchrotron radiation sources. The thickness calibration procedure is shown in the supplementary information in Fig. S2. The Au(111) substrate was cleaned by standard Ar\({}^{+}\) sputtering and annealing cycles. Low-temperature STM (LT-STM) experiments were performed at 4.3 K (for a sub-ML sample) and at 77 K for the thicker samples, at Centro de Fisica de Materiales and the BOREAS beamline, respectively. X-ray photoelectron spectroscopy (XPS) measurements were carried out with a Phoibos 100 photoelectron spectrometer, using a non-monochromatic Al-K\(\alpha\) X-ray source. The analyser energy resolution is 0.1 eV. UHV was preserved during all sample transfers (the base pressure during the experiments was \(10^{-10}\) mbar). X-ray magnetic circular dichroism (XMCD) measurements were performed at both the VEKMAG station (dipole beamline) of BESSY II in Berlin [28] and the BOREAS beamline (undulator beamline) at the ALBA Synchrotron Light Facility [29]. The measurements at VEKMAG were performed by keeping the beam polarization constant and changing the field. At BOREAS we kept the field constant and changed the polarization. Absorption spectra at the Fe L\({}_{3,2}\)-edges were acquired at normal incidence (NI/0\({}^{\circ}\), out of plane) and grazing incidence (GI/70\({}^{\circ}\), in plane), applying a variable magnetic field of up to \(\pm\)6 T. The temperature during the measurements was set to 10 K at the VEKMAG beamline (corresponding to around 12.6 K at the sample) and to \(2\pm 0.5\) K at BOREAS. One 0.6-ML sample of FeBr\({}_{2}\) was brought in a Ferrovac suitcase to the BOREAS beamline to cross-correlate the coverage of the samples measured in the home laboratory and in the synchrotron beamlines.
The LEED images were acquired to observe the growth and the thickness-dependent change in the structure. Imaging at the mesoscopic scale was done by low-energy electron microscopy (LEEM) and x-ray photoemission electron microscopy (XPEEM) at the CIRCE beamline (ALBA Synchrotron Light Facility) [30].

## III Results and Discussion

### Epitaxial Growth of FeBr\({}_{2}\)

The initial stage of growth of FeBr\({}_{2}\) films on single-crystal Au(111) was studied by means of surface-sensitive electron diffraction and scanning tunneling microscopy. LEED patterns measured at 137 eV demonstrate a variation of the crystal structure of FeBr\({}_{2}\) with increasing number of deposited layers (Fig. 1 (a-c)). The hexagonal pattern characteristic of the clean Au(111) surface becomes attenuated, and a new hexagonal pattern with a smaller period and the same orientation appears when 0.6 ML of FeBr\({}_{2}\) is grown. An additional complex pattern of multiple dots surrounding the first-order spots of Au(111) is indicative of a surface reconstruction process that is depicted in the atomically resolved STM image (Fig. 3). In the LEED pattern acquired for the 2-ML sample, the Au(111) signal is barely visible and the reconstruction-related superstructure is strongly attenuated. At this coverage the hexagonal pattern of the ordered FeBr\({}_{2}\) becomes the dominant motif, also seen as second-order diffraction spots. This behaviour is characteristic of epitaxial, close to layer-by-layer growth of overlayers on single-crystal substrates. The large-scale STM image shown in Fig. 1 (d) (see also Fig. S5) demonstrates that the islands of FeBr\({}_{2}\) have triangular shapes and well-defined common directions of the symmetry axes. This corroborates the ordered epitaxial growth inferred from the LEED diffraction patterns. One can distinguish large areas of the same thickness and a limited number of exposed atomic planes, which discards a 3D growth mode. However, in the sample with a nominal amount of 2.0 ML, there is nucleation of islands of the third layer and, at the same time, some voids exposing the first layer, which leads to the conclusion that the growth does not proceed in a perfect layer-by-layer mode. Fig. 1 (d) also shows islands of FeBr\({}_{2}\) that grow over the atomic step of the substrate. This peculiar behaviour was reported earlier for a number of different 2D materials [31; 32; 33; 34].

The chemical composition of the films was probed using XPS measurements. Survey spectra (not shown) show no traces of oxygen or other contamination. The Fe 2p and Br 3p spectra acquired for the 0.6-ML and the 2.0-ML samples, as well as the calculated best-fitting curves, are represented in Fig. 1 (e-f). For the data evaluation, a Shirley background was subtracted and the peaks were fitted with a combination of Voigt functions (Python lmfit routine [35]). The shape of the Fe 2p spectra closely resembles the spectrum of Fe\({}^{2+}\) reported for thin insulating films of FeO [36, 37]. The spectral shape of the Fe 2p core level also resembles the one measured for FeCl\({}_{2}\) [38]. In total, four Voigt profiles were needed to fit these spectra: two for the main Fe\({}^{2+}\) peaks and two for the satellite peaks, each pair with a spin-orbit (SO) splitting of \(\sim\)13 eV, in close accordance with the data reported in the literature [36]. The main peaks of the Fe 2p core level are located at a binding energy of 709.4 eV for Fe 2p\({}_{3/2}\) and 722.6 eV for Fe 2p\({}_{1/2}\).
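As an illustration of the fitting procedure described above, the following is a sketch only: the satellite positions, starting widths and the composition of the lmfit model are assumptions, and the Shirley background is taken as already subtracted.

```python
from lmfit.models import VoigtModel

# Four-Voigt Fe 2p model: two main lines at the quoted positions plus
# two satellites (satellite offsets are illustrative values).
model = (VoigtModel(prefix='p32_') + VoigtModel(prefix='p32s_') +
         VoigtModel(prefix='p12_') + VoigtModel(prefix='p12s_'))
params = model.make_params()
for prefix, center in [('p32_', 709.4), ('p32s_', 714.5),
                       ('p12_', 722.6), ('p12s_', 727.7)]:
    params[f'{prefix}center'].set(value=center, min=center - 2, max=center + 2)
    params[f'{prefix}amplitude'].set(value=1.0, min=0)
    params[f'{prefix}sigma'].set(value=0.8, min=0.1)

# energy, counts = ...                       # background-subtracted spectrum
# result = model.fit(counts, params, x=energy)
# print(result.fit_report())                 # FWHMs, centers, areas
```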
The Br 3p doublet is similar to the spectrum reported for Br\({}^{1-}\) in KBr, and the position of the peaks falls in the same range of energies [39]. For further information about the fitting parameters see table S1. Both the Fe 2p and Br 3p spectra have the same shape for the 0.6-ML and the 2.0-ML samples. In contrast to the situation observed for NiBr\({}_{2}\), the XPS spectra of the first layer of FeBr\({}_{2}\) show no additional components that could be interpreted as its partial decomposition [12]. Fitting of the Fe 2p spectra for both samples yielded the same parameters (FWHM and center position). The variation of the FWHM for the Br 3p peaks (see table S1) can be attributed to the different environments of the bottom Br layer, interfaced with the Au(111) surface, and the higher Br layers [40]. The peak ratio between the main and the satellite peaks, as well as the calculated ratio for Br and Fe, stays constant for both samples, supporting the presence of one single stoichiometric phase of FeBr\({}_{2}\) in the ordered layers epitaxially grown on Au(111).

The uniform epitaxial growth was also verified on the mesoscopic scale via LEEM and XPEEM measurements, performed at room temperature. We use the capability of LEEM and XPEEM to provide images with structural and chemical contrast [41] to study the 1.5-ML sample, which was grown in-situ in the preparation chamber of the microscope. The bright-field LEEM image (the image obtained with the specular (00) spot) is shown in Fig. 2 (a). The contrast arises from the difference in the local reflectivity of the film with variable thickness [41]. The image represents one complete layer and large, \(\mu\)m-scale islands of the second layer, in close accordance with the results of the STM measurements (Fig. 1 (d) and S5). For the identification of the layers we make use of the reconstruction characteristic of the first layer of FeBr\({}_{2}\)/Au(111). The pattern acquired for the 1.5-ML sample with an electron energy of 40 eV (Fig. 2 (b)) is a superposition of the complex LEED pattern of Fig. 1 (b) and the hexagonal pattern of Fig. 1 (c), originating from the second layer of FeBr\({}_{2}\). The presence of the first-order Au(111) spots shows that the first layer of FeBr\({}_{2}\) is not perfectly continuous.
To prove that these are indeed the areas occupied with 2 ML of FeBr\({}_{2}\) but not the pure Au(111), BF LEEM (Fig. 2 (d)) and XPEEM (Fig. 2 (e)) images were acquired at the same position. Again, the specular (00)-spot of the LEED pattern was used for the LEEM measurements, therefore the contrast in Fig. 2 (a) and 2 (d) is the same. The XPEEM image shows the local difference in X-ray absorption at the Fe L\({}_{3}\)-edge. The averaged intensities of the bright and dark zones were calculated and represented in Fig. 2 (f) as a function of the X-ray photon energy. A larger absorption peak characteristic of the bright zones in the XPEEM image proves a higher thickness of the FeBr\({}_{2}\) in these areas and consequently in the bright areas of the BF LEEM images (Fig. 2 (a) and 2 (d)). Combining these results with our previous observations, we can conclude that the results of the LEEM/XPEEM characterization corroborate that the growth of FeBr\({}_{2}\) on Au(111) is close to the layer-by-layer mode. Investigation of the atomic arrangement that gives rise to the reconstruction of the first layer of FeBr\({}_{2}\)/Au(111) was performed using LT-STM and LEED. The STM image Figure 2: LEEM and XPEEM images of 1.5 ML of FeBr\({}_{2}\) on Au(111) at room temperature. a) Bright field (BF) LEEM image. b) \(\mu\)-LEED pattern at 40 eV, red circles indicate the Au (111) LEED pattern and the yellow circle marks the spot, belonging to the FeBr\({}_{2}\) superstructure that was used for the dark-field (DF) image. The \(\mu\)-LEED pattern is distorted, since the experiment was performed with the microscope working at 10 kV, energy for which the lenses were not completely aligned in the diffraction mode, to overcome sparks during the experiment. The Au(111) pattern was used as a guide to the eye to correct the distortions. c) DF-LEEM image taken at the same area as the BF image in panel a). d) Bright-field LEEM image in a different area of the sample. e) XPEEM image at the Fe L\({}_{3}\)-edge in the same area as panel d). f) Averaged intensities of the bright and dark areas of the XPEEM image e), as a function of the X-ray photon energy. The XAS spectra are obtained by taking the intensity of the image in certain points of the image. displayed in Fig. 3 (a) shows two islands of FeBr\({}_{2}\) separated by the bare Au(111) surface with the characteristic \(22\times\sqrt{3}\) herringbone reconstruction [42; 43]. Apart of bright dots at the elbows of the herringbone probably associated with initial nucleation of FeBr\({}_{2}\), the Au(111) remains clean and the FeBr\({}_{2}\) grows as compact ordered islands. A zoom-in image in Fig. 3 (b) reveals details of a superstructure in the first layer of FeBr\({}_{2}\)/Au(111) with atomic-resolution. It consists of a triangular net of dark spots with periodicity of \(9.7\pm 0.72\) A that obscure single Br atoms in otherwise flat layer. Interatomic distances in the top-most Br layer were found to be of \(3.66\pm 0.3\) A, in reasonable agreement with the expected monolayer lattice constant calculated by DFT [25] and the bulk value of 3.78 A for FeBr\({}_{2}\)[18] (see also Fig. S4). The angle between the closed-packed directions of Au(111) and the top-most Br plane is \(\sim 5^{\circ}\), meanwhile the angle between the high-symmetry directions of the Au(111) and the superstructure is \(\sim 14^{\circ}\). Fig. 
Fig. 3 (b) shows that the unit vectors c\({}_{1}\) and c\({}_{2}\) of the reconstruction can be represented in terms of the unit vectors b\({}_{1}\) and b\({}_{2}\) of the Br plane as:

\[\begin{pmatrix}c_{1}\\ c_{2}\end{pmatrix}=\begin{pmatrix}2&-1\\ 1&3\end{pmatrix}\cdot\begin{pmatrix}b_{1}\\ b_{2}\end{pmatrix}, \tag{1}\]

where we drop the vector sign. Although the top and the bottom Br planes in the 1T structure of a single FeBr\({}_{2}\) slab are not equivalent, they have the same orientation of the high-symmetry directions, and their lateral positions can be obtained by a rigid shift along the b\({}_{1}\)-b\({}_{2}\) direction. For the sake of clarity, we do not distinguish the top from the bottom Br planes when considering the relative orientation of the Br and Au(111) layers, keeping in mind this relative shift. In Fig. 3 (d), the Au(111) plane is represented by two series of equally spaced parallel lines, crossed at 120\({}^{\circ}\), and the Br plane is displayed as a set of ordered dots with six-fold symmetry. The angle between the close-packed directions of these layers is set to 5\({}^{\circ}\). It is clearly seen that the vectors c\({}_{1}\) and c\({}_{2}\), constructed in accordance with Eq. (1), point to the places of coincidence between the Au(111) and the Br layers. Using the unit vectors a\({}_{1}\) and a\({}_{2}\) of the Au(111) plane, they can be represented in matrix form as:

\[\begin{pmatrix}c_{1}\\ c_{2}\end{pmatrix}=\begin{pmatrix}3&-1\\ 1&4\end{pmatrix}\cdot\begin{pmatrix}a_{1}\\ a_{2}\end{pmatrix}. \tag{2}\]

These points and the equivalent ones are marked with large dark discs in Fig. 3 (d). Calculations presented in the appendix show that exact coincidence requires a rotation of the Br layer by 5.21\({}^{\circ}\) and a lateral expansion of the FeBr\({}_{2}\) by \(\sim 3\%\) with respect to the bulk value. The symmetry of the system requires the existence of FeBr\({}_{2}\) islands rotated by the same angle with respect to the Au(111) but in the opposite direction. This situation is shown in Fig. 3 (f).

Figure 3: (a-b) STM images of 0.6-ML of FeBr\({}_{2}\) at 4.3 K, measured at a) U\({}_{\text{Bias}}\)=1 V and I\({}_{\text{TC}}\)=0.1 nA and b) U\({}_{\text{Bias}}\)=1 mV and I\({}_{\text{TC}}\)=1 nA. Brown arrows indicate the close-packed Au [1\(\bar{1}\)0] and equivalent directions. The superstructure unit cell (light blue arrows) and the hexagonal Br lattice unit cell (black lines) are rotated with respect to the substrate high-symmetry directions. c) LEED pattern of a 0.6-ML sample measured at 43 eV, partially overlaid with the simulated pattern. The blue and red spheres represent the two rotational domains of the superstructure. The green circle marks one of the spots belonging to the FeBr\({}_{2}\) hexagonal pattern. d) Relative orientation of the Au(111) and rotated Br atomic nets. The lines represent at each crossing point the position of a Au atom, the blue dots represent the Br atoms, and the dark discs mark the coincidence superstructure (a, b and c are used to designate the respective unit cell vectors). e) Simulated LEED pattern of Au(111) and of two symmetric domains of the coincidence superstructure. f) The same as in d), but with the Br net rotated in the opposite direction.
The representation of the unit vectors \(\tilde{c}_{1}\) and \(\tilde{c}_{2}\) of the coincidence points in terms of the unit vectors of the Br plane, \(\tilde{b}_{1}\) and \(\tilde{b}_{2}\), as well as the unit vectors of the Au(111), a\({}_{1}\) and a\({}_{2}\), reads:

\[\begin{pmatrix}\tilde{c}_{1}\\ \tilde{c}_{2}\end{pmatrix}=\begin{pmatrix}3&1\\ -1&2\end{pmatrix}\cdot\begin{pmatrix}\tilde{b}_{1}\\ \tilde{b}_{2}\end{pmatrix}=\begin{pmatrix}4&1\\ -1&3\end{pmatrix}\cdot\begin{pmatrix}a_{1}\\ a_{2}\end{pmatrix}. \tag{3}\]

Fig. 3 (e) shows a simulation of the LEED pattern by means of the LEEDpat software [44]. We used Au(111) as the substrate, and the coincidence points visible in the STM image (Fig. 3 (b)) as the dark dots were represented by artificial overlayers. The Au(111) unit cell was taken to be 2.86 Å. The overlayers were defined using the matrix relations Eq. 2 and Eq. 3, respectively. The results of the simulation for each domain are shown in the top-left and top-right quarters of Fig. 3 (e), while the bottom half of the figure shows a superposition of both patterns. We can clearly see the characteristic twelve-point circles around the central and the first-order Au(111) spots observed in Fig. 1 (b) and 2 (b). A 43-eV LEED pattern taken for the 0.6-ML FeBr\({}_{2}\)/Au(111) sample is shown in Fig. 3 (c). Half of the image is overlaid by the simulated pattern of the coincidence superstructure. An additional point highlighted with a light green circle belongs to the hexagonal pattern of the top-most Br plane (the corresponding pattern in Fig. 1 (b) is marked with a blue hexagon). Since the Br lattices in the two different domains are rotated by \(\pm 5^{\circ}\) with respect to the gold lattice, the respective LEED patterns are also rotated by the same angle, but in Fig. 3 (c) they cannot be resolved as two separate hexagons and appear as broadened spots. Taking into account the distortion of the peripheral part of the LEED image, intrinsic for instruments with flat channel plates, we can conclude that the simulations reproduce the experimental LEED pattern.

The dark spots observed in the STM image (Fig. 3 (b)) probably represent some sort of defects that are situated at the coincidence points of the FeBr\({}_{2}\) and Au(111) planes, or they can be a pure electronic effect arising due to the interaction between the Br and Au atoms at the interface. Defects in the isostructural compound FeCl\({}_{2}\), which were simulated [45] and studied experimentally [38], have a different appearance. We have observed similar objects randomly distributed within the first layer of FeBr\({}_{2}\)/Au(111) (see Fig. S4 and S6 (b)). Although we were unable to measure the bandgap, our STS data (Fig. S6 (a)) show that a monolayer of FeBr\({}_{2}\)/Au(111) is a semiconductor with the conduction-band (CB) minimum situated at 0.4 eV with respect to the Fermi level. Therefore, the screening of the charge imbalance that would be the consequence of an atomic vacancy would be impeded, and such a defect would affect the electronic state of the surrounding atoms. Indeed, this effect was visualised in the STM image (Fig. S6 (b)) and the corresponding conductance map, displayed in Fig. S6 (c). Furthermore, if these spots were defects, we would expect some random imperfections in their ordered structure, unavoidable in any real system, which were never seen in our STM data.
Therefore, we believe that these dark spots arise due to a peculiar interaction between Br and Au in a certain coordination that occurs at the coincidence points of the Br and Au planes at the interface between FeBr\({}_{2}\) and Au(111).

### Magnetic properties

The magnetic properties of the in-situ-grown single- and double-slab films were measured via XAS/XMCD using circularly polarized synchrotron X-ray radiation. White-line (average of the spectra with left and right polarisation) absorption spectra at the Fe L\({}_{3}\) edge, aligned to the maximum of the peak, and the respective XMCD spectra are shown in Fig. 4. The structure of the XAS peak closely resembles the Fe L\({}_{3}\) XAS spectrum measured for FeCl\({}_{2}\) [46], which was attributed to the Fe\({}^{2+}\) oxidation state [47; 48] (see also Fig. S9). It varies neither with thickness nor with temperature, which confirms the observation from the analysis of the XPS data that the films are uniform, single-phase and contain Fe\({}^{2+}\) ions in the same coordination.

XMCD magnetization curves measured for different thicknesses at 2 K in two different geometries, normal incidence (NI) and grazing incidence (GI), are shown in Fig. 5 (a-b). Since they were measured in total electron yield (TEY) mode, the curves have artifact spikes around 0 T, which were removed. The loops are normalized to the Fe L\({}_{3}\) peak height of the respective white-line (2 K, NI or GI) spectra, and therefore the intensity values are proportional to the projection of the thermal average of the magnetic moment per Fe atom onto the x-ray beam direction at a 6 T field. The corresponding XMCD spectra are displayed as the insets in Fig. 5 (a-b). The loops do not show any field hysteresis, and the magnetization vanishes close to zero field, discarding a simple collinear ferromagnetic ordering at that temperature.

It was demonstrated in the previous section that the growth of FeBr\({}_{2}\) is close to the layer-by-layer mode. Therefore the 0.6-ML sample comprises mainly one-layer-thick islands, while the 1.5-ML and 2.0-ML samples consist of the complete first slab and islands of the second and, in minor proportion, of the third slab (see also the STM images of the 1.5-ML and 2.0-ML samples acquired at the beamline, Fig. S5). It is clearly seen from the loops in Fig. 5 that the expectation value of the magnetization at 2 K and 6 T along the beam direction is substantially lower in the first layer than in the thicker films. Different sub-ML samples were grown directly at the beamline and compared to the sample transferred via the suitcase. All samples, including the ones grown directly at the beamline, showed a strongly reduced magnetization; we can therefore exclude that the reduced magnetization is a result of contamination. Sum-rule analysis of the spectra yields values of the spin magnetic moment close to 1 \(\mu_{B}\) for the 0.6-ML sample and about 2 \(\mu_{B}\) for the 1.5-ML and 2.0-ML samples (see Table 1). Since the data for the 0.6-ML sample are representative of the first slab, and the data for the thicker films are the weighted average of the moments of the different slabs [12], this implies that the magnetization in the first slab is lower than in the next slabs. We can quantify this by supposing that the moments in all slabs except the first one are the same.
Representing the moment for the 1.5-ML sample, which comprises \(\frac{2}{3}\) of the first slab and \(\frac{1}{3}\) of the higher layers, as:

\[m_{S,\mathrm{eff}}(1.5\,\mathrm{ML})=\frac{2}{3}\cdot m_{S,\mathrm{eff}}(0.6\,\mathrm{ML})+\frac{1}{3}\cdot x,\]

where \(x\) stands for the moment of the higher layers, and using the values of the spin magnetic moment for NI from Table 1, we obtain \(x=3.3\)\(\mu_{B}\). The corresponding calculation for the 2.0-ML sample, which comprises \(\frac{1}{2}\) of the first slab and \(\frac{1}{2}\) of the higher layers, yields \(x=3.2\)\(\mu_{B}\) (a short numerical sketch of this decomposition is given below). Taking into account that both the sum-rule analysis and the thickness estimation have an error of about 10%, and that the magnetic saturation of the sample seems not fully attained, we can reasonably assume that the Fe spin magnetic moment of the second and higher layers of FeBr\({}_{2}\) is close to the nominal value of 4 \(\mu_{B}\) predicted by Hund's rule [22; 23].

\begin{table}
\begin{tabular}{c|c|c c c c}
\hline
\multirow{2}{*}{ML} & \multirow{2}{*}{T (K)} & \multicolumn{4}{c}{\(\mu\) (\(\mu_{B}\)/Fe atom)} \\
\cline{3-6}
 & & \multicolumn{2}{c}{NI} & \multicolumn{2}{c}{GI} \\
\cline{3-6}
 & & \(m_{S,\mathrm{eff}}\) & \(m_{L}\) & \(m_{S,\mathrm{eff}}\) & \(m_{L}\) \\
\hline
0.6 & 2 & 1.13 & 0.30 & 1.05 & 0.36 \\
\hline
1.5 & 2 & 1.814 & 0.60 & 2.03 & 0.45 \\
\hline
2.0 & 2 & 2.15 & 0.70 & 1.91 & 0.47 \\
\hline
\end{tabular}
\end{table}
Table 1: Magnetic moments calculated by means of the sum rules from XMCD spectra obtained at T=2 K and B=6 T. The magnetic moments (\(\mu\)) are divided into two sections, for NI and GI. The error for each magnetic moment is \(\pm 10\%\). More details about the sum-rule analysis procedure and the extended version of the data (Table S4) are available in the supplementary information.

Figure 4: a) White-line XAS spectra for different thicknesses measured at the Fe L\({}_{3}\)-edge, 6 T, 2 K and NI. The spectra are shifted along the energy axis to align the positions of the Fe L\({}_{3}\) peak maxima. Further information about the shift corrections is available in the supplementary information, Fig. S8. b) XMCD spectra of 2.0-ML FeBr\({}_{2}\) on Au(111) measured at 6 T and NI for different temperatures. The inset is a zoomed-in version of the L\({}_{3}\) region.
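As a quick numerical cross-check of the decomposition above (a sketch using the NI values of Table 1; the small offsets from the quoted 3.3 and 3.2 \(\mu_{B}\) are consistent with rounding and the \(\sim\)10% sum-rule uncertainty):

```python
# Solve m_avg = w1 * m_first + (1 - w1) * x for the higher-layer moment x
m_first = 1.13                                   # 0.6-ML sample, NI, 2 K
samples = {"1.5 ML": (2 / 3, 1.814), "2.0 ML": (1 / 2, 2.15)}
for name, (w1, m_avg) in samples.items():
    x = (m_avg - w1 * m_first) / (1 - w1)
    print(name, round(x, 2))                     # -> 3.18 and 3.17 mu_B
```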
S10 shows the same spin magnetic moments of the 2.0-ML sample together with curves calculated using the Brillouin function. We observe a slope change of the \(m_{Seff}\) vs T curve at \(\sim 7\) K, in contrast to the behavior expected from a paramagnetic system with S = 2, supporting the presence of magnetic correlations in the 2.0-ML sample. The magnetic behaviour of the single-layer FeBr\({}_{2}\) is distinctly different. Low values of the magnetic moments in an external field of 6 T as compared to the thicker films Figure 5: Comparison of the XMCD magnetization loops for 0.6-ML, 1.5-ML and 2.0-ML FeBr\({}_{2}\)/Au(111) films measured at 2 K at: a) normal incidence (NI) and b) grazing incidence (GI) at the Fe L\({}_{3}\)-edge. The magnetization curves are normalized to the respective white line (averaged (\(+\sigma\)+\(-\sigma\))/2) XAS peak height. The insert shows the corresponding Fe L\({}_{3}\) XMCD peaks at 6 T and 2 K normalized to the isotropic XAS edge jump for all three samples. The magnetization loops are normalized to the maximum intensity at 6 T and multiplied by the XAS edge-jump height of the L\({}_{3}\)-edge. Calculated effective spin moment from the spectra of the 2.0-ML sample at 6 T and NI (c) and it’s inverse (d) for different temperatures. The hollow blue circles are representing the data, which were used for the linear fit of the high temperature regime (17-100 K\(>T_{\rm Critical}\)). The excluded data points are displayed in red. A slope of \(0.024\pm 1\cdot 10^{-3}\) and a linear intercept of \(0.21\pm 0.05\) (\(\chi^{2}=0.01\)) gives the estimation of the paramagnetic Curie temperature of \(-10\pm 1\) K. The chosen temperature range for the performed fit was based on the fact that the lowest used temperature is far away from saturation of a Brillouin function in a J=2 system. At 17 K x would be 1.9 (not in saturation see Fig. S11). (Table 1) and shallow magnetization loops in both NI and GI directions (Fig. 5) implies a magnetic order that is neither paramagnetic nor ferromagnetic. Figure S10 demonstrates that fitting of the NI 0.6-ML sample's loop to the Brillouin function does not yield satisfactory results. The paramagnetic curve at 2 K is steeper than the experimental loop and has a smaller tangent in the high-field regions. Including out-of-plane anisotropy in the model of the paramagnetic system would make it's loop even steeper. Therefore, a mere lack of magnetic order cannot explain the observed magnetic properties of a single layer of FeBr\({}_{2}\). STM data show that 0.6-ML FeBr\({}_{2}\)/Au(111) sample comprises single layer islands with lateral size of \(\sim 100\) nm (see Fig. 3). These islands are large enough to neglect the effect of thermal excitations at 2 K and to discard superparamagnetic behaviour. Although recent neutron diffraction data unveiled some clues of a non-collinear magnetic order in bulk FeBr\({}_{2}\)[49; 50], neither these nor older works [21] showed antiferromagnetic order within the layers of FeBr\({}_{2}\). At the same time, the intralayer exchange coupling to the nearest neighbour \(J_{1}\) was found to be of different sign with respect to the next-nearest neighbour exchange coupling constant \(J_{2}\)[49]. These competing interactions lead to frustration, which, according to theoretical calculations [51], can result in complex magnetic textures. The phase diagram presented in [51] demonstrates that the magnetic structure varies with the change of the \(J_{1}/J_{2}\) ratio or due to modification of the anisotropy. 
Since the coincidence superstructure observed in the first layer of FeBr\({}_{2}\)/Au(111) causes a lateral expansion of the FeBr\({}_{2}\) crystal lattice by \(\sim 3\%\), and the superexchange interaction depends strongly on the angle between the Fe-Br-Fe bonds, the resulting variation of the \(J_{1}/J_{2}\) ratio can be sufficient to stabilize one of the magnetic textures predicted in [51]. In summary, among the different possible reasons for the distinctive magnetic behaviour observed in the single layer of FeBr\({}_{2}\), we believe that the most plausible explanation is the formation of a non-collinear magnetic texture due to frustration.

### Summary

Thin films (sub-ML to 2.0 ML) of FeBr\({}_{2}\) were grown epitaxially on a single-crystal Au(111) substrate in UHV via sublimation of the stoichiometric powder compound from a Knudsen cell. Thorough characterization performed by means of XPS and XAS/XMCD spectroscopy, as well as via surface-sensitive LEED and LT-STM, LEEM and XPEEM, shows that FeBr\({}_{2}\) maintains its stoichiometric chemical composition and the same crystal structure down to the single-layer limit. The growth of the films is close to the layer-by-layer mode. The first layer of FeBr\({}_{2}\)/Au(111) demonstrates an atomic reconstruction pattern due to the coincidence of the \(\pm 5^{\circ}\)-rotated bottom Br and top-most Au(111) atomic planes. This reconstruction causes a \(\sim 3\%\) lateral expansion of the FeBr\({}_{2}\) lattice cell. XMCD measurements reveal thickness-dependent magnetic properties of the FeBr\({}_{2}\). While the saturation magnetization of the second and higher layers is comparable to the bulk values and its temperature behaviour shows some clues of magnetic ordering, the magnetic properties of the single layer of FeBr\({}_{2}\) were found to be distinctly different. The shallow magnetization loops at 2 K, the lack of saturation and the low magnetization in fields up to 6 T are attributed to magnetic frustration, characteristic of the triangular net of the magnetic Fe atoms. These findings open the prospect for further investigation of monolayers of the 2D magnetic transition metal dihalides. In contrast to the trihalide family, which features a honeycomb arrangement of the magnetic atoms within the 2D layers, the triangular nets of magnetic atoms in TMDH are prone to frustration, which leads to a degeneracy of the magnetic ground state and potentially may result in a stronger response towards external stimuli. This quality might result in a rich variety of interesting physical phenomena and opens a way for using 2D magnetic TMDH compounds in applications.

## Acknowledgment

C.G.-O. and M.P.-D. acknowledge funding of the Ph.D. fellowship from the MPC Foundation. S.E.H. thanks the whole AG Kuch and in particular J. Gordes for help during the BESSY measurements. He is also very thankful to the local IT team, electronics workshop and fine-mechanics workshop for their continuous support. J.N. thanks the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for his funding under the project 277101999 - CRC 183. S.T. acknowledges financial support by the BMBF through project VEKMAG (BMBF 05K19KEA). P.G. acknowledges funding from PID2020-116181RB-C32 and FlagErasSOgraphMEM PCI2019-111908-2 (AEI/FEDER). D. G. O. acknowledges funding by the Spanish MCIN/AEI/ 10.13039/501100011033 (PID2019-107338RB-C63). C. R., M. I., C.G.-O. and M.P.-D.
acknowledge the funding by the European Union's Horizon 2020 research and innovation programme (grant agreement No 800923), the Spanish MCIN/AEI/ 10.13039/501100011033 (PID2020-114252GB-I00, PID2019-107338RB-C63, TED2021-130292B-C42), the Basque Government IT1591-22, and the IKUR Strategy under the collaboration agreement between the Ikerbasque Foundation and MPC on behalf of the Department of Education of the Basque Government.

## Supplementary information

### FeBr\({}_{2}\) crystal structure

In Fig. S1 the structure of the octahedral FeBr\({}_{2}\) is displayed from the top and from the side. The system has in-plane lattice constants \(a=b=3.776\) Å and a c-lattice constant of 6.558 Å. The red spheres represent the Fe atoms and the blue spheres the Br atoms. On the lower right side the 1T structure of FeBr\({}_{2}\) is displayed with the typical 180\({}^{\circ}\)-rotated Br planes. The visualization of the crystal structure of FeBr\({}_{2}\) was done via VESTA [53].

### XAS-based thickness calibration

To calculate the thickness of the samples, we first need to calculate the areal density of Fe in FeBr\({}_{2}\). The thickness calibration is then based on Ref. [54]. Therefore, we use the BL LEED image and construct a triangle between the (0,0) spot and two other spots of the FeBr\({}_{2}\) hexagon. We calculate the area of the triangle using the FeBr\({}_{2}\) lattice constant of 3.776 Å. Here we use the fact that the Br-Br distance is the same as the Fe-Fe distance in the case of a 1T structure [16]. The number of Fe atoms inside the triangle is 0.5 (each corner of the triangle contributes 1/6 of an Fe atom). We obtain an areal density of Fe atoms of 8.098 \(\frac{\text{atoms}}{nm^{2}}\). To calculate the coverage of the sample, we now use the Fe L\({}_{3}\)-edge XAS spectrum, which was measured for the sub-ML sample at RT with linearly polarized light under the magic angle of 55\({}^{\circ}\) (see Fig. S2). The intensity difference between the L\({}_{3}\) peak and the pre-edge is around 11% and will be defined as the peak height. By comparing with the areal density calculated for the Fe-based molecule in that paper, we obtain a conversion factor of 9.87 (\(\frac{8.10\ \text{(FeBr}_{2})}{0.82\ \text{(Fe molecule)}}\)). The Fe L\({}_{3}\) peak measured for the Fe-based molecule had a peak height of 1.2% for a 0.8-ML-thick sample, which would result in a peak height of 12% for 0.8 ML of FeBr\({}_{2}\) using the above conversion factor. Since the measured peak height in our case is lower (11%), we have an approximate coverage of 0.7 ML. This means that the VEKMAG BL sample should be around 2.4 ML thick (the evaporation time for the BL was scaled by a factor of 3.4 from the sub-ML data). However, from the thickness calculations we obtain 2.9 ML, which could be related to a different measurement position or to additional island growth before one complete layer is finished.

### STM pattern angle calculation

From Fig. S3 we can calculate the vectors, lengths, and the corresponding angles for the relations between the substrate and the superstructure, as well as between FeBr\({}_{2}\) and the superstructure. In Fig. S3 the top-most Br layer is overlaid with the substrate Au(111). In blue the upper Br plane is displayed, which is on top of the Au(111) substrate (red). The black lines are a guide to the eye for the superstructure, which appears at the coincidence points between the Br layer and the Au(111) substrate. The case displayed in Fig. S3 corresponds to the \(-5^{\circ}\) case in Fig. 3 (d).
In the following, the angles and vectors are calculated step by step. The lattice vector of the superstructure in units of the substrate lattice is the following:

\[\vec{a}=\begin{pmatrix}\frac{7}{2}\\ \frac{\sqrt{3}}{2}\end{pmatrix} \tag{4}\]
\[|\vec{a}|=\sqrt{\frac{49}{4}+\frac{3}{4}}=\sqrt{13} \tag{5}\]
\[\alpha=\arctan\left(\frac{\sqrt{3}}{7}\right)\approx 13.9^{\circ} \tag{6}\]

The superstructure vector is obtained by moving 3.5 Au lattice spacings in the positive x direction and \(\frac{\sqrt{3}}{2}\) in the positive y direction (illustrated in Fig. S3 by the red vertical line). For the relation between the FeBr\({}_{2}\) lattice and the superstructure, the following is obtained:

\[\vec{b}=\begin{pmatrix}\frac{5}{2}\\ \frac{\sqrt{3}}{2}\end{pmatrix} \tag{7}\]
\[\left|\vec{b}\right|=\sqrt{\frac{25}{4}+\frac{3}{4}}=\sqrt{7} \tag{8}\]
\[\beta=\arctan\left(\frac{\sqrt{3}}{5}\right)\approx 19.1^{\circ} \tag{9}\]

The superstructure vector is obtained by moving 2.5 Br lattice spacings in the positive x direction and \(\frac{\sqrt{3}}{2}\) in the positive y direction (illustrated in Fig. S3 by the blue vertical line). To identify the angle between the substrate and FeBr\({}_{2}\), the scalar product between the vectors \(\vec{a}\) and \(\vec{b}\) needs to be calculated:

\[\vec{a}\cdot\vec{b}=\begin{pmatrix}\frac{7}{2}\\ \frac{\sqrt{3}}{2}\end{pmatrix}\cdot\begin{pmatrix}\frac{5}{2}\\ \frac{\sqrt{3}}{2}\end{pmatrix} \tag{10}\]
\[\left|\vec{a}\right|\cdot\left|\vec{b}\right|\cdot\cos(\gamma)=\sqrt{13}\cdot\sqrt{7}\cdot\cos(\gamma) \tag{11}\]
\[\gamma=\arccos\left(\frac{19}{2\sqrt{91}}\right)\approx 5.21^{\circ} \tag{12}\]

This results in an angle of \(5.21^{\circ}\) between the substrate and FeBr\({}_{2}\). Furthermore, a lattice mismatch can be identified, which causes a strain effect on the FeBr\({}_{2}\). The expected lattice-constant ratio between FeBr\({}_{2}\) and Au(111) (surface lattice constant) would be \(\frac{3.776}{2.86}=1.32\). However, the calculations reveal a ratio of \(\sqrt{\frac{13}{7}}=1.36\), which means that the FeBr\({}_{2}\) lattice constant is increased by 3% with respect to the theoretical lattice value and by 6% with respect to the measured value. This value is still in good agreement with the expected ratio and also within the error range of the measured value (a short numerical check of these numbers is given after the next subsection).

### STM - Interatomic distances

In Fig. S4, exemplary images from which we calculated the interatomic distance of FeBr\({}_{2}\) (Fig. S4 (a)) and the superstructure (Fig. S4 (b)) are shown. The software used was Gwyddion. To calculate the interatomic distance, we draw a line between two spots of the superstructure. The drawn line covers 6 atoms. The resulting line profiles (insets of Fig. S4 (a) and (b)) are used to determine the distances. The interatomic distance was obtained as the average over 50 line profiles. As a result we obtained an average value with standard deviation of \(3.66\pm 0.04\) Å. As a systematic error of the measurements we assume a relative error of 7% (0.26 Å). This error is based on the atomic-resolution Au(111) STM file (see Fig. S4 (c)). The herringbone reconstruction is also visible in Fig. S4 (c) in the background color. The superstructure distance was calculated as the average over 35 line profiles. As a result we obtained a value of \(9.69\pm 0.05\) Å. As a systematic error of the measurements we assume a relative error of 7% (resulting in an error of 0.67 Å).
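As a quick stand-alone check, the rotation angle, the coincidence-lattice expansion and the areal Fe density quoted above (and used for the XAS thickness calibration) can be reproduced numerically:

```python
import numpy as np

a_au = 2.86        # Au(111) surface lattice constant (Angstrom)
a_febr2 = 3.776    # bulk FeBr2 lattice constant (Angstrom)

# Eqs. (4)-(12): superstructure vectors in substrate / Br lattice units
a_vec = np.array([7 / 2, np.sqrt(3) / 2])
b_vec = np.array([5 / 2, np.sqrt(3) / 2])
gamma = np.degrees(np.arccos(a_vec @ b_vec /
                             (np.linalg.norm(a_vec) * np.linalg.norm(b_vec))))
print(round(gamma, 2))                           # -> 5.21 degrees

# lattice-constant ratio imposed by the coincidence condition vs bulk ratio
ratio = np.sqrt(13 / 7)                          # -> 1.36
expansion = ratio / (a_febr2 / a_au) - 1         # -> ~3 % lateral expansion
print(round(ratio, 2), round(100 * expansion, 1))

# areal Fe density for the XAS calibration: 0.5 Fe atom per triangle
triangle = np.sqrt(3) / 4 * (a_febr2 / 10) ** 2  # triangle area in nm^2
print(round(0.5 / triangle, 3))                  # -> 8.098 atoms / nm^2
```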
### STM - Interatomic distances

In Fig. S4, exemplary images from which we calculated the interatomic distance of FeBr\({}_{2}\) (Fig. S4 (a)) and of the superstructure (Fig. S4 (b)) are shown. The software used was Gwyddion. For calculating the interatomic distance, we draw a line between two spots of the superstructure. The drawn line covers 6 atoms. The resulting line profile (insets of Fig. S4 (a) and (b)) is used to determine the distances. The interatomic distance was obtained as the average over 50 line profiles. As a result we obtained an average value with standard deviation of \(3.66\pm 0.04\) Å. As a systematic error of the measurements we assume a relative error of 7% (0.26 Å). This error is based on the atomic-resolution Au(111) STM image (see Fig. S4 (c)). The herringbone reconstruction is also visible in Fig. S4 (c) through the background color. The superstructure distance was calculated as the average over 35 line profiles. As a result we obtained a value of \(9.69\pm 0.05\) Å. As a systematic error of the measurements we assume a relative error of 7% (resulting in an error of 0.67 Å).

### STM measurements at the BOREAS beamline

The STM measurements were performed directly at the BOREAS beamline at 77 K, before measuring XAS and XMCD. In Fig. S5, the evaporated 1.5-ML sample shows the start of the growth of the second and third layer (triangular-shaped islands) on top of the ML. We can also observe carpeting effects at the different heights. The red rectangle indicates the region of a defect, where we can see the layer underneath the ML sample. From the STM measurements it is visible that with increasing thickness the top layers grow simultaneously with the layers underneath.

### STS measurements

The STS measurements in Fig. S6 were performed at 4 K on the same sub-ML sample as displayed in Fig. 3 (a-b). From the measured dI/dV spectra it was not possible to determine the bandgap. The data reveal that the sub-ML sample is semiconducting, with a CB onset at 0.4 eV with respect to the Fermi level.

### XPS - Fit parameters for sub-ML and BL samples

In table S1 the fit parameters for the sub-ML and BL samples are shown. The samples were grown in the same chamber under the same conditions. After preparation of the sub-ML sample, the substrate was sputtered and annealed. The XPS measurements were performed using a pass energy of 30 eV with the lens setting of medium area at 1.5 kV and an analyzer work function of 4.309 eV. The Al anode with an excitation energy of 1486.61 eV was used, and the analyzer has an energy resolution of 0.1 eV. The number of scans was kept constant for each element. To check the sample thickness and how it changed the attenuation, we used the element-specific maximum of the Fe 2p3/2 peak and calculated the height relative to the post-edge. We calculated the height for the background-subtracted and raw data; for both data sets the same result was obtained. The BL sample shows a stronger Fe and Br signal and a weakened Au signal. In table S2 we see that for the thicker sample the Au peak is more attenuated. The BL sample needed to be shifted by 0.2 eV to negative binding energies to match the peak position of the sub-ML sample. The corresponding LEED images for the sub-ML and BL samples are shown in Fig. 1 (b-c). The peak height is not related to the atomic ratio of Fe and Br in FeBr\({}_{2}\); the chemical stoichiometry stays the same for both samples. The theoretical area ratio for a 2p core-level peak is 2, and from the fit we obtain 1.9 for the sub-ML and 1.7 for the BL. The difference is caused by the unfitted multiplet splitting of the Fe\({}^{2+}\) HS state, because we do not have the energy resolution to fit it. The satellite FWHM is around 1.8 times larger than the core-level FWHM, in good correspondence with [36]. The Fe to Br ratio was checked using the areas of the Fe main peaks (excluding the satellite peaks) and the Br main peaks:

\[R=\frac{A_{Fe2p}/S_{Fe2p}}{A_{Br3p}/S_{Br3p}} \tag{13}\]

R is the ratio between Fe and Br and S is the element-specific sensitivity factor (atomic sensitivity factor) for an angle of 55\({}^{\circ}\) (\(S_{Fe2p}=2.957\) and \(S_{Br3p}=1.279\)) [39, 56]. As a result we obtained a ratio of 0.56 for the sub-ML sample and 0.58 for the BL sample. This is in good agreement with the expected ratio of 0.5 for FeBr\({}_{2}\). Using a different sensitivity factor from the Wagner paper, the values are slightly different for the bulk approximation (R=0.49) and strongly different for the surface approximation (R=0.34) [57].
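As an illustration of Eq. (13), a minimal sketch follows. The peak areas below are hypothetical placeholders (the measured areas are not quoted in the text); only the sensitivity factors for 55\({}^{\circ}\) are taken from above:

```python
# Sensitivity factors for 55 degrees, from [39, 56]
S_FE2P = 2.957
S_BR3P = 1.279

def fe_br_ratio(area_fe2p: float, area_br3p: float) -> float:
    """Sensitivity-corrected Fe/Br intensity ratio of Eq. (13);
    the expected value for stoichiometric FeBr2 is ~0.5."""
    return (area_fe2p / S_FE2P) / (area_br3p / S_BR3P)

# Hypothetical main-peak areas (satellites excluded), arbitrary units:
print(f"R = {fe_br_ratio(area_fe2p=1.30, area_br3p=1.00):.2f}")
```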
The surface approximation is not valid for the measured XPS because the resolution is not good enough to distinguish between surface and bulk peaks, and the bulk phase is the more dominant one. For this sample, an ML consists of two Br planes sandwiching the transition-metal plane (Fe). Using the sensitivity factors from Ref. [58], we obtain a ratio of 0.61. The sensitivity factors calculated in that paper all have a 10% error. The value of 0.61 is also in good agreement with the expected ratio of 0.5 for FeBr\({}_{2}\). Another contribution to the imperfect ratio of 0.61 could be the use of a non-monochromatic x-ray gun, which results in doublet peaks where the extra peaks are displaced towards lower binding energies by 9.8 eV with an intensity of 6.4% of the real peaks.

### Sample thickness approximated by the integrated averaged XAS spectra

To calibrate the sample thicknesses, we use the value of the integrated isotropic XAS spectra normalized to the pre-edge (r value in the table), as well as the average XAS peak height (PH \(=max(\frac{\sigma^{+}+\sigma^{-}}{2})\)). The reference sample for the thickness calibration is a 0.7 ML sample (Fig. S2). The thickness is calculated as the average of both geometries (NI and GI). An ML sample has an r-factor of 0.5 and a PH value of 0.2. The only sample for which the r-factor and the XAS height do not give a nearly equal signal is the 2.9 ML one. The 2.9 ML value was calculated from the NI signal; for GI we obtain a sample thickness of 5.0 ML. The reason could be that at GI a thicker amount of the material was probed: due to island growth, thicker regions could have formed that were measured at GI. For the VEKMAG measurements we used a focus/beam spot size of 0.8 mm \(\times\) 0.8 mm. The error of the calculated coverages is approximated by comparing the NI and GI values. Besides the BL, which was measured at BESSY, the systematic error is around \(\pm\)0.4 ML, which is due to uncertainties and observations from STM.

### XAS and XMCD shift corrections

In Fig. S7 the beamline-dependent energy-corrected XAS and XMCD spectra are displayed. To overlay the BESSY spectra with our ALBA data, we shifted the spectra of the 0.7-ML and 2.9-ML samples by around 1.6 eV. The comparison signal used to align the data is the ALBA sub-ML signal. The BESSY data were acquired at 10 K and the ALBA data at 2 K. The measurement point density used at ALBA is around 3 times higher than the one at BESSY. All the measurements took place at NI. The fact that the XMCD signals for the 2.0- and 2.9-ML samples are equal could be caused by the temperature difference. The BOREAS data were also energy-corrected with the sub-ML spectra as a reference. The 2.0-ML sample was corrected by 0.42 eV and the 1.5-ML one by 0.61 eV to higher energies. This shifting could be caused by two different effects. On the one hand, since the thinnest sample (sub-ML) was measured in a different experiment, one year before measuring the rest, the beamline energy calibration may have changed during this time. On the other hand, with increasing thickness the material becomes more insulating, changing the gap and therefore the final states. Technically, the shifts originate from the monochromator movement and from the work-function change in TEY.

### XAS - Fe\({}^{2+}\)

The sample only shows a single stoichiometry (Fe\({}^{2+}\)).
In Fig. S9 the averaged signals of the 0.6-ML, 1.5-ML and 2.0-ML samples are shown (full range of the L\({}_{3}\) and L\({}_{2}\) edges). By averaging both polarizations \(\sigma^{+}\) and \(\sigma^{-}\), the XAS spectra without magnetic contributions can be calculated. Fig. S9 shows the shift-corrected data for the different thicknesses; the black lines indicate the positions of the main peaks (L\({}_{3}\) and L\({}_{2}\) edge). The red lines represent the side peaks which are known for an Fe\({}^{2+}\) state with octahedral symmetry [54; 59; 60; 61; 62]. In Fig. S9 the shift-corrected BOREAS measurements (0.6 to 2.0 ML) are displayed.

### Sum-rule analysis of the XMCD data

The values for the effective spin and orbital magnetic moments were obtained from the areas of the \(L_{3}\) and \(L_{2}\) peaks of the XMCD spectra via the sum rules [63; 64; 65]:

\[m_{s,eff}=-\frac{A_{L_{3}}-2\cdot A_{L_{2}}}{A_{\text{Average}}}\cdot N_{h}\cdot\frac{1}{\sigma} \tag{14}\]
\[m_{l}=-\frac{2}{3}\cdot\frac{A_{L_{3}}+A_{L_{2}}}{A_{\text{Average}}}\cdot N_{h}\cdot\frac{1}{\sigma} \tag{15}\]

where \(N_{h}\) is the number of holes (= 4 for Fe\({}^{2+}\) in FeBr\({}_{2}\)), and \(A_{L_{3}}\), \(A_{L_{2}}\) and \(A_{\text{Average}}\) are the areas of the XMCD \(L_{3}\) and \(L_{2}\) regions and the total area of the isotropic XAS, respectively. \(\sigma\) represents the degree of circular polarization, which is beamline-dependent. The \(m_{s,eff}\) value calculated via equation (14) is the effective spin magnetic moment, the sum of the actual spin moment plus the magnetic dipole term, \(\frac{7}{2}T_{Z}\). The background correction was performed using asymmetrically reweighted penalized least-squares smoothing [66]. In table S4 the magnetic moment values obtained from the sum-rule analysis for the different samples are displayed. The BOREAS data were measured using a 6 T field and switching the polarization, and the VEKMAG data by keeping the polarization fixed and ramping the field (\(\pm\) 6 T).

### Brillouin function for the combined magnetic moment

In Fig. S10 (b) the Brillouin function is overlaid with the measured magnetization curve of the 0.6-ML sample. The fitted Brillouin functions do not match the measured results; therefore the sample behaviour is not paramagnetic. In Fig. S10 (a) we compare experimental data for a coverage of 2.0 ML as a function of temperature with fits of a Brillouin function. The XMCD loops and the Brillouin functions are normalized to the corresponding magnetic moment of the different samples. For the fit we assumed g = 2 and T = 2 K. As fitting parameters we used J and N. For the temperature-dependent fit we used a constant field of \(B=6\ T\). The fitting parameter N had the boundaries \(1-10\) and J ranged from 0 to 2. The fit function is:

\[\begin{split} M&=N\cdot g\cdot J\cdot\left(\frac{2\cdot J+1}{2\cdot J}\cdot\coth\left(\frac{2\cdot J+1}{2\cdot J}\cdot x\right)\right.\\ &-\left.\frac{1}{2\cdot J}\cdot\coth\left(\frac{1}{2\cdot J}\cdot x\right)\right)\end{split} \tag{16}\]

In Fig. S10 (a) the total magnetic moment data is fitted with two Brillouin functions for different temperature ranges. The blue fit was performed by excluding the experimental data at lower temperatures (2-10 K). The orange fit is based on the full temperature range starting from 2 K. We see that the decay of magnetization with temperature is always steeper for the Brillouin function, which points towards ferromagnetic interactions in the experimental data.
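A fit of Eq. (16) can be sketched with scipy as below. The argument \(x\) is assumed to be the usual Brillouin argument \(x=gJ\mu_{B}B/(k_{B}T)\) (not spelled out explicitly in the text), and the data points are hypothetical stand-ins for the measured moments:

```python
import numpy as np
from scipy.constants import physical_constants
from scipy.optimize import curve_fit

MU_B = physical_constants["Bohr magneton"][0]
K_B = physical_constants["Boltzmann constant"][0]
G = 2.0  # g-factor assumed in the fits

def brillouin(x, J):
    """Brillouin function B_J(x) as in Eq. (16), without the N*g*J prefactor;
    coth(y) = 1/tanh(y)."""
    c1 = (2 * J + 1) / (2 * J)
    c2 = 1 / (2 * J)
    return c1 / np.tanh(c1 * x) - c2 / np.tanh(c2 * x)

def magnetization(B, T, N, J):
    # x = g*J*mu_B*B / (k_B*T); this definition of x is an assumption.
    x = G * J * MU_B * B / (K_B * T)
    return N * G * J * brillouin(x, J)

# Hypothetical temperature-dependent moments at a fixed field of 6 T,
# standing in for the measured data of the 2.0-ML sample:
T_data = np.array([2.0, 4.0, 6.0, 10.0, 15.0, 20.0])
m_data = np.array([3.2, 2.9, 2.4, 1.7, 1.1, 0.8])

popt, _ = curve_fit(
    lambda T, N, J: magnetization(6.0, T, N, J),
    T_data, m_data, p0=[3.0, 1.0],
    bounds=([1.0, 0.05], [10.0, 2.0]),  # N in 1-10, J in (0, 2] as above
)
print(f"fitted N = {popt[0]:.2f}, J = {popt[1]:.2f}")
```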
In Fig. S11 the Brillouin function is displayed as a function of \(x\). Here the function was calculated for different J values. By using the x values for different temperatures it can be determined from which starting temperature the linear fitting of the susceptibility (see Fig. 5 (d)) can be performed. The vertical lines represent the x values for specific temperatures (10-17 K). It can be observed that for all reasonable J values the x position for 17 K is clearly far away from saturation.

### Br XMCD

In Fig. S12 the XAS and XMCD measurements at the Br L\({}_{2,3}\) edge are displayed. The step-like behaviour is a direct consequence of the filled d orbitals. The XMCD analysis shows that the magnetic behaviour is not caused by Br.

Figure S10: In a) the total magnetic moment for the 2.0-ML sample is compared to the Brillouin function. Two different Brillouin function fits are included. The blue fit uses the data starting at 12 K and the orange one includes the measurements at lower temperatures (2 K). The magnetic moments were obtained from the measurements performed at \(B=6\) T. In b) the 0.6-ML magnetization curve is fitted by the Brillouin function. Three different functions were used, with J ranging from 0.5 to 2 (fixed temperature of 2 K). The best-fitting case is the one with \(J=0.5\). However, the match is still not perfect, and \(J=0.5\) is not reasonable for this material. Therefore the material is not paramagnetic.
2306.07969
GeneCIS: A Benchmark for General Conditional Image Similarity
We argue that there are many notions of 'similarity' and that models, like humans, should be able to adapt to these dynamically. This contrasts with most representation learning methods, supervised or self-supervised, which learn a fixed embedding function and hence implicitly assume a single notion of similarity. For instance, models trained on ImageNet are biased towards object categories, while a user might prefer the model to focus on colors, textures or specific elements in the scene. In this paper, we propose the GeneCIS ('genesis') benchmark, which measures models' ability to adapt to a range of similarity conditions. Extending prior work, our benchmark is designed for zero-shot evaluation only, and hence considers an open-set of similarity conditions. We find that baselines from powerful CLIP models struggle on GeneCIS and that performance on the benchmark is only weakly correlated with ImageNet accuracy, suggesting that simply scaling existing methods is not fruitful. We further propose a simple, scalable solution based on automatically mining information from existing image-caption datasets. We find our method offers a substantial boost over the baselines on GeneCIS, and further improves zero-shot performance on related image retrieval benchmarks. In fact, though evaluated zero-shot, our model surpasses state-of-the-art supervised models on MIT-States. Project page at https://sgvaze.github.io/genecis/.
Sagar Vaze, Nicolas Carion, Ishan Misra
2023-06-13T17:59:58Z
http://arxiv.org/abs/2306.07969v1
# GeneCIS: A Benchmark for General Conditional Image Similarity

###### Abstract

We argue that there are many notions of 'similarity' and that models, like humans, should be able to adapt to these dynamically. This contrasts with most representation learning methods, supervised or self-supervised, which learn a fixed embedding function and hence implicitly assume a single notion of similarity. For instance, models trained on ImageNet are biased towards object categories, while a user might prefer the model to focus on colors, textures or specific elements in the scene. In this paper, we propose the GeneCIS ('genesis') benchmark, which measures models' ability to adapt to a range of similarity conditions. Extending prior work, our benchmark is designed for zero-shot evaluation only, and hence considers an open-set of similarity conditions. We find that baselines from powerful CLIP models struggle on GeneCIS and that performance on the benchmark is only weakly correlated with ImageNet accuracy, suggesting that simply scaling existing methods is not fruitful. We further propose a simple, scalable solution based on automatically mining information from existing image-caption datasets. We find our method offers a substantial boost over the baselines on GeneCIS, and further improves zero-shot performance on related image retrieval benchmarks. In fact, though evaluated zero-shot, our model surpasses state-of-the-art supervised models on MIT-States.

We, the architects of the machine, must decide a-priori what constitutes its 'world'; what things are to be taken as 'similar' or 'equal' -- Karl Popper, 1963

## 1 Introduction

Humans understand many notions of similarity and choose specific ones depending on the task at hand [21, 58]. Consider the task of finding 'similar' images illustrated in Figure 1. Which of the rightmost images should be considered 'most similar' to the reference? Given different _conditions_, each image could be a valid answer. For instance, we may be interested in a specific object in the scene, focusing on either the 'car' or 'bridge'. One could even indicate a 'negative' similarity condition, specifying a _change_ in the image to identify the bottom image as most similar. Learning such similarity functions is a central goal in discriminative deep learning [11, 12, 13, 34, 63, 75]. Discriminative models, either supervised [30, 75] or self-supervised [9, 10], learn embedding functions such that 'similar' images are closer in feature space than 'dissimilar' images. However, since there are infinitely many notions of image similarity, how do we allow our models to choose? Almost all current approaches assume a single notion of similarity, either by explicitly training on a specific concept [68, 75] or through an implicit assumption in the underlying data distribution [9, 12]. Meanwhile, prior works tackling the conditional problem have focused on constrained domains such as fashion [69, 73] or birds [46], with a restricted set of similarity conditions. This is because developing and evaluating models that can adapt to generic notions of similarity is extremely challenging. Specifically, curating data to train and evaluate such models is difficult, as collecting annotations for all concepts of similarity is impossible.

Figure 1: Given different _conditions_ (shown as blue text), different images on the right can be considered most 'similar' to the reference on the left. We present a general way to train and evaluate models which can adapt to different notions of similarity.
In this work we study the problem of general conditional image similarity, training on an open-set of similarity conditions, and evaluating on diverse similarity notions in a 'zero-shot' manner. We first design a benchmark comprising of _four evaluation datasets_ for conditional image similarity, setting up conditional retrieval tasks. We define these tasks under a unified framework which spans practical use cases, and propose the benchmark as a sparse but broad coverage of the conditional similarity space. We propose these datasets for _zero-shot evaluation only_, and suggest that models which can perform well without fine-tuning can flexibly adapt to general notions of similarity, as desired. We name this benchmark GeneCIS ('_genesis_') for **G**eneral **C**onditional **I**mage **S**imilarity. On GeneCIS, we find that baselines built from powerful CLIP backbones struggle and, moreover, that performance on it is only weakly correlated with the backbones' ImageNet accuracy [17]. This is in contrast to popular vision tasks such as segmentation [39] and detection [45], underlining the benchmark's utility. We also propose a solution to training general conditional similarity models, based on parsing large-scale caption datasets [64, 66]. Rather than requiring exhaustive similarity annotations, we find that we can automatically mine this information from already abundant image-caption data. We show that training in this way offers substantial gains over the baselines, approaching (and in some cases surpassing) carefully designed specific solutions for each of the GeneCIS tasks. In addition, we demonstrate that our method scales with increasing amounts of caption data, suggesting promising directions for future work. Finally, on related benchmarks from the 'Composed Image Retrieval' (CIR) field [74, 44], we find our method provides gains over zero-shot baselines. In fact, our model outperforms state-of-the-art on the MIT-States benchmark [28], despite being evaluated zero-shot and never seeing the training data. **Contributions.** (i) We present a framework for considering conditional image similarity, an important but understudied problem; (ii) We propose the GeneCIS benchmark to test models' abilities to dynamically adapt to different notions of similarity; (iii) We show that current vision-language models like CLIP struggle on GeneCIS, and that performance on it is only weakly correlated with ImageNet accuracy; (iv) We design a scalable solution to the conditional similarity problem based on automatically parsing large-scale image-caption data; (v) We show our models provide substantial gains over zero-shot CLIP baselines; (vi) We validate our models on related CIR benchmarks, surpassing state-of-the-art on MIT-States despite zero-shot evaluation. ## 2 Related Work Our thesis that the similarity between two images should be conditional is generally relevant to the _representation learning_ literature, which aims to learn embedding functions based on a single (often implicit) notion of similarity. For instance, _deep metric learning_[30, 34, 63] aims to learn visual representations such that images from the same category are projected nearby in feature space. This idea is used in practical domains such as _image retrieval_[7, 59, 61], _face verification_[68, 11, 67] and _vehicle re-identification_[42, 25, 31]. The key limitation here is that networks are trained to encode a single notion of similarity, namely category-level similarity. 
While some work considered notions of similarity at different visual granularities [4, 15, 70], we posit that there exist concepts of similarity (_e.g_. shape and color) which are orthogonal to categories. Meanwhile, _contrastive learning_[13, 10, 12] defines notions of similarity by specifying a set of transformations to which the representation should be invariant (_e.g_. color jitter or random cropping), encouraging augmentations of the same instance to be embedded together. Similarly, _vision-language_ contrastive training [29, 60] learns joint embedding spaces, where images' representations are aligned with their paired captions. Though the precise notions of similarity are difficult to define in this case, we note that the embeddings are fundamentally unconditional, with a single deterministic embedding of a given image. Finally, we highlight three relevant sub-fields in the literature: _conditional similarity networks_ (CSNs); _compositional learning_ (CL); and _composed image retrieval_ (CIR). CSNs are networks with multiple subspaces for different notions of similarity [73]. Though their motivation is highly related to our work, CSNs are trained in a supervised manner with pre-defined similarity conditions [41, 46, 73], and/or are evaluated in constrained domains such as fashion [32, 69]. In contrast, we aim to train on an open-set of similarity conditions and evaluate zero-shot on natural images. Meanwhile, our work is related to CL research in that we seek to compose information from images and conditions to establish similarities. However, again, CL models are often assessed on their ability to recognize unseen combinations of a finite set of visual primitives [56, 54, 47]. Lastly, the most similar setup to GeneCIS is proposed in the recent CIR task [74]. It tackles the problem of composing an image and text prompt to retrieve relevant images from a gallery [1, 3, 16]. This is typically posed in the context of fashion [23, 76], with the text prompt acting as an image edit instruction (_e.g_. 'the same dress but in white' [1]). As such, CIR tackles a subset of the conditional similarity problem, by presenting models with a 'negative' similarity condition.

**Key similarities and differences with prior work:** In this work, we leverage CIRR [44] and MIT-States [28] (natural image CIR datasets) for additional evaluations, and further leverage the 'Combiner' architecture [3] to compose text conditions and image features. Broadly speaking, our work differs from CSNs, CL and CIR in that we do not train on a finite, closed-set of similarity conditions or visual primitives. Instead, we train models on open-world image-caption data, and demonstrate a flexible understanding of conditional similarity through zero-shot evaluation on a range of similarity conditions in natural images.

## 3 Conditional Similarity

We now describe our setup for the conditional similarity problem and its associated challenges - both with benchmarking models and acquiring data to train them. In § 4 we introduce the GeneCIS benchmark which measures important aspects of the problem. In § 5, we present a scalable solution to automatically acquire training data from widely available image-caption datasets.

**Problem Definition:** We define the problem of conditional similarity as learning a similarity function between two images given an _explicit_ condition: \(f(I^{T};I^{R},c)\) yields the scalar similarity between a target image, \(I^{T}\), and a reference image, \(I^{R}\), given some external condition, \(c\).
We use the scalar \(f(\cdot)\) to find the most conditionally similar image from a target set, \(\{I_{i}^{T}\}_{i=1}^{M}\), to solve a retrieval task. In this work we consider the condition to be a user-specified text prompt, although other types of condition are possible. We highlight that standard image similarity, framed as \(f(I^{T},I^{R})\), _implicitly_ assumes a similarity condition, often incorporated into the model or dataset (see § 2). We refer to the case where images are similar under an unspecified condition as the images being _implicitly similar_.

### Challenges in training and evaluation

**Challenges in evaluation:** The key difficulty in evaluating conditional similarity is that there are infinitely many possible conditions: from 'images with the same top-left pixel value are similar' to 'the same image but upside down is similar'. Thus, it is impossible to evaluate models' ability to adapt to _every_ similarity condition. Instead, in § 4, we introduce the GeneCIS benchmark which consists of a subset of such conditions, and covers a broad range of practical use cases. We suggest that models which produce _zero-shot_ gains across GeneCIS, without finetuning, are more capable of flexibly adapting to different notions of similarity.

**Challenges in acquiring training data:** Since the space and diversity of similarity conditions is huge, acquiring human annotations to train for _every_ type of conditional similarity is not feasible. For instance, to train a function which is sensitive to object category given some conditions (_e.g_. 'car' or 'bridge' objects in Figure 1), and to 'color' given others (_e.g_. the 'blue' or 'black' car in Figure 1), we need training data containing both features. Prior work addresses this by dramatically restricting the space of conditions and training on human annotations for pre-defined notions of similarity [46, 73]. In § 5, we describe an automatic method which leverages existing large-scale image-text datasets to learn an open-set of similarity conditions. The resulting model can be evaluated in a zero-shot manner across different types of conditional similarity task.

## 4 The GeneCIS Benchmark

GeneCIS considers two important dimensions of the conditional similarity problem. Firstly, a user may be interested in an _object_ in the scene ('with the same car') or an _attribute_ of a given object ('the same color as the car'). Secondly, the condition could either _focus_ on a particular aspect of the image ('the same color as the car') or specify the 'negative' space of a similarity condition, by defining a _change_ in the image ('this car but in black').

Figure 2: **The GeneCIS benchmark** contains four evaluation tasks for conditional similarity, where the goal is to retrieve the most similar image from a gallery (right, green squares), given a reference (left, yellow squares) and condition (blue ovals). Each task explores one combination of 'focus'/'change' and 'attribute'/'object'. All galleries contain 'distractors' (dashed, dark-red squares) which are _implicitly_ similar to the reference or condition. Thus, given a reference and explicit condition, GeneCIS evaluates models' ability to select the _most conditionally similar_ gallery image. Note: We show three gallery images for clarity, though all GeneCIS galleries have 10-15 images.

We propose **four evaluation tasks in GeneCIS**, which cover the combinations of the above dimensions and hence a diverse range of conditional similarities.
For each of the tasks, we construct retrieval problems with: a reference image, \(I^{R}\); a text condition, \(c\); and a retrieval gallery of \(M\) target images, \(\{I_{i}^{T}\}_{i=1}^{M}\), of which only one is 'correct' or 'positive'. The task is to identify which of the target images is most similar to the reference, given the condition. The retrieval tasks, illustrated in Figure 2 with more examples in Appendix G.1, are:

* **Focus on an Attribute:** This task evaluates a model's ability to focus on a specific attribute type (_e.g_. 'color' or 'material'). For instance, in Figure 2, we see a white laptop and the condition 'color', with the task being to select the laptop with the same color from the gallery.
* **Change an Attribute:** This task contains 'negative' similarity conditions, considering target images with a specific attribute changed to be most similar. In Figure 2, the aim is to retrieve the same object ('train') but with the color changed from 'green' to 'olive green'.
* **Focus on an Object:** This task considers reference images with many objects, and we refer to the set of objects together as a proxy for the image 'scene'. The condition selects a single object from the reference as the most important (_e.g_. 'refrigerator' in Figure 2) and the 'positive' target contains the condition object as well as the same 'scene' (_e.g_. it also contains 'sky', 'chair', _etc_. in Figure 2).
* **Change an Object:** This task considers 'negative' similarity through conditions which specify an object to be added to a scene. For instance, in Figure 2, 'ceiling' is specified, with the aim being to retrieve the same scene (a train station) but with a ceiling also present.

The tasks in GeneCIS are designed to be diverse and challenging for a single model while remaining well-posed. In Figure 2, given only the reference image, \(I^{R}\), and text condition, \(c\), a human can readily identify which of the target images is most 'similar'. We wish to benchmark vision models' competency at the same task. For the benchmark to be challenging, we would want the model to need both the image content and the text condition to solve the problem. Thus, we include different forms of 'distractor' images in the galleries. For instance, for tasks with objects in the condition, we include distractors which have a similar 'scene' to the reference but do not contain the condition object. Such distractors are likely to affect models which are over-reliant on information from the reference image, without considering the condition. Similarly, we include distractors which contain the object specified in the condition, but not the reference scene, confusing models which solely rely on the condition. Meanwhile, for the attribute-based tasks, we include distractors which contain the reference object category, but not the correct attribute, and vice-versa. As such, many targets are _implicitly similar_ to the reference (similar given some condition), but the positive image is the most similar _given the provided condition_.

**Noise and human verification:** Though, in principle, our benchmark should be error free, manual inspection of the templates shows that noise is introduced through underlying inconsistencies in Visual Genome [37], VAW [56] and COCO [36]. We are currently in the process of collecting manual annotations and human verification of the templates, and present the current version as 'GeneCIS v0'.
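The retrieval protocol above maps onto a simple evaluation loop. A minimal sketch follows, where the `model.score` interface is a hypothetical stand-in for any conditional similarity function \(f(I^{T};I^{R},c)\):

```python
import numpy as np

def recall_at_k(scores: np.ndarray, positive_idx: int, k: int) -> int:
    """scores: (M,) conditional similarities over one gallery.
    Returns 1 if the single 'positive' image is among the top-k, else 0."""
    topk = np.argsort(-scores)[:k]
    return int(positive_idx in topk)

def evaluate(model, templates, k=1) -> float:
    # Each template holds a reference image, a text condition, a gallery
    # of M target images, and the index of the one positive gallery image.
    hits = [
        recall_at_k(model.score(t["reference"], t["condition"], t["gallery"]),
                    t["positive_idx"], k)
        for t in templates
    ]
    return float(np.mean(hits))  # Recall@k averaged over all templates
```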
## 5 Method

In § 5.1, we briefly describe preliminaries for our approach to learning general conditional similarity functions. This includes the model architecture and optimization objective which we inherit from prior work [3]. In § 5.2, we describe our main methodological contribution: an automatic and scalable way of mining conditional similarity training data from widely available image-caption datasets.

### Preliminaries

**Training data.** To learn a conditional similarity function \(f(\cdot)\), we train with triplets \((I^{R},I^{T},c)\), where \(I^{R}\) and \(I^{T}\) are termed reference and target images, and \(c\) is the condition defining a relationship between them.

**Model Architecture.** We parametrize the conditional similarity function \(f(\cdot)\) with deep networks, first encoding features for \((I^{R},I^{T},c)\) as \((\mathbf{x}^{R},\mathbf{x}^{T},\mathbf{e})\in\mathbb{R}^{D}\). We learn separate encoders, \(\Phi(I)\) and \(\Psi(c)\), for the images and the text condition. Next, we train a 'Combiner' network [3], which composes the reference image features with the condition text features as \(g(\mathbf{x}^{R},\mathbf{e})\in\mathbb{R}^{D}\). Finally, we consider the scalar conditional similarity to be the dot product between the combined feature and the target image feature: \(f(I^{T};I^{R},c)=g(\mathbf{x}^{R},\mathbf{e})\cdot\mathbf{x}^{T}\). Details of the Combiner architecture can be found in Appendix D and [3]. We initialize our image and text backbones, \(\Phi(\cdot)\) and \(\Psi(\cdot)\), with CLIP [60]. CLIP models are pre-trained on 400M image-text pairs containing a range of visual concepts. Furthermore, the visual and text embeddings from CLIP are aligned, making it easier to learn the composition between reference image and conditioning text features.

**Optimisation Objective.** Given a batch of triplets, \(B=\left\{(I_{i}^{R},I_{i}^{T},c_{i})\right\}_{i=1}^{|B|}\), we get features as \(\left\{(\mathbf{x}_{i}^{R},\mathbf{x}_{i}^{T},\mathbf{e}_{i})\right\}_{i=1}^{|B|}\). Then, given a temperature \(\tau\), we optimise \((\Phi,\Psi,g)\) with a contrastive loss [50], as:

\[\mathcal{L}=-\frac{1}{|B|}\sum_{i\in B}\log\frac{\exp\left(g(\mathbf{x}_{i}^{R},\mathbf{e}_{i})\cdot\mathbf{x}_{i}^{T}/\tau\right)}{\sum_{j\in B}\exp\left(g(\mathbf{x}_{i}^{R},\mathbf{e}_{i})\cdot\mathbf{x}_{j}^{T}/\tau\right)} \tag{1}\]
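In code, Eq. (1) is the standard in-batch contrastive (InfoNCE-style) objective over the combined features. A minimal PyTorch sketch follows; the feature normalization is an assumption, common in CLIP-style training but not stated in Eq. (1):

```python
import torch
import torch.nn.functional as F

def combiner_contrastive_loss(ref_feats, tgt_feats, cond_feats,
                              combiner, tau=0.01):
    """Batched version of Eq. (1). ref_feats/tgt_feats/cond_feats: (B, D)
    CLIP features of references, targets and text conditions; `combiner`
    is a hypothetical module implementing g(x^R, e) -> (B, D)."""
    combined = F.normalize(combiner(ref_feats, cond_feats), dim=-1)
    targets = F.normalize(tgt_feats, dim=-1)
    logits = combined @ targets.t() / tau  # (B, B): row i vs. all targets j
    labels = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy with diagonal labels = Eq. (1), with the other batch
    # elements acting as in-batch negatives.
    return F.cross_entropy(logits, labels)
```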
### Scalable training for conditional similarity

To train for general conditional similarity, we wish to curate triplets for training, \(\mathcal{D}_{train}=\left\{(I_{i}^{R},I_{i}^{T},c_{i})\right\}_{i=1}^{N}\), with diverse conditions and concepts of similarity. However, as the space of conditions increases, the burden for exhaustively annotating such a dataset increases exponentially. Instead, our method (illustrated in Figure 4) automatically mines training triplets from existing data sources:

**Image-caption Data:** We begin with large-scale image-caption data scraped from the internet, containing images paired with descriptive captions [51, 66]. We hope that the captions contain information about the objects and attributes in the image, which we can utilize for the conditional similarity task. We also hope that such a method can scale with increasing data in the same way that conventional representation learning algorithms do.

**Extract relationships:** We use an off-the-shelf text-to-scene-graph parser [65, 77] to identify 'Subject' \(\rightarrow\) 'Predicate' \(\rightarrow\) 'Object' relationships within the caption [55]. For instance, from the central image in Figure 4, we extract the highlighted relationship 'Horse' \(\rightarrow\) 'on' \(\rightarrow\) 'Canvas'. Note that one caption may contain many such relationships. We find that many of the entities ('Subjects' or 'Objects') extracted by the parser are not visually grounded in the image, _e.g_., pronouns ('I', 'you') or time-based nouns ('today', 'yesterday'). To address this, we introduce an additional filtering step, where every entity is scored for 'visual concreteness' based on a pre-existing database [8]. The database contains human ratings between 1 and 5 for how visually apparent a noun is. For each extracted relationship, we average its 'Subject' and 'Object' concreteness scores, discarding relationships if their value is below a threshold.

Figure 4: **Method overview.** Our method for training general conditional similarity functions extracts information from large-scale image-caption datasets (left). We extract 'Subject' \(\rightarrow\) 'Predicate' \(\rightarrow\) 'Object' relationships from the caption data (middle), before using them to construct training triplets where a _reference_ and _target_ image are related by a _condition_ (right).

**Construct triplets:** We first randomly select a relationship, taking the image it comes from as the 'reference', \(I^{R}\). Having identified the _subject_ of the relationship (_e.g_. 'Horse' in the rightmost column of Figure 4) we identify all other relationships in the dataset containing the same subject. From this restricted pool of relationships, we randomly sample a 'target' relationship and image, \(I^{T}\), with the same subject but a different _object_ (_e.g_. a horse on a 'canvas' instead of in a 'meadow' in Figure 4). Finally, we define the _condition_ of the triplet, \(c\), as the concatenated 'Predicate' and 'Object' from the target relationship ('on canvas' in Figure 4).

**Discussion:** We note that our mined triplets exhibit a bias towards the 'Change an Object' GeneCIS task. However, the triplets often involve abstract relationships between reference and target images (_e.g_. 'Horse on canvas' in Figure 4). As such, solving the training task requires the model to use the condition to extract and modify diverse forms of information from the reference, which is the central requirement of the broader conditional similarity problem.
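A minimal sketch of the mining procedure described above; the relationship dictionary format is illustrative, standing in for the output of the scene-graph parser and the concreteness scoring:

```python
import random
from collections import defaultdict

def mine_triplets(relationships, min_concreteness=4.8):
    """relationships: iterable of dicts like {"image": path, "subject": str,
    "predicate": str, "object": str, "concreteness": float}, where the
    concreteness is assumed to already be averaged over the 'Subject' and
    'Object' entities. Returns (reference, target, condition) triplets."""
    by_subject = defaultdict(list)
    for rel in relationships:
        if rel["concreteness"] >= min_concreteness:  # discard abstract ones
            by_subject[rel["subject"]].append(rel)

    triplets = []
    for subject, rels in by_subject.items():
        for ref in rels:
            # Same subject, but a different object and a different image.
            candidates = [r for r in rels
                          if r["object"] != ref["object"]
                          and r["image"] != ref["image"]]
            if not candidates:
                continue
            tgt = random.choice(candidates)
            condition = f"{tgt['predicate']} {tgt['object']}"  # e.g. "on canvas"
            triplets.append((ref["image"], tgt["image"], condition))
    return triplets
```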
## 6 Main Experiments

We evaluate baselines, task-specific solutions, and our method on the proposed GeneCIS benchmark. § 6.1 describes the baselines as well as specific solutions which we design for each of the GeneCIS tasks. § 6.3 shows results on GeneCIS and, in § 6.4, we evaluate on related benchmarks from the Composed Image Retrieval (CIR) literature.

### Baselines and Specific Solutions for GeneCIS

**CLIP-Only Baselines:** We provide three simple CLIP-only [60] baselines for GeneCIS. Our **Image Only** baseline embeds all images with the CLIP image encoder and retrieves the closest gallery image to the reference. The **Text Only** baseline embeds the text condition with the CLIP text encoder, and the gallery images with the image encoder, and finds the closest gallery image to the text embedding. Finally, our **Image + Text** baseline averages the reference image feature with the condition text feature, before using the combined vector to find the closest gallery image.

**CIRR Combiner baseline:** CIRR is a natural image dataset [44] containing 28K curated retrieval templates. All templates contain a human-specified text condition defining the relationship between the reference and 'positive' target image. Unlike our automatic and scalable triplet mining method, CIRR is manually constructed with a lengthy annotation process. We include a baseline from [3], which trains a Combiner model with a CLIP backbone on CIRR. For fair comparison with our method, we fine-tune both the image and text backbones on CIRR before evaluating the model zero-shot on GeneCIS, terming it **Combiner (CIRR)**.

**Specific Solutions:** We also design specific solutions for each of the proposed tasks in GeneCIS. These solutions take into account the construction mechanisms of each task and represent sensible approaches to tackling the tasks independently. We design all solutions to respect the zero-shot nature of the evaluations and hence they are all based on 'open-vocabulary' models; we use CLIP for the attribute-based tasks and Detic [81] for the object-based ones. For the attribute-based tasks, we use CLIP to predict attributes or categories in the reference image, before using text embeddings of these predictions to search the gallery. For the object-based tasks, we use Detic to detect the object categories present in all images, treating the detected categories as bag-of-words descriptors of the target images. We give full details of the specific solutions in Appendix B.

### Implementation Details

We train our strongest model on 1.6M triplets mined from Conceptual Captions 3 Million (CC3M) [66], which contains 3M image-caption pairs. Each triplet has a visual concreteness of at least 4.8 averaged over the 'Subject' and 'Object' entities in both the reference and target image. We train the contrastive loss with temperature \(\tau=0.01\) and a batch size of 256, training for 28K gradient steps. We use early stopping based on the Recall@1 on the CIRR validation set and, for fair comparison with [3], initialize the image and text backbones with the ResNet50\(\times 4\) CLIP model. Further details are in Appendix E.

### Analysis on GeneCIS

We report results for all methods on the GeneCIS benchmark in Table 2. Our evaluation metric is Recall@\(K\): the frequency with which the model ranks the 'correct' gallery image in its top-\(K\) predictions. We report results at \(K=\{1,2,3\}\) to evaluate under different constraints, and to account for any noise in the benchmark. We also report the Average R@1 over all tasks to measure the overall performance across different forms of conditional similarity.

**Takeaways:** From the _baselines_ we find that both the 'Image Only' and 'Text Only' models perform poorly as expected, since they rely only on either the reference image content or the text condition. The 'Image + Text' and 'Combiner (CIRR)' models perform better, validating our claim that both the reference and text condition are required to solve the task. Phrased differently, this suggests the benchmark evaluates conditional similarity, as implicit similarity functions (_e.g_. the 'Image Only' baseline) perform poorly on average. We further find that _our method_, using automatically mined data, substantially outperforms all baselines on average across the tasks, as well as at Recall@1 on all tasks individually. Notably, it outperforms the model trained on manually collected data from CIRR. As expected, most per-task _specific solutions_ perform better than our general method.
However, the broad zero-shot nature of GeneCIS makes all tasks independently challenging and the specific solutions do not work for all of them. Broadly speaking, we found that CLIP [60] struggles to predict object attributes, and that Detic [81] struggles on the 'stuff' categories in COCO Panoptic [36]. Finally, _caveats_ can be found in the 'Image Only' results on 'Focus Attribute', where the baseline performs slightly better than our method at higher recalls. This is because there are some similarity conditions (_e.g_. 'color') for which standard image embeddings are well suited. We also find that 'Combiner (CIRR)' performs better on tasks with object conditions, as the multi-object image distribution of CIRR is more closely aligned with these tasks than with the single-object images in the attribute-based tasks. We note that good performance on all tasks collectively indicates strong general conditional similarity models.

### Comparisons to Prior Work

GeneCIS uses natural images with general conditions, rather than being specialized to domains such as bird species [46], faces [80] or fashion compatibility [76, 23, 24, 71]. As such, to find comparable existing benchmarks, we turn to the _Composed Image Retrieval_ (CIR) literature. The CIR task is to retrieve images which best match a composed reference image and editing text condition. This task aligns with the 'Change' dimension of GeneCIS. We evaluate on both the MIT-States benchmark [28] as well as on CIRR [44], with the former precisely reflecting the 'Change Attribute' GeneCIS task.

**Metrics:** On both benchmarks, we evaluate our model _zero-shot_ on the test-sets and compare with prior work trained on the datasets. These datasets are partially labeled and evaluate using a global retrieval setting, _i.e_., the entire test-set is used as a gallery for each query. Thus, we follow prior work and report Recall@K at multiple \(K=\{1,5,10\}\) to fully capture the model's performance. 1

Footnote 1: CIRR also has an evaluation on curated galleries, akin to GeneCIS. We do not report on this as we found that the "Text Only" baseline performed comparably with SoTA models on this task, achieving over 60% Recall@1.

**Results:** We show results on MIT-States in Table 3. Prior work on this benchmark trains models on the dataset from scratch and is thus not zero-shot. Nonetheless, _zero-shot evaluation_ of our model surpasses state-of-the-art on this task. However, we note that prior methods use smaller models compared to our pre-trained CLIP backbone. We report on CIRR in Table 4, evaluating through the official test server and again comparing to methods that train for this setting. We report results for the Combiner method from the paper [3] as well as our improved implementation (see § 6.1), which are both trained on CIRR. Our improved implementation is a strong upper bound, surpassing previous fully supervised models. On zero-shot evaluation, our method surpasses the comparable baselines by a significant margin across all the recall metrics. Compared to supervised methods, our model outperforms [16] and [44] zero-shot, though we note [16] trains from scratch. Finally, our model reduces the gap between the baselines and the specialist Combiner models trained on CIRR.
| Method | Zero-shot | Recall@1 | Recall@5 | Recall@10 |
|---|---|---|---|---|
| ARTEMIS [16] | ✗ | 17.0 | 46.1 | 61.3 |
| CIRPLANT [44] | ✗ | 19.6 | 52.6 | 68.4 |
| Combiner (CIRR, [3]) | ✗ | 38.5 | 70.0 | 81.9 |
| Combiner (CIRR, improved) | ✗ | 40.9 | 73.4 | 54.8 |
| Image Only | ✓ | 7.5 | 23.9 | 34.7 |
| Text Only | ✓ | 20.7 | 43.9 | 56.1 |
| Image + Text | ✓ | 21.8 | 50.9 | 63.7 |
| Combiner (CC3M, Ours) | ✓ | 27.3 | 57.0 | 71.1 |

Table 4: **Results on CIRR [44].** Our model substantially outperforms the comparable zero-shot baselines.

| Method | Focus Attribute R@1 | R@2 | R@3 | Change Attribute R@1 | R@2 | R@3 | Focus Object R@1 | R@2 | R@3 | Change Object R@1 | R@2 | R@3 | Average R@1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Specific Solution (Focus Attribute) | 20.8 | 32.6 | 41.1 | - | - | - | - | - | - | - | - | - | - |
| Specific Solution (Change Attribute) | - | - | - | 15.2 | 25.8 | 35.6 | - | - | - | - | - | - | - |
| Specific Solution (Object) | - | - | - | - | - | - | 18.7 | 30.3 | 37.4 | 18.1 | 28.7 | 34.5 | - |
| Image Only | 17.7 | 30.9 | 41.9 | 11.9 | 20.8 | 28.8 | 9.3 | 18.2 | 26.2 | 7.2 | 16.7 | 24.9 | 11.5 |
| Text Only | 10.2 | 20.5 | 29.6 | 9.5 | 17.6 | 26.4 | 6.5 | 16.8 | 22.4 | 6.2 | 13.9 | 21.4 | 8.1 |
| Image + Text | 15.6 | 26.3 | 37.1 | 12.6 | 22.9 | 32.0 | 10.8 | 21.0 | 31.2 | 11.3 | 21.5 | 30.3 | 12.6 |
| Combiner (CIRR) | 15.1 | 27.7 | 39.8 | 12.1 | 22.8 | 31.8 | 13.5 | 25.4 | 36.7 | 15.4 | 28.0 | 39.6 | 14.0 |
| Combiner (CC3M, Ours) | 19.0 | 31.0 | 41.5 | 16.6 | 27.5 | 36.5 | 14.7 | 25.9 | 36.1 | 16.8 | 29.1 | 39.7 | 16.8 |

Table 2: **Evaluation on GeneCIS.** We evaluate baselines and our method. We also evaluate specific solutions for each task (shown in the top rows; these are not general conditional similarity functions and hence cannot be evaluated on all tasks). Both across ten random seeds, and with ten cross-validation splits, we find a standard deviation of \(\approx 0.2\%\) in our model's R@1 on each task, as well as on average over all tasks.

| Method | Zero-shot | Recall@1 | Recall@5 | Recall@10 |
|---|---|---|---|---|
| TIRG [74] | ✗ | 12.2 | 31.9 | 43.1 |
| Compensate [1] | ✗ | 13.9 | 35.3 | 47.9 |
| Detic [27] | ✗ | 14.7 | 35.3 | 46.6 |
| HCI [79] | ✗ | 15.2 | 36.0 | 46.7 |
| MAN [19] | ✗ | 15.6 | 36.7 | 47.7 |
| Image Only | ✓ | 3.7 | 14.1 | 22.9 |
| Text Only | ✓ | 9.5 | 22.5 | 31.4 |
| Image + Text | ✓ | 13.3 | 31.7 | 42.6 |
| Combiner (CC3M, Ours) | ✓ | 15.8 | 37.5 | 49.4 |

Table 3: **Results on MIT-States [28].** _Zero-shot evaluation_ of our model outperforms SoTA supervised methods on this dataset.

## 7 Analysis

**Ablations:** Table 5 shows the effect of our design choices on the performance on GeneCIS. We find that filtering out relationships which are not visually concrete, and finetuning the entire backbone, both strongly affect the performance.
We verify the robustness of our triplet mining procedure by training with SBU Captions [51], a smaller but different source of image-caption data. We find that, though the larger CC3M [66] produces slightly better results, different image-caption datasets are also suitable.

**Comparing pretrained backbones:** In Figure 6, we study the effect of changing the CLIP initialization. We train Combiner models with ResNet [26] and ViT [18] backbones on CC3M, showing their performance as well as the 'Image + Text' baseline from § 6.1. 2

Footnote 2: For fair comparison with [3], we report with a ResNet50\(\times\)4 backbone in Table 2, and report on our strongest ViT-B/16 model in Appendix C.

We plot the performance on GeneCIS against the CLIP backbone's zero-shot ImageNet accuracy [17]. We observe that the performance on GeneCIS is **weakly correlated with the ImageNet performance** of the backbone: a Top-1 gain of 10% on ImageNet leads to only a 1% improvement on GeneCIS. This suggests that improvements on ImageNet do not directly transfer to GeneCIS and that GeneCIS measures a different yet important capability of vision models. In addition, our method offers a substantial boost over the 'Image + Text' baseline, and a greater boost than scaling the underlying CLIP model. Both of these results are in stark contrast to trends on popular vision tasks such as segmentation [39] and detection [45], where gains on ImageNet directly transfer to large gains on the downstream task, and often more significantly so than gains from the underlying method.

**Scaling the number of triplets:** In Figure 5, we investigate the effect of scaling the conditional similarity _training data_. We successively decrease the number of mined triplets by factors of four (from the 1.6M used to train our strongest models), both with and without concreteness filtering. We find that results improve with increasing numbers of triplets. While our models are trained on a dataset of 3M image-caption pairs [66], open-source caption datasets exist with up to five billion images [64]. We emphasize the utility of this finding, suggesting it is possible to train stronger conditional similarity models by further scaling the training data.

## 8 Conclusion

In this paper we have proposed the GeneCIS benchmark for General Conditional Image Similarity, an important but understudied problem in computer vision. The benchmark extends prior work and evaluates an open-set of similarity conditions, by being designed for zero-shot testing only. Furthermore, we propose a way forward for scalably training conditional similarity models, which mines information from widely available image-caption datasets. Our method not only boosts performance over all baselines on GeneCIS, but also provides substantial zero-shot gains on related image retrieval tasks. Moreover, we find that, unlike for many popular vision tasks, the performance of our models on GeneCIS is roughly decorrelated from scaling the backbone network's ImageNet accuracy, motivating further study of the conditional similarity problem.

Figure 5: **Scaling the number of mined triplets** used for training our model improves the performance. This suggests that our automatic mining strategy is a promising and scalable approach to learning general similarity functions.

Figure 6: **Impact of different CLIP backbones** on the performance of our model and the 'Image + Text' baseline. We show the Average Recall@1 on GeneCIS against the backbones' zero-shot ImageNet accuracy, showing the two have a weak correlation.
| Configuration | Average Recall@1 |
|---|---|
| Final Model | 16.8 |
| No filtering for visual concreteness | 15.0 |
| Freezing CLIP image backbone | 14.7 |
| Freezing CLIP text backbone | 15.8 |
| Freezing entire backbone | 15.1 |
| Training on SBU [51] instead of CC3M [66] caption data | 16.5 |

Table 5: **Ablations of key design choices** of our full model, with results reported on our GeneCIS benchmark.
2305.10945
An Android Robot Head as Embodied Conversational Agent
This paper describes, how current Machine Learning (ML) techniques combined with simple rule-based animation routines make an android robot head an embodied conversational agent with ChatGPT as its core component. The android robot head is described, technical details are given of how lip-sync animation is being achieved, and general software design decisions are presented. A public presentation of the system revealed improvement opportunities that are reported and that lead our iterative implementation approach.
Marcel Heisler, Christian Becker-Asano
2023-05-18T13:05:10Z
http://arxiv.org/abs/2305.10945v1
# An Android Robot Head as Embodied Conversational Agent

###### Abstract

This paper describes, how current Machine Learning (ML) techniques combined with simple rule-based animation routines make an android robot head an embodied conversational agent with ChatGPT as its core component. The android robot head is described, technical details are given of how lip-sync animation is being achieved, and general software design decisions are presented. A public presentation of the system revealed improvement opportunities that are reported and that lead our iterative implementation approach.

humanoid robotics, machine learning, software development, conversational agents

## I Introduction

The advancements in research on android robots open up more and more application opportunities. For example, android _ERICA_ [1] was already tested for attentive listening, job interview practicing, speed date practicing and as a lab guide. _ERICA_ was proposed to be employed in other social interaction tasks, e.g., as an attendant or a receptionist. Recent research suggests that android robots might be useful as interaction partners for community-dwelling older adults with little company [2]. Additionally, android robots might be useful tools in other research areas, too, e.g., in psychological studies regarding emotional interactions [3]. While such use cases provide promising perspectives for android robotics, they are often carried out using scripted actions or as Wizard of Oz studies. To actually employ such robots in real-world scenarios, however, they need to act autonomously. As a first step in this direction, this paper describes how an android robot head is programmed to converse autonomously. In contrast to other research publications describing android robot software architectures [4, 5], here a less complete, but much simpler approach is presented and described along with an iterative development approach. Manually defined animations are the basis for an implementation that heavily relies on machine learning (ML) models to achieve an embodied conversational agent [6] that is represented by an expressive android head. First, in Section II some background information about the robot hardware and the ML models in use is provided. Next, the current state of the implementation as well as the development process used to reach it is described in Section III. Finally, the current state is briefly evaluated and next steps to implement are assessed in Section IV.

## II Background

### _Android Robot Head_

The android robot head was manufactured in Japan by the company A-Lab1, cf. [7] for further details. Its 14 pneumatic actuators, shown in Fig. 1, enable it to display various facial expressions, e.g., to mimic emotions or lip-sync speech signals. The actuators are controllable by sending 14 integer values, each ranging from 0-255, via an RS-485 connection. The robot head does not provide any built-in sensors to perceive its environment, like cameras or microphones, nor a built-in speaker. For the application described here, external speakers and microphones are used. The following actuators are available:

Footnote 1: [https://www.a-lab-japan.co.jp/en.html](https://www.a-lab-japan.co.jp/en.html)

1. upper eyelid down
2. eyeball left/right
3. eyeball up/down
4. lower eyelid up
5. eyebrow up
6. eyebrow shrink
7. mouth corner up
8. mouth corner back
9. lip shrink
10. lips open
11. jaw down
12. lean head
13. nod
14. tilt head
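To give an impression of how such a pose command could look in code, here is a hedged sketch using pyserial. The byte-level framing of the A-Lab protocol is not documented here, so the raw 14-byte frame, port name and baud rate below are purely hypothetical; only the fact that 14 values in 0-255 are sent over RS-485 is taken from the text:

```python
import serial  # pyserial; an RS-485 adapter exposed as a serial port is assumed

NUM_ACTUATORS = 14  # indices follow the actuator list above

def send_pose(port: serial.Serial, values: list[int]) -> None:
    """Send one frame of 14 actuator values (0-255 each). The framing
    (raw bytes, no header or checksum) is a hypothetical placeholder;
    the actual A-Lab protocol is not specified here."""
    assert len(values) == NUM_ACTUATORS
    assert all(0 <= v <= 255 for v in values)
    port.write(bytes(values))

if __name__ == "__main__":
    # Port name and baud rate are examples only.
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.1) as port:
        neutral = [128] * NUM_ACTUATORS  # hypothetical neutral pose
        send_pose(port, neutral)
```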
### _ML Models_

Our implementation of an embodied conversational agent solves the following tasks using ML models: Automatic Speech Recognition (ASR), speech synthesis or Text To Speech (TTS), textual conversation or dialogue (chat), and automatic lip-sync. Related ML approaches for each of these tasks are described in the following paragraphs.

**ASR.** The current state-of-the-art openly accessible ML model for ASR is _Whisper_ [8]. It consists of an off-the-shelf encoder-decoder Transformer architecture, which is known to scale well with increasing amounts of training data. The amounts of data used for training are also the main reason for Whisper outperforming previous models. Another notable novelty is that Whisper is trained on multiple tasks and multiple languages. Thus, a single Whisper model is not only capable of transcribing speech in one language, but can also perform tasks like voice detection, language identification, speaker diarization and translation from different languages to English. There are models of different sizes publicly available, providing the common trade-off between more accurate results on the one hand and slower inference times with stronger hardware requirements on the other as model sizes increase.

**TTS.** Over the last years, ML was also adopted for speech synthesis. A development can be observed from multi-stage approaches, where first a model predicts acoustic features, e.g. mel-spectrograms, from linguistic features, e.g. raw text or phonemes, and in the next stage a vocoder model generates a waveform from the acoustic features, towards end-to-end models that generate a waveform directly from linguistic features [9]. _VITS_ (Variational Inference with adversarial learning for end-to-end Text-to-Speech) [10] is the first end-to-end model that achieves close-to-human quality regarding the naturalness of the synthesized speech. Its end-to-end approach also leads to improvements regarding synthesis speed. When trained on a multi-speaker dataset, it also allows switching between speaking styles (e.g. male or female voices) at inference time by selecting different speakers. The most important aspects behind VITS' positive results are a combination of different generative ML model approaches, namely VAEs (Variational Autoencoders) and GANs (Generative Adversarial Networks), as well as the newly proposed stochastic duration predictor to synthesize speech with diverse rhythms, which helps to learn the one-to-many mapping from text to speech. Simulating different emotions, cloning speakers' voices at inference time, and combining multiple languages in single models are active research topics at present [9]. There are multiple models openly available that provide different capabilities, cf. [10, 11]. Additionally, open source libraries like _coqui.ai_2 allow to easily use and try out different models.

Footnote 2: [https://github.com/coqui-ai/TTS](https://github.com/coqui-ai/TTS)
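For illustration, transcription with Whisper and synthesis with a multi-speaker VITS model via the coqui-ai library can be done in a few lines; the model and speaker names below are examples, not necessarily the ones used for the robot:

```python
import whisper           # openai-whisper package
from TTS.api import TTS  # coqui-ai TTS package

# ASR: transcribe a recorded utterance with Whisper. The model size is a
# speed/accuracy trade-off as discussed above.
asr = whisper.load_model("base")
text = asr.transcribe("utterance.wav")["text"]
print(text)

# TTS: synthesize a reply with a multi-speaker VITS model (VCTK voices);
# "p225" is one example speaker of that dataset. Other models can be
# browsed via the coqui-ai library.
tts = TTS(model_name="tts_models/en/vctk/vits")
tts.tts_to_file(text="Hello, nice to meet you.", speaker="p225",
                file_path="reply.wav")
```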
**Chat.** Since the introduction of Transformers [12], language models with impressive capabilities have been developed quickly. Especially the GPT (Generative Pre-trained Transformer) models show that scaling them up to more parameters and training them with more data makes them suitable for different natural language processing (NLP) tasks. The initial GPT [13] serves as a pre-trained model that requires fine-tuning to work on tasks other than next-token prediction. GPT-2 [14] was already shown to generalize to other tasks more (especially to reading comprehension) or less (e.g., to summarization or translation) successfully in a zero-shot fashion, i.e., without requiring additional training. DialoGPT [15] exploits GPT-2's architecture and is trained on conversation-like texts extracted from Reddit comments; thus it can be used as an open-domain chatbot. Besides DialoGPT there are other Large Language Models (LLMs) specifically trained as open-domain chatbots, e.g., BlenderBot [16, 17, 18], which in contrast to GPT employs an encoder-decoder architecture. It outperforms DialoGPT in multi-turn conversations, and in the development of its three versions, important aspects are considered regarding safety, long-term memory, sticking to a persona, and integrating information from external sources. Further scaling up GPT shows that GPT-3 [19] is applicable to many different NLP tasks without fine-tuning, but instead by providing it textual instructions on what to do, or additionally one or multiple examples, which is referred to as zero-, one-, or few-shot learning. Applying this approach, GPT-3 achieved performances on various NLP benchmark tasks nearly matching state-of-the-art fine-tuned systems. Also, GPT-3's text generation capabilities reached a level at which human evaluators have difficulties distinguishing generated texts from human-written texts: human judges could identify news articles generated in a few-shot setting with an accuracy of 52%, where 50% is chance-level performance [19]. Given an appropriate prompt, GPT-3 can be used as an open-domain chatbot as well.

The follow-up GPT model was not simply scaled up, but instead fine-tuned with human feedback to better align to users' intents. The new model, called InstructGPT [20], was first fine-tuned in a supervised manner and then further fine-tuned using reinforcement learning from human feedback (RLHF). This feedback was collected by asking human labelers to rank different model outputs. In contrast to InstructGPT, ChatGPT was fine-tuned with differently collected data to be usable in a dialogue format that allows for follow-up questions.

Fig. 1: Actuators of the android robot head. Dotted lines indicate symmetric movements by a single actuator.

While the GPT-2 models, as well as DialoGPT and all three versions of BlenderBot, are openly accessible, this is not the case for the latest and most powerful models anymore. For example, ChatGPT is only accessible after registration via a Web-UI or a REST-API, and it becomes a paid service after some amount of free usage is consumed. However, there are promising open-source solutions like Open Assistant3 following up.

Footnote 3: [https://github.com/LAION-AI/Open-Assistant](https://github.com/LAION-AI/Open-Assistant)

Footnote 4: [https://flask.palletsprojects.com/en/2.3.x/](https://flask.palletsprojects.com/en/2.3.x/)

Footnote 5: [https://www.djangoproject.com/](https://www.djangoproject.com/)

Footnote 6: [https://sanic.dev/](https://sanic.dev/)

Footnote 7: [https://fastapi.tiangolo.com/](https://fastapi.tiangolo.com/)

**Lip-Sync.** There are multiple ML models that predict corresponding facial expressions, especially lip movements, for an input speech signal.
While they are most often made for computer animation, they differ in the representation of facial expressions they generate as output: _VisemeNet_ [21] predicts visemes (visually different expressions of the mouth during speech), other works predict facial landmarks [22, 23], and _FaceFormer_ [24] and others [25, 26, 27] predict a 3D mesh of a whole virtual human head. In [7] we explored how predicted visemes and face meshes can be applied to animate an android robot head.

## III Implementation

In this section, first the current setup of the android robot head and its capabilities are described. Next, some details on the development process are provided.

### _Current state_

At the time of writing, the android robot is able to converse in spoken natural language. The system is able to transcribe spoken language into text, generate a textual response, generate a speech signal articulating this response, and display corresponding facial expressions synchronously to its speech audio being output via an external speaker. This processing pipeline is shown in Fig. 2 (top). The system is currently not able to automatically detect whether it is being talked to, but instead relies on a push-to-talk button. Speech as input is optional and, if preferred by the user or required in very noisy environments, can be replaced by text input.

The conversational capabilities of the system are combined with manually predefined animations of the android head. As shown in the lower part of Fig. 2, such animations are automatically scheduled depending on the state of a conversation. Waiting animations are simple looping routines like random saccades and slight head movements and can be selected manually. Other such animations comprise randomized blinking movements and keep running during a conversation turn. Fig. 3 provides an impression of how the application is presented to a user.

Animations are defined in a custom data structure and stored as JSON files. Each animation defines a set of key frames, where each frame contains values for the 14 actuators of the robot head and a frame number. Values can be undefined and will then be replaced with values from a different active animation, or the previously used values will be resent. Besides key frames, each animation contains additional information further describing the animation. E.g., animations can contain absolute or relative values, where absolute means the actuator should move to the specified value of that key frame, while relative means the actuator's value should be increased or decreased from its current value accordingly. Also, it is possible to define if and how values between consecutive key frames should be interpolated (though interpolation is currently limited to linear). Another option to define is how many times an animation should run after it has been activated; common values are once and looping until explicitly stopped. For animations to be executed multiple times, pauses can be specified as an exact duration or as a range to sample a random duration from. Additionally, a priority can be defined for each animation to ensure that more important animations, like lip movements during speech, are not compromised by any other less important animation possibly looping in the background. Finally, some metadata like a name and a description can be provided to be displayed in a GUI. Besides manually starting and stopping animations, the current GUI implementation also allows manipulating the values of each actuator separately using sliders.
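To illustrate the described structure, a hypothetical blink animation could look as follows. All field names and values are invented for illustration; the actual schema may differ:

```python
# Hypothetical example of an animation definition as stored in a JSON file,
# written here as a Python dict. Field names and values are illustrative.
blink = {
    "name": "blink",
    "description": "Randomized eye blink running in the background",
    "mode": "absolute",        # "absolute" or "relative" actuator values
    "interpolate": "linear",   # interpolation between consecutive key frames
    "repeat": "loop",          # run once, n times, or loop until stopped
    "pause": [2.0, 6.0],       # range (seconds) to sample a random pause from
    "priority": 1,             # lower than lip-sync so speech is not disturbed
    "keyframes": [
        # frame number plus values for the 14 actuators; None = undefined,
        # i.e. filled from another active animation or the last sent value
        {"frame": 0, "values": [0, None, None, 0] + [None] * 10},
        {"frame": 3, "values": [255, None, None, 255] + [None] * 10},
        {"frame": 6, "values": [0, None, None, 0] + [None] * 10},
    ],
}
```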
For the conversation application, the GUI provides the before-mentioned push-to-talk button and text input field, as well as possibilities to interrupt the current utterance or repeat the last one, to select a specific voice from the speech synthesizer, and to start a new conversation by resetting the session of the chat module. Additionally, it displays a conversation's turns so far (see Fig. 2(b)).

The GUI is implemented as the frontend of a web application written in python. Python was chosen as the programming language for multiple reasons: First, its low entry hurdle allows the educational use of the robotic head in lectures and semester projects with practical programming assignments, even in lower bachelor semesters. Second, python is quite common in robotics (e.g., besides C++, ROS [28] also provides python APIs), and python is the most-used programming language for ML applications and research. Thus it is well suited for our application, where both aspects are important.

There are also multiple reasons to design the application as a web application: First, this is again quite simple, because python provides many frameworks to ease the development of web applications, e.g., flask4, django5, sanic6, and FastAPI7. Here, flask was chosen due to prior experience of the authors. Second, the REST APIs provided by the backend can be reused for other applications; e.g., it is planned to make the robot head play chess without providing a GUI. A program handling the chess game can use the REST endpoints to schedule animations as needed. Third, it is accessible via the (internal) network; thus a stationary computer can host it and make it easily accessible without having to establish a cable connection to the head. Finally, it is also easy to integrate services running on remote servers. In the case of the application described here, such services are the ML models, which run on self-hosted as well as external GPU clusters. Their integration into the application via REST API calls makes them easily replaceable.

For the different tasks described in Section II-B and depicted in Fig. 2, the following ML models are currently in use: For **ASR** the current state of the art, _Whisper_, is used. The open-source implementation from HuggingFace8 runs on a self-hosted GPU cluster. For access, a REST API endpoint is implemented using FastAPI. The cluster's capabilities are sufficient to run the large-v2 model, which has the most parameters and, thus, the best accuracy, with reasonable inference times.

Footnote 8: [https://huggingface.co/openai/whisper-large](https://huggingface.co/openai/whisper-large)

To generate a textual answer, _ChatGPT_ is used as the **chat** component. The gpt-3.5-turbo model is integrated using the OpenAI Python Library9. The following prompt is sent in the system role once, at the beginning of a new conversation:

Footnote 9: [https://github.com/openai/openai-python](https://github.com/openai/openai-python)

_"You are a friendly android robot head. You are at a ChatGPT-related event at the Stuttgart Media University (HdM). You were built in Japan in 2022 and Professor Christian Becker-Asano of the HdM is now responsible for you and performs research with you. You represent the android robots of the HdM. There are five android heads including you, and one android with a complete body. You have the ability to do complex facial expressions using air pressure. You have no camera and microphone sensors and cannot perceive the environment. You can talk by using an external speaker. You do not have a name yet, but you are open for suggestions. Keep your answers short by using a maximum of three sentences to respond. Generate plain text output only, no code or other formats. Only respond in English."_

As the second sentence suggests, the conversational application of the android robot head was first presented at a public event about ChatGPT10, which was also the main reason to use ChatGPT over one of its open-source alternatives. The prompt was not engineered [29] but designed in a trial-and-error approach until suitable results were generated.

Footnote 10: [https://ai.hdm-stuttgart.de/news/2023/event-resume-chatgpt-nur-ein-wenig-mathematik/](https://ai.hdm-stuttgart.de/news/2023/event-resume-chatgpt-nur-ein-wenig-mathematik/)
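A minimal sketch of this integration is shown below, using the pre-1.0 interface of the OpenAI Python library; the system prompt is abbreviated, and session handling in the actual application may differ:

```python
# Sketch of the chat integration via the OpenAI Python library (pre-1.0 API).
import openai

openai.api_key = "..."  # in practice set via an environment variable

SYSTEM_PROMPT = (
    "You are a friendly android robot head. "
    "Keep your answers short by using a maximum of three sentences to respond. "
    "Generate plain text output only, no code or other formats."
)

# One conversation session: the system prompt is sent once; the history of
# user/assistant turns is resent with every request.
history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```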
Fig. 2: Overview of the current implementation. Top: basic pipeline from user input to response spoken by the android robot head. Bottom: animation phases according to the current timestep of the pipeline.

Fig. 3: Current setup of the embodied conversational agent application.

For **TTS**, coqui's implementation of _VITS_ is being used. Their tts-server runs on our self-hosted GPU cluster and is called by the application via its REST API. Since the model provided by coqui is trained on the multi-speaker dataset VCTK [30], 109 different English speakers are available at inference time. Two female speakers and one male speaker with subjectively good quality are manually pre-selected and provided to choose from in the application's GUI.

Finally, a _FaceFormer_ model wrapped in a FastAPI web application runs on our self-hosted GPU cluster to automatically **lip-sync** the synthesized speech. Instead of rendering the generated sequence of points in 3D space, we map some manually defined distances between points onto actuator movements of the android robot head, cf. [7] for details.
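The following sketch indicates how such services can be called from the backend. The Whisper route and its request format are hypothetical stand-ins for our FastAPI wrapper, and the tts-server parameters may differ between coqui versions:

```python
# Sketch of REST calls to the self-hosted model services. Host names, ports,
# the /transcribe route, and the request formats are assumptions; coqui's
# tts-server exposes an /api/tts endpoint, but parameters may vary by version.
import requests

def transcribe(wav_bytes: bytes) -> str:
    # hypothetical route of the self-hosted Whisper FastAPI wrapper
    r = requests.post(
        "http://gpu-cluster:8000/transcribe",
        files={"audio": ("input.wav", wav_bytes, "audio/wav")},
    )
    r.raise_for_status()
    return r.json()["text"]

def synthesize(text: str, speaker_id: str = "p225") -> bytes:
    r = requests.get(
        "http://gpu-cluster:5002/api/tts",
        params={"text": text, "speaker_id": speaker_id},
    )
    r.raise_for_status()
    return r.content  # WAV audio
```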
### _Iterative development approach_

To achieve the current state of the implementation, an iterative approach was employed, which is described in this subsection. While frameworks like Scrum [31] define pretty clear guidelines on how to develop software in an agile and iterative way, practitioners often recommend adapting the development process itself to the developer team's needs in an iterative fashion. Some aspects we found helpful during the development so far are to try out different approaches in a trial-and-error fashion but refactor afterwards to keep a clean code base, as well as to iteratively obtain feedback and adjust the most important next goals, which requires creating small but working increments. Basically, the following goals or milestones were reached during the development process so far:

1. Basic library
2. Multiple GUIs with sliders
3. Different animations
4. Separate task-oriented projects, e.g., lip-sync
5. Refactoring and integration
6. Speech synthesis (TTS)
7. Dialog (ChatGPT)
8. Speech recognition (ASR)

First, a basic library was implemented to establish a connection to the robot head and send values, which implies calculating CRC hash values. Using this basic library, different GUIs were developed that first of all allowed controlling single actuators using sliders. Two types of GUIs were implemented in parallel: one web application and one pyQt-based GUI. Next, some basic animations were added to both of the GUIs. The definitions and implementations of the animations differed between the two types of GUIs: one focused on the interpolation between keyframes to easily define more complex animations like yawning, while the other concentrated on scheduling and combining multiple simple animations like blinking and saccades.

Then different projects were implemented in parallel, each aiming at a specific task: gazing at people12, lip-syncing speech [7], mimicking facial expressions, and displaying emotional facial expressions. These projects were built independently based on different GUI types, most of them as student semester projects. With the experience gained from these projects and newly identified requirements, like access via the local network, it was decided to continue the web app and to stop developing the other GUI. This required refactoring the current web app to integrate an animation schema combining the features of both independently developed solutions as well as the lip-syncing capabilities.

Footnote 12: [https://ai.hdm-stuttgart.de/news/2023/gesicthtstackracking-mit-android-kopf/](https://ai.hdm-stuttgart.de/news/2023/gesicthtstackracking-mit-android-kopf/)

With this setup in place, further capabilities were added, mainly for demonstration purposes, by plugging in different ML models. This was done in an order that allowed having a version functioning and assessable on its own after each model added. First, the TTS model was added to enable the robot head to speak arbitrary utterances given as input text. Then a connection to ChatGPT was integrated for demonstration at the aforementioned public event. We used this event to collect feedback and identified the two new features that were asked for the most: speech input and multilingualism. By now, speech input has been added as the final feature of the current implementation using _Whisper_.

## IV Evaluation and Outlook

To evaluate the current setup, three lab members, who do not work with any android robots themselves, were asked to have a conversation with the android robot head. Afterwards, a semi-structured interview was conducted to find out two things: First, is the current implementation perceived as intended? Second, which aspect should be improved next? Since the first question targets the perception, not the implementation details, its goal was to find out whether the four phases of a conversation turn, as shown in the bottom part of Fig. 2, are recognized as such. While _speak_ and _think_ were clear to all of the participants, _listen_ and _wait_ (sometimes called _idle_ by interviewees) were less obvious at first, but also identifiable after a few conversational turns.

In contrast to the feedback from the public presentation, none of the participants requested multilingualism as the next new feature. Instead, all participants see the most potential in improving the existing animations. While the "overacting" during the _think_ phase is rated positively, the random eye and head movements during _wait_ and a sometimes appearing jerking behaviour during _listen_ are described as "hectic" and "nervous", and the fixed head and eye positions during _listen_ and _speak_ were perceived negatively. Different solutions to improve the current animations were suggested: Two of the participants suggested adding an external camera to make the android robot head look at an interlocutor's eyes while speaking, or at some areas with movements happening during the _wait_ phase.
The other participant suggested that more subtle eye movements would already benefit both of these phases and that some random nodding would help during listening. Also, the participant that experienced the jerking bug recommends fixing this, of course.

Both adjusting the animations and supporting multiple languages are desirable new features and manageable to implement with appropriate effort. To improve the gazing behaviour during animations, the already implemented approach based on face detection with an external webcam and some heuristics, like moving the eyes first and turning the whole head a short time later, needs to be refactored and integrated into the current setup. Since Whisper and ChatGPT already support multiple languages, only the TTS model needs to be replaced to enable multilingualism. Unfortunately, the openly available _yourTTS_ model provided by coqui supports only English, Portuguese, and French. Being located in Germany ourselves, we will need to train our own model before adding this feature. With datasets and code to train custom models publicly available, the effort to implement this feature is estimated to be manageable.

Besides the features requested by users, we also plan to investigate running the required ML models on an edge device. Though none of the users asked for faster _thinking_, the shorter inference times might be beneficial; additionally, we would like to become independent of a stable internet connection. Finally, although also not requested by any user, we still plan to improve the lip-sync capabilities of the android robot head, as described in [7]. Apart from fixing the jerking bug and some adjustments to make the current state work on our full-body android robot Andrea13, a prioritisation of the possible next features has not been carried out yet.

Footnote 13: [https://ai.hdm-stuttgart.de/news/2022/mit-andrea-ist-man-ganz-vorne-dabei/](https://ai.hdm-stuttgart.de/news/2022/mit-andrea-ist-man-ganz-vorne-dabei/)

## V Conclusion

The conceptualization and implementation of a very anthropomorphic robot head as an embodied conversational agent were presented. In doing so, it was highlighted how a combination of scripted animations and state-of-the-art ML models can achieve convincing behavior in terms of timing and lip-sync animations. Most of the modules rely on open-source ML models, but unfortunately, the core component, ChatGPT, is closed source. This, together with general problems of privacy and legal risks, makes the current prototype not ready for commercial applications. Furthermore, it still remains a problem that the answers provided by large language models often tend to be "too creative" to rely on in any serious application. Our iterative development approach, however, enables us to test and compare different ML models systematically to evaluate the power of embodied AI for future applications.
2310.14995
Edge limits of truncated circular beta ensembles
We study the scaling limit of the rank-one truncation of various beta ensemble generalizations of classical unitary/orthogonal random matrices: the circular beta ensemble, the real orthogonal beta ensemble, and the circular Jacobi beta ensemble. We derive the scaling limit of the normalized characteristic polynomials and the point process limit of the eigenvalues near the point 1. We also treat multiplicative rank one perturbations of our models. Our approach relies on a representation of truncated beta ensembles given by Killip and Kozhan, together with the random operator framework developed by Valkó and Virág to study scaling limits of beta ensembles.
Yun Li, Benedek Valkó
2023-10-23T14:47:40Z
http://arxiv.org/abs/2310.14995v1
# Edge limits of truncated circular beta ensembles

###### Abstract

We study the scaling limit of the rank-one truncation of various beta ensemble generalizations of classical unitary/orthogonal random matrices: the circular beta ensemble, the real orthogonal beta ensemble, and the circular Jacobi beta ensemble. We derive the scaling limit of the normalized characteristic polynomials and the point process limit of the eigenvalues near the point 1. We also treat multiplicative rank one perturbations of our models. Our approach relies on a representation of truncated beta ensembles given by Killip-Kozhan [19], together with the random operator framework developed in [36, 37, 38] to study scaling limits of beta ensembles.

## 1 Introduction

For the classical unitary and orthogonal random matrix ensembles the point process scaling limit of the eigenvalues is well understood. The eigenvalues are on the unit circle, and if one scales the eigenangles appropriately, one obtains a point process limit on the real line. More recently, the scaling limit of the (normalized) characteristic polynomials of these classical ensembles has been derived and characterized as well; these limits lead to random entire functions whose zero set is given by the point process limit of the eigenvalues.

If we remove the first row and first column of a unitary (or orthogonal) matrix then the resulting matrix has eigenvalues within the unit disk. It is natural to ask what one can say about the limits of the eigenvalues and the characteristic polynomial if one studies the truncated random matrices, and what connections can be shown between the limit objects of the original and the truncated models. Our main goal is to study these questions for beta-generalizations of classical random orthogonal and unitary ensembles. We will also consider similar questions for multiplicative rank one perturbations of these models. Non-normal perturbations of classical ensembles have a rich history; see, e.g., the surveys [14] and [9] and the references within.

### Haar unitary matrices and their truncations

To start with a concrete example, we first consider the case of Haar unitary matrices. Let \(M_{n}\) be an \(n\times n\) uniformly chosen unitary matrix. With probability one \(M_{n}\) has \(n\) distinct eigenvalues \(e^{i\theta_{k}},1\leq k\leq n\), all on the unit circle. The joint eigenvalue density is given by \[\frac{1}{Z_{n}}\prod_{1\leq j<k\leq n}|e^{i\theta_{j}}-e^{i\theta_{k}}|^{2}, \qquad\theta_{j}\in[-\pi,\pi), \tag{1}\] where \(Z_{n}\) is an explicit normalizing constant (see e.g. [8]). The distribution given by (1) is called the size \(n\) _circular unitary ensemble_. Because of the appearance of the squared Vandermonde determinant in the probability density, this ensemble is _determinantal_ ([2, 16]): all finite-dimensional marginal densities can be expressed via determinants built from a fixed kernel function. (We will provide more detail on the results discussed within this section in the Appendix.) The point process scaling limits of finite determinantal ensembles can be derived by studying the corresponding scaling limits of the determinantal kernel. It is a classical result due to Gaudin, Mehta, Dyson [2, 26] that if we scale the eigenangles of \(M_{n}\) by \(n\) then we get a translation invariant determinantal point process in the limit. We call this point process the \(\mathrm{Sine}_{2}\) process.
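For orientation, we recall the standard determinantal form of this statement in one common normalization (constants differ between references, and this display is not quoted from the present paper): the eigenangle process of \(M_{n}\) is determinantal with kernel \[K_{n}(\theta,\varphi)=\frac{1}{2\pi}\,\frac{\sin\left(n(\theta-\varphi)/2\right)}{\sin\left((\theta-\varphi)/2\right)},\] and after the change of variables \(x=n\theta\) one obtains \[\frac{1}{n}\,K_{n}\Big{(}\frac{x}{n},\frac{y}{n}\Big{)}\longrightarrow\frac{\sin\left((x-y)/2\right)}{\pi(x-y)},\qquad n\to\infty,\] a determinantal kernel with constant intensity \(1/(2\pi)\).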
In a more recent result, Chhaibi, Najnudel and Nikeghbali [6] studied the scaling limit of the (normalized) characteristic polynomial \[p_{n}(z):=\frac{\det(I_{n}-zM_{n}^{-1})}{\det(I_{n}-M_{n}^{-1})}=\prod_{j=1}^{n}\frac{1-ze^{-i\theta_{j}}}{1-e^{-i\theta_{j}}}\] of the circular unitary ensemble. They showed that under the scaling of the Gaudin-Mehta-Dyson theorem one obtains a random entire function \(\boldsymbol{\zeta}\) (named the _stochastic zeta function_) with zero set given by the \(\mathrm{Sine}_{2}\) process.

For a square matrix \(M\) we denote by \(M^{\ulcorner}\) the matrix obtained by removing the first row and column from \(M\). Note that we can write \(M^{\ulcorner}\) as \(\Pi^{\dagger}M\Pi\), where \(\Pi\) is the appropriate projection matrix and \({}^{\dagger}\) denotes the transpose. Now consider the truncated version of a uniformly chosen \((n+1)\times(n+1)\) unitary matrix, i.e. \(M_{n+1}^{\ulcorner}\). With probability one this matrix has eigenvalues in the open unit disk \(\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\). The obtained random matrix has been studied in the physics literature because of its connection to chaotic scattering problems (see [12, 18] for further discussion and references). In [41] Zyczkowski and Sommers proved that the joint eigenvalue density of \(M_{n+1}^{\ulcorner}\) (with respect to the Lebesgue measure in the unit disk) is given by \[\frac{1}{\pi^{n}}\prod_{1\leq j<k\leq n}|z_{j}-z_{k}|^{2},\qquad z_{j}\in\mathbb{D}. \tag{2}\] We call this distribution the _truncated circular unitary ensemble_. (Note that [41] provides a description of the eigenvalue distribution for general rank-\(k\) truncations as well.) The squared Vandermonde term in (2) indicates that this is also a determinantal point process. By studying the determinantal kernel one can show that the eigenvalues of \(M_{n+1}^{\ulcorner}\), _without any additional scaling_, lead to a determinantal point process limit in \(\mathbb{D}\). We may call the resulting process the _bulk scaling limit_ of the truncated circular unitary ensemble.

It is natural to ask if this point process can be connected to the zeroes of a 'nice' random analytic function, since it is the scaling limit of the zeros of the characteristic polynomial of \(M_{n+1}^{\ulcorner}\). In Peres-Virag [28] it was shown that this is indeed the case: the bulk scaling limit of the eigenvalues of \(M_{n+1}^{\ulcorner}\) has the same distribution as the zero set of the so-called Gaussian analytic function.

One can treat the eigenvalues of \(M_{n+1}^{\ulcorner}\) as a perturbation of the original eigenvalues of \(M_{n+1}\). Because of this, it is natural to study the behavior of the eigenvalues of the truncated matrix under the scaling \[z\mapsto-ni\log z, \tag{3}\] since this corresponds to the scaling \(e^{i\theta}\mapsto n\theta\) that takes the original (unit length) eigenvalues to the \(\mathrm{Sine}_{2}\) process.1 See Figure 1 for an illustration. It was shown in [1] that under this scaling the kernel of the truncated circular unitary ensemble (and hence the ensemble itself) indeed has a limit. The limiting point process is determinantal, and it is supported in the open upper half plane \(\mathbb{H}=\{z\in\mathbb{C}:\Im z>0\}\). We call this the _(hard) edge scaling limit_ of the truncated model, since we zoom in near \(z=1\).

Footnote 1: Throughout the paper we are considering the branch of logarithm that is defined on \(\mathbb{C}\setminus(-\infty,0]\) and satisfies \(\log(1)=0\).
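The following short sketch reproduces the experiment behind Figure 1 numerically. It is our own illustration; the matrix size and the choice of libraries are arbitrary:

```python
# Sample a Haar unitary matrix, truncate it, and apply the edge scaling (3).
import numpy as np
from scipy.stats import unitary_group

n = 100
M = unitary_group.rvs(n + 1)       # Haar-distributed (n+1) x (n+1) unitary matrix
z = np.linalg.eigvals(M[1:, 1:])   # eigenvalues of the truncation, inside the unit disk
w = -n * 1j * np.log(z)            # edge-scaled points, lying in the upper half plane
```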
The point process obtained in [1] as the edge limit of the truncated circular unitary ensemble appeared before in [11] and [13] as the point process limit of the rank-one additive anti-Hermitian perturbation of the Gaussian unitary ensemble under the appropriate scaling.

It is natural to ask if one can connect the edge limit of the truncated circular ensemble to the zero set of a random analytic function, and whether one can characterize this random function in a natural way. We answer this question in the affirmative in our main result, Theorem 2 below. We provide a scaling limit for the normalized characteristic polynomial of the truncated model under the edge scaling, and describe the limiting random entire function. In fact, our goal is to study this and related questions in a more general setting: for the beta-generalizations of the circular unitary and other random unitary and orthogonal ensembles.

### CMV matrices, beta ensembles, and their truncations

The size \(n\) circular beta ensemble with \(\beta>0\) is the distribution of \(n\) points \(\{e^{i\theta_{j}},1\leq j\leq n\}\) on the unit circle with joint probability density given by \[\frac{1}{Z_{n,\beta}}\prod_{j<k\leq n}|e^{i\theta_{j}}-e^{i\theta_{k}}|^{\beta},\qquad\theta_{j}\in[-\pi,\pi). \tag{4}\] Here \(Z_{n,\beta}\) is an explicit normalizing constant, see [8]. When \(\beta=2\) we get the circular unitary ensemble. The cases \(\beta=1\) and \(4\) correspond to symmetric/self-dual random unitary matrices, but for general \(\beta>0\) there is no known invariant random matrix ensemble with the appropriate joint eigenvalue distribution. Note however that (4) has a natural interpretation as the Gibbs measure corresponding to a log-gas of \(n\) points restricted to the unit circle and interacting via a logarithmic potential.

In [20] Killip and Nenciu (motivated by the results of [7]) constructed a family of sparse random unitary matrix models \(\{{\sf Circ}_{n,\beta},n\geq 1\}\) with joint eigenvalue distribution given by (4). Their construction is based on the theory of orthogonal polynomials on the unit circle. We provide here a quick overview of their approach; the precise statements will be reviewed in Section 3.

Figure 1: The picture on the left shows the eigenvalues of a truncated uniformly chosen \(100\times 100\) unitary matrix. The picture on the right shows the same eigenvalues under the edge scaling (3).

Suppose that \(\mu\) is a discrete probability measure on the unit circle \(\partial{\mathbb{D}}\) with a finite support of \(n\) points. The probability measure \(\mu\) can be encoded with its system of monic orthogonal polynomials. These polynomials satisfy the so-called Szego recursion, which can be parameterized with a finite collection of complex numbers \(\alpha_{0},\ldots,\alpha_{n-1}\), called the Verblunsky coefficients. In [5] Cantero, Moral, and Velasquez provided a construction for a 'canonical' sparse (five-diagonal) \(n\times n\) unitary matrix (called the _CMV matrix_) \[\mathcal{C}=\mathcal{C}(\alpha_{0},\ldots,\alpha_{n-1})\] in terms of the Verblunsky coefficients, so that the spectral measure of \(\mathcal{C}\) with respect to the unit vector \(\mathbf{e}_{1}=(1,0,\ldots,0)^{\dagger}\) is exactly \(\mu\). Moreover, if the probability measure \(\mu\) is the spectral measure of an \(n\times n\) unitary matrix \(U\) with respect to \(\mathbf{e}_{1}\), then the CMV matrix \(\mathcal{C}\) corresponding to \(\mu\) is unitary equivalent to \(U\).
Note that the CMV matrix is the analogue of the tridiagonal (Jacobi) matrix constructed from the coefficients of the three-term recursion of the orthogonal polynomials of a finitely supported probability measure on \(\mathbb{R}\).

Let \(M_{n}\) be an \(n\times n\) Haar unitary matrix, and consider its spectral measure \(\mu_{n}\) with respect to \(\mathbf{e}_{1}\). This is a (random) probability measure with support given by the circular unitary ensemble (1). Using unitary invariance one can show that the joint distribution of the weights of \(\mu_{n}\) is given by a particular Dirichlet distribution, and that the weights are independent of the support of \(\mu_{n}\). Moreover, the Verblunsky coefficients of \(\mu_{n}\) are independent random variables, and their distributions can be computed explicitly. This motivated Killip and Nenciu in [20] to study the random probability measure \(\mu_{n,\beta}^{\text{KN}}\) with support given by the circular beta ensemble (4) and weights chosen independently from a particular (\(\beta\)-dependent) Dirichlet distribution. [20] showed that the Verblunsky coefficients of \(\mu_{n,\beta}^{\text{KN}}\) are still independent, with explicitly given distributions. The corresponding CMV matrix \(\mathsf{Circ}_{n,\beta}:=\mathcal{C}\) provides a natural sparse random unitary matrix with spectrum given by the circular beta ensemble (4). For \(\beta=2\) this matrix is unitary equivalent to the Haar unitary matrix \(M_{n}\), and their spectral measures with respect to \(\mathbf{e}_{1}\) have the same distribution.

In [19] Killip and Kozhan studied how removing the first row and column changes the spectrum of classical random unitary and orthogonal matrices. An important observation of [19] (which is crucial for our paper as well) is the following: if \(U\) is an \(n\times n\) unitary matrix then the truncated matrix \(U^{\ulcorner}\) is unitary equivalent to the truncated version of the CMV matrix \(\mathcal{C}\) corresponding to \(U\), which in turn is unitary equivalent to an \((n-1)\times(n-1)\) CMV matrix built from a simple transformation of the Verblunsky coefficients of \(U\). This means that if we know the Verblunsky coefficients of \(U\) then we can construct a sparse matrix whose spectrum is the same as that of \(U^{\ulcorner}\). This observation allowed [19] to provide a sparse matrix model with spectrum distributed as (2). Their approach also allowed them to study the matrix models \(\mathsf{Circ}_{n,\beta}\) of [20] with the first row and column removed. They proved that the joint eigenvalue density of the truncated matrix \(\mathsf{Circ}_{n+1,\beta}^{\ulcorner}\) is given by \[\frac{\beta^{n}}{(2\pi)^{n}}\prod_{1\leq j,k\leq n}(1-z_{j}\bar{z}_{k})^{\frac{\beta}{2}-1}\prod_{j<k\leq n}|z_{j}-z_{k}|^{2},\qquad z_{j}\in\mathbb{D}. \tag{5}\] We call the resulting distribution the size \(n\) _truncated circular beta ensemble_. Note that for \(\beta=2\) we recover (2). (We remark that [19] also provided a log-gas interpretation for (5).) Our goal is to study this ensemble (together with some other related models) under the edge scaling (3).

The approach of Killip and Nenciu [20] can be extended to provide random matrix representations of beta-generalizations of other random unitary and orthogonal ensembles where the joint distribution of the Verblunsky coefficients can be described explicitly. The results of Killip and Kozhan [19] then provide a natural random matrix representation of the _truncated_ version of these beta ensembles.
Our main results provide descriptions of the edge scaling limits of these truncated ensembles.

### Scaling limits of circular beta ensembles and their truncations

Using the Killip-Nenciu representation, Killip and Stoiciu [21] showed that under the scaling (3) the circular beta ensemble has a point process limit. They characterized the limiting point process via its counting function using a system of stochastic differential equations. This limit process was later shown to be the same as the \(\mathrm{Sine}_{\beta}\) process, the bulk scaling limit of the Gaussian beta ensemble ([27, 36]). Note that \(\mathrm{Sine}_{\beta}\) is not determinantal for general \(\beta\); in fact, there is no known description for its joint intensity functions in the general case.

In a series of papers [36, 37, 38] Valkó and Virág developed a framework to study the scaling limits of beta ensembles using Dirac-type differential operators (see Section 2 for a more detailed discussion). [36] showed that the spectra of unitary CMV matrices and some of their point process limits (including the \(\mathrm{Sine}_{\beta}\) process) can be represented as the eigenvalues of random Dirac-type differential operators. A Dirac-type differential operator can be parametrized by a path in the upper half plane \(\mathbb{H}:=\{z:\Im z>0\}\) together with two boundary points in \(\partial\mathbb{H}=\mathbb{R}\cup\{\infty\}\). In the case of a unitary CMV matrix these parameters can be built from the Verblunsky coefficients. [37] showed how this representation can be used to prove operator level convergence of the circular beta ensemble to the \(\mathrm{Sine}_{\beta}\) process. The path parameter of the random differential operator corresponding to \(\mathsf{Circ}_{n,\beta}\) is a random walk, which under the appropriate scaling converges to a time-changed hyperbolic Brownian motion. This process is the path parameter of the random Dirac operator corresponding to the \(\mathrm{Sine}_{\beta}\) process.

[38] developed a framework to study scaling limits of normalized characteristic polynomials of beta ensembles. In particular, [38] proved that the normalized and scaled characteristic polynomials of \(\mathsf{Circ}_{n,\beta}\) converge to a random entire function \(\mathbf{\zeta}_{\beta}\) with zero set given by \(\mathrm{Sine}_{\beta}\). (For \(\beta=2\) this random entire function is the stochastic zeta function introduced in [6].) The random function \(\mathbf{\zeta}_{\beta}\) is characterized in various equivalent ways, in particular as the solution of the following random shooting problem.

**Theorem 1** ([38]).: _Let \(b_{1},b_{2}\) be independent two-sided standard Brownian motions, and \(q\) an independent standard Cauchy random variable. Consider the unique strong solution \(\mathcal{H}_{\beta}:(-\infty,0]\times\mathbb{C}\to\mathbb{C}^{2}\) of the stochastic differential equation_ \[d\mathcal{H}_{\beta}=\begin{pmatrix}0&-db_{1}\\ 0&db_{2}\end{pmatrix}\mathcal{H}_{\beta}-z\frac{\beta}{8}e^{\frac{\beta}{4}u} \begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\mathcal{H}_{\beta}du,\quad u\leq 0, \tag{6}\] _subject to the initial condition \(\lim_{u\to-\infty}\sup_{|z|<1}|\mathcal{H}_{\beta}(u,z)-\binom{1}{0}|=0\). Then \(\mathbf{\zeta}_{\beta}\) has the same distribution as the random function \(\mathcal{H}_{\beta}(0,z)^{\dagger}\binom{1}{-q}\)._
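For readers who want to experiment, a naive Euler-Maruyama discretization of (6) can be used to approximate \(\mathcal{H}_{\beta}(0,z)\). In the sketch below the time interval is truncated at \(-T\) and the initial condition at \(-\infty\) is replaced by \(\binom{1}{0}\) at \(-T\); these and the step size are ad hoc choices of ours, not taken from the paper:

```python
# Naive Euler-Maruyama discretization of the shooting problem (6).
import numpy as np

def sample_H(z, beta=2.0, T=25.0, n_steps=100_000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    du = T / n_steps
    H = np.array([1.0 + 0j, 0.0 + 0j])   # ad hoc substitute for the condition at -infinity
    u = -T
    for _ in range(n_steps):
        db1, db2 = rng.normal(scale=np.sqrt(du), size=2)
        # noise term: the matrix [[0, -db1], [0, db2]] applied to H
        noise = np.array([-db1 * H[1], db2 * H[1]])
        # drift term: -z * (beta/8) * exp(beta*u/4) * J H du, with J H = (-H2, H1)
        drift = -z * (beta / 8) * np.exp(beta * u / 4) * np.array([-H[1], H[0]]) * du
        H = H + noise + drift
        u += du
    return H  # approximates H_beta(0, z)

# To mimic the random analytic function z -> H_beta(0, z), the same Brownian
# increments (e.g., an identically seeded rng) must be reused for every z.
```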
Our main result gives the edge scaling limit of the truncated circular beta ensemble (5) together with the scaling limit of its normalized characteristic polynomial. This result also provides a connection to the limit objects of the original circular beta ensemble.

**Theorem 2**.: _Under the edge-scaling (3) the truncated circular beta ensemble converges to a point process \(\mathcal{X}_{\beta}\), which has the same distribution as the zero set of the random entire function \(\mathcal{E}_{\beta}=\mathcal{H}_{\beta}(0,\cdot)^{\dagger}\binom{1}{-i}\) defined via (6). Moreover, \(\mathcal{E}_{\beta}\) is the scaling limit of the normalized characteristic polynomials of the truncated circular beta ensemble._

Theorem 2 is proved in Section 6.1. In fact, we will show that there is a coupling of the finite ensembles and the limiting object so that the stated limits hold with probability one, with effective (random) error bounds (see Proposition 48). The proof of the theorem uses the random operator framework to analyze the scaling limit of the normalized characteristic polynomial of truncated CMV matrices.

Theorem 2 provides a connection between the scaling limits of the full and the truncated circular beta ensemble that is new even in the classical \(\beta=2\) case. It shows that the scaling limit of the characteristic polynomials of the circular beta ensemble can be obtained from the corresponding limit of the truncated model and an independent Cauchy random variable. Both \(\mathbf{\zeta}_{\beta}\) and \(\mathcal{E}_{\beta}\) are random entire functions, and hence they are determined by their restriction to \(\mathbb{R}\). By Theorems 1 and 2 we have the following equality in distribution: \[\{\mathbf{\zeta}_{\beta}(s):s\in\mathbb{R}\}\stackrel{d}{=}\{\Re\mathcal{E}_{\beta}(s)+q\,\Im\mathcal{E}_{\beta}(s):s\in\mathbb{R}\},\] where \(q\) is a Cauchy random variable independent of \(\mathcal{E}_{\beta}\).

If \(M\) is a square matrix then the spectrum of \(M^{\ulcorner}\) can also be studied by considering the spectrum of the rank one multiplicative perturbation \[M\cdot\mathrm{diag}(0,1,1,\ldots,1)\] instead. (Of course, this also adds an additional zero eigenvalue.) This motivates the study of general rank one multiplicative perturbations. For \(r\in\mathbb{R}\) define \[M^{[r]}:=M\cdot\mathrm{diag}(r,1,1,\ldots,1). \tag{7}\] If \(M\) is a Haar unitary matrix then the distribution of \(M^{[r]}\) has been studied in [10]. (See also [11] and [13] for related results on rank-one additive anti-Hermitian perturbations for the Gaussian unitary ensemble.) Note that because of the various symmetries of the model, we may assume \(r\in[0,1]\).

Following the definition of the truncated circular beta ensemble, it is natural to define the appropriate rank-one multiplicative perturbation of the circular beta ensemble as the spectrum of \(\mathsf{Circ}_{n,\beta}^{[r]}\). In [19] Killip and Kozhan derived the joint eigenvalue distribution of \(\mathsf{Circ}_{n,\beta}^{[r]}\). Moreover, they showed that if \(\mathcal{C}\) is a unitary CMV matrix then the spectrum of \(\mathcal{C}^{[r]}\) is the same as that of a certain explicitly determined CMV matrix. Using their results we are able to extend the results of Theorem 2.

**Theorem 3**.: _Fix \(r\in[0,1]\). Consider the random function \(\mathcal{H}_{\beta}\) defined via (6), and let \(q\) be a standard Cauchy random variable independent of \(b_{1},b_{2}\) appearing in (6)._
_Under the edge-scaling (3) the eigenvalues of \(\mathsf{Circ}_{n,\beta}^{[r]}\) converge to a point process \(\mathcal{X}_{r,\beta}\), which has the same distribution as the zero set of the random entire function \(\mathcal{E}_{r,\beta}=\mathcal{H}_{\beta}(0,\cdot)^{\dagger}\binom{1}{-c_{r}}\) with_ \[c_{r}=\frac{q+i\frac{1-r}{1+r}}{1-iq\frac{1-r}{1+r}}. \tag{8}\] _Moreover, \(\mathcal{E}_{r,\beta}\) is the limit of the normalized characteristic polynomials of \(\mathsf{Circ}_{n,\beta}^{[r]}\) under the same scaling._

Note that \(c_{r}=q\) for \(r=1\) and \(c_{r}=i\) for \(r=0\), so this result gives an interpolation between the scaling limits of the unperturbed and the truncated circular beta ensemble.

Our approach extends to other matrix models as well. We provide versions of Theorems 2 and 3 for the _real orthogonal beta ensemble_ and the _circular Jacobi beta ensemble_. The real orthogonal beta ensemble was introduced in [20] (see also [19]) as a generalization of the joint eigenvalue distributions of a certain class of the classical compact random matrix models. (See Section 4.1 for more detail.) The operator level limit of the real orthogonal beta ensemble in the hard-edge limit was derived in [24]. The real orthogonal beta ensemble can be naturally transformed into another classical model, the (real) Jacobi beta ensemble, whose edge scaling limits were studied in [15]. The truncated version of the real orthogonal beta ensemble was introduced in [19], where the authors constructed a sparse matrix model and derived the joint eigenvalue distribution. In Theorem 52 and Corollary 53 of Section 6.2, we will establish the scaling limit of the truncated (and perturbed) real orthogonal beta ensemble, together with the scaling limit of its normalized characteristic polynomial.

The circular Jacobi beta ensemble is a one-parameter extension of the circular beta ensemble. For a complex parameter \(\delta\) with \(\Re\delta>-1/2\) it is given by the joint density function \[\frac{1}{Z^{\rm CJ}_{n,\beta,\delta}}\prod_{j<k\leq n}\left|e^{i\theta_{j}}-e^{i\theta_{k}}\right|^{\beta}\prod_{k=1}^{n}(1-e^{-i\theta_{k}})^{\delta}(1-e^{i\theta_{k}})^{\bar{\delta}} \tag{9}\] with respect to the uniform measure on the unit circle. For \(\delta=0\) this is just the circular beta ensemble. When \(\delta=\beta\frac{k}{2}\) with a positive integer \(k\), this model can be viewed as the circular beta ensemble conditioned to have \(k\) particles at \(e^{i\theta}=1\). (See Section 4.1 for additional details.)

In [4] the authors constructed a family of unitary matrix models whose eigenvalues are distributed according to (9). Following the Killip-Nenciu approach, they studied a random probability measure where the support is given by the circular Jacobi beta ensemble, and the weights are given by an independently chosen beta-dependent Dirichlet distribution. [4] showed that although the Verblunsky coefficients for this measure are usually not independent, modified versions of these coefficients are in fact independent, and their distributions can be described explicitly. [24] used this representation to derive the point process and operator level scaling limit of the circular Jacobi beta ensemble. We build on these results together with the ideas of [19] to define the truncated (and the perturbed) version of the circular Jacobi beta ensemble (see Section 7.1).
We show that the joint density of the truncated circular Jacobi beta ensemble is given by a constant multiple of \[\prod_{j,k=1}^{n}(1-z_{j}\bar{z}_{k})^{\frac{\beta}{2}-1}\prod_{j<k}|z_{k}-z_{j}|^{2}\prod_{j=1}^{n}\left((1-z_{j})^{\bar{\delta}}(1-\bar{z}_{j})^{\delta}\right),\] generalizing (5). We then derive the point process scaling limit of these new models, together with the scaling limit of their normalized characteristic polynomial. See Theorem 66 and Corollary 67 of Section 7.2 for the precise statements.

### Outline of the paper

In Section 2 we review the required background for the considered Dirac-type differential operators. In Section 3 we give an overview of how finitely supported probability measures on the unit circle can be represented with CMV matrices and Dirac-type operators. Section 4 describes the considered finite beta ensembles and their operator level limits, and summarizes the known results on scaling limits of characteristic polynomials of these models. In Section 5 we provide general results regarding the convergence of the eigenvalues of truncated and perturbed CMV matrices. Section 6 provides the proofs for Theorems 2 and 3 on the scaling limits of the truncated and the perturbed circular beta ensemble, and proves the corresponding results for the real orthogonal beta ensemble. Section 7 proves our results on the truncated and the perturbed circular Jacobi beta ensemble. Section 8 is an appendix that contains a more detailed discussion of the determinantal point processes mentioned in the Introduction, together with a few open problems.

**Acknowledgments.** We thank Bálint Virág for helpful comments and references. B.V. was partially supported by the University of Wisconsin - Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation and by the National Science Foundation award DMS-2246435. Y.L. is partially supported by the Shuimu Tsinghua Scholar Program and the China International Postdoctoral Exchange Fellowship Program No. YJ20220279.

## 2 Dirac-type differential operators

This section provides a brief overview of Dirac-type operators. A more detailed discussion can be found in [36], [38], and [40].

### Basics of Dirac operators

Let \(\mathcal{I}\) be \([0,1)\) or \((0,1]\). Suppose \(x+iy:\mathcal{I}\mapsto\mathbb{H}=\{z\in\mathbb{C}:\Im z>0\}\) is a locally bounded measurable function. We define \[R=\frac{X^{\dagger}X}{2\det X},\qquad X=\begin{pmatrix}1&-x\\ 0&y\end{pmatrix} \tag{10}\] to be the positive definite matrix-valued function that encodes \(x+iy\). We consider differential operators of the form \[\tau:f\to R^{-1}(t)Jf^{\prime},\qquad f:\mathcal{I}\mapsto\mathbb{R}^{2}, \qquad J=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}. \tag{11}\] We call \(\tau\) a _Dirac-type operator_, \(x+iy\) the _generating path_ of \(\tau\), and \(R(t)\) the _weight function_ of \(\tau\). The boundary conditions for \(\tau\) at \(t=0,1\) are given by two non-zero, non-parallel vectors \(\mathfrak{u}_{0},\mathfrak{u}_{1}\in\mathbb{R}^{2}\). We assume that these vectors are normalized and satisfy certain integrability conditions with respect to the weight function.
**Assumption 4**.: _We assume that_ \[\mathfrak{u}_{0}^{\dagger}J\mathfrak{u}_{1}=1, \tag{12}\] \[\int_{0}^{1}\int_{0}^{t}\mathfrak{u}_{0}^{\dagger}R(s)\mathfrak{u}_{0}\, \mathfrak{u}_{1}^{\dagger}R(t)\mathfrak{u}_{1}dsdt<\infty, \tag{13}\] _and_ \[\int_{0}^{1}\|R(s)\mathfrak{u}_{1}\|ds<\infty\text{ if }\mathcal{I}=[0,1) \text{, \ \ and \ \ }\int_{0}^{1}\|\mathfrak{u}_{0}^{\dagger}R(s)\|ds<\infty\text{ if }\mathcal{I}=(0,1].\] Let \(L^{2}_{R}\) denote the \(L^{2}\) space of functions \(f:\mathcal{I}\mapsto\mathbb{R}^{2}\) with the \(L^{2}\)-norm \[\|f\|_{R}^{2}:=\int_{\mathcal{I}}f(s)^{\dagger}R(s)f(s)ds.\] Under Assumption 4, the operator \(\tau\) defined according to (10) and (11) is self-adjoint on the following domain: \[\text{dom}(\tau)=\{v\in L^{2}_{R}\cap\text{AC}:\tau v\in L^{2}_{R},\lim_{s \to 0}v(s)^{\dagger}J\mathfrak{u}_{0}=0,\lim_{s\to 1}v(s)^{\dagger}J \mathfrak{u}_{1}=0\}. \tag{14}\] Here \(\text{AC}(\mathcal{I})\) is the set of absolutely continuous real functions on \(\mathcal{I}\). Throughout the paper, we use the notations \(\texttt{Dir}(R,\mathfrak{u}_{0},\mathfrak{u}_{1})\) or \(\texttt{Dir}(x+iy,\mathfrak{u}_{0},\mathfrak{u}_{1})\) interchangeably for the operator \(\tau\) defined via (11) and (10) on the domain (14). We can identify a nonzero vector in \(\mathbb{R}^{2}\) with a boundary point in \(\partial\mathbb{H}=\mathbb{R}\cup\{\infty\}\) using the projection operator \[\mathcal{P}\binom{x_{1}}{x_{2}}=\begin{cases}x_{1}/x_{2}&\text{if }x_{2}\neq 0,\\ \infty&\text{if }x_{2}=0.\end{cases} \tag{15}\] Using this identification the boundary conditions can be parametrized by two points in \(\partial\mathbb{H}\). Under Assumption 4, the operator \(\tau\) is invertible, and \(\tau^{-1}\) is a Hilbert-Schmidt integral operator. **Proposition 5**.: _Suppose that \(\tau=\texttt{Dir}(R,\mathfrak{u}_{0},\mathfrak{u}_{1})\) satisfies Assumption 4. Then_ \[\|\tau^{-1}\|_{\rm HS}^{2}=2\int_{0}^{1}\int_{0}^{t}\mathfrak{u}_{0}^{\dagger} R(s)\mathfrak{u}_{0}\,\mathfrak{u}_{1}^{\dagger}R(t)\mathfrak{u}_{1}dsdt<\infty,\] _and \(\tau^{-1}\) is a Hilbert-Schmidt integral operator on \(L^{2}_{R}\) given by_ \[\tau^{-1}f(s)=\int_{\mathcal{I}}K_{\tau^{-1}}(s,t)f(t)dt,\quad K_{\tau^{-1}}(s,t) =\big{(}\mathfrak{u}_{0}\mathfrak{u}_{1}^{\dagger}1_{s<t}+\mathfrak{u}_{1} \mathfrak{u}_{0}^{\dagger}1_{s\geq t}\big{)}R(t). \tag{16}\] _Consider the conjugated operator \(X\tau X^{-1}\), where \(X\) is defined as in (15). We denote the inverse of this operator by \(\mathtt{r}\,\tau:=\left(X\tau X^{-1}\right)^{-1}\), this is an integral operator on \(L^{2}(\mathcal{I})\) with kernel_ \[K_{\mathtt{r}\,\tau}(s,t)=\tfrac{1}{2}\big{(}\mathfrak{a}_{0}(s)\mathfrak{a}_ {1}(t)^{\dagger}\mathbf{1}_{s<t}+\mathfrak{a}_{1}(s)\mathfrak{a}_{0}(t)^{ \dagger}\mathbf{1}_{s\geq t}\big{)},\qquad\mathfrak{a}_{j}(s)=\frac{X(s) \mathfrak{u}_{j}}{\sqrt{\det X}},\quad j=0,1. \tag{17}\] _The conjugated operator \(X\tau X^{-1}\) has the same spectrum as the operator \(\tau\), and \(\|\mathtt{r}\,\tau\|_{\mathrm{HS}}=\|\tau^{-1}\|_{\mathrm{HS}}\)._ Proposition 5 and the fact that \(\tau\) is self-adjoint implies that \(\tau\) has a discrete spectrum, with countably many eigenvalues that are all nonzero real numbers, and can only accumulate near \(\pm\infty\). We label the eigenvalues of \(\tau\) in increasing order by \(\lambda_{k},k\in\mathbb{Z}\) so that \(\lambda_{-1}<0<\lambda_{0}\). 
Let us remark that if \(\tau=\mathtt{Dir}(R,\mathfrak{u}_{0},\mathfrak{u}_{1})\) satisfies Assumptions 4, then for any \(\sigma\in(0,1)\), the operator \(\tau\) restricted in \(\mathcal{I}\cap[0,\sigma]\) is well-defined with boundary conditions \(\mathfrak{u}_{0},\mathfrak{u}_{1}\) at endpoints \(t=0,\sigma\) and the restricted weight function \(R|_{t\in\mathcal{I}\cap[0,\sigma]}\). We denote the restricted operator and its resolvent as \(\tau_{\sigma},\mathtt{r}\,\tau_{\sigma}\) respectively. ### Canonical systems and the secular functions In order to study scaling limits of characteristic polynomials, [38] introduced the secular function and structure function of a Dirac-type operator. We briefly review the required definitions and results. Suppose that \(\mathcal{I}=(0,1]\) and consider the eigenvalue equation of a Dirac operator \(\tau H=zH\) as a canonical system, i.e., \[J\frac{d}{dt}H(t,z)=zR(t)H(t,z).\] [38] showed that this system has a unique solution under our assumptions if we set the initial condition to be \(\mathfrak{u}_{0}\). **Proposition 6**.: _Suppose that \(\mathcal{I}=(0,1]\) and \(\tau=\mathtt{Dir}(R,\mathfrak{u}_{0},\mathfrak{u}_{1})\) satisfies Assumptions 4. Then there exists a unique vector valued function \(H:\mathcal{I}\times\mathbb{C}\mapsto\mathbb{C}^{2}\) so that for every \(z\in\mathbb{C}\), the function \(H(\cdot,z)\) solves the following ordinary differential equation_ \[J\frac{d}{dt}H(t,z)=zR(t)H(t,z),\qquad t\in\mathcal{I},\qquad\lim_{t\to 0}H(t,z )=\mathfrak{u}_{0}. \tag{18}\] _Write \(H=\binom{A}{B}\). For any \(t\in\mathcal{I}\), the function \(H(t,z)\) satisfies \(\|H(t,z)\|\geq 0\), and its components \(A(t,\cdot),B(t,\cdot)\) are analytic functions such that \(A(t,x),B(t,x)\in\mathbb{R}\) for \(x\in\mathbb{R}\)._ **Definition 7**.: Under the assumptions of Proposition 6, we define the secular function of \(\tau\) as \[\zeta_{\tau}(\cdot)=H(1,\cdot)^{\dagger}J\mathfrak{u}_{1}, \tag{19}\] and the structure function of \(\tau\) as \[\mathcal{E}_{\tau}(\cdot)=H(1,\cdot)^{\dagger}\binom{1}{-i}=A(1,\cdot)-iB(1, \cdot). \tag{20}\] We define the integral trace of \(\mathtt{r}\,\tau\) as the integral of the trace of the kernel \(K_{\mathtt{r}\,\tau}\), and denote it by \(\mathfrak{t}_{\tau}\): \[\mathfrak{t}_{\tau}=\int_{0}^{1}\mathrm{tr}\,K_{\mathfrak{t}_{\tau}}(s,s)ds= \tfrac{1}{2}\int_{0}^{1}\mathfrak{a}_{0}(s)^{\dagger}\mathfrak{a}_{1}(s)ds= \int_{0}^{1}\mathfrak{u}_{0}^{\dagger}R(s)\mathfrak{u}_{1}ds. \tag{21}\] It was proved in [38] that the secular function \(\zeta_{\tau}\) can be represented as \[\zeta_{\tau}(z)=e^{-z\mathfrak{t}_{\tau}}\mathrm{det}_{2}(I-z\,\mathtt{r}\, \tau)=e^{-\frac{z}{2}\int_{0}^{1}\mathfrak{a}_{0}(s)^{\dagger}\mathfrak{a}_{ 1}(s)ds}\prod_{k}(1-z/\,\lambda_{k})e^{z/\lambda_{k}}, \tag{22}\] where \(\mathrm{det}_{2}\) is the second regularized determinant, see [33]. Note that the integral trace is finite under Assumption 4, and the secular function \(\zeta_{\tau}\) is an entire function with zero set given by \(\mathrm{spec}(\tau)\), the spectrum of \(\tau\). We refer to [38] for additional details. The secular function of \(\tau\) can be viewed as a generalization of the normalized characteristic polynomial of a matrix. The next proposition provides comparisons for the solutions of two canonical systems of the form (18). It also shows that \(H(t,\cdot)\) is continuous on compacts in \(z\in\mathbb{C}\) at \(t=0\). 
**Proposition 8** (Proposition 12 of [39]).: _Suppose that \(\mathcal{I}=(0,1]\) and \(\tau=\mathtt{Dir}(R,\mathfrak{u}_{0},\mathfrak{u}_{1})\), \(\tilde{\tau}=\mathtt{Dir}(\tilde{R},\mathfrak{u}_{0},\tilde{\mathfrak{u}}_{1})\) satisfy Assumption 4. Let \(H,\tilde{H}\) be the solutions of the corresponding canonical systems (18), and define \(\mathfrak{a}_{0},\tilde{\mathfrak{a}}_{0}\) according to (17). Recall that \(\tau_{t}\) is the operator \(\tau\) restricted to \((0,t]\). Then there is an absolute constant \(c>1\) depending only on \(\mathfrak{u}_{0}\) so that for all \(t\in\mathcal{I}\), \(z\in\mathbb{C}\) we have_ \[|H(t,z)-\tilde{H}(t,z)|\leq\left(c^{|z|\left(|\mathfrak{t}_{\tau _{t}}-\bar{\mathfrak{t}}_{\tau_{t}}|+\|\mathtt{r}\,\tau_{t}-\mathtt{r}\,\bar{ \mathfrak{t}}_{\|}+\sqrt{\int_{0}^{t}|\mathfrak{a}_{0}(s)-\tilde{\mathfrak{a} }_{0}(s)|^{2}ds\int_{0}^{t}(|\mathfrak{a}_{0}(s)|^{2}+|\tilde{\mathfrak{a}}_{0 }(s)|^{2})ds}\right)}-1\right)\\ \times c^{\left(|z|\left(|\mathfrak{t}_{\tau_{t}}|+|\mathfrak{r} _{\tau_{t}}|+\|\mathtt{r}\,\bar{\mathfrak{t}}_{\tau_{t}}|+\int_{0}^{t}\left(| \mathfrak{a}_{0}(s)|^{2}+|\tilde{\mathfrak{a}}_{0}(s)|^{2}\right)ds\right)+1 \right)^{2}}. \tag{23}\] _Moreover, we have for all \(t\in\mathcal{I}\), \(z\in\mathbb{C}\)_ \[|H(t,z)-\mathfrak{u}_{0}|\leq\left(c^{|z|(|\mathfrak{t}_{\tau_{t}}|+\| \mathtt{r}\,\tau_{t}\|+\int_{0}^{t}|\mathfrak{a}_{0}(s)|^{2}ds)}-1\right)c^{ \left(|z|(|\mathfrak{t}_{\tau_{t}}|+\|\mathtt{r}\,\tau_{t}\|+\int_{0}^{t}| \mathfrak{a}_{0}(s)|^{2}ds)+1\right)^{2}}. \tag{24}\] Proposition 8 together with the Hoffman-Wielandt inequality in infinite dimensions (see e.g. [3]) provide similar comparisons for the secular functions and the spectra of two Dirac operators. **Remark 9**.: Let \(\tau_{1}\), \(\tau_{2}\) be two Dirac operators on \(\mathcal{I}\) satisfying Assumption 4. Denote by \(\lambda_{k,i},\zeta_{i},\mathtt{r}\,\tau_{i},\mathtt{t}_{\tau_{i}}\) the eigenvalues, secular function, resolvent and integral trace of \(\tau_{i}\). Then we have \[\sum_{k}\big{|}\lambda_{k,1}^{-1}-\lambda_{k,2}^{-1}\big{|}^{2}\leq \|\mathtt{r}\,\tau_{1}-\mathtt{r}\,\tau_{2}\|_{\rm HS}^{2}, \tag{25}\] and there is an absolute constant \(c>1\) so that for all \(z\in\mathbb{C}\) \[|\zeta_{1}(z)-\zeta_{2}(z)|\leq \Big{(}c^{|z||\mathtt{t}_{\tau_{1}}-\mathtt{t}_{\tau_{2}}|}-1+|z| \big{\|}\mathtt{r}\,\tau_{1}-\mathtt{r}\,\tau_{2}\big{\|}\Big{)}c^{|z|^{2}(\| \mathtt{r}\,\tau_{1}\|^{2}+\|\mathtt{r}\,\tau_{2}\|^{2})+|z|(|\mathtt{t}_{ \tau_{1}}|+|\mathtt{t}_{\tau_{2}}|)+1}. \tag{26}\] #### Transformations of Dirac-type operators We finish this section with a short discussion on simple transformations of Dirac-type operators. First note that the two cases \(\mathcal{I}=(0,1]\) and \(\mathcal{I}=[0,1)\) can be connected by a time reversal transformation \(\rho\) on functions defined on \((0,1]\) or \([0,1)\) by \(\rho f(t)=f(1-t)\). Let \(\mathtt{r}:\mathbb{H}\to\mathbb{H}\) be the reflection with respect to the imaginary axis defined by \(\mathtt{r}(x+iy)=-x+iy\). The next statement provides a description of the effect of the composition of the time reversal \(\rho\) and the reflection \(\mathtt{r}\), see [38] or [39]. **Lemma 10** ([38]).: _Suppose that the Dirac operator \(\tau=\mathtt{Dir}(R,\mathtt{u}_{0},\mathtt{u}_{1})\) satisfies Assumption 4. 
Set \(S=\mathrm{diag}(1,-1)\). Then the operator \(\rho^{-1}S\tau S\rho\) also satisfies Assumption 4 with boundary conditions \(S\mathtt{u}_{1},S\mathtt{u}_{0}\), weight function \(\rho SRS\), and generating path \(\mathtt{r}\rho z\). The operators \(\tau\) and \(\rho^{-1}S\tau S\rho\) are orthogonally equivalent in the respective \(L^{2}\) spaces, and they have the same integral traces and secular functions._
Consider the projection operator \(\mathcal{P}\) defined in (15). It can be naturally generalized to the projection operator on non-zero two-dimensional complex vectors with \(\mathcal{P}\binom{z_{1}}{z_{2}}=z_{1}/z_{2}\) given \(z_{2}\neq 0\). In this way, a \(2\times 2\) non-singular matrix \(A\) can be identified with a linear fractional transformation via \(z\to\mathcal{P}A\binom{z}{1}\). When \(A\) is real, the corresponding linear fractional transformation is an isometry of \(\mathbb{H}\). The next lemma describes the effect of a hyperbolic isometry on a Dirac operator.
**Lemma 11** ([38]).: _Let \(Q\) be a \(2\times 2\) orthogonal matrix with determinant 1. Let \(\mathcal{Q}:\overline{\mathbb{H}}\to\overline{\mathbb{H}}\) be the corresponding linear isometry of \(\overline{\mathbb{H}}\) mapping \(z\in\overline{\mathbb{H}}\) to the ratio of the entries of \(Q\binom{z}{1}\). Suppose that the Dirac operator \(\tau=\mathtt{Dir}(R,\mathtt{u}_{0},\mathtt{u}_{1})\) satisfies Assumption 4. Then the operator \(Q\tau Q^{-1}\) also satisfies Assumption 4, with boundary conditions \(\mathcal{Q}\mathtt{u}_{0},\mathcal{Q}\mathtt{u}_{1}\) and generating path \(\mathcal{Q}(x+iy)\). The two operators are orthogonally equivalent, and they have the same integral traces and secular functions._
## 3 Finitely supported measures on the unit circle
In this section we provide an overview of the CMV construction for finitely supported probability measures on the unit circle, and review how Dirac-type operators can be used to study them.
### CMV matrices
We briefly review some of the needed facts from the theory of orthogonal polynomials on the unit circle, together with some properties of CMV matrices. See [32] for a comprehensive treatment of the subject, or [31] for a shorter summary. Let \(\nu\) be a probability measure on \(\partial\mathbb{D}\) with a support of \(n\) distinct points \(e^{i\lambda_{j}},1\leq j\leq n\). Let \(\Phi_{k},k=0,\ldots,n\) be the Gram-Schmidt orthogonalization of the polynomials \(1,z,\cdots,z^{n}\) with respect to \(\nu\). Then \(\Phi_{k},k=0,\ldots,n-1\) are the monic orthogonal polynomials with respect to \(\nu\), and \(\Phi_{n}(z):=\prod_{j=1}^{n}(z-e^{i\lambda_{j}})\). Together with the reversed polynomials
\[\Phi_{k}^{*}(z):=z^{k}\overline{\Phi_{k}(1/\bar{z})},\]
these polynomials satisfy the famous Szego recursion (see e.g. Section 1.5, vol. 1 of [32]):
\[\begin{pmatrix}\Phi_{k+1}\\ \Phi_{k+1}^{*}\end{pmatrix}=\left(\begin{array}{cc}1&-\bar{\alpha}_{k}\\ -\alpha_{k}&1\end{array}\right)\begin{pmatrix}z&0\\ 0&1\end{pmatrix}\begin{pmatrix}\Phi_{k}\\ \Phi_{k}^{*}\end{pmatrix},\qquad\begin{pmatrix}\Phi_{0}\\ \Phi_{0}^{*}\end{pmatrix}=\begin{pmatrix}1\\ 1\end{pmatrix},\qquad 0\leq k\leq n-1. \tag{27}\]
The constants \(\alpha_{k}\), \(0\leq k\leq n-1\) are called the Verblunsky coefficients; they satisfy \(|\alpha_{k}|<1\) for \(0\leq k\leq n-2\) and \(|\alpha_{n-1}|=1\). 
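As a concrete illustration, the recursion (27) gives a direct way to evaluate the pairs \((\Phi_{k}(z),\Phi_{k}^{*}(z))\) at a fixed point from the Verblunsky coefficients. The following Python sketch is our own illustration (the function name `szego_eval` is not from the cited references); it simply unrolls the matrix form of (27).

```python
import numpy as np

def szego_eval(alpha, z):
    """Evaluate (Phi_k(z), Phi_k^*(z)) for k = 0,...,n via the Szego recursion (27):
    Phi_{k+1} = z*Phi_k - conj(alpha_k)*Phi_k^*,  Phi_{k+1}^* = -alpha_k*z*Phi_k + Phi_k^*."""
    phi, phi_star = 1.0 + 0.0j, 1.0 + 0.0j
    values = [(phi, phi_star)]
    for a in alpha:
        phi, phi_star = z * phi - np.conj(a) * phi_star, -a * z * phi + phi_star
        values.append((phi, phi_star))
    return values

# sanity check: Phi_1(z) = z - conj(alpha_0)
a0, z0 = 0.3 + 0.2j, 0.7 - 0.1j
assert np.isclose(szego_eval([a0], z0)[1][0], z0 - np.conj(a0))
```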
The map between the probability measures supported on \(n\) points on \(\partial\mathbb{D}\) and the possible Verblunsky coefficients \((\alpha_{0},\ldots,\alpha_{n-1})\in\mathbb{D}^{n-1}\times\partial\mathbb{D}\) is invertible, and both the map and its inverse are continuous. (See e.g. Theorem 1.7.11 of [32].) The next definition introduces the CMV matrix.
**Definition 12**.: For fixed \(n\geq 1\), let \(\{\alpha_{k},0\leq k\leq n-1\}\) be a sequence of complex coefficients with \(|\alpha_{k}|\leq 1\). Define
\[\Xi_{k}=\begin{pmatrix}\bar{\alpha}_{k}&\rho_{k}\\ \rho_{k}&-\alpha_{k}\end{pmatrix},\qquad\rho_{k}=\sqrt{1-|\alpha_{k}|^{2}}, \qquad 0\leq k\leq n-2\]
and set \(\Xi_{-1}=(1)\) and \(\Xi_{n-1}=(\bar{\alpha}_{n-1})\) to be \(1\times 1\) matrices. For \(n\geq 2\) we define the _CMV matrix_ corresponding to \(\{\alpha_{k},0\leq k\leq n-1\}\) as
\[\mathcal{C}(\alpha_{0},\cdots,\alpha_{n-1}):=\mathcal{LM}, \tag{28}\]
where \(\mathcal{L},\mathcal{M}\) are \(n\times n\) block-diagonal matrices
\[\mathcal{L}=\operatorname{diag}\left(\Xi_{0},\Xi_{2},\cdots,\Xi_{2\lfloor\frac{n-1}{2}\rfloor}\right),\qquad\mathcal{M}=\operatorname{diag}\left(\Xi_{-1},\Xi_{1},\cdots,\Xi_{2\lfloor\frac{n}{2}\rfloor-1}\right).\]
For \(n=1\) we define the \(1\times 1\) CMV matrix as \(\mathcal{C}(\alpha_{0})=(\bar{\alpha}_{0})\). Note that \(\mathcal{C}(\alpha_{0},\ldots,\alpha_{n-1})\) is unitary if and only if \(|\alpha_{n-1}|=1\). The following proposition provides a crucial link between CMV matrices and orthogonal polynomials of a discrete probability measure on \(\partial\mathbb{D}\).
**Proposition 13** ([19, 32]).: _Suppose that \(\nu\) is a probability measure with a support of \(n\) distinct points on \(\partial\mathbb{D}\), and let \(\alpha_{k},0\leq k\leq n-1\) be its sequence of Verblunsky coefficients. Then for any \(1\leq k\leq n\) we have_
\[\det(zI_{k}-\mathcal{C}(\alpha_{0},\ldots,\alpha_{k-1}))=\Phi_{k}(z). \tag{29}\]
Suppose that \(U\) is an \(n\times n\) unitary matrix for which \(\mathbf{e}_{1}=(1,0,\ldots,0)^{\dagger}\) is cyclic. The spectral measure of \(U\) with respect to \(\mathbf{e}_{1}\) is given by the following discrete probability measure:
\[\mu=\sum_{k=1}^{n}\delta_{\lambda_{k}}\cdot\left|\langle\mathbf{e}_{1}, \mathbf{v}_{k}\rangle\right|^{2}. \tag{30}\]
Here \(\lambda_{k},1\leq k\leq n\) are the eigenvalues of \(U\), and \(\mathbf{v}_{k},1\leq k\leq n\) are the corresponding unit length (right) eigenvectors. The following proposition summarizes how a unitary matrix and the CMV matrix of its spectral measure are connected.
**Proposition 14** ([5]).: _Suppose that \(U\) is an \(n\times n\) unitary matrix for which \(\mathbf{e}_{1}\) is cyclic. Let the spectral measure of \(U\) with respect to \(\mathbf{e}_{1}\) be \(\nu\), and denote the Verblunsky coefficients of \(\nu\) by \(\alpha_{k},0\leq k\leq n-1\). Then there is a unitary matrix \(V\) with \(V\mathbf{e}_{1}=\mathbf{e}_{1}\) and \(U=V\mathcal{C}(\alpha_{0},\ldots,\alpha_{n-1})V^{-1}\). In particular, \(U\) and \(\mathcal{C}\) are unitarily equivalent, and they have the same spectral measure with respect to \(\mathbf{e}_{1}\)._
The next definition introduces a simple transformation on a collection of Verblunsky coefficients and on the corresponding CMV matrix.
**Definition 15**.: Suppose that \((\alpha_{0},\ldots,\alpha_{n-1})\in\mathbb{D}^{n-1}\times\partial\mathbb{D}\) is the sequence of Verblunsky coefficients of the discrete probability measure \(\nu\) on \(\partial\mathbb{D}\). 
Let \(\mathcal{C}\) denote the CMV matrix corresponding to these Verblunsky coefficients. We define the 'reversed' version of the Verblunsky coefficients as
\[(\widetilde{\alpha}_{0},\widetilde{\alpha}_{1},\ldots,\widetilde{\alpha}_{n- 1}):=(-\alpha_{n-1}\bar{\alpha}_{n-2},-\alpha_{n-1}\bar{\alpha}_{n-3},\ldots,- \alpha_{n-1}\bar{\alpha}_{0},\alpha_{n-1}). \tag{31}\]
We denote by \(\widetilde{\nu}\) and \(\widetilde{\mathcal{C}}\) the probability measure and CMV matrix corresponding to the sequence \(\widetilde{\alpha}_{k},0\leq k\leq n-1\), and call these the reversed versions of \(\nu\) and \(\mathcal{C}\), respectively. Note that since \(|\alpha_{n-1}|=1\), the reversal operation is an involution. This operation appeared in [20] where it was shown that \(\widetilde{\mathcal{C}}\) is unitarily equivalent to \(\mathcal{C}\) (if \(n\) is even), and to \(\mathcal{C}^{\dagger}\) (if \(n\) is odd). This implies that \(\mathcal{C}\) and \(\widetilde{\mathcal{C}}\) have the same eigenvalues, or equivalently, \(\nu\) and \(\widetilde{\nu}\) have the same support. In fact, the arguments of Proposition B.2 of [20] also imply that \(\widetilde{\nu}\) is the spectral measure of \(\mathcal{C}\) with respect to the unit vector \(\mathbf{e}_{n}=(0,0,\ldots,0,1)^{\dagger}\). The following results were proved in [19]. They provide key ingredients for our study of truncated and perturbed unitary matrices.
**Proposition 16** ([19]).: _Consider the same setup as in Proposition 14. If \(n\geq 2\), then the truncated matrix \(U^{\ulcorner}\) is unitarily equivalent to \(\mathcal{C}^{\ulcorner}\), which in turn is unitarily equivalent to_
\[\mathcal{C}(\widetilde{\alpha}_{0},\ldots,\widetilde{\alpha}_{n-2}),\quad \text{if $n$ is even, and}\quad\mathcal{C}(\widetilde{\alpha}_{0},\ldots, \widetilde{\alpha}_{n-2})^{\dagger},\quad\text{if $n$ is odd}. \tag{32}\]
_For \(r\in[0,1]\) the perturbed matrix \(U^{[r]}\) (as defined in (7)) is unitarily equivalent to_
\[\mathcal{C}(\widetilde{\alpha}_{0},\ldots,\widetilde{\alpha}_{n-2},r \widetilde{\alpha}_{n-1}),\quad\text{if $n$ is even, and}\quad\mathcal{C}(\widetilde{\alpha}_{0},\ldots, \widetilde{\alpha}_{n-2},r\widetilde{\alpha}_{n-1})^{\dagger},\quad\text{if $n$ is odd}. \tag{33}\]
### Connection to Dirac-type operators
In this section, we show how the Dirac operator framework introduced in [36, 37, 38] can be used to study finitely supported probability measures on \(\partial\mathbb{D}\). The following definition constructs a Dirac-type differential operator corresponding to a finitely supported probability measure on \(\partial\mathbb{D}\).
**Definition 17**.: Suppose that \(\mu\) is a probability measure on \(\partial\mathbb{D}\) supported on \(n\) distinct points \(\{e^{i\lambda_{j}},1\leq j\leq n\}\), and let \(\alpha_{k},0\leq k\leq n-1\) be the corresponding Verblunsky coefficients. For \(0\leq k\leq n\) we define
\[b_{0}=0,\quad b_{k}=\mathcal{P}\begin{pmatrix}1&\bar{\alpha}_{0}\\ \alpha_{0}&1\end{pmatrix}\cdots\begin{pmatrix}1&\bar{\alpha}_{k-1}\\ \alpha_{k-1}&1\end{pmatrix}\begin{pmatrix}0\\ 1\end{pmatrix}\quad\text{for $1\leq k\leq n$}. \tag{34}\]
Set \(z_{k}=\mathcal{U}^{-1}(b_{k})\), where \(\mathcal{U}\) is the Cayley transform mapping \(\overline{\mathbb{H}}\) to \(\overline{\mathbb{D}}\) defined via
\[\mathcal{U}(z)=\mathcal{P}U\begin{pmatrix}z\\ 1\end{pmatrix}=\frac{z-i}{z+i},\qquad U=\begin{pmatrix}1&-i\\ 1&i\end{pmatrix}. 
\tag{35}\]
We call \(b_{k},0\leq k\leq n\) and \(z_{k},0\leq k\leq n\) the (discrete) _path parameters_ of \(\mu\) in \(\mathbb{D}\) and in \(\mathbb{H}\), respectively. We say that \(z:[0,1)\to\mathbb{H}\) defined via \(z(t)=(x+iy)(t):=z_{[nt]}\) is the _generating path associated to \(\mu\)_. We set \(\mathfrak{u}_{0}=\binom{1}{0}\), \(\mathfrak{u}_{1}=\binom{-z_{n}}{-1}\), and we call \(\tau=\mathtt{Dir}(z(\cdot),\mathfrak{u}_{0},\mathfrak{u}_{1})\) the Dirac-type operator corresponding to \(\mu\). We may call \(\tau\) the Dirac-type operator corresponding to the Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\), or the path \(b_{0},\ldots,b_{n}\). We may use the notation \(\tau=\mathtt{Dir}(b_{-1},b_{0},\ldots,b_{n})\) with \(b_{-1}=\mathcal{P}U^{-1}\mathfrak{u}_{0}=1\) to emphasize the path dependence. We call \(b_{-1}\) and \(b_{n}\) the _left_ and _right boundary point_ of \(\tau\), respectively. As the next proposition shows, the Dirac-type operator corresponding to \(\mu\) encodes the support of \(\mu\) in its spectrum.
**Proposition 18** ([38, 39]).: _Suppose that \(\mu\) is a probability measure on \(\partial\mathbb{D}\) supported on \(n\) distinct points \(e^{i\lambda_{j}},1\leq j\leq n\), with \(\mu(\{1\})=0\). Then the Dirac-type operator \(\tau\) corresponding to \(\mu\) satisfies Assumption 4 with \(\mathcal{I}=[0,1)\), and its spectrum is given by_
\[\text{spec}(\tau)=n\Lambda_{n}+2\pi n\mathbb{Z},\qquad\Lambda_{n}=\{\lambda_{ 1},\ldots,\lambda_{n}\}.\]
In [36, 39] it was observed that the Dirac operator representation for \(\mu\) can be simplified using the modified Verblunsky coefficients. These are defined in terms of the Verblunsky coefficients via the recursion
\[\gamma_{k}=\bar{\alpha}_{k}\prod_{j=0}^{k-1}\frac{1-\bar{\gamma}_{j}}{1-\gamma _{j}},\qquad 0\leq k\leq n-1. \tag{36}\]
Note that the modified Verblunsky coefficients satisfy \(|\gamma_{k}|=|\alpha_{k}|\). Denote by \(\mathcal{T}\) the mapping from the Verblunsky coefficients to the modified ones. This mapping is invertible; in fact, for any \(k\geq 1\) it provides a one-to-one correspondence between the first \(k\) Verblunsky coefficients and the first \(k\) modified Verblunsky coefficients. We will use the notation \(\mathcal{T}_{k}\) for this restricted map.
The modified Verblunsky coefficients of a discrete probability measure \(\mu\) are connected to its normalized orthogonal polynomials (with normalization at \(1\)). Let \(\Phi_{k}\) be the monic orthogonal polynomials of \(\mu\), with \(\Phi_{k}^{*}\) being the reversed polynomials. We set
\[\Psi_{k}(z)=\frac{\Phi_{k}(z)}{\Phi_{k}(1)},\qquad\Psi_{k}^{*}(z)=\frac{\Phi_ {k}^{*}(z)}{\Phi_{k}^{*}(1)}. \tag{37}\]
These are always well defined for \(0\leq k\leq n-1\), and \(\Psi_{n}\) is defined as long as \(\mu(\{1\})=0\). The polynomials \(\Psi,\Psi^{*}\) satisfy the following modified version of the Szego recursion (27):
\[\begin{pmatrix}\Psi_{k+1}\\ \Psi_{k+1}^{*}\end{pmatrix}=\left(\begin{array}{cc}\frac{1}{1-\gamma_{k}}&- \frac{\gamma_{k}}{1-\gamma_{k}}\\ -\frac{\bar{\gamma}_{k}}{1-\bar{\gamma}_{k}}&\frac{1}{1-\bar{\gamma}_{k}} \end{array}\right)\left(\begin{array}{cc}z&0\\ 0&1\end{array}\right)\begin{pmatrix}\Psi_{k}\\ \Psi_{k}^{*}\end{pmatrix},\quad\begin{pmatrix}\Psi_{0}\\ \Psi_{0}^{*}\end{pmatrix}=\begin{pmatrix}1\\ 1\end{pmatrix},\quad 0\leq k\leq n-1. \tag{38}\]
The matrices appearing in this recursion correspond to affine transformations. 
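Numerically, the map \(\mathcal{T}\) of (36) is a one-pass computation. The Python sketch below is our own illustration (the name `modified_verblunsky` is ours); it implements the recursion and checks the identity \(|\gamma_{k}|=|\alpha_{k}|\), which holds because each factor \((1-\bar{\gamma}_{j})/(1-\gamma_{j})\) has modulus one.

```python
import numpy as np

def modified_verblunsky(alpha):
    """Modified Verblunsky coefficients via the recursion (36):
    gamma_k = conj(alpha_k) * prod_{j<k} (1 - conj(gamma_j)) / (1 - gamma_j)."""
    gamma, prod = [], 1.0 + 0.0j
    for a in alpha:
        g = np.conj(a) * prod
        gamma.append(g)
        prod *= (1 - np.conj(g)) / (1 - g)   # unimodular factor
    return np.array(gamma)

rng = np.random.default_rng(0)
alpha = rng.uniform(0, 0.9, 5) * np.exp(2j * np.pi * rng.uniform(size=5))
gamma = modified_verblunsky(alpha)
assert np.allclose(np.abs(gamma), np.abs(alpha))   # |gamma_k| = |alpha_k|
```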
Affine transformations of \(\mathbb{H}\) can be parametrized by the elements of \(\mathbb{H}\) as follows. For \(z=x+iy\in\mathbb{H}\) we define the matrix \(A_{z,\mathbb{H}}\) and the corresponding linear fractional transformation \(\mathcal{A}_{z,\mathbb{H}}:\overline{\mathbb{H}}\mapsto\overline{\mathbb{H}}\) as follows:
\[A_{x+iy,\mathbb{H}}=\left(\begin{array}{cc}1&-x\\ 0&y\end{array}\right),\qquad\mathcal{A}_{x+iy,\mathbb{H}}(w)=\mathcal{P}A_{x+ iy,\mathbb{H}}\binom{w}{1}. \tag{39}\]
Note that \(x+iy\) is the pre-image of \(i\) under \(\mathcal{A}_{x+iy,\mathbb{H}}\). The transformations \(\mathcal{A}_{z,\mathbb{H}}\) are isometries of the half-plane model of the hyperbolic plane. The corresponding transformations on the unit-disk model of the hyperbolic plane can be obtained by conjugating with the Cayley transform. For \(\gamma\in\mathbb{D}\) we set
\[A_{\gamma,\mathbb{D}}=UA_{\mathcal{U}^{-1}(\gamma),\mathbb{H}}U^{-1},\qquad \mathcal{A}_{\gamma,\mathbb{D}}=\mathcal{U}\circ\mathcal{A}_{\mathcal{U}^{-1 }(\gamma),\mathbb{H}}\circ\mathcal{U}^{-1}, \tag{40}\]
which leads to
\[A_{\gamma,\mathbb{D}}=\left(\begin{array}{cc}\frac{1}{1-\gamma}&\frac{ \gamma}{\gamma-1}\\ \frac{\bar{\gamma}}{\bar{\gamma}-1}&\frac{1}{1-\bar{\gamma}}\end{array} \right),\qquad\mathcal{A}_{\gamma,\mathbb{D}}(z)=\mathcal{P}A_{\gamma, \mathbb{D}}\binom{z}{1}. \tag{41}\]
Note that the matrix coefficient in (38) is exactly \(A_{\gamma_{k},\mathbb{D}}\), and \(\mathcal{A}_{\gamma,\mathbb{D}}\) maps \(\gamma\) to \(0\). The following proposition shows that the generating path of a discrete probability measure \(\mu\) can be expressed via a simple (affine) recursion using the modified Verblunsky coefficients. Define \(w_{k},v_{k}\in\mathbb{R}\) from the modified Verblunsky coefficients as
\[v_{k}+iw_{k}:=\frac{2i\gamma_{k}}{1-\gamma_{k}}=\mathcal{U}^{-1}(\gamma_{k})-i. \tag{42}\]
**Proposition 19** ([38, 39]).: _Suppose that \(\mu\) is a probability measure on \(\partial\mathbb{D}\) supported on \(n\) distinct points \(e^{i\lambda_{j}},1\leq j\leq n\), and \(\mu(\{1\})=0\). Let \(\gamma_{k},0\leq k\leq n-1\) be the modified Verblunsky coefficients of \(\mu\). Let \(b_{k},z_{k},0\leq k\leq n\) be the path parameters of \(\mu\) defined as in Definition 17. Then the following identities hold for \(0\leq k\leq n-1\):_
\[z_{k+1}=z_{k}+(v_{k}+iw_{k})\Im z_{k}, \tag{43}\]
_with \(v_{k},w_{k}\) defined in (42), and_
\[b_{k+1}=\frac{b_{k}+\gamma_{k}\frac{1-b_{k}}{1-\bar{b}_{k}}}{1+\bar{b}_{k} \gamma_{k}\frac{1-b_{k}}{1-\bar{b}_{k}}}. \tag{44}\]
_We also have_
\[b_{k}=\mathcal{A}_{\gamma_{0},\mathbb{D}}^{-1}\circ\cdots\circ\mathcal{A}_{ \gamma_{k-1},\mathbb{D}}^{-1}(0). \tag{45}\]
The normalized orthogonal polynomials of \(\mu\) can be expressed using the canonical system (18) of the Dirac operator \(\tau\) corresponding to \(\mu\).
**Proposition 20** ([38, 39]).: _Under the same setup as in Proposition 18, consider the solution to the canonical system (18)_
\[\tau H=\lambda H,\qquad H(0,\lambda)=\binom{1}{0},\qquad(t,\lambda)\in[0,1]\times \mathbb{C}.\]
_Then the normalized orthogonal polynomials \(\Psi_{k},\Psi_{k}^{*}\) of \(\mu\) (defined via (37)) satisfy_
\[\binom{\Psi_{k}(e^{i\lambda/n})}{\Psi_{k}^{*}(e^{i\lambda/n})}=e^{i\lambda k/(2 n)}\left(\begin{array}{cc}1&-z_{k}\\ 1&-\bar{z}_{k}\end{array}\right)H(k/n,\lambda),\qquad 0\leq k\leq n. \tag{46}\]
Note that Proposition 20 applied with \(k=n\) gives
\[e^{-i\lambda/2}\Psi_{n}(e^{i\lambda/n})=\binom{1}{-z_{n}}^{\dagger}H(1, \lambda)=H(1,\lambda)^{\dagger}J\binom{1}{z_{n}}. 
\tag{47}\]
Recall that \(\Psi_{n}(\cdot)\) is just the normalized characteristic polynomial corresponding to the support of \(\mu\):
\[\Psi_{n}(z)=\frac{\Phi_{n}(z)}{\Phi_{n}(1)}=\prod_{j=1}^{n}\frac{z-e^{i\lambda _{j}}}{1-e^{i\lambda_{j}}}\]
Hence (19) and (47) imply that the scaled and normalized characteristic polynomial of \(\mu\) is the same as the secular function of the Dirac-type operator corresponding to \(\mu\).
## 4 Beta ensembles from classical unitary and orthogonal random matrices
This section collects the matrix models for various beta-generalizations of unitary and orthogonal random matrices, along with their Dirac operator representations and operator limits.
### Finite ensembles and their Dirac operator representations
#### Circular beta ensemble
Recall the definition of the size \(n\) circular beta ensemble with density function (4) and the spectral measure with respect to \(\mathbf{e}_{1}\) (30). Recall also that for \(a_{1},\ldots,a_{n}>0\) the Dirichlet\((a_{1},\ldots,a_{n})\) distribution is a probability measure on the set
\[\{(x_{1},x_{2},\ldots,x_{n})\in[0,1]^{n}:\sum_{i=1}^{n}x_{i}=1\}\]
with joint probability density function
\[\frac{\Gamma(\sum_{i=1}^{n}a_{i})}{\prod_{i=1}^{n}\Gamma(a_{i})}\prod_{i=1}^{n}x _{i}^{a_{i}-1}.\]
We define the size \(n\) Killip-Nenciu measure as a random probability measure on \(\partial\mathbb{D}\) as
\[\mu_{n,\beta}^{\text{KN}}=\sum_{j=1}^{n}\pi_{j}\delta_{e^{i\lambda_{j}}}, \tag{48}\]
where the support is distributed as the size \(n\) circular beta ensemble, and the weights \(\pi_{j},1\leq j\leq n\) are chosen according to the Dirichlet\((\beta/2,\dots,\beta/2)\) distribution, independently of the support. The joint distribution of the Verblunsky coefficients with respect to \(\mu_{n,\beta}^{\text{KN}}\) was computed in [20].
**Definition 21**.: For \(a>0\) we denote by \(\Theta(a+1)\) the distribution on \(\mathbb{D}\) that has probability density function
\[\tfrac{a}{2\pi}(1-|z|^{2})^{a/2-1}. \tag{49}\]
The definition is extended for \(a=0\) as follows: \(\Theta(1)\) is the uniform distribution on \(\partial\mathbb{D}\).
**Proposition 22** ([20]).: _For a fixed \(n\) and \(\beta>0\), the sequence of Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\) of \(\mu_{n,\beta}^{\text{KN}}\) are independent, and \(\alpha_{k}\sim\Theta(\beta(n-k-1)+1)\) for \(0\leq k\leq n-1\)._
**Definition 23**.: For fixed \(n\) and \(\beta>0\), we denote by \(\mathsf{Circ}_{n,\beta}\) and \(\mathtt{Circ}_{n,\beta}\) the CMV matrix and the Dirac-type operator corresponding to \(\mu_{n,\beta}^{\text{KN}}\), respectively.
#### Real orthogonal beta ensemble
The _real orthogonal beta ensemble_ is a family of distributions supported on conjugated pairs of points on the unit circle indexed by real parameters \(\beta>0,a>-1,b>-1\). If we parametrize the size \(2n\) real orthogonal beta ensemble as \(\{e^{\pm i\theta_{1}},\dots,e^{\pm i\theta_{n}}\}\) with \(\theta_{j}\in(0,\pi)\) then the joint density for \((\theta_{1},\dots,\theta_{n})\) is proportional to
\[\prod_{j<k\leq n}|\cos(\theta_{j})-\cos(\theta_{k})|^{\beta}\times\prod_{k=1}^ {n}|1-\cos(\theta_{k})|^{\frac{\beta}{2}(a+1)-1/2}|1+\cos(\theta_{k})|^{\frac {\beta}{2}(b+1)-1/2}. \tag{50}\]
The ensemble was introduced in [20]; it can be viewed as a generalization of joint eigenvalue distributions of some classical orthogonal ensembles. 
For example, when \(\beta=2\) and \(a=b=\frac{1}{\beta}-1\), (50) describes the joint eigenvalue distribution of a \(2n\times 2n\) special orthogonal matrix chosen uniformly with respect to the Haar measure on \(\mathbb{SO}(2n)\). Consider the probability measure
\[\mu^{\text{RO}}_{2n,\beta,a,b}=\sum_{j=1}^{n}\frac{1}{2}\pi_{j}(\delta_{e^{i\theta _{j}}}+\delta_{e^{-i\theta_{j}}}) \tag{51}\]
on the unit circle, where the support is distributed as a size \(2n\) real orthogonal beta ensemble, and the weight vector \((\pi_{1},\dots,\pi_{n})\) is independent of the support and has a Dirichlet\((\beta/2,\dots,\beta/2)\) distribution. The joint distribution of the Verblunsky coefficients of \(\mu^{\text{RO}}_{2n,\beta,a,b}\) was derived in [19].
**Definition 24**.: For \(s,t>0\) let \(\tilde{\text{B}}(s,t)\) denote the scaled (and flipped) beta distribution on \((-1,1)\) that has probability density function
\[\tfrac{2^{1-s-t}\Gamma(s+t)}{\Gamma(s)\Gamma(t)}(1-x)^{s-1}(1+x)^{t-1}.\]
**Proposition 25** (Theorem 2 of [20], Proposition 4.5 in [19]).: _For given \(\beta>0\), \(a,b>-1\) and fixed \(n\), the Verblunsky coefficients \(\alpha_{k},0\leq k\leq 2n-1\) of the random probability measure \(\mu^{\text{RO}}_{2n,\beta,a,b}\) are independent with \(\alpha_{2n-1}=-1\), and for \(0\leq k\leq 2n-2\)_
\[\alpha_{k}\sim\begin{cases}\tilde{\text{B}}\big{(}\tfrac{\beta}{4}(2n-k+2a), \tfrac{\beta}{4}(2n-k+2b)\big{)},&\text{if $k$ is even,}\\ \tilde{\text{B}}\big{(}\tfrac{\beta}{4}(2n-k+2a+2b+1),\tfrac{\beta}{4}(2n-k-1) \big{)},&\text{if $k$ is odd.}\end{cases}\]
**Definition 26**.: For fixed \(n\geq 1\), \(a,b>-1\) and \(\beta>0\), we define \(\mathsf{RO}_{2n,\beta,a,b}\) and \(\mathtt{RO}_{2n,\beta,a,b}\) as the CMV matrix model and the Dirac-type operator corresponding to \(\mu^{\text{RO}}_{2n,\beta,a,b}\), respectively.
#### Circular Jacobi beta ensemble
The circular Jacobi beta ensemble can be viewed as a one-parameter generalization of the circular beta ensemble. Let \(\delta\) be a complex parameter such that \(\Re\delta>-1/2\). The size \(n\) circular Jacobi beta ensemble is defined as the probability measure on the \(n\) distinct points \(\{e^{i\theta_{1}},\dots,e^{i\theta_{n}}\}\) with \(\theta_{j}\in[-\pi,\pi)\), where the joint density function of the angles \(\theta_{j}\) is given by (9). For \(\beta=2\) the distribution was studied by Hua [17] and Pickrell [29]; this special case is sometimes called the Hua-Pickrell measure. Consider the random probability measure
\[\mu^{\text{CJ}}_{n,\beta,\delta}=\sum_{j=1}^{n}\pi_{j}\delta_{e^{i\theta_{j}}} \tag{52}\]
on the unit circle, where the support is distributed as the size \(n\) circular Jacobi beta ensemble, the weights are Dirichlet\((\beta/2,\dots,\beta/2)\) distributed and are independent of the support. In [4] the authors showed that the modified Verblunsky coefficients \(\gamma_{k},0\leq k\leq n-1\) of \(\mu^{\text{CJ}}_{n,\beta,\delta}\) are independent, and described their joint distribution explicitly. We first introduce a generalization of the \(\Theta(a+1)\) distribution defined in Definition 21.
**Definition 27**.: For \(a>0\) and \(\Re\delta>-1/2\) let \(\Theta(a+1,\delta)\) be the distribution on \(\mathbb{D}\) that has probability density function
\[c_{a,\delta}(1-|z|^{2})^{a/2-1}(1-z)^{\bar{\delta}}(1-\bar{z})^{\delta},\qquad c _{a,\delta}=\tfrac{\Gamma(a/2+1+\delta)\Gamma(a/2+1+\bar{\delta})}{\pi\Gamma(a /2)\Gamma(a/2+1+\delta+\bar{\delta})}. 
\tag{53}\]
For the \(a=0\), \(\Re\delta>-1/2\) case we define \(\Theta(1,\delta)\) to be the distribution on \(\partial\mathbb{D}=\{|z|=1\}\) with probability density function
\[\tfrac{\Gamma(1+\delta)\Gamma(1+\bar{\delta})}{\Gamma(1+\delta+\bar{\delta})}(1-z)^{ \bar{\delta}}(1-\bar{z})^{\delta}. \tag{54}\]
**Proposition 28** ([4]).: _For given \(n\geq 1,\beta>0,\Re\delta>-1/2\), the sequence of modified Verblunsky coefficients \(\gamma_{k},0\leq k\leq n-1\) of \(\mu_{n,\beta,\delta}^{\mathrm{CJ}}\) are independent with \(\gamma_{k}\sim\Theta(\beta(n-k-1)+1,\delta)\) for \(0\leq k\leq n-1\)._
Note that for \(\delta\neq 0\) the Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\) are not independent.
**Definition 29**.: For given \(n\geq 1,\beta>0,\Re\delta>-1/2\), we define \(\mathsf{CJ}_{n,\beta,\delta}\) and \(\mathtt{CJ}_{n,\beta,\delta}\) as the CMV matrix model and the Dirac-type operator corresponding to \(\mu_{n,\beta,\delta}^{\mathrm{CJ}}\), respectively.
### Random operator limits from beta ensembles
This section discusses the operator limits of the finite ensembles in Section 4.1. The main idea is that under the appropriate scaling, the piece-wise constant generating paths of the \(\mathtt{Circ}_{n,\beta}\), \(\mathtt{RO}_{2n,\beta,a,b}\) and \(\mathtt{CJ}_{n,\beta,\delta}\) operators converge to certain diffusions in the hyperbolic plane. Then one can construct random differential operators in terms of these diffusions. These limiting operators will be denoted as \(\mathtt{Sine}_{\beta},\mathtt{Bess}_{\beta,a}\) and \(\mathtt{HP}_{\beta,\delta}\), respectively. For convenience, we will define the limiting operators with generating paths that lie in \(\mathbb{H}\). We also let
\[v(t)=v_{\beta}(t)=-\frac{4}{\beta}\log(1-t)\]
be the logarithmic time change function.
The \(\mathtt{Sine}_{\beta}\) operator was introduced in [36] as the \(n\to\infty\) limit of the \(\mathtt{Circ}_{n,\beta}\) operator. The operator level convergence was proved in [37] with an explicit rate of convergence, see Proposition 47 below.
**Definition 30**.: Fix \(\beta>0\). Let \(B_{1},B_{2}\) be independent standard Brownian motions, and let \(\mathsf{x}_{v}+i\mathsf{y}_{v},v\geq 0\) be the strong solution of the SDE
\[d\mathsf{y}=\mathsf{y}dB_{1},\quad d\mathsf{x}=\mathsf{y}dB_{2},\quad\mathsf{y }(0)=1,\mathsf{x}(0)=0. \tag{55}\]
For \(t\in[0,1)\) define \(z(t)=x(t)+iy(t)=\mathsf{x}_{v}+i\mathsf{y}_{v}\) where \(v=v(t)\). Let \(\mathsf{u}_{0}=\binom{1}{0}\), \(\mathsf{u}_{1}=\binom{-q}{-1}\), where \(q=\lim_{v\to\infty}\mathsf{x}_{v}\). Set \(\mathtt{Sine}_{\beta}=\mathtt{Dir}(z(\cdot),\mathsf{u}_{0},\mathsf{u}_{1})\).
Note that \(\mathsf{x}_{v}+i\mathsf{y}_{v},v\geq 0\) is just a hyperbolic Brownian motion in \(\mathbb{H}\), started from \(i\). The hard-edge Dirac operator \(\mathtt{Bess}_{\beta,a}\) and the Hua-Pickrell operator \(\mathtt{HP}_{\beta,\delta}\) were also introduced in [36]. It has been shown in [24] that they are indeed the operator level limits of the \(\mathtt{RO}_{2n,\beta,a,b}\) and \(\mathtt{CJ}_{n,\beta,\delta}\) operators, respectively.
**Definition 31**.: Fix \(\beta>0,a>-1\), and let \(B\) be a standard Brownian motion. Set \(\mathsf{y}(t)=e^{-\frac{\beta}{4}(2a+1)t-B(2t)}\), \(y(t)=\mathsf{y}(v(t))\), \(\mathsf{u}_{0}=\binom{1}{0}\), and \(\mathsf{u}_{1}=\binom{0}{-1}\). Define the hard-edge operator as \(\mathtt{Bess}_{\beta,a}:=\mathtt{Dir}(iy,\mathsf{u}_{0},\mathsf{u}_{1})\).
**Definition 32**.: Fix \(\beta>0\) and \(\delta\in\mathbb{C}\) with \(\Re\delta>-1/2\). 
Let \(B_{1},B_{2}\) be independent standard Brownian motions, and let \(\mathsf{x}_{v}+i\mathsf{y}_{v},v\geq 0\) be the strong solution of the SDE
\[d\mathsf{y}=\left(-\Re\delta dt+dB_{1}\right)\mathsf{y},\quad d\mathsf{x}= \left(\Im\delta dt+dB_{2}\right)\mathsf{y},\quad\mathsf{y}(0)=1,\mathsf{x}(0) =0. \tag{56}\]
For \(t\in[0,1)\) define \(z(t)=x(t)+iy(t)=\mathsf{x}_{v}+i\mathsf{y}_{v}\) where \(v=v(t)\). Let \(\mathsf{u}_{0}=\binom{1}{0}\), \(\mathsf{u}_{1}=\binom{-q}{-1}\), where \(q=\lim_{v\to\infty}\mathsf{x}(v)\). Set \(\mathtt{HP}_{\beta,\delta}=\mathtt{Dir}(z(\cdot),\mathsf{u}_{0},\mathsf{u}_{1})\).
Note that the SDE (56) can be solved explicitly and the solution is given by
\[\mathsf{y}(v)=e^{B_{1}(v)-(\Re\delta+\frac{1}{2})v},\quad\mathsf{x}(v)=\int_{ 0}^{v}\mathsf{y}(s)dB_{2}(s)+\Im\delta\int_{0}^{v}\mathsf{y}(s)ds. \tag{57}\]
In the case when \(\delta=0\) the equation reduces to (55). In particular, the \(\mathtt{HP}_{\beta,\delta}\) operator can be viewed as the \(\delta\)-generalization of \(\mathtt{Sine}_{\beta}\), and we have \(\mathtt{HP}_{\beta,0}=\mathtt{Sine}_{\beta}\).
### Random analytic functions from beta ensembles
As explained in Section 2.2, the Hilbert-Schmidt convergence of the resolvents of Dirac-type operators and the convergence of the integral traces imply the uniform on compacts convergence of the secular functions. In Section 3.2 we saw that we can express the characteristic polynomials of finite ensembles on the unit circle via the secular function of an associated Dirac-type operator. The operator level limits discussed in the previous section then lead to convergence statements regarding the scaled and normalized characteristic polynomials of the respective finite ensembles. In this section, we briefly review the secular functions and structure functions arising from the limits of the considered beta ensembles. The constructions rely on certain time-reversed and transformed versions of the limiting operators introduced in Section 4.2. Recall the transformations introduced in Section 2.2.
In [38] Valko and Virag constructed the following Dirac-type operator on \((0,1]\) and showed that it is orthogonally equivalent to the \(\mathtt{Sine}_{\beta}\) operator. Consider the time change
\[u(t)=u_{\beta}(t)=\frac{4}{\beta}\log t,\qquad t\in(0,1]. \tag{58}\]
**Definition 33**.: Let \(b_{1},b_{2}\) be independent two-sided standard Brownian motions, and set
\[\mathsf{y}_{u}=e^{b_{2}(u)-u/2},\qquad\mathsf{x}_{u}=-\int_{u}^{0}e^{b_{2}(s)- \frac{s}{2}}db_{1}. \tag{59}\]
For \(t\in(0,1]\) set \(\hat{z}(t)=(\hat{x}+i\hat{y})(t):=\mathsf{x}_{u}+i\mathsf{y}_{u}\) with \(u=u(t)\), and let \(\mathsf{u}_{0}=\binom{1}{0}\) and \(\mathsf{u}_{1}=\binom{-q}{-1}\), where \(q\) is a Cauchy distributed random variable independent of \(b_{1},b_{2}\). Define \(\tau_{\beta}=\tau_{\beta}^{\mathrm{Sine}}=\mathtt{Dir}(\hat{z}(\cdot), \mathsf{u}_{0},\mathsf{u}_{1})\), and denote by \(\zeta_{\beta}:=\zeta_{\tau_{\beta}}\) and \(\mathcal{E}_{\beta}:=\mathcal{E}_{\tau_{\beta}}\) the corresponding secular and structure function according to Definition 7.
**Proposition 34** ([38]).: _Let \(q\) and \(\tau_{\beta}\) be defined as in Definition 33. Then the \(\tau_{\beta}\) operator satisfies Assumption 4, and the orthogonally equivalent operator_
\[\rho^{-1}SQ\tau_{\beta}Q^{-1}S\rho,\qquad Q=\frac{1}{\sqrt{1+q^{2}}}\left( \begin{array}{cc}q&1\\ -1&q\end{array}\right),\]
_has the same distribution as \(\mathtt{Sine}_{\beta}\). 
Let \(H_{\beta}(t,z)\) be the solution to the canonical system (18) of the \(\tau_{\beta}\) operator, and let \(\mathsf{x}_{u}\) and \(\mathsf{y}_{u}\) be defined according to (59). Set_
\[\mathcal{H}_{\beta}(u,z)=\left(\begin{array}{cc}1&-\mathsf{x}_{u}\\ 0&\mathsf{y}_{u}\end{array}\right)H_{\beta}(t(u),z),\qquad t=t(u)=e^{\beta u /4}, \tag{60}\]
_then \(\mathcal{H}\) satisfies the stochastic differential equation (6). Since \(\mathcal{H}_{\beta}(0,z)=H_{\beta}(1,z)\), the structure function \(\mathcal{E}_{\beta}(z)\) can be represented as \(\mathcal{E}_{\beta}(z)=\mathcal{H}_{\beta}(0,z)^{\dagger}\binom{1}{-i}\)._
Using the results of Proposition 34, [38] showed the convergence of the scaled and normalized characteristic polynomials of the circular beta ensemble to the stochastic zeta function.
**Proposition 35** ([38]).: _Consider the size \(n\) (unperturbed) circular beta ensemble, i.e., the eigenvalues of \(\mathsf{Circ}_{n,\beta}\), and its normalized characteristic polynomial \(p_{n,\beta}(z):=\frac{\det(zI-\mathsf{Circ}_{n,\beta})}{\det(I-\mathsf{Circ}_ {n,\beta})}\). There exists a coupling of \(p_{n,\beta}(z),n\geq 1\), the secular function \(\zeta_{\beta}(z)=\mathcal{H}_{\beta}(0,z)^{\dagger}\binom{1}{-q}\), and an a.s. finite \(C\) so that for all \(z\in\mathbb{C}\) we have_
\[|p_{n,\beta}(e^{iz/n})e^{-iz/2}-\zeta_{\beta}(z)|\leq C^{|z|^{2}+1}\left(e^{|z |\frac{\log^{3}n}{\sqrt{n}}}-1\right).\]
The time-reversed and transformed versions of the \(\mathtt{Bess}_{\beta,a}\) and \(\mathtt{HP}_{\beta,\delta}\) operators were constructed in [24] in a similar spirit.
**Definition 36**.: Let \(B\) be a standard two-sided Brownian motion. Let \(\mathtt{y}_{u}=e^{-\frac{\beta}{4}(2a+1)u+B(2u)}\) and \(\hat{y}(t)=\mathtt{y}(u(t))\) for \(t\in(0,1]\), where \(u\) is defined in (58). Set \(\mathtt{u}_{0}=\binom{1}{0},\mathtt{u}_{1}=\binom{0}{-1}\) and define \(\tau_{\beta,a}=\tau_{\beta,a}^{\mathrm{Bess}}=\mathtt{Dir}(i\hat{y}(t),\mathtt{ u}_{0},\mathtt{u}_{1})\). We also define \(\zeta_{\beta,a}=\zeta_{\tau_{\beta,a}}\) and \(\mathcal{E}_{\beta,a}=\mathcal{E}_{\tau_{\beta,a}}\) as the secular and structure function of the \(\tau_{\beta,a}\) operator via Definition 7.
**Proposition 37**.: _The operator \(\tau_{\beta,a}\) satisfies Assumption 4, and we have_
\[\rho^{-1}J\,\tau_{\beta,a}\,J\rho\stackrel{{ d}}{{=}}\mathtt{ Bess}_{\beta,a},\qquad J=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right).\]
_Let \(H_{\beta,a}(t,z)\) be the solution to the canonical system (18) of the \(\tau_{\beta,a}\) operator and set \(\mathcal{H}_{\beta,a}(u,z)=\mathrm{diag}(1,\mathtt{y}_{u})H_{\beta,a}(e^{ \frac{\beta}{4}u},z)\) for \(u\leq 0,z\in\mathbb{C}\), where \(\mathtt{y}_{u}\) is defined as in Definition 36. Then \(\mathcal{H}_{\beta,a}\) is the unique strong solution to the SDE_
\[d\mathcal{H}=\begin{pmatrix}0&0\\ 0&\sqrt{2}dB+(1-\frac{\beta}{4}(2a+1))du\end{pmatrix}\mathcal{H}-z\frac{\beta} {8}e^{\beta u/4}\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}\mathcal{H}du, \tag{61}\]
_with boundary condition \(\lim\limits_{u\rightarrow-\infty}\sup_{|z|<1}\left|\mathcal{H}(u,z)-\binom{1 }{0}\right|=0\). 
Since \(\mathcal{H}_{\beta,a}(0,z)=H_{\beta,a}(1,z)\), we have \(\mathcal{E}_{\beta,a}(z)=\mathcal{H}_{\beta,a}(0,z)^{\dagger}\binom{1}{-i}\) and \(\zeta_{\beta,a}(z)=\mathcal{H}_{\beta,a}(0,z)^{\dagger}\binom{1}{0}\)._
[24] showed that the secular function \(\zeta_{\beta,a}\) of the \(\mathtt{Bess}_{\beta,a}\) operator is the limit in distribution of the normalized characteristic polynomial of the real orthogonal beta ensemble under the edge scaling (3).
**Definition 38**.: Let \(B_{1},B_{2}\) be independent two-sided standard Brownian motions, and for \(u\leq 0\) define
\[\mathtt{y}_{u}=e^{B_{2}(u)-(\Re\delta+\frac{1}{2})u},\quad\mathtt{x}_{u}=-\int _{u}^{0}e^{B_{2}(s)-(\Re\delta+\frac{1}{2})s}dB_{1}-\Im\delta\int_{u}^{0}e^{B_ {2}(s)-(\Re\delta+\frac{1}{2})s}ds. \tag{62}\]
For \(t\in(0,1]\) let \(\hat{z}(t)=(\hat{x}+i\hat{y})(t)=\mathtt{x}_{u}+i\mathtt{y}_{u}\) with \(u=u(t)\) defined in (58). Set \(\mathtt{u}_{0}=\binom{1}{0}\), and \(\mathtt{u}_{1}=\binom{-q}{-1}\), where \(q\sim\Theta(1,\delta)\) is independent of \(B_{1},B_{2}\). Define \(\tau_{\beta,\delta}=\tau_{\beta,\delta}^{\mathrm{HP}}=\mathtt{Dir}(\hat{z}( \cdot),\mathtt{u}_{0},\mathtt{u}_{1})\), and denote by \(\zeta_{\beta,\delta}=\zeta_{\tau_{\beta,\delta}}\) and \(\mathcal{E}_{\beta,\delta}=\mathcal{E}_{\tau_{\beta,\delta}}\) the secular and structure function of \(\tau_{\beta,\delta}\), respectively.2
Footnote 2: From the context it will always be clear if \(\zeta_{\beta,\cdot}\), \(\mathcal{E}_{\beta,\cdot}\) refer to the objects related to the \(\mathtt{Bess}_{\beta,a}\) or the \(\mathtt{HP}_{\beta,\delta}\) operators.
**Proposition 39** ([24]).: _Let \(q\) and \(\tau_{\beta,\delta}\) be defined as in Definition 38. Then the \(\tau_{\beta,\delta}\) operator satisfies Assumption 4, and the orthogonally equivalent operator_
\[\rho^{-1}SQ\tau_{\beta,\delta}Q^{-1}S\rho,\qquad Q=\frac{1}{\sqrt{1+q^{2}}} \left(\begin{array}{cc}q&1\\ -1&q\end{array}\right),\]
_has the same distribution as \(\mathtt{HP}_{\beta,\delta}\). Let \(H_{\beta,\delta}\) be the solution of the canonical system (18) of the \(\tau_{\beta,\delta}\) operator, and let \(\mathtt{x}_{u},\mathtt{y}_{u}\) be defined as in (62). For \(u\leq 0,z\in\mathbb{C}\) set_
\[\mathcal{H}_{\beta,\delta}(u,z)=\left(\begin{array}{cc}1&-\mathtt{x}_{u}\\ 0&\mathtt{y}_{u}\end{array}\right)H_{\beta,\delta}(e^{\beta u/4},z),\]
_then \(\mathcal{H}_{\beta,\delta}\) is the unique solution of the SDE_
\[d\mathcal{H}=\begin{pmatrix}0&-dB_{1}\\ 0&dB_{2}\end{pmatrix}\mathcal{H}+\begin{pmatrix}0&-\Im\delta du\\ 0&-\Re\delta du\end{pmatrix}\mathcal{H}-z\frac{\beta}{8}e^{\beta u/4}\left( \begin{array}{cc}0&-1\\ 1&0\end{array}\right)\mathcal{H}du, \tag{63}\]
_with the boundary condition \(\lim\limits_{u\rightarrow-\infty}\sup_{|z|<1}\left|\mathcal{H}(u,z)- \binom{1}{0}\right|=0\), and we have \(\mathcal{E}_{\beta,\delta}(z)=\mathcal{H}_{\beta,\delta}(0,z)^{\dagger}\binom{ 1}{-i}\)._
[24] also showed that the secular function \(\zeta_{\beta,\delta}(z)=\mathcal{H}_{\beta,\delta}(0,z)^{\dagger}\binom{1}{-q}\) of the \(\mathtt{HP}_{\beta,\delta}\) operator is the limit in distribution of the normalized characteristic polynomials of the circular Jacobi beta ensemble under the edge scaling (3).
## 5 Convergence of the truncated models
This section establishes general convergence results for the rank-one truncations and multiplicative perturbations of the finite ensembles when the generating paths satisfy certain path bounds. The statements in this section hold in the deterministic setting. 
Throughout the section, we assume \(\mu_{n}\) is a probability measure on \(\partial\mathbb{D}\) supported on \(n\) distinct points, with Verblunsky coefficients \(\alpha_{0},\ldots,\alpha_{n-1}\). We denote by \(\tau_{n}\) the Dirac operator corresponding to \(\mu_{n}\). Recall the reversed (discrete) probability measures defined in Definition 15. We first introduce the reversed Dirac operator and its pulled-back version, which are closely connected to the truncated ensembles.
**Definition 40**.: Let \(\widetilde{\mu}_{n}\) be the reversed version of \(\mu_{n}\), i.e., the probability measure corresponding to the reversed Verblunsky coefficients \((\widetilde{\alpha}_{0},\widetilde{\alpha}_{1},\ldots,\widetilde{\alpha}_{n- 1})\). We define \(\widetilde{\tau}_{n}\) as the Dirac operator corresponding to \(\widetilde{\mu}_{n}\) and call it the reversed version of \(\tau_{n}\). We denote by \(\widetilde{b}_{k},0\leq k\leq n\) the path parameters of \(\widetilde{\tau}_{n}\) in \(\mathbb{D}\).
Recall the affine transformation \(\mathcal{A}_{\gamma,\mathbb{D}}\) defined in (41).
**Definition 41**.: Set \(\overset{\leftarrow}{b}_{k}:=\mathcal{A}_{\widetilde{b}_{n-1},\mathbb{D}}( \widetilde{b}_{k})\) for \(0\leq k\leq n\) with the extension that \(\overset{\leftarrow}{b}_{-1}=\mathcal{A}_{\widetilde{b}_{n-1},\mathbb{D}}( \widetilde{b}_{-1})=1\). We define \(\overset{\leftarrow}{\tau}_{n}:=\mathcal{A}_{\widetilde{b}_{n-1},\mathbb{D}} (\widetilde{\tau}_{n}):=\mathtt{Dir}(1,\overset{\leftarrow}{b}_{0},\ldots, \overset{\leftarrow}{b}_{n-1},\overset{\leftarrow}{b}_{n})\) as the Dirac operator with path parameters \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\) in \(\mathbb{D}\). We call \(\overset{\leftarrow}{\tau}_{n}\) the pulled-back version of \(\widetilde{\tau}_{n}\).
To summarize, we are considering three sets of path parameters corresponding to \(\mu_{n}\), and each one has a Dirac-type operator associated to it. \(b_{k},0\leq k\leq n\) are the path parameters constructed from the Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\) according to Definition 17. The reversed path \(\widetilde{b}_{k},0\leq k\leq n\) is constructed via the same definition, but from the reversed Verblunsky coefficients (31). Finally, the path \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\) can be obtained from the reversed path by applying an affine transformation that maps \(\widetilde{b}_{n-1}\) to \(0\) in \(\mathbb{D}\). See Table 1 for a summary of the defined objects. Note that by the definition of \(\mathcal{A}_{\gamma,\mathbb{D}}\) and the recursion (44), we have \(\overset{\leftarrow}{b}_{n-1}=0\) and \(\overset{\leftarrow}{b}_{n}=\widetilde{\gamma}_{n-1}\), where \(\widetilde{\gamma}_{n-1}\) is the last modified Verblunsky coefficient corresponding to \(\widetilde{\mu}_{n}\). The next statement represents the orthogonal polynomial of degree \(n-1\) of \(\widetilde{\mu}_{n}\) using the operator \(\overset{\leftarrow}{\tau}_{n}\).
**Proposition 42**.: _Let \(\widetilde{\mu}_{n}\) be the reversed version of \(\mu_{n}\), and \(\widetilde{\tau}_{n},\overset{\leftarrow}{\tau}_{n}\) be defined as in Definitions 40 and 41. For \(0\leq k\leq n-1\), let \(\widetilde{\Phi}_{k}(\cdot)\) be the monic orthogonal polynomials of degree \(k\) associated to \(\widetilde{\mu}_{n}\), and set \(\widetilde{\Psi}_{k}(\cdot)=\widetilde{\Phi}_{k}(\cdot)/\widetilde{\Phi}_{k}(1)\). 
Then we have_
\[\widetilde{\Psi}_{n-1}(e^{iz/n})=e^{iz(n-1)/(2n)}\,\overset{\leftarrow}{H}_{n}\big{(}\tfrac{n-1}{n},z\big{)}^{\dagger}\binom{1}{-i},\]
_where \(\overset{\leftarrow}{H}_{n}(t,z)\) is the solution of the canonical system (18) corresponding to the operator \(\overset{\leftarrow}{\tau}_{n}\)._
\begin{table}
\begin{tabular}{c c c}
original objects & reversed objects & pulled-back objects \\ \hline \hline
\(\mu\), discrete prob. measure on \(\partial\mathbb{D}\) & \(\widetilde{\mu}\), reversed measure (Definition 15) & \\
\(b_{k}\), path parameters (Definition 17) & \(\widetilde{b}_{k}\), reversed path parameters & \(\overset{\leftarrow}{b}_{k}=\mathcal{A}_{\widetilde{b}_{n-1},\mathbb{D}}(\widetilde{b}_{k})\) (Definition 41) \\
\(\tau_{n}\), Dirac operator & \(\widetilde{\tau}_{n}\), reversed operator (Definition 40) & \(\overset{\leftarrow}{\tau}_{n}\), pulled-back operator (Definition 41) \\
\end{tabular}
\caption{The original, reversed, and pulled-back objects associated to a discrete probability measure on \(\partial\mathbb{D}\).}
\end{table}
Proof.: Let \(\widetilde{b}_{k},0\leq k\leq n\) be the path parameters of \(\widetilde{\tau}_{n}\) and set \(\widetilde{z}_{k}=\mathcal{U}^{-1}(\widetilde{b}_{k}),0\leq k\leq n\) to be the corresponding path parameters in \(\mathbb{H}\). Here \(\mathcal{U}\) is the Cayley transform defined in (35). Consider the linear fractional transformation \(\mathcal{A}_{\widetilde{z}_{n-1},\mathbb{H}}\) (the upper half plane representation of \(\mathcal{A}_{\widetilde{b}_{n-1},\mathbb{D}}\)) given by
\[\mathcal{A}_{\widetilde{z}_{n-1},\mathbb{H}}(w)=\mathcal{U}^{-1}\circ\mathcal{ A}_{\widetilde{b}_{n-1},\mathbb{D}}\circ\mathcal{U}(w)=\frac{w-\Re\widetilde{z}_{n- 1}}{\Im\widetilde{z}_{n-1}},\qquad w\in\mathbb{H}.\]
Define \(\overset{\leftarrow}{z}_{k}=\overset{\leftarrow}{x}_{k}+i\overset{\leftarrow}{y}_{k}=\mathcal{A}_{\widetilde{z}_{n-1},\mathbb{H}}(\widetilde{z}_{k})\) for \(0\leq k\leq n\). Then by (40) we have \(\overset{\leftarrow}{z}_{k}=\mathcal{U}^{-1}(\overset{\leftarrow}{b}_{k}),0\leq k\leq n\), with the extension that \(\overset{\leftarrow}{z}_{-1}=\mathcal{U}^{-1}(\overset{\leftarrow}{b}_{-1})=\infty\). Introduce the temporary notation
\[\widetilde{X}_{k}=A_{\widetilde{z}_{k},\mathbb{H}}=\left(\begin{array}{cc}1& -\widetilde{x}_{k}\\ 0&\widetilde{y}_{k}\end{array}\right),\quad\overset{\leftarrow}{X}_{k}=A_{\overset{\leftarrow}{z}_{k},\mathbb{H}}=\left(\begin{array}{cc}1&-\overset{\leftarrow}{x}_{k}\\ 0&\overset{\leftarrow}{y}_{k}\end{array}\right),\qquad 0\leq k\leq n,\]
then we have
\[\overset{\leftarrow}{X}_{k}=\widetilde{X}_{k}(\widetilde{X}_{n-1})^{-1},\qquad 0\leq k\leq n. \tag{64}\]
For a matrix of the form
\[X=\left(\begin{array}{cc}1&-x\\ 0&y\end{array}\right)\]
with \(y>0\) we have the identity
\[y^{-1}X^{\dagger}J=JX^{-1}.\]
Using this together with (10) and (11) we obtain
\[\overset{\leftarrow}{\tau}_{n}=\widetilde{X}_{n-1}\widetilde{\tau}_{n}(\widetilde{X}_{n-1})^{-1}. \tag{65}\]
Let \(\overset{\leftarrow}{H}_{n}(t,z)\) and \(\widetilde{H}_{n}(t,z)\) be the solutions to the canonical systems (18) of \(\overset{\leftarrow}{\tau}_{n}\) and \(\widetilde{\tau}_{n}\) respectively, then (65) leads to
\[\widetilde{H}_{n}(t,z)=(\widetilde{X}_{n-1})^{-1}\overset{\leftarrow}{H}_{n}(t,z). \tag{66}\]
Let \(\widetilde{\Psi}_{k}^{*}(z)=z^{k}\overline{\widetilde{\Psi}_{k}(1/\bar{z})}\) be the reversed polynomial of \(\widetilde{\Psi}_{k}(z)\). 
Using (35), (46), (64), and (66) we get
\[\begin{pmatrix}\widetilde{\Psi}_{k}(e^{iz/n})\\ \widetilde{\Psi}_{k}^{*}(e^{iz/n})\end{pmatrix}=e^{izk/(2n)}U\widetilde{X}_{k} \widetilde{H}_{n}(k/n,z)=e^{izk/(2n)}U\overset{\leftarrow}{X}_{k} \overset{\leftarrow}{H}_{n}(k/n,z).\]
Since \(\overset{\leftarrow}{z}_{n-1}=\mathcal{A}_{\widetilde{z}_{n-1},\mathbb{H}}(\widetilde{z}_{n-1})=i\) and \(\overset{\leftarrow}{X}_{n-1}=I\), we have
\[\widetilde{\Psi}_{n-1}(e^{iz/n})=e^{iz(n-1)/(2n)}\overset{\leftarrow}{H}_{n}((n-1)/n,z)^{\dagger}\binom{1}{-i},\]
finishing the proof.
Suppose now that the probability measure \(\mu_{n}\) is the spectral measure of an \(n\times n\) unitary or orthogonal matrix \(U\) with respect to \(\mathbf{e}_{1}\). By Propositions 13 and 16, the eigenvalues of the truncated matrix \(U^{\ulcorner}\) are exactly the zeros of the normalized orthogonal polynomial \(\widetilde{\Psi}_{n-1}(\cdot)\), hence they can be expressed from the vector-valued function \(\overset{\leftarrow}{H}_{n}\).
Next we turn to the discussion of the scaling limit of the eigenvalues of the truncated matrices. If we can prove uniform-on-compacts convergence of the normalized orthogonal polynomials, then this leads to the convergence of the eigenvalues of the truncated models. By Proposition 42 it is sufficient to show the convergence of \(\overset{\leftarrow}{H}_{n}\). Using Proposition 8, this can be done if we have sufficient control of the integral traces, resolvent norms, and the proper integrals of the kernels of \(\mathtt{r}\,\overset{\leftarrow}{\tau}_{n}\). We summarize this idea in the following statement.
**Proposition 43**.: _Suppose that \(\mu_{n},n\geq 1\) is a sequence of probability measures on \(\partial\mathbb{D}\), with \(\mu_{n}\) supported on \(n\) distinct points. Consider the setup of Proposition 42 and denote by \(z_{n}(\cdot)\) the generating path of \(\overset{\leftarrow}{\tau}_{n}\) for \(n\geq 1\). Suppose that there exists a Dirac operator \(\tau_{\infty}=\mathtt{Dir}(z_{\infty}(\cdot),\mathtt{u}_{0},\mathtt{u}_{1})\) with \(\mathtt{u}_{0}=\binom{1}{0}\) and \(\mathcal{I}=(0,1]\) so that_
\[\|\mathtt{r}\,\overset{\leftarrow}{\tau}_{n}-\mathtt{r}\,\tau_{\infty}\|_{ \mathrm{HS}}\to 0,\qquad\mathtt{t}_{\overset{\leftarrow}{\tau}_{n}}- \mathtt{t}_{\tau_{\infty}}\to 0,\qquad\int_{0}^{1}|\mathfrak{a}_{0,n}(s)- \mathfrak{a}_{0,\infty}(s)|^{2}ds\to 0, \tag{67}\]
_where \(\mathfrak{a}_{0,n}(\cdot)=\binom{(\Im z_{n}(\cdot))^{-\frac{1}{2}}}{0},n\in \mathbb{Z}_{+}\cup\{\infty\}\) is defined according to (17). Let \(\widetilde{\Psi}_{n-1,n}\) be the normalized orthogonal polynomial of degree \(n-1\) of \(\widetilde{\mu}_{n}\), and let \(\mathcal{E}(z):=H(1,z)^{\dagger}\binom{1}{-i}\) be the structure function of \(\tau_{\infty}\). Then we have_
\[|e^{-iz/2}\widetilde{\Psi}_{n-1,n}(e^{iz/n})-\mathcal{E}(z)|\to 0\quad\text{uniformly on compacts in $\mathbb{C}$ as $n\to\infty$.} \tag{68}\]
Note that (68) implies that the zeros of \(\widetilde{\Psi}_{n-1,n}\) converge to the zeros of \(\mathcal{E}\) under the edge scaling (3).
Proof of Proposition 43.: For \(n\geq 1\), let \(\overset{\leftarrow}{H}_{n}\) be the solution of the canonical system (18) of \(\overset{\leftarrow}{\tau}_{n}\). 
By Proposition 42 and the triangle inequality, we have
\[\begin{split}|\widetilde{\Psi}_{n-1,n}(e^{iz/n})e^{-iz/2}- \mathcal{E}(z)|&\leq|e^{-iz/(2n)}|\left|\left(\overset{ \leftarrow}{H}_{n}(\tfrac{n-1}{n},z)^{\dagger}-H(\tfrac{n-1}{n},z)^{\dagger} \right)\cdot\binom{1}{-i}\right|\\ &+|e^{-iz/(2n)}|\left|\left(H(\tfrac{n-1}{n},z)^{\dagger}-H(1,z) ^{\dagger}\right)\cdot\binom{1}{-i}\right|\\ &+\left|e^{-iz/(2n)}-1\right|\left|H(1,z)^{\dagger}\cdot\binom{1} {-i}\right|.\end{split} \tag{69}\]
For \(z\) in a compact set of \(\mathbb{C}\) and large enough \(n\), the sum of the second and the third term of (69) is upper bounded by an error term of the order \(O(n^{-1})\). Hence it remains to provide an upper bound for the difference \(|\overset{\leftarrow}{H}_{n}(\frac{n-1}{n},z)-H(\frac{n-1}{n},z)|\). By Proposition 8, it suffices to control the terms
\[|\mathfrak{t}_{\tau_{\infty}}-\mathfrak{t}_{\overset{\leftarrow}{\tau}_{n}}|,\qquad\|\mathtt{r}\,\tau_{\infty}-\mathtt{r}\,\overset{\leftarrow}{\tau}_{n}\|_{\mathrm{HS}},\qquad\int_{0}^{1}|\mathfrak{a}_{0,\infty}(s)-\mathfrak{a}_{0,n}(s)|^{2}ds.\]
Applying the assumptions (67) finishes the proof.
Next, we turn to the discussion of the rank-one multiplicative perturbation of the matrix models. Suppose \(U\) is an \(n\times n\) unitary or orthogonal matrix with CMV matrix form \(\mathcal{C}(\alpha_{0},\ldots,\alpha_{n-1})\). Let \(r\in[0,1]\) and \(U^{[r]}=U\cdot\mathrm{diag}(r,1,1,\ldots,1)\) be its rank-one multiplicative perturbation. Recall from Proposition 16 that the perturbed matrix \(U^{[r]}\) has the same eigenvalues as the CMV matrix
\[\mathcal{C}_{n}^{[r]}:=\mathcal{C}(\widetilde{\alpha}_{0},\ldots,\widetilde {\alpha}_{n-2},r\widetilde{\alpha}_{n-1}). \tag{70}\]
Note that this matrix is a function of the spectral measure of \(U\). In particular, for any probability measure \(\mu_{n}\) on \(\partial\mathbb{D}\) (supported on \(n\) points) we can define the matrix \(\mathcal{C}_{n}^{[r]}\) from its Verblunsky coefficients.
Suppose now that we have a sequence of probability measures \(\mu_{n},n\geq 1\), just as in Proposition 43. Let \(\widetilde{\gamma}_{n-1}\) be the last modified Verblunsky coefficient corresponding to the Verblunsky coefficients \(\widetilde{\alpha}_{0},\ldots,\widetilde{\alpha}_{n-1}\) of \(\widetilde{\mu}_{n}\). (Note the abuse of notation: we should write \(\widetilde{\alpha}_{k,n},0\leq k\leq n-1\) here, but we drop the extra \(n\) from the notation.) The next result shows that if \(\widetilde{\gamma}_{n-1}\) converges and the assumptions of Proposition 43 hold, then the eigenvalues of \(\mathcal{C}_{n}^{[r]}\) converge as well.
**Proposition 44**.: _Consider the same setup as in Proposition 43. Fix \(r\in[0,1]\), and let \(\Psi_{n}^{[r]}(z)=\prod_{i=1}^{n}\frac{z-\lambda_{i}}{1-\lambda_{i}}\) be the normalized characteristic polynomial of \(\mathcal{C}_{n}^{[r]}\). Assume that the convergence (68) holds, and assume further that \(\lim_{n\to\infty}\widetilde{\gamma}_{n-1}=\gamma\neq 1\). Then we have_
\[|\Psi_{n}^{[r]}(e^{iz/n})e^{-iz/2}-\mathcal{E}^{[r]}(z)|\to 0,\qquad \mathcal{E}^{[r]}(z)=H(1,z)^{\dagger}\binom{1}{-i\frac{1-r\gamma}{1+r\gamma}} \tag{71}\]
_uniformly on compacts in \(\mathbb{C}\) as \(n\to\infty\)._
Note that (71) implies the convergence of the zeros of \(\Psi_{n}^{[r]}\) to the zeros of \(\mathcal{E}^{[r]}\) under the edge scaling (3). 
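For concreteness, the objects entering Proposition 44 can be generated explicitly. The following Python sketch is our own illustration (the helper names `cmv` and `perturbed_cmv` are ours, and \(|\alpha_{n-1}|=1\) is assumed): it assembles the CMV matrix (28) from a coefficient sequence, forms the reversed sequence (31), and builds \(\mathcal{C}_{n}^{[r]}\) as in (70), whose eigenvalues can then be compared with those of \(U^{[r]}\).

```python
import numpy as np

def cmv(alpha):
    """CMV matrix C = L*M built from Verblunsky coefficients as in (28)."""
    n = len(alpha)
    def xi(a):  # the 2x2 block Xi_k of Definition 12
        rho = np.sqrt(1 - abs(a) ** 2)
        return np.array([[np.conj(a), rho], [rho, -a]])
    def block_diag(blocks):
        M = np.zeros((n, n), dtype=complex)
        i = 0
        for b in blocks:
            k = b.shape[0]
            M[i:i + k, i:i + k] = b
            i += k
        return M
    # L = diag(Xi_0, Xi_2, ...), M = diag(Xi_{-1}, Xi_1, ...), where
    # Xi_{-1} = (1) and Xi_{n-1} = (conj(alpha_{n-1})) are 1x1 blocks.
    Lb = [xi(alpha[k]) if k < n - 1 else np.array([[np.conj(alpha[k])]])
          for k in range(0, n, 2)]
    Mb = [np.array([[1.0 + 0.0j]])]
    Mb += [xi(alpha[k]) if k < n - 1 else np.array([[np.conj(alpha[k])]])
           for k in range(1, n, 2)]
    return block_diag(Lb) @ block_diag(Mb)

def perturbed_cmv(alpha, r):
    """C_n^{[r]} of (70): reversed coefficients (31), last one scaled by r."""
    a_last = alpha[-1]                                  # |a_last| = 1 is assumed
    rev = [-a_last * np.conj(a) for a in alpha[-2::-1]] + [a_last]
    rev[-1] = r * rev[-1]
    return cmv(rev)
```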
Proof of Proposition 44.: By the modified Szego recursion (38), we have
\[\Psi_{n}^{[r]}(z)=\tfrac{1}{1-r\widetilde{\gamma}_{n-1}}z\Psi_{n-1}^{[r]}(z)- \tfrac{r\widetilde{\gamma}_{n-1}}{1-r\widetilde{\gamma}_{n-1}}\Psi_{n-1}^{[r],*}(z), \tag{72}\]
where \(\Psi^{[r],*}_{n-1}(\cdot)\) is the reversed polynomial of \(\Psi^{[r]}_{n-1}(\cdot)\). Note that the polynomials \(\Psi^{[r]}_{n-1}\) and \(\Psi^{[r],*}_{n-1}\) do not depend on \(\widetilde{\gamma}_{n-1}\). Introduce \(\mathcal{E}^{*}(z)=\overline{\mathcal{E}(\bar{z})}\). By the definition of the reversed polynomials, (46), and (68), we have
\[\left|\Psi^{[r],*}_{n-1}(e^{iz/n})e^{-iz/2}-\mathcal{E}^{*}(z)\right|\to 0 \quad\text{uniformly on compacts as }n\to\infty. \tag{73}\]
Together with (72) and the convergence \(\widetilde{\gamma}_{n-1}\to\gamma\) as \(n\to\infty\), we get
\[\lim_{n\to\infty}\Psi^{[r]}_{n}(e^{iz/n})e^{-iz/2}=\tfrac{1}{1-r\gamma} \mathcal{E}(z)-\tfrac{r\gamma}{1-r\gamma}\mathcal{E}^{*}(z)=H(1,z)^{\dagger} \binom{1}{-i\tfrac{1-r\gamma}{1+r\gamma}},\]
uniformly on compacts in \(z\).
## 6 Edge limits of the truncated circular and real orthogonal beta ensembles
Our goal is to prove Theorems 2 and 3, the edge scaling limits of the rank-one truncation and multiplicative perturbation of the circular beta ensemble, using the framework developed in Section 5. We will also prove the analogous results for the real orthogonal beta ensemble. In both cases we will check that the random measures associated to the respective beta ensembles (introduced in Section 4.1) satisfy the conditions in Propositions 43 and 44.
### The truncated circular beta ensemble
Recall the definition of the random probability measure \(\mu^{\text{KN}}_{n,\beta}\) introduced in Section 4.1 and the corresponding CMV matrix \(\mathsf{Circ}_{n,\beta}\). Proposition 22 gives the distribution of the Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\) corresponding to \(\mu^{\text{KN}}_{n,\beta}\). The truncated circular beta ensemble is the joint distribution of the eigenvalues of the truncated CMV matrix \(\mathsf{Circ}_{n,\beta}^{\ulcorner}\). The rank-one multiplicative perturbation of the circular beta ensemble (corresponding to a parameter \(r\in[0,1]\)) is defined as the distribution of the eigenvalues of \(\mathsf{Circ}_{n,\beta}^{[r]}\).
Note that by Proposition 22 the Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\) are independent and rotationally invariant. Hence from (31) we get the following distributional identity:
\[(\widetilde{\alpha}_{0},\widetilde{\alpha}_{1},\ldots,\widetilde{\alpha}_{n- 2},\widetilde{\alpha}_{n-1})\overset{d}{=}(\alpha_{n-2},\alpha_{n-3},\ldots, \alpha_{0},\alpha_{n-1}). \tag{74}\]
Using this together with Proposition 16, we recover the following result of Killip and Kozhan [19].
**Proposition 45** ([19]).: _For fixed \(n\) and \(\beta>0\), let \(\alpha_{k},0\leq k\leq n-1\) be distributed according to Proposition 22. Then the joint eigenvalue distribution of the CMV matrix \(\mathcal{C}(\alpha_{n-2},\alpha_{n-3},\ldots,\alpha_{0})\) is the same as that of \(\mathsf{Circ}_{n,\beta}^{\ulcorner}\), which is given by the truncated circular beta ensemble. 
For fixed \(r\in[0,1]\), the matrix \(\mathcal{C}(\alpha_{n-2},\alpha_{n-3},\ldots,\alpha_{0},r\alpha_{n-1})\) has the same distribution as \(\mathsf{Circ}_{n,\beta}^{[r]}\); in particular, its joint eigenvalue distribution is given by the rank-one multiplicative perturbation of the circular beta ensemble._
Note that [19] also proved that the joint density of the truncated circular beta ensemble is given by (5), and described explicitly the joint eigenvalue distribution of the perturbed matrix \(\mathsf{Circ}_{n,\beta}^{[r]}\) (see Proposition 7.2 of [19]).
Recall the Dirac operator \(\mathtt{Circ}_{n,\beta}\) in Definition 23. We denote by \(\widetilde{\tau}_{n,\beta}\) the reversed version of \(\mathtt{Circ}_{n,\beta}\), and by \(\overset{\leftarrow}{\tau}_{n,\beta}\) the pulled-back version of \(\widetilde{\tau}_{n,\beta}\) (see Definitions 40 and 41). By studying the generating paths of the operators \(\overset{\leftarrow}{\tau}_{n,\beta}\) and \(\mathtt{Circ}_{n,\beta}\), we claim that these operators are orthogonally equivalent. Indeed, this observation follows from Proposition 52 of [39] using the rotational invariance of the models and a conditioning argument. We will provide a different proof that holds for the circular Jacobi beta ensemble (\(\delta=0\) corresponding to the circular beta ensemble). To avoid repetition, we will state the result without proof here.
**Proposition 46**.: _Recall the definitions of the operators \(\rho\) and \(S\) from Lemma 10. Denote by \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\) the path parameters of \(\overset{\leftarrow}{\tau}_{n,\beta}\), and let_
\[Q=\frac{1}{\sqrt{1+q^{2}}}\begin{pmatrix}q&1\\ -1&q\end{pmatrix},\qquad q=\mathcal{U}^{-1}(\overset{\leftarrow}{b}_{n}).\]
_Then the operator \(\rho^{-1}(SQ)\overset{\leftarrow}{\tau}_{n,\beta}(SQ)^{-1}\rho\) has the same distribution as \(\mathtt{Circ}_{n,\beta}\)._
Recall the definition of the \(\mathtt{Sine}_{\beta}\) operator from Definition 30, and its reversed and transformed version \(\tau_{\beta}\) defined in Definition 33. Proposition 46 shows that one can obtain the operators \(\overset{\leftarrow}{\tau}_{n,\beta}\) and \(\tau_{\beta}\) from \(\mathtt{Circ}_{n,\beta},n\geq 1\) and \(\mathtt{Sine}_{\beta}\) under the same orthogonal transformations. The final ingredient of proving Theorems 2 and 3 is the following strong operator level convergence of \(\mathtt{Circ}_{n,\beta}\) to the \(\mathtt{Sine}_{\beta}\) operator. Let \(d_{\mathbb{H}}(z_{1},z_{2})=\operatorname{arccosh}\big{(}1+\frac{|z_{1}-z_{2}|^{2}}{2\Im z_{1}\Im z_{2}}\big{)}\) denote the hyperbolic distance in \(\mathbb{H}\).
**Proposition 47** ([37, 38]).: _There exists an explicit coupling of the operators \(\mathtt{Circ}_{n,\beta}=\mathtt{Dir}(z_{n}(\cdot),\mathfrak{u}_{0},\mathfrak{ u}_{1}^{(n)}),n\geq 1\) and \(\mathtt{Sine}_{\beta}=\mathtt{Dir}(z(\cdot),\mathfrak{u}_{0},\mathfrak{u}_{1})\) such that \(\mathfrak{u}_{1}^{(n)}=\mathfrak{u}_{1}\), and for large enough \(n\),_
\[d_{\mathbb{H}}(z_{n}(t),z(t)) \leq\frac{\log^{3-1/8}n}{\sqrt{(1-t)n}},\qquad 0\leq t\leq t_{n}:=1- \frac{\log^{6}n}{n}, \tag{75}\]
\[d_{\mathbb{H}}(z_{n}(t_{n}),z(t)) \leq(\log\log n)^{4},\qquad t_{n}\leq t<1.\]
_Under this coupling, we have_
\[\|\mathtt{r}\,\mathtt{Circ}_{n,\beta}-\mathtt{r}\,\mathtt{Sine}_{\beta}\|_{\mathrm{HS}}^{2}\leq\frac{\log^{6}n}{n},\qquad|\mathtt{t}_{\mathtt{Circ}_{n,\beta}}- \mathtt{t}_{\mathtt{Sine}_{\beta}}|\leq\frac{\log^{3}n}{\sqrt{n}}. 
\tag{76}\] Here the first inequality of (76) was proved in [37] and the second one follows from the estimates used in the proof of Theorem 49 of [38] (in particular equations (107)-(109)). 'Large enough \(n\)' means that there is a finite random variable \(N_{0}\) so that the statements hold for \(n\geq N_{0}\). Proposition 47 provides us with the necessary ingredients so that we can apply Proposition 43 to prove the convergence of the normalized characteristic polynomials of the truncated circular beta ensemble.

**Proposition 48**.: _Let \(\lambda_{i},1\leq i\leq n-1\) be the size \(n-1\) truncated circular beta ensemble and set \(p_{n-1,\beta}(z)=\prod_{i=1}^{n-1}\frac{z-\lambda_{i}}{1-\lambda_{i}}\) to be the normalized characteristic polynomial. Let \(\mathcal{E}_{\beta}\) be the structure function of \(\tau_{\beta}\) defined via (20). Then there is a coupling of \(p_{n-1,\beta},n\geq 2\) and \(\mathcal{E}_{\beta}\) such that_ \[|p_{n-1,\beta}(e^{iz/n})e^{-iz/2}-\mathcal{E}_{\beta}(z)|\leq\left(e^{|z|\frac{\log^{3}n}{\sqrt{n}}}-1\right)C^{1+|z|^{2}} \tag{77}\] _for all \(z\in\mathbb{C}\) and \(n\geq 1\), where \(C\) is an a.s. finite constant._

Proof.: Let \(\widetilde{\mu}_{n}\) be the reversed version of the Killip-Nenciu probability measure \(\mu_{n}:=\mu_{n,\beta}^{\text{KN}}\), and let \(\overset{\leftarrow}{\tau}_{n,\beta}\) be the pulled-back operator. By Propositions 13 and 45, the normalized characteristic polynomial \(p_{n-1,\beta}\) has the same distribution as \(\widetilde{\Psi}_{n-1,\beta}\), the monic orthogonal polynomial of degree \(n-1\) associated to \(\widetilde{\mu}_{n}\). Hence it is enough to provide a coupling of \(\mu_{n}\) and \(\mathcal{E}_{\beta}\) where (77) holds with \(\widetilde{\Psi}_{n-1,\beta}\) in place of \(p_{n-1,\beta}\). By Proposition 42, we have \[\widetilde{\Psi}_{n-1,\beta}(e^{iz/n})=e^{iz(n-1)/(2n)}\overset{\leftarrow}{H}_{n,\beta}((n-1)/n,z)^{\dagger}\binom{1}{-i},\] where \(\overset{\leftarrow}{H}_{n,\beta}\) solves the ODE (18) of \(\overset{\leftarrow}{\tau}_{n,\beta}\). Recall that \(\mathcal{E}_{\beta}(z)=H_{\beta}(1,z)^{\dagger}\binom{1}{-i}\). Consider the coupling of Proposition 47. Using the triangle inequality as in (69) within the proof of Proposition 43, we have \[\begin{split}|\widetilde{\Psi}_{n-1,\beta}(e^{iz/n})e^{-iz/2}-\mathcal{E}_{\beta}(z)|&\leq|e^{-iz/(2n)}|\left|\left(\overset{\leftarrow}{H}_{n,\beta}(\tfrac{n-1}{n},z)^{\dagger}-H_{\beta}(\tfrac{n-1}{n},z)^{\dagger}\right)\cdot\binom{1}{-i}\right|\\ &+|e^{-iz/(2n)}|\left|\left(H_{\beta}(\tfrac{n-1}{n},z)^{\dagger}-H_{\beta}(1,z)^{\dagger}\right)\cdot\binom{1}{-i}\right|\\ &+\left|e^{-iz/(2n)}-1\right|\left|H_{\beta}(1,z)^{\dagger}\cdot\binom{1}{-i}\right|.\end{split} \tag{78}\] Note that for any compact subset of \((0,1]\) the operator norm of the weight function \(R(s)\) of \(\tau_{\beta}\) is bounded by a finite random constant. Hence, by the standard theory of ordinary differential equations, for \(z\) in a compact set of \(\mathbb{C}\) the second term on the right hand side of (78) can be bounded from above by \(Cn^{-1}\), where \(C\) is an a.s. finite constant. The third term on the right hand side of (78) can also be bounded similarly, since \(z\mapsto H_{\beta}(1,z)^{\dagger}\cdot\binom{1}{-i}\) is a random entire function. It remains to estimate the first term on the right hand side of (78). 
By Propositions 34 and 46, the operators \(\overset{\leftarrow}{\tau}_{n,\beta},n\geq 1\) and \(\tau_{\beta}\) can be obtained from \(\mathtt{Circ}_{n,\beta},n\geq 1\) and \(\mathtt{Sine}_{\beta}\) under the same orthogonal transformations. Hence in the coupling of Proposition 47, for large enough \(n\) we have \[\|\mathtt{r}\,\tau_{\beta}-\mathtt{r}\,\overset{\leftarrow}{\tau}_{n,\beta}\|\leq\frac{\log^{3}n}{\sqrt{n}},\qquad|\mathtt{t}_{\tau_{\beta}}-\mathtt{t}_{\overset{\leftarrow}{\tau}_{n,\beta}}|\leq\frac{\log^{3}n}{\sqrt{n}}. \tag{79}\] It has also been shown in the proof of Proposition 3 of [39] (see equations (69)-(72) therein) that \[\int_{0}^{1}|\mathfrak{a}_{0,n}(s)-\mathfrak{a}_{0}(s)|^{2}ds\leq\frac{\log^{6}n}{n}. \tag{80}\] The estimates (79) and (80) allow us to use Proposition 8 to bound \(\|\overset{\leftarrow}{H}_{n,\beta}(\frac{n-1}{n},z)-H_{\beta}(\frac{n-1}{n},z)\|\) for \(n\) large enough. From this it follows that (78) can be bounded from above according to (77), which proves the proposition. 

We are now ready to prove Theorems 2 and 3.

Proof of Theorem 2.: First note that by (60) we have \(\mathcal{H}_{\beta}(0,\cdot)=H_{\beta}(1,\cdot)\), hence \(\mathcal{E}_{\beta}(\cdot)=\mathcal{H}_{\beta}(0,\cdot)^{\dagger}\binom{1}{-i}\). Then by Proposition 48, we obtain the convergence of the normalized characteristic polynomials of the truncated circular beta ensemble to the random analytic function \(\mathcal{E}_{\beta}\) with an explicit error bound. Under the edge scaling (3), the size \(n\) truncated circular beta ensemble has the same distribution as the zeros of the function \(p_{n,\beta}(e^{iz/n})\) defined in Proposition 48. The weak convergence of the truncated circular beta ensemble to \(\mathcal{X}_{\beta}:=\{z\in\mathbb{H}:\mathcal{E}_{\beta}(z)=0\}\) follows directly from Proposition 48 and Hurwitz's theorem. 

Proof of Theorem 3.: Consider again the coupling of \(\overset{\leftarrow}{\tau}_{n,\beta}\) and \(\tau_{\beta}\) described in Proposition 48; under this coupling the right boundary conditions of these operators are coupled together. By Propositions 44 and 48 we get the uniform-on-compacts convergence of the normalized characteristic polynomials of \(\mathtt{Circ}_{n,\beta}^{[r]}\) to \(\mathcal{E}_{r,\beta}\) with a similar error bound. This in turn gives the weak convergence of the eigenvalues of \(\mathtt{Circ}_{n,\beta}^{[r]}\) under the edge scaling and completes the proof. 

### Edge limits of the truncated real orthogonal beta ensemble

This section discusses the edge limits of the rank-one truncation and multiplicative perturbation of the real orthogonal beta ensemble. We will follow the same approach as in Section 6.1. Recall the size \(2n\) real orthogonal ensemble with joint density (50) and the random probability measure \(\mu^{\text{RO}}_{2n,\beta,a,b}\) in (51). Proposition 25 describes the distribution of the corresponding Verblunsky coefficients \(\alpha_{k},0\leq k\leq 2n-1\). Since the \(\alpha_{k}\)'s are all real and \(\alpha_{2n-1}=-1\), from (31) we get that the reversed Verblunsky coefficients \(\widetilde{\alpha}_{k},0\leq k\leq 2n-1\) satisfy \[(\widetilde{\alpha}_{0},\widetilde{\alpha}_{1},\cdots,\widetilde{\alpha}_{2n-2},\widetilde{\alpha}_{2n-1})=(\alpha_{2n-2},\alpha_{2n-3},\cdots,\alpha_{0},\alpha_{2n-1}). 
\tag{81}\] The rank-one truncation and multiplicative perturbation of the real orthogonal beta ensemble are defined as the joint eigenvalue distributions of the truncated CMV matrix \(\mathsf{RO}^{\ulcorner}_{2n,\beta,a,b}\) and the perturbed CMV matrix \(\mathsf{RO}^{[r]}_{2n,\beta,a,b}\) (indexed by \(r\in[0,1]\)), respectively. Using (81) with Proposition 16 yields the following result of Killip and Kozhan [19].

**Proposition 49** ([19]).: _For given \(\beta>0\), \(a,b>-1\) and \(n\geq 1\), let \(\alpha_{k},0\leq k\leq 2n-1\) be distributed according to Proposition 25. Then the CMV matrix \(\mathcal{C}(\alpha_{2n-2},\alpha_{2n-3},\ldots,\alpha_{0})\) has the same joint eigenvalue distribution as \(\mathsf{RO}^{\ulcorner}_{2n,\beta,a,b}\). For fixed \(r\in[0,1]\), the CMV matrix \(\mathcal{C}(\alpha_{2n-2},\ldots,\alpha_{0},-r)\) has the same joint eigenvalue distribution as \(\mathsf{RO}^{[r]}_{2n,\beta,a,b}\)._

Note that the joint distributions of the rank-one truncation and multiplicative perturbation of the real orthogonal beta ensemble were described explicitly in Theorem 6.4 and Proposition 7.2 (b) of [19]. Recall the Dirac operator \(\mathsf{RO}_{2n,\beta,a,b}\) in Definition 26. Denote by \(\widetilde{\tau}_{2n,\beta,a,b}\) the reversed version of \(\mathsf{RO}_{2n,\beta,a,b}\), and by \(\overset{\leftarrow}{\tau}_{2n,\beta,a,b}\) the pulled-back version of \(\widetilde{\tau}_{2n,\beta,a,b}\). Recall also the limiting \(\mathsf{Bess}_{\beta,a}\) operator in Definition 31 and its reversed version \(\tau_{\beta,a}\) in Definition 36. By Proposition 37 we get that \(\rho^{-1}J\tau_{\beta,a}J\rho\stackrel{{d}}{{=}}\mathsf{Bess}_{\beta,a}\). The next result shows that under the same transformations, the operators \(\overset{\leftarrow}{\tau}_{2n,\beta,a,b}\) and \(\mathsf{RO}_{2n,\beta,a,b}\) are orthogonally equivalent.

**Proposition 50**.:
\[\rho^{-1}J\overset{\leftarrow}{\tau}_{2n,\beta,a,b}J\rho\stackrel{{d}}{{=}}\mathsf{RO}_{2n,\beta,a,b}.\]

Proof.: Let \(\alpha_{k},0\leq k\leq 2n-1\) be distributed as in Proposition 25, and let \(\gamma_{k},0\leq k\leq 2n-1\) be the corresponding modified Verblunsky coefficients. Since the \(\alpha_{k}\)'s are all real, from (36) we have \(\gamma_{k}=\alpha_{k}\) for all \(0\leq k\leq 2n-1\). Let \(z_{k},0\leq k\leq 2n\) and \(\widetilde{z}_{k},0\leq k\leq 2n\) be the path parameters of \(\mathsf{RO}_{2n,\beta,a,b}\) and \(\widetilde{\tau}_{2n,\beta,a,b}\) in \(\mathbb{H}\), respectively. Using (43) and (81) we get \[z_{k}=i\prod_{j=0}^{k-1}\frac{1+\gamma_{j}}{1-\gamma_{j}},\qquad\widetilde{z}_{k}=i\prod_{j=0}^{k-1}\frac{1+\gamma_{2n-2-j}}{1-\gamma_{2n-2-j}},\qquad 0\leq k\leq 2n.\] Recall the affine transformation \(\mathcal{A}_{z,\mathbb{H}}\) in (39) and the corresponding transformation on the unit-disk model \(\mathcal{A}_{\gamma,\mathbb{D}}\) in (41). By Definition 41 and (40), the path parameters of \(\overset{\leftarrow}{\tau}_{2n,\beta,a,b}\) in \(\mathbb{H}\), denoted by \(\overset{\leftarrow}{z}_{k},0\leq k\leq 2n\), are obtained from \(\widetilde{z}_{k},0\leq k\leq 2n\) via the affine transformation \(\mathcal{A}_{\widetilde{z}_{2n-1},\mathbb{H}}\). More precisely, we have \[\overset{\leftarrow}{z}_{k}=\mathcal{P}\left(\begin{array}{cc}1&0\\ 0&\Im\widetilde{z}_{2n-1}\end{array}\right)\left(\widetilde{z}_{k}\right)=i\prod_{j=0}^{2n-2-k}\frac{1-\gamma_{j}}{1+\gamma_{j}},\qquad 0\leq k\leq 2n-1,\] with \(\overset{\leftarrow}{z}_{2n}=\mathcal{A}_{\widetilde{z}_{2n-1},\mathbb{H}}(\widetilde{z}_{2n})=0\). 
Note that conjugating with the permutation matrix \(J\) maps \(z\mapsto-1/z\) for \(z\in i\mathbb{R}\). Together with the time-reversal, we see that the path parameters of the operator \(\rho^{-1}J\overset{\leftarrow}{\tau}_{2n,\beta,a,b}J\rho\) are the same as the path parameters of \(\mathtt{RO}_{2n,\beta,a,b}\). By Lemmas 10 and 11, the left and right boundary points of \(\rho^{-1}J\overset{\leftarrow}{\tau}_{2n,\beta,a,b}J\rho\) are given by \(1\) and \(-1\), which correspond to the vectors \(\mathfrak{u}_{0}=\binom{1}{0}\) and \(\mathfrak{u}_{1}=\binom{0}{-1}\) as desired. This completes the proof. 

In the rest of the section, we aim to prove the convergence of the normalized characteristic polynomials of the truncated real orthogonal beta ensemble using Proposition 43. The main ingredient will be the operator level convergence of \(\mathtt{RO}_{2n,\beta,a,b}\) to its limit \(\mathtt{Bess}_{\beta,a}\) proved in [24].

**Proposition 51** ([24]).: _Let \(\mathfrak{u}_{0}=\binom{1}{0},\mathfrak{u}_{1}=\binom{0}{-1}\). There exists a coupling of the operators \(\mathtt{RO}_{2n,\beta,a,b}=\mathtt{Dir}(iy_{n}(\cdot),\mathfrak{u}_{0},\mathfrak{u}_{1}),n\geq 1\) and \(\mathtt{Bess}_{\beta,a}=\mathtt{Dir}(iy(\cdot),\mathfrak{u}_{0},\mathfrak{u}_{1})\) such that as \(n\to\infty\) we have almost surely \(|y_{n}-y|\to 0\) point-wise on \([0,1)\), and_ \[\|\mathtt{r}\,\mathtt{RO}_{2n,\beta,a,b}-\mathtt{r}\,\mathtt{Bess}_{\beta,a}\|\to 0.\] _Moreover, for \(\varepsilon>0\) small there exists a sequence of tight random variables \(\kappa_{n}\) and an a.s. finite random variable \(\kappa>0\) such that for \(0\leq t<1\)_ \[\kappa_{n}^{-1}(1-\lfloor nt\rfloor/n)^{2a+1+\varepsilon}\leq y_{n}(t)\leq\kappa_{n}(1-\lfloor nt\rfloor/n)^{2a+1-\varepsilon}, \tag{82}\] _and similarly,_ \[\kappa^{-1}(1-t)^{2a+1+\varepsilon}\leq y(t)\leq\kappa(1-t)^{2a+1-\varepsilon}. \tag{83}\] Now we are ready to prove the main result of this section.

**Theorem 52**.: _For fixed \(\beta>0\), \(a,b>-1\) and \(n\geq 1\), let \(\lambda_{i},1\leq i\leq 2n-1\) be the size \((2n-1)\) truncated real orthogonal beta ensemble and set \(p_{2n-1,\beta,a,b}(z)=\prod_{i=1}^{2n-1}\frac{z-\lambda_{i}}{1-\lambda_{i}}\) to be the normalized characteristic polynomial. Let \(\mathcal{E}_{\beta,a}\) be the structure function of \(\tau_{\beta,a}\) defined via (20). Then there is a coupling of \(p_{2n-1,\beta,a,b},n\geq 1\) and \(\mathcal{E}_{\beta,a}\) such that almost surely_ \[\left|p_{2n-1,\beta,a,b}(e^{iz/(2n)})e^{-iz/2}-\mathcal{E}_{\beta,a}(z)\right|\to 0\quad\text{uniformly on compacts as }n\to\infty. \tag{84}\] _Consequently, the truncated real orthogonal beta ensembles converge weakly to the zeros of \(\mathcal{E}_{\beta,a}(\cdot)\) under the edge scaling (3) as \(n\to\infty\)._

Proof.: Let \(\widetilde{\mu}_{2n}\) be the reversed version of the random probability measure \(\mu_{2n}:=\mu_{2n,\beta,a,b}^{\text{RO}}\), and let \(\overset{\leftarrow}{\tau}_{2n,\beta,a,b}\) be the pulled-back operator. By Propositions 13 and 49, the normalized characteristic polynomial \(p_{2n-1,\beta,a,b}\) has the same distribution as \(\widetilde{\Psi}_{2n-1,\beta,a,b}\), the monic orthogonal polynomial of degree \(2n-1\) associated to \(\widetilde{\mu}_{2n}\). Hence it is enough to show that (84) holds with \(\widetilde{\Psi}_{2n-1,\beta,a,b}\) in place of \(p_{2n-1,\beta,a,b}\). 
By Proposition 42, we have \[\widetilde{\Psi}_{2n-1,\beta,a,b}(e^{iz/(2n)})=e^{iz(2n-1)/(4n)}\overset{\leftarrow}{H}_{2n,\beta,a,b}((2n-1)/(2n),z)^{\dagger}\binom{1}{-i},\] where \(\overset{\leftarrow}{H}_{2n,\beta,a,b}\) solves the ODE (18) of \(\overset{\leftarrow}{\tau}_{2n,\beta,a,b}\). Recall that \(\mathcal{E}_{\beta,a}=H_{\beta,a}(1,\cdot)^{\dagger}\binom{1}{-i}\). It suffices to provide a coupling of \(\mu_{2n}\) and \(\mathcal{E}_{\beta,a}\) under which the uniform-on-compacts convergence of \(\overset{\leftarrow}{H}_{2n,\beta,a,b}(1,z)\) to \(H_{\beta,a}(1,z)\) holds. By Propositions 37 and 50, the operators \(\overset{\leftarrow}{\tau}_{2n,\beta,a,b},n\geq 1\) and \(\tau_{\beta,a}\) can be obtained from \(\mathtt{RO}_{2n,\beta,a,b},n\geq 1\) and \(\mathtt{Bess}_{\beta,a}\) under the same orthogonal transformations. Therefore, in the coupling of Proposition 51, we have \[\|\mathtt{r}\,\overset{\leftarrow}{\tau}_{2n,\beta,a,b}-\mathtt{r}\,\tau_{\beta,a}\|\to 0,\qquad\mathtt{t}_{\overset{\leftarrow}{\tau}_{2n,\beta,a,b}}=\mathtt{t}_{\tau_{\beta,a}}=0. \tag{85}\] By the triangle inequality, the path bounds (82) and (83), and a standard subsequence argument, we have \(\lim_{n\to\infty}\int_{0}^{1}|\hat{y}(s)^{-1/2}-\hat{y}_{n}(s)^{-1/2}|^{2}ds=0\). Together with (85), this verifies the conditions in (67). Using Proposition 43 completes the proof of the theorem. 

Note that by Proposition 39 the limit random analytic function \(\mathcal{E}_{\beta,a}\) can also be characterized as \(\mathcal{E}_{\beta,a}(z)=\mathcal{H}_{\beta,a}(0,z)^{\dagger}\binom{1}{-i}\), where \(\mathcal{H}_{\beta,a}\) solves the SDE (63). Using Proposition 44 and Theorem 52, one obtains the following result on the convergence of the normalized characteristic polynomial of the perturbed matrix \(\mathsf{RO}_{2n,\beta,a,b}^{[r]}\).

**Corollary 53**.: _For fixed \(r\in[0,1]\) let \(\Lambda_{2n}=\{\lambda_{1},\ldots,\lambda_{2n}\}\) be the size \(2n\) rank-one multiplicative perturbed real orthogonal ensemble and set \(p_{2n,\beta,a,b}^{[r]}(z)=\prod_{i=1}^{2n}\frac{z-\lambda_{i}}{1-\lambda_{i}}\) to be the normalized characteristic polynomial. Then under the coupling of Proposition 51, we have almost surely as \(n\to\infty\)_ \[\Big{|}p_{2n,\beta,a,b}^{[r]}(e^{iz/(2n)})e^{-iz/2}-\mathcal{E}_{\beta,a}^{[r]}(z)\Big{|}\to 0,\quad\text{uniformly on compacts in }\mathbb{C},\] _where \(\mathcal{E}_{\beta,a}^{[r]}(z)=H_{\beta,a}(1,z)^{\dagger}\binom{1}{-i\frac{1-r}{1+r}}\). Consequently, the rank-one multiplicative perturbed real orthogonal beta ensemble \(\Lambda_{2n}\) converges weakly to the zero set of the random analytic function \(\mathcal{E}_{\beta,a}^{[r]}\) under the edge scaling (3) as \(n\to\infty\)._

Recall that the secular function \(\zeta_{\beta,a}\) of the \(\mathtt{Bess}_{\beta,a}\) operator is given by \(\zeta_{\beta,a}=H_{\beta,a}(1,z)^{\dagger}\binom{1}{0}\). The above result gives an interpolation between the scaling limits of the normalized characteristic polynomials of the unperturbed and the truncated real orthogonal beta ensemble. 

## 7 The truncated circular Jacobi beta ensemble

In Section 7.1 we construct the truncated and the multiplicative perturbed circular Jacobi beta ensemble, and we compute the joint eigenvalue density of the truncated model. In Section 7.2, we derive the edge scaling limits of the truncated and perturbed models using Propositions 43 and 44. 
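Before turning to the circular Jacobi case, it is useful to record how the normalized characteristic polynomials and the edge scaling (3) appearing in these statements can be evaluated in practice. The following minimal Python sketch generates the eigenvalues of a \(\beta=2\) truncated ensemble from a Haar unitary matrix (assuming the truncation removes the first row and column) and forms the edge-scaled function \(p_{n}(e^{iz/n})e^{-iz/2}\), whose zeros in \(\mathbb{H}\) encode the eigenvalues via \(z=-in\log\lambda\); this is an illustration we supply, not a computation from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix with phase correction yields Haar measure
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def normalized_charpoly(lams, z):
    # p(z) = prod_i (z - lambda_i) / (1 - lambda_i), cf. Proposition 48
    return np.prod((z - lams) / (1 - lams))

n = 200
U = haar_unitary(n + 1, rng)
lams = np.linalg.eigvals(U[1:, 1:])   # beta = 2 truncated ensemble, n points

def g(z):
    # edge-scaled function p_n(e^{iz/n}) e^{-iz/2}; its zeros in the upper
    # half plane correspond to eigenvalues via z = -i n log(lambda)
    return normalized_charpoly(lams, np.exp(1j * z / n)) * np.exp(-1j * z / 2)

print(g(1.0 + 1.0j))
```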
### Matrix model and joint eigenvalue density for the truncated circular Jacobi beta ensemble

Consider the random probability measure \(\mu^{\mathrm{CJ}}_{n,\beta,\delta}\) in (52), with support given by the circular Jacobi beta ensemble. Recall also the matrix model \(\mathsf{CJ}_{n,\beta,\delta}\) defined in Proposition 28 via the sequence of regular Verblunsky coefficients \(\alpha_{k},0\leq k\leq n-1\) of \(\mu^{\mathrm{CJ}}_{n,\beta,\delta}\).

**Definition 54**.: For fixed \(n\geq 1\), \(\beta>0\), \(\Re\delta>-1/2\) we define the _truncated circular Jacobi beta ensemble_ as the joint eigenvalue distribution of the truncated matrix \(\mathsf{CJ}_{n,\beta,\delta}^{\ulcorner}\). For a fixed \(r\in[0,1]\) we define the _perturbed circular Jacobi beta ensemble_ as the joint eigenvalue distribution of the perturbed matrix \(\mathsf{CJ}_{n,\beta,\delta}^{[r]}:=\mathsf{CJ}_{n,\beta,\delta}\cdot\mathrm{diag}(r,1,1,\ldots,1)\).

The main challenge in studying these ensembles is that the Verblunsky coefficients of \(\mu^{\mathrm{CJ}}_{n,\beta,\delta}\) are not independent, hence one cannot expect a nice description of the CMV matrices appearing in Proposition 16. However, as the next proposition shows, we can still preserve the independence by expressing the appearing CMV matrices in terms of the _modified_ Verblunsky coefficients. Recall the definition of the modified Verblunsky coefficients given by the recursion (36). The recursion provides a one-to-one map between the first \(k\leq n-1\) Verblunsky coefficients and the first \(k\) modified Verblunsky coefficients; we denote this map by \(\mathcal{T}_{k}\).

**Proposition 55**.: _Fix \(\beta>0,\delta\in\mathbb{C}\) with \(\Re\delta>-1/2\), and let \(\gamma_{k},0\leq k\leq n-1\) be the sequence of modified Verblunsky coefficients of \(\mu^{\mathrm{CJ}}_{n,\beta,\delta}\). Then the sub-unitary CMV matrix \(\mathcal{C}(\mathcal{T}_{n-1}^{-1}(\gamma_{n-2},\gamma_{n-3},\cdots,\gamma_{0}))\) has the same joint eigenvalue distribution as the truncated model \(\mathsf{CJ}_{n,\beta,\delta}^{\ulcorner}\). For fixed \(r\in[0,1]\), the matrix \(\mathcal{C}(\mathcal{T}_{n}^{-1}(\gamma_{n-2},\gamma_{n-3},\ldots,\gamma_{0},r\gamma_{n-1}))\) has the same joint eigenvalue distribution as the perturbed model \(\mathsf{CJ}_{n,\beta,\delta}^{[r]}\)._

Before proving the proposition, we introduce a simple mapping on \(\mathbb{D}\). For \(\gamma\in\mathbb{D}\) recall the linear fractional transformation \(\mathcal{A}_{\gamma,\mathbb{D}}\) from (41). This is an isometry of the Poincare disk that corresponds to an affine transformation in the Poincare half-plane \(\mathbb{H}\). The inverse of \(\mathcal{A}_{\gamma,\mathbb{D}}\) is also an isometry, and it also corresponds to an affine transformation in \(\mathbb{H}\); we denote the corresponding element of \(\mathbb{D}\) by \(\gamma^{\iota}\): \[\mathcal{A}_{\gamma,\mathbb{D}}^{-1}=\mathcal{A}_{\gamma^{\iota},\mathbb{D}},\qquad\gamma^{\iota}=-\gamma\frac{1-\bar{\gamma}}{1-\gamma}. \tag{86}\] Note that the mapping \(\iota:\gamma\mapsto\gamma^{\iota}\) is an involution such that \(\mathcal{A}_{\gamma,\mathbb{D}}(0)=\gamma^{\iota}\). We also need the following distributional identity for \(\Theta(a+1,\delta)\) random variables (see Definition 27).

**Claim 56**.: _Fix \(\delta\in\mathbb{C}\) with \(\Re\delta>-1/2\). Let \(\zeta_{0},\zeta_{1},\ldots,\zeta_{n-1}\) be a sequence of independent random variables such that \(\zeta_{i}\sim\Theta(a_{i}+1,\delta)\), with \(a_{0}=0\) and \(a_{i}\geq 0\) for \(i\geq 1\). 
Then_ \[\left(\frac{\zeta_{1}^{\iota}}{\zeta_{0}},\frac{\zeta_{2}^{\iota}}{\zeta_{1}},\ldots,\frac{\zeta_{n-1}^{\iota}}{\zeta_{n-2}},\frac{1}{\zeta_{n-1}}\right)\stackrel{{d}}{{=}}\left(\bar{\zeta}_{1},\frac{\bar{\zeta}_{2}}{\bar{\zeta}_{1}^{\iota}},\ldots,\frac{\bar{\zeta}_{n-1}}{\bar{\zeta}_{n-2}^{\iota}},\frac{\bar{\zeta}_{0}}{\bar{\zeta}_{n-1}^{\iota}}\right). \tag{87}\]

Proof.: Our statement will follow from the following (simpler) distributional identity. Let \(\zeta\sim\Theta(a+1,\delta)\) and \(\eta\sim\Theta(1,\delta)\) be independent. Then \[(\eta\bar{\zeta}^{\iota},\zeta)\stackrel{{d}}{{=}}(\zeta,\eta\bar{\zeta}^{\iota}). \tag{88}\] Since \(|\zeta^{\iota}|=|\zeta|\), it is sufficient to prove that the unit length random variables \(\eta\bar{\zeta}^{\iota}/|\zeta|\) and \(\zeta/|\zeta|\) are conditionally exchangeable given \(|\zeta|=r\). Let \(z=\zeta/|\zeta|\); then under the condition \(|\zeta|=r\) we have \[\eta\bar{\zeta}^{\iota}/|\zeta|=-\eta\frac{1-rz}{z-r}.\] By the independence of \(\eta\) and \(\zeta\), the conditional joint density of \((\eta,z)\) (given \(|\zeta|=r\)) is proportional to \[(1-\eta)^{\bar{\delta}}(1-\eta^{-1})^{\delta}(1-rz)^{\bar{\delta}}(1-rz^{-1})^{\delta}.\] Since \(z\mapsto\frac{1-rz}{z-r}\) is an isometry of the unit circle, the Jacobian of the mapping \((\eta,z)\mapsto\left(-\eta\frac{1-rz}{z-r},z\right)\) is equal to \(1\). Therefore, the conditional joint density of \((\xi_{1},\xi_{2}):=\left(-\eta\frac{1-rz}{z-r},z\right)\) (given \(|\zeta|=r\)) is proportional to \[(1-r(\xi_{1}+\xi_{2})+\xi_{1}\xi_{2})^{\bar{\delta}}(1-r(\xi_{1}^{-1}+\xi_{2}^{-1})+\xi_{1}^{-1}\xi_{2}^{-1})^{\delta}.\] This shows the (conditional) exchangeability of \((\xi_{1},\xi_{2})=(\eta\bar{\zeta}^{\iota}/|\zeta|,\zeta/|\zeta|)\) and proves (88). 

Now we turn to the proof of the statement. Note that (88) implies that \[(\zeta_{1}^{\iota}/\zeta_{0},\bar{\zeta}_{1})\stackrel{{d}}{{=}}(\bar{\zeta}_{1},\zeta_{1}^{\iota}/\zeta_{0}).\] Starting from the random vector on the left-hand side of (87), we apply (88) repeatedly and get \[\left(\frac{\zeta_{1}^{\iota}}{\zeta_{0}},\frac{\zeta_{2}^{\iota}}{\zeta_{1}},\ldots,\frac{\zeta_{n-1}^{\iota}}{\zeta_{n-2}},\frac{1}{\zeta_{n-1}}\right)\stackrel{{d}}{{=}}\left(\bar{\zeta}_{1},\frac{\bar{\zeta}_{0}\zeta_{2}^{\iota}}{\zeta_{1}^{\iota}},\ldots,\frac{\zeta_{n-1}^{\iota}}{\zeta_{n-2}},\frac{1}{\zeta_{n-1}}\right)\stackrel{{d}}{{=}}\ldots\] \[\stackrel{{d}}{{=}}\left(\bar{\zeta}_{1},\frac{\bar{\zeta}_{2}}{\bar{\zeta}_{1}^{\iota}},\ldots,\frac{\bar{\zeta}_{0}\zeta_{n-1}^{\iota}}{\bar{\zeta}_{n-2}^{\iota}},\frac{1}{\zeta_{n-1}}\right)\stackrel{{d}}{{=}}\left(\bar{\zeta}_{1},\frac{\bar{\zeta}_{2}}{\bar{\zeta}_{1}^{\iota}},\ldots,\frac{\bar{\zeta}_{n-1}}{\bar{\zeta}_{n-2}^{\iota}},\frac{\bar{\zeta}_{0}}{\bar{\zeta}_{n-1}^{\iota}}\right),\] proving (87). 

Now we return to the proof of Proposition 55.

Proof of Proposition 55.: Let \(\alpha_{k},0\leq k\leq n-1\) be the (regular) Verblunsky coefficients of \(\mu_{n,\beta,\delta}^{\mathrm{CJ}}\) and set \(\widetilde{\alpha}_{k},0\leq k\leq n-1\) to be the reversed version of \(\alpha_{k},0\leq k\leq n-1\) defined via (31). By Proposition 16, the truncated and perturbed models \(\mathsf{CJ}_{n,\beta,\delta}^{\ulcorner}\) and \(\mathsf{CJ}_{n,\beta,\delta}^{[r]}\) have the same joint eigenvalue distributions as the CMV matrices \(\mathcal{C}(\widetilde{\alpha}_{0},\ldots,\widetilde{\alpha}_{n-2})\) and \(\mathcal{C}(\widetilde{\alpha}_{0},\ldots,\widetilde{\alpha}_{n-2},r\widetilde{\alpha}_{n-1})\), respectively. The statement then follows by expressing the modified Verblunsky coefficients of the reversed sequence through the involution \(\iota\) and applying Claim 56 to the sequence \(\zeta_{j}=\gamma_{n-1-j},0\leq j\leq n-1\), which shows that they have the same joint distribution as \(\gamma_{n-2},\gamma_{n-3},\ldots,\gamma_{0}\) (respectively \(\gamma_{n-2},\ldots,\gamma_{0},r\gamma_{n-1}\)). 

Proposition 55 allows us to derive the joint eigenvalue distribution of the finite truncated circular Jacobi beta ensemble.

**Theorem 57**.: _Fix \(\beta>0,\delta\in\mathbb{C}\) with \(\Re\delta>-1/2\). Then the eigenvalues of \(\mathsf{CJ}_{n+1,\beta,\delta}^{\ulcorner}\) are distributed in \(\mathbb{D}^{n}\) according to the density_ \[c_{n,\beta,\delta}\prod_{j,k=1}^{n}(1-z_{j}\bar{z}_{k})^{\frac{\beta}{2}-1}\prod_{j<k}|z_{k}-z_{j}|^{2}\prod_{j=1}^{n}\Big{(}(1-z_{j})^{\bar{\delta}}(1-\bar{z}_{j})^{\delta}\Big{)} \tag{91}\] _with respect to the Lebesgue measure on \(\mathbb{D}^{n}\). Here \(c_{n,\beta,\delta}=\frac{1}{\pi^{n}n!}\prod_{j=1}^{n}\frac{\Gamma(\frac{\beta}{2}j+1+\delta)\Gamma(\frac{\beta}{2}j+1+\bar{\delta})}{\Gamma(\frac{\beta}{2}j)\Gamma(\frac{\beta}{2}j+1+\delta+\bar{\delta})}\) is the normalizing constant._

Proof.: The proof of the statement relies on the following computations of Jacobian determinants. We refer to Section 6 and Appendix B of [19] for more details. Let \(\gamma_{k},0\leq k\leq n-1\) be distributed as in Proposition 55, and let \((\alpha_{0},\alpha_{1},\cdots,\alpha_{n-1})=\mathcal{T}_{n}^{-1}(\gamma_{0},\gamma_{1},\cdots,\gamma_{n-1})\). From (36) we have \[\left|\frac{\partial(\alpha_{0},\ldots,\alpha_{n-1})}{\partial(\gamma_{0},\ldots,\gamma_{n-1})}\right|=1. \tag{92}\] Denote by \(z_{k},1\leq k\leq n\) the eigenvalues of \(\mathsf{CJ}_{n+1,\beta,\delta}^{\ulcorner}\). It has been shown in [19] that \[\left|\frac{\partial(\alpha_{0},\ldots,\alpha_{n-1})}{\partial(z_{1},\ldots,z_{n})}\right|=|\Delta(z_{1},\ldots,z_{n})|^{2}\prod_{j=0}^{n-1}(1-|\alpha_{j}|^{2})^{-j}, \tag{93}\] where \(\Delta(z_{1},\ldots,z_{n})=\prod_{1\leq j<k\leq n}(z_{j}-z_{k})\) denotes the Vandermonde determinant of \(z_{k},1\leq k\leq n\). Using \(|\alpha_{k}|=|\gamma_{k}|\), (92), and (93), we obtain \[\left|\frac{\partial(\gamma_{0},\ldots,\gamma_{n-1})}{\partial(z_{1},\ldots,z_{n})}\right|=|\Delta(z_{1},\ldots,z_{n})|^{2}\prod_{j=0}^{n-1}(1-|\gamma_{j}|^{2})^{-j}. \tag{94}\] Proposition 2.5 of [4] shows that we have \[\prod_{j=0}^{n-1}(1-\gamma_{j})=\Phi_{n}(1)=\prod_{j=1}^{n}(1-z_{j}),\quad\prod_{j=0}^{n-1}(1-\bar{\gamma}_{j})=\bar{\Phi}_{n}(1)=\prod_{j=1}^{n}(1-\bar{z}_{j}). \tag{95}\] Since \(|\alpha_{k}|=|\gamma_{k}|\), by Lemma B.1 (vi) of [19], we have \[\prod_{j=0}^{n-1}(1-|\gamma_{j}|^{2})^{(\frac{\beta}{2}-1)(j+1)}=\prod_{j=0}^{n-1}(1-|\alpha_{j}|^{2})^{(\frac{\beta}{2}-1)(j+1)}=\prod_{j,k=1}^{n}(1-z_{j}\bar{z}_{k})^{\frac{\beta}{2}-1}. 
\tag{96}\] By (94)-(96) and the explicit joint distribution of \(\gamma_{k},0\leq k\leq n-1\), the joint density of \(z_{k},1\leq k\leq n\) is given by \[c_{n,\beta,\delta}\prod_{j=0}^{n-1}(1-|\gamma_{j}|^{2})^{(\frac{\beta}{2}-1)(j+1)}(1-\gamma_{j})^{\bar{\delta}}(1-\bar{\gamma}_{j})^{\delta}|\Delta(z_{1},\ldots,z_{n})|^{2}\] \[\qquad\qquad=c_{n,\beta,\delta}\prod_{j,k=1}^{n}(1-z_{j}\bar{z}_{k})^{\frac{\beta}{2}-1}\prod_{j=1}^{n}\left((1-z_{j})^{\bar{\delta}}(1-\bar{z}_{j})^{\delta}\right)|\Delta(z_{1},\ldots,z_{n})|^{2}.\] Collecting the normalizing constants of the joint distribution of \(\gamma_{k},0\leq k\leq n-1\), we get \[c_{n,\beta,\delta}=\frac{1}{\pi^{n}n!}\prod_{j=0}^{n-1}\frac{\Gamma(\frac{\beta}{2}(j+1)+1+\delta)\Gamma(\frac{\beta}{2}(j+1)+1+\bar{\delta})}{\Gamma(\frac{\beta}{2}(j+1))\Gamma(\frac{\beta}{2}(j+1)+1+\delta+\bar{\delta})}.\] This finishes the proof. 

Note that using the explicit description of the joint distribution of the modified Verblunsky coefficients of the random probability measure \(\mu^{\rm CJ}_{n,\beta,\delta}\), and the method developed in Section 7 of [19], one can also obtain the joint density of the perturbed circular Jacobi beta ensemble. We omit the computation to shorten our presentation.

### Edge limit of the truncated circular Jacobi beta ensemble

In this section, we prove the edge limits of the rank-one truncation and multiplicative perturbation of the circular Jacobi beta ensemble. Recall the Dirac operator representation \({\tt CJ}_{n,\beta,\delta}\) defined in Definition 29. We denote by \(\widetilde{\tau}_{n,\beta,\delta}\) the reversed version of \({\tt CJ}_{n,\beta,\delta}\) via Definition 40, and by \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\) the pulled-back version of \(\widetilde{\tau}_{n,\beta,\delta}\) via Definition 41. Our approach will be similar to the one used for the circular beta ensemble case (which corresponds to \(\delta=0\)). We will show that under appropriate transformations, the operators \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\) and \({\tt CJ}_{n,\beta,\delta}\) are orthogonally equivalent. Note, however, that for \(\delta\neq 0\) the measure \(\mu^{\rm CJ}_{n,\beta,\delta}\) is no longer invariant under rotations, which requires us to develop a new method to prove the orthogonal equivalence. The key ingredient is the following proposition, providing equivalent descriptions of the conditioned path parameters of \({\tt CJ}_{n,\beta,\delta}\).

**Proposition 58**.: _Let \(\mu=\mu^{\rm CJ}_{n,\beta,\delta}\) and let \(\gamma_{k},0\leq k\leq n-1\) be its modified Verblunsky coefficients. Let \(b_{k},0\leq k\leq n\) be the path parameters of \({\tt CJ}_{n,\beta,\delta}\) in \(\mathbb{D}\). Then the following sequences have the same joint distribution._

1. _The path parameters_ \(b_{k},0\leq k\leq n\) _conditioned on_ \(b_{n}=1\)_._
2. _The path parameters corresponding to the sequence of modified Verblunsky coefficients_ \(\bar{\gamma}_{0}^{\iota},\bar{\gamma}_{1}^{\iota},\ldots,\bar{\gamma}_{n-2}^{\iota},1\)_._
3. 
_The 'pulled back' path parameters_ \(b_{k}^{\prime}:=\mathcal{A}_{\hat{b}_{n-1},\mathbb{D}}(\hat{b}_{n-k-1}),0\leq k\leq n\)_, where_ \(\hat{b}_{-1}=1\) _and_ \(\hat{b}_{k},0\leq k\leq n-1\) _are the first_ \(n\) _elements of the path parameters produced by the sequence of modified Verblunsky coefficients_ \(\bar{\gamma}_{n-2},\bar{\gamma}_{n-3},\ldots,\bar{\gamma}_{0}\)_._

The proof relies on a special decomposition property of the \(\Theta(a+1,\delta)\) distribution and an application of Doob's h-transform. Before presenting the proof, we first introduce a Pearson-type distribution and a couple of facts about the \(\Theta(a+1,\delta)\) distribution.

**Definition 59**.: For \(m>1/2\) and \(\mu\in\mathbb{R}\) we denote by \(P_{IV}(m,\mu)\) the distribution of the (unscaled) Pearson type IV distribution on \(\mathbb{R}\) that has density function \[\frac{2^{2m-2}|\Gamma(m+\frac{\mu}{2}i)|^{2}}{\pi\,\Gamma(2m-1)}(1+x^{2})^{-m}e^{-\mu\arctan x}. \tag{97}\] Note that the random variable \(\Theta(1,\delta)\) can be connected to the Pearson random variable \(P_{IV}(\Re\delta+1,-2\Im\delta)\) via the mapping \(e^{i\theta}\mapsto-\cot(\theta/2)\).

**Fact 60** ([24]).: _Suppose that \(\gamma\sim\Theta(a+1,\delta)\) with \(a\geq 0\) and \(\Re\delta>-1/2\). Define \(w,v\in\mathbb{R}\) with \(\frac{2\gamma}{1-\gamma}=w-iv\). Then the joint density of \((v,1+w)\in\mathbb{R}\times\mathbb{R}_{+}\) is given by_ \[f_{a,\delta}(x,y)=c_{a,\delta}\,y^{\frac{a}{2}-1}(x^{2}+(1+y)^{2})^{-(\frac{a}{2}+\Re\delta+1)}e^{2\Im\delta\arctan\frac{x}{1+y}}, \tag{98}\] _with \(c_{a,\delta}=2^{a+2\Re\delta}\frac{\Gamma(a/2+1+\delta)\Gamma(a/2+1+\bar{\delta})}{\pi\Gamma(a/2)\Gamma(a/2+1+2\Re\delta)}\)._

_Moreover, the random variables \(w\) and \(\frac{v}{2+w}\) are independent. The distribution of the random variable \(\frac{v}{2+w}\) is given by \(P_{IV}(\frac{a}{2}+\Re\delta+1,-2\Im\delta)\). The distribution of \(1+w\) is the same as the distribution of \(G_{1}/G_{2}\), where \(G_{1},G_{2}\) are independent (standard) Gamma random variables with parameters \(\frac{a}{2}\) and \(\frac{a}{2}+2\Re\delta+1\), respectively._

Using Fact 60 and the definition of \(\gamma^{\iota}\) in (86), one obtains the following property.

**Fact 61**.: _Suppose that \(\gamma\sim\Theta(a+1,\delta)\) with \(a\geq 0\) and \(\Re\delta>-1/2\). Then \(\gamma^{\iota}\sim\Theta(\tilde{a}+1,\tilde{\delta})\) with \(\tilde{a}=a+4\Re\delta+2\), \(\tilde{\delta}=-(1+\delta)\)._

Proof.: With a bit of abuse of notation, define \(w,v,w^{\iota},v^{\iota}\in\mathbb{R}\) with \[w-iv=\frac{2\gamma}{1-\gamma},\qquad w^{\iota}-iv^{\iota}=\frac{2\gamma^{\iota}}{1-\gamma^{\iota}}. \tag{99}\] They satisfy the identities \(\mathcal{U}^{-1}(\gamma)=v+i(1+w)\) and \(\mathcal{U}^{-1}(\gamma^{\iota})=v^{\iota}+i(1+w^{\iota})\). By (99) and (86) we have \[1+w^{\iota}=\frac{1}{1+w},\qquad\frac{v^{\iota}}{2+w^{\iota}}=-\frac{v}{2+w}.\] From Fact 60, we obtain that the joint density of \((v^{\iota},1+w^{\iota})\) is proportional to \[y^{\frac{a}{2}+2\Re\delta}(x^{2}+(1+y)^{2})^{-(\frac{a}{2}+\Re\delta+1)}e^{-2\Im\delta\arctan\frac{x}{1+y}}.\] This shows that if \(\gamma\sim\Theta(a+1,\delta)\), then \(\gamma^{\iota}\sim\Theta(\tilde{a}+1,\tilde{\delta})\), with \(\tilde{a}:=a+4\Re\delta+2\) and \(\tilde{\delta}:=-(1+\delta)\). Note that the random variable \(\gamma^{\iota}\) is well-defined since the conditions \(\tilde{a}>0\) and \(\tilde{a}/2+\Re\tilde{\delta}=a/2+\Re\delta>-1/2\) are still satisfied. 
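Facts 60 and 61 and the involution identities in (86) lend themselves to quick numerical checks. The sketch below is a minimal illustration we add here, assuming a real \(\delta\geq 0\) and \(a\geq 2\) so that the \(\Theta(a+1,\delta)\) density from Definition 27 is bounded and plain rejection sampling applies; it verifies the involution and isometry properties of \(\iota\), the independence claim of Fact 60, and the Gamma-ratio law of \(1+w\).

```python
import numpy as np

rng = np.random.default_rng(1)

def A(g, z):
    # the disk isometry fixing 1, cf. (41)
    return (z - g) / (1 - np.conj(g) * z) * (1 - np.conj(g)) / (1 - g)

def iota(g):
    # the involution from (86)
    return -g * (1 - np.conj(g)) / (1 - g)

# identities behind (86), checked at random points of the disk
for _ in range(1000):
    g, z = [complex(*rng.uniform(-0.7, 0.7, 2)) for _ in range(2)]
    assert abs(iota(iota(g)) - g) < 1e-12        # iota is an involution
    assert abs(abs(iota(g)) - abs(g)) < 1e-12    # |gamma^iota| = |gamma|
    assert abs(A(g, 0) - iota(g)) < 1e-12        # A_{gamma,D}(0) = gamma^iota
    assert abs(A(iota(g), A(g, z)) - z) < 1e-12  # A_{gamma^iota,D} = A_{gamma,D}^{-1}

# Fact 60 for real delta (a >= 2 and delta >= 0 keep the density bounded)
a, delta = 4.0, 0.5

def sample_theta(size):
    # rejection sampling from c (1-|z|^2)^{a/2-1} |1-z|^{2 delta} on the unit disk
    out, bound = [], 2.0 ** (2 * delta)
    while len(out) < size:
        z = rng.uniform(-1, 1, 4 * size) + 1j * rng.uniform(-1, 1, 4 * size)
        z = z[np.abs(z) < 1]
        dens = (1 - np.abs(z) ** 2) ** (a / 2 - 1) * np.abs(1 - z) ** (2 * delta)
        out.extend(z[rng.uniform(0, bound, len(z)) < dens])
    return np.array(out[:size])

gam = sample_theta(200_000)
wv = 2 * gam / (1 - gam)          # w - iv = 2 gamma / (1 - gamma)
w, v = wv.real, -wv.imag

print(np.corrcoef(w, v / (2 + w))[0, 1])            # ~ 0: independence in Fact 60
g1 = rng.gamma(a / 2, size=200_000)
g2 = rng.gamma(a / 2 + 2 * delta + 1, size=200_000)
print(np.mean(1 + w), np.mean(g1 / g2))             # both ~ (a/2)/(a/2 + 2*delta)
```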
The next result shows that the right boundary point of the \(\mathtt{CJ}_{n,\beta,\delta}\) operator has the same distribution as a \(\Theta(1,\delta)\) random variable.

**Proposition 62**.: _Let \(b_{n}\) be the right boundary point of the Dirac operator \(\mathtt{CJ}_{n,\beta,\delta}\) in \(\mathbb{D}\). Then \(b_{n}\stackrel{{d}}{{=}}\Theta(1,\delta)\)._

Proof.: Fix \(n\), and let \(b_{k}:=b_{k}^{(n)},0\leq k\leq n-1\) be the path parameters of \(\mathtt{CJ}_{n,\beta,\delta}\) in \(\mathbb{D}\); these solve the recursion (44) with \(b_{0}=0\). We can consider the solution \(b_{k,\gamma},0\leq k\leq n\) of this recursion for any initial value \(b_{0,\gamma}=\gamma\in\mathbb{D}\). We denote the probability density function of \(b_{n,\gamma}\) by \(P_{n}(\gamma,\eta)\). Recall from Proposition 28 that the modified Verblunsky coefficients \(\gamma_{k}=\gamma_{k}^{(n)},0\leq k\leq n-1\) of \(\mathtt{CJ}_{n,\beta,\delta}\) are independent, with \(\gamma_{k}\sim\Theta(\beta(n-k-1)+1,\delta)\). In the case when \(n=1\), we get \(b_{n}=b_{1}=\gamma_{0}\) from the recursion (44), so the statement follows. We also have \[P_{1}(0,\eta)=c_{\delta}(1-\eta)^{\bar{\delta}}(1-\bar{\eta})^{\delta},\quad c_{\delta}=\frac{\Gamma(1+\delta)\Gamma(1+\bar{\delta})}{\Gamma(1+\delta+\bar{\delta})}.\] Note that by (44) the distribution of \(b_{k+1}\) given \(b_{k}\) is invariant with respect to isometries of the disk that preserve \(1\). These isometries are just the Poincare disk version of the affine isometries of \(\mathbb{H}\), which can be parameterized as \(\mathcal{A}_{a,\mathbb{D}}(z)=\frac{z-a}{1-\bar{a}z}\frac{1-\bar{a}}{1-a},a\in\mathbb{D}\), according to (41). Therefore, we have \[P_{1}(\gamma,\eta)=c_{\delta}\left(\frac{(1-|\gamma|^{2})(1-\eta)}{(1-\bar{\gamma}\eta)(1-\gamma)}\right)^{\bar{\delta}}\left(\frac{(1-|\gamma|^{2})(1-\bar{\eta})}{(1-\gamma\bar{\eta})(1-\bar{\gamma})}\right)^{\delta}\frac{1-|\gamma|^{2}}{|1-\bar{\gamma}\eta|^{2}}.\] For \(n\geq 2\) we will proceed by induction. Assume that \(P_{n-1}(\gamma,\eta)=P_{1}(\gamma,\eta)\) (or equivalently, \(P_{n-1}(0,\eta)=P_{1}(0,\eta)\)). The proof will be completed if we can show that \(P_{n}(0,\eta)=P_{1}(0,\eta)\). Let \[f_{a,\delta}(z)=c_{a,\delta}(1-|z|^{2})^{\frac{a}{2}-1}(1-z)^{\bar{\delta}}(1-\bar{z})^{\delta},\quad a=\beta(n-1),\] be the density function of \(\gamma_{0}^{(n)}\) as in Definition 27. Note that \(\gamma_{0}^{(n)}\) is independent of \(\gamma_{k}^{(n)},1\leq k\leq n-1\), which have the same joint distribution as \(\gamma_{k}^{(n-1)},0\leq k\leq n-2\). Hence from (44) we get \[P_{n}(0,\eta)=\int_{z\in\mathbb{D}}P_{n-1}(z,\eta)f_{a,\delta}(z)dz=\int_{z\in\mathbb{D}}P_{1}(z,\eta)f_{a,\delta}(z)dz=c_{\delta}(1-\eta)^{\bar{\delta}}(1-\bar{\eta})^{\delta}\int_{z\in\mathbb{D}}c_{a,\delta}\frac{(1-|z|^{2})^{\frac{a}{2}+\delta+\bar{\delta}}}{(1-\bar{z}\eta)^{\bar{\delta}+1}(1-z\bar{\eta})^{\delta+1}}dz.\] Using the change of variables \(z\mapsto z\bar{\eta}\) we see that the integral does not depend on \(\eta\). Hence \(P_{n}(0,\eta)\) is a constant multiple of \(P_{1}(0,\eta)\), which means that it must be equal to it. This completes the induction step and finishes the proof. 

Now we turn to the proof of Proposition 58.

Proof of Proposition 58.: We first prove the equivalence between (2) and (3). 
By the recursion (45) and the fact that \(\mathcal{A}^{-1}_{\gamma,\mathbb{D}}(0)=\gamma\), we have \[b^{\prime}_{k}=\mathcal{A}_{\bar{\gamma}_{0},\mathbb{D}}\circ\mathcal{A}_{\bar{\gamma}_{1},\mathbb{D}}\circ\cdots\circ\mathcal{A}_{\bar{\gamma}_{n-2},\mathbb{D}}\big{(}\mathcal{A}^{-1}_{\bar{\gamma}_{n-2},\mathbb{D}}\circ\cdots\circ\mathcal{A}^{-1}_{\bar{\gamma}_{k},\mathbb{D}}(0)\big{)}=\mathcal{A}^{-1}_{\bar{\gamma}^{\iota}_{0},\mathbb{D}}\circ\cdots\circ\mathcal{A}^{-1}_{\bar{\gamma}^{\iota}_{k-1},\mathbb{D}}(0).\] This shows that the sequence \(b^{\prime}_{k},0\leq k\leq n-1\) consists of the first \(n\) elements of the path produced by \(\bar{\gamma}_{0}^{\iota},\ldots,\bar{\gamma}_{n-2}^{\iota}\). (This fact was also observed in Lemma 50 of [39].) By setting \(\hat{b}_{-1}=1\), we have \(b^{\prime}_{n}=\mathcal{A}_{\hat{b}_{n-1},\mathbb{D}}(1)=1\). Note also that \(b^{\prime}_{n}=1\) if and only if the corresponding last modified Verblunsky coefficient is equal to \(1\). This proves that the path parameters described in (2) and (3) are equal in law.

We now prove that the paths described in (1) and (2) have the same distribution. Let \(z_{k}=\mathcal{U}^{-1}(b_{k}),0\leq k\leq n\) be the path parameters of \(\mathtt{CJ}_{n,\beta,\delta}\) in \(\mathbb{H}\). By (43) this is a (time-inhomogeneous) Markov chain, with transition densities that are invariant under affine transformations. By Proposition 62 it follows that \(z_{n}\stackrel{{d}}{{=}}\mathcal{U}^{-1}(\gamma_{n-1})\sim P_{IV}(\Re\delta+1,-2\Im\delta)\), with density \[g(q)=\frac{4^{\Re\delta}|\Gamma(1+\delta)|^{2}}{\pi\,\Gamma(2\Re\delta+1)}(1+q^{2})^{-\Re\delta-1}e^{2\Im\delta\arctan q}.\] Moreover, the proof of the same proposition also implies that the conditional distribution of \(z_{n}\) given \(z_{k}=z=x+iy\) for a fixed \(0\leq k\leq n-2\) is the same as that of \(y\mathcal{U}^{-1}(\gamma_{n-1})+x\). This random variable has density \[g_{x,y}(q):=y^{-1}g\Big{(}\frac{q-x}{y}\Big{)}.\] For any fixed \(z=x+iy,z^{\prime}=x^{\prime}+iy^{\prime}\in\mathbb{H}\) we get \[\lim_{q\to\infty}\frac{g_{x^{\prime},y^{\prime}}(q)}{g_{x,y}(q)}=\frac{h(z^{\prime})}{h(z)},\qquad h(z):=(\Im z)^{2\Re\delta+1}. \tag{100}\] Let \(f_{a_{k},\delta}(x,y)\) be the density function of \(\mathcal{U}^{-1}(\gamma_{k})\) defined via (98) with parameters \(a_{k}=\beta(n-k-1)\) and \(\delta\). Then the transition density of the Markov chain \(z_{k},0\leq k\leq n\) is given by \[P\Big{(}(z,k),(z^{\prime},k+1)\Big{)}=f_{a_{k},\delta}\left(\frac{\Re(z^{\prime}-z)}{\Im z},\frac{\Im z^{\prime}}{\Im z}\right),\qquad 0\leq k\leq n-1.\] Now consider the distribution of \(z_{k},0\leq k\leq n\) conditioned on \(z_{n}=q\) with \(q\to\infty\). By Doob's h-transform, the conditioned transition density is given by \[Q\Big{(}(z,k),(z^{\prime},k+1)\Big{)}=P\Big{(}(z,k),(z^{\prime},k+1)\Big{)}\frac{h(z^{\prime})}{h(z)}=c_{a_{k},\delta}\,v^{\frac{\beta}{2}(n-k-1)+2\Re\delta}(u^{2}+(1+v)^{2})^{-\frac{\beta}{2}(n-k-1)-\Re\delta-1}e^{2\Im\delta\arctan(u/(1+v))},\] where \(u=(\Re z^{\prime}-\Re z)/\Im z\) and \(v=\Im z^{\prime}/\Im z\). By Fact 61, \(Q((z,k),(z^{\prime},k+1))\) is exactly the density function of the random variable \(\mathcal{U}^{-1}(\bar{\gamma}_{k}^{\iota})\). This proves the equivalence between the statements (1) and (2), and hence completes the proof. 

By Proposition 58 and Fact 61 we see that the effect of conditioning the path parameters in \(\mathbb{D}\) to hit \(1\) is equivalent to changing the parameter \(\delta\mapsto-(1+\bar{\delta})\). 
This coincides with a similar factorization lemma for the generating path of the \(\tau_{\beta,\delta}\) operator; see Theorem 43 of [24].

**Corollary 63**.: _Consider the same setup as in Proposition 58, in particular the path \(b^{\prime}_{k},0\leq k\leq n\) defined in (3). Let \(\eta\sim\Theta(1,\delta)\) be independent of \(\gamma_{k},0\leq k\leq n-2\). Then the rotated path parameters \(\check{b}_{k}:=\eta b^{\prime}_{k},0\leq k\leq n\) have the same joint distribution as the path parameters of the \(\mathtt{CJ}_{n,\beta,\delta}\) operator._

Proof.: By Proposition 58, the path parameters \(b^{\prime}_{k},0\leq k\leq n\) can be produced by the sequence of modified Verblunsky coefficients \(\bar{\gamma}_{0}^{\iota},\ldots,\bar{\gamma}_{n-2}^{\iota},1\). Let \((\alpha^{\prime}_{0},\ldots,\alpha^{\prime}_{n-1}):=\mathcal{T}_{n}^{-1}(\bar{\gamma}_{0}^{\iota},\ldots,\bar{\gamma}_{n-2}^{\iota},1)\) be the corresponding sequence of Verblunsky coefficients. By (36) and (90) we get \[\alpha^{\prime}_{k}=(-1)^{k}\frac{\gamma^{\iota}_{k}\cdots\gamma^{\iota}_{0}}{\gamma_{k-1}\cdots\gamma_{0}},\quad 0\leq k\leq n-2,\qquad\alpha^{\prime}_{n-1}=(-1)^{n-1}\frac{\gamma^{\iota}_{n-2}\cdots\gamma^{\iota}_{0}}{\gamma_{n-2}\cdots\gamma_{0}}.\] Recall also the (equivalent) description of \(b^{\prime}_{k},0\leq k\leq n\) using \(\alpha^{\prime}_{k},0\leq k\leq n-1\) in (34). Observe that for \(Z=\operatorname{diag}(z,1)\) we have \[\mathcal{P}Z^{-1}\left(\begin{array}{cc}1&\bar{\alpha}_{0}\\ \alpha_{0}&1\end{array}\right)\cdots\left(\begin{array}{cc}1&\bar{\alpha}_{k-1}\\ \alpha_{k-1}&1\end{array}\right)Z\left(\begin{array}{c}0\\ 1\end{array}\right)=\bar{z}b_{k}. \tag{101}\] This shows that the Verblunsky coefficients \(\check{\alpha}_{k},0\leq k\leq n-1\) corresponding to \(\check{b}_{k},0\leq k\leq n\) are given by \(\check{\alpha}_{k}=\bar{\eta}\alpha^{\prime}_{k}\), in particular \[\check{\alpha}_{k}=(-1)^{k}\bar{\eta}\frac{\gamma^{\iota}_{k}\cdots\gamma^{\iota}_{0}}{\gamma_{k-1}\cdots\gamma_{0}},\quad 0\leq k\leq n-2,\qquad\check{\alpha}_{n-1}=(-1)^{n-1}\bar{\eta}\frac{\gamma^{\iota}_{n-2}\cdots\gamma^{\iota}_{0}}{\gamma_{n-2}\cdots\gamma_{0}}.\] (The rotation of the sequence of path parameters and the corresponding change of the Verblunsky coefficients are related to the so-called Aleksandrov measures; we refer to [32] for more background.) 
Finally, using Claim 56 for \(\zeta_{0}=\eta\), \(\zeta_{j}=\gamma_{j-1},1\leq j\leq n-1\) we get the following distributional identity: \[\left(\gamma_{0}^{\iota}\bar{\eta},\frac{\gamma_{1}^{\iota}}{\gamma_{0}},\ldots,\frac{\gamma_{n-2}^{\iota}}{\gamma_{n-3}},\frac{1}{\gamma_{n-2}}\right)\stackrel{{d}}{{=}}\left(\bar{\gamma}_{0},\frac{\bar{\gamma}_{1}}{\bar{\gamma}_{0}^{\iota}},\ldots,\frac{\bar{\gamma}_{n-2}}{\bar{\gamma}_{n-3}^{\iota}},\frac{\bar{\eta}}{\bar{\gamma}_{n-2}^{\iota}}\right).\] This means that \(\check{\alpha}_{k},0\leq k\leq n-1\) have the same joint distribution as the random variables \[\dot{\alpha}_{k}=(-1)^{k}\frac{\bar{\gamma}_{k}\cdots\bar{\gamma}_{0}}{\bar{\gamma}_{k-1}^{\iota}\cdots\bar{\gamma}_{0}^{\iota}},\quad 0\leq k\leq n-2,\qquad\dot{\alpha}_{n-1}=(-1)^{n-1}\frac{\bar{\eta}\bar{\gamma}_{n-2}\cdots\bar{\gamma}_{0}}{\bar{\gamma}_{n-2}^{\iota}\cdots\bar{\gamma}_{0}^{\iota}}.\] Since the joint distribution of \(\gamma_{0},\ldots,\gamma_{n-2},\eta\) is the same as the joint distribution of \(\gamma_{0},\ldots,\gamma_{n-1}\), (36) and (90) now show that \[(\check{\alpha}_{0},\check{\alpha}_{1},\ldots,\check{\alpha}_{n-1})\stackrel{{d}}{{=}}(\dot{\alpha}_{0},\ldots,\dot{\alpha}_{n-1})\stackrel{{d}}{{=}}\mathcal{T}_{n}^{-1}(\gamma_{0},\ldots,\gamma_{n-1}),\] which implies that \(\check{b}_{k},0\leq k\leq n\) have the same joint distribution as the path parameters \(b_{k},0\leq k\leq n\) of \(\mathtt{CJ}_{n,\beta,\delta}\). 

Recall the path \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\), the pulled-back version of the reversed path \(\widetilde{b}_{k},0\leq k\leq n\) corresponding to the random measure \(\mu_{n,\beta,\delta}^{\textsc{CJ}}\). By Proposition 55, \(\widetilde{b}_{k},0\leq k\leq n-1\) has the same distribution as the path built from the modified Verblunsky coefficients \(\gamma_{n-2},\ldots,\gamma_{0}\). This path is the complex conjugate of the path \(\hat{b}_{k},0\leq k\leq n-1\) in (3) of Proposition 58. This also means that \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\) is just the time-reversed and complex conjugated version of the path \(b^{\prime}_{k},0\leq k\leq n\) in (3) of Proposition 58. Together with Corollary 63 this implies that applying an independent random rotation, a complex conjugation, and a time reversal to \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\) produces a path that has the same distribution as the driving path of the operator \(\mathtt{CJ}_{n,\beta,\delta}\). This statement allows us to show that \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\) and \(\mathtt{CJ}_{n,\beta,\delta}\) are orthogonally equivalent.

**Proposition 64**.: _Let \(\overset{\leftarrow}{b}_{k},0\leq k\leq n\) be the path parameters defined above, \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\) the corresponding Dirac-type operator, and let_ \[Q=\frac{1}{\sqrt{1+q^{2}}}\begin{pmatrix}q&1\\ -1&q\end{pmatrix},\qquad q=\mathcal{U}^{-1}(\overset{\leftarrow}{b}_{n}).\] _Then the operator \(\rho^{-1}(SQ)\overset{\leftarrow}{\tau}_{n,\beta,\delta}(SQ)^{-1}\rho\) has the same distribution as \(\mathtt{CJ}_{n,\beta,\delta}\)._

Proof.: Recall the transformation \(\mathcal{Q}\) defined in Section 2.2. Observe that in the unit-disk model, the transformation \(\mathcal{Q}\) behaves exactly as a rotation, i.e. for \(z\in\mathbb{D}\) we have \(\mathcal{Q}(z)=\mathcal{U}\circ\mathcal{Q}\circ\mathcal{U}^{-1}(z)=\bar{\eta}z\), where we denote \(\eta:=\overset{\leftarrow}{b}_{n}\). 
Together with Propositions 55 and 58, this observation shows that the path parameters of \(\rho^{-1}(SQ)\overset{\leftarrow}{\tau}_{n,\beta,\delta}(SQ)^{-1}\rho\) have the same joint distribution as \(\check{b}_{k}=\eta b^{\prime}_{k},0\leq k\leq n\), where \(b^{\prime}_{k},0\leq k\leq n\) are distributed as in Proposition 58. Moreover, by the same argument one can check that the left boundary point of \(\rho^{-1}(SQ)\overset{\leftarrow}{\tau}_{n,\beta,\delta}(SQ)^{-1}\rho\) is equal to \(1\), as desired. The proof is now completed by Corollary 63. 

Recall the limiting operator \(\mathtt{HP}_{\beta,\delta}\) defined in Definition 32, and its reversed and transformed version \(\tau_{\beta,\delta}\) defined in Definition 38. By Propositions 62 and 64 the transformation connecting \(\mathtt{HP}_{\beta,\delta}\) and \(\tau_{\beta,\delta}\) has the same distribution as the one connecting \(\mathtt{CJ}_{n,\beta,\delta}\) and \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\). The operator level convergence of \(\mathtt{CJ}_{n,\beta,\delta}\) to the limiting \(\mathtt{HP}_{\beta,\delta}\) operator was proved in [24] and can be summarized as follows. This is also the final ingredient in proving the convergence of the normalized characteristic polynomial of the truncated circular Jacobi beta ensemble.

**Proposition 65** ([24]).: _Fix \(\beta>0\) and \(\Re\delta>-1/2\). There exists a coupling of the operators \(\mathtt{CJ}_{n,\beta,\delta}=\mathtt{Dir}(z_{n}(\cdot),\mathtt{u}_{0},\mathtt{u}_{1}^{(n)}),n\geq 1\) with \(\mathtt{u}_{0}=\binom{1}{0},\mathtt{u}_{1}^{(n)}=\binom{-z_{n}(1)}{-1}\), and \(\mathtt{HP}_{\beta,\delta}=\mathtt{Dir}(z(\cdot),\mathtt{u}_{0},\mathtt{u}_{1})\) with \(\mathtt{u}_{1}=\binom{-z(1)}{-1}\) such that as \(n\to\infty\) we have almost surely \(z_{n}\to z\) pointwise on \([0,1)\), and_ \[\|\mathtt{r}\,\mathtt{CJ}_{n,\beta,\delta}-\mathtt{r}\,\mathtt{HP}_{\beta,\delta}\|_{\rm HS}\to 0,\qquad\mathtt{t}_{\mathtt{CJ}_{n,\beta,\delta}}-\mathtt{t}_{\mathtt{HP}_{\beta,\delta}}\to 0.\] _Write \(z_{n}=x_{n}+iy_{n}\) and \(z=x+iy\). Set \(c_{\delta}=\frac{4}{\beta}(\Re\delta+\frac{1}{2})>0\), and for \(\varepsilon>0\) small define \(c_{1}=c_{\delta}-\varepsilon,c_{2}=c_{\delta}+\varepsilon\). Then there exists a sequence of tight random variables \(\kappa_{n},n\geq 1\) and an a.s. finite random variable \(\kappa>0\) such that for \(0\leq t<1\)_ \[\kappa_{n}^{-1}\left(1-\frac{\lfloor nt\rfloor}{n}\right)^{c_{2}}\leq y_{n}(t)\leq\kappa_{n}\left(1-\frac{\lfloor nt\rfloor}{n}\right)^{c_{1}},\quad|x_{n}(1)-x_{n}(t)|\leq\kappa_{n}\left(1-\frac{\lfloor nt\rfloor}{n}\right)^{c_{1}}, \tag{102}\] _and similarly,_ \[\kappa^{-1}(1-t)^{c_{2}}\leq y(t)\leq\kappa(1-t)^{c_{1}},\quad|x(1)-x(t)|\leq\kappa(1-t)^{c_{1}}. \tag{103}\] We now have all the ingredients to prove the edge scaling limit of the truncated circular Jacobi beta ensemble.

**Theorem 66**.: _For fixed \(n\geq 1\), \(\beta>0\) and \(\Re\delta>-1/2\), let \(\lambda_{i},1\leq i\leq n-1\) be the size \(n-1\) truncated circular Jacobi beta ensemble and set \(p_{n-1,\beta,\delta}(z)=\prod_{i=1}^{n-1}\frac{z-\lambda_{i}}{1-\lambda_{i}}\) to be the normalized characteristic polynomial. Let \(\mathcal{E}_{\beta,\delta}\) be the structure function of \(\tau_{\beta,\delta}\) defined via (20). 
Then there is a coupling of \(p_{n-1,\beta,\delta},n\geq 2\) and \(\mathcal{E}_{\beta,\delta}\) such that_ \[|p_{n-1,\beta,\delta}(e^{iz/n})e^{-iz/2}-\mathcal{E}_{\beta,\delta}(z)|\to 0\quad\text{almost surely, uniformly on compacts as }n\to\infty.\] _Consequently, under the edge scaling (3), the truncated circular Jacobi beta ensembles converge weakly to the zeros of the random analytic function \(\mathcal{E}_{\beta,\delta}(\cdot)\)._

Proof.: Let \(\widetilde{\mu}_{n}\) be the reversed version of the measure \(\mu_{n}:=\mu_{n,\beta,\delta}^{\text{CJ}}\), and let \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\) be the pulled-back operator. By Propositions 13 and 55, \(p_{n-1,\beta,\delta}\) has the same distribution as the monic orthogonal polynomial of degree \(n-1\) associated to \(\widetilde{\mu}_{n}\), denoted by \(\widetilde{\Psi}_{n-1,\beta,\delta}\). By Proposition 42, we have \[\widetilde{\Psi}_{n-1,\beta,\delta}(e^{iz/n})=e^{iz(n-1)/(2n)}\overset{\leftarrow}{H}_{n}((n-1)/n,z)^{\dagger}\binom{1}{-i},\] where \(\overset{\leftarrow}{H}_{n}\) solves the ODE (18) of \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\). Recall that \(\mathcal{E}_{\beta,\delta}=H_{\beta,\delta}(1,\cdot)^{\dagger}\cdot\binom{1}{-i}\), where \(H_{\beta,\delta}(t,z)\) solves (18) of \(\tau_{\beta,\delta}\). It is enough to show the uniform-on-compacts convergence of \(\overset{\leftarrow}{H}_{n}(1,z)\) to \(H_{\beta,\delta}(1,z)\). By Propositions 39 and 64, one can obtain the operators \(\overset{\leftarrow}{\tau}_{n,\beta,\delta},n\geq 1\) and \(\tau_{\beta,\delta}\) from \(\texttt{CJ}_{n,\beta,\delta},n\geq 1\) and \(\texttt{HP}_{\beta,\delta}\) under the same orthogonal transformations. Therefore, applying the coupling of the operators \(\texttt{CJ}_{n,\beta,\delta}\) and \(\texttt{HP}_{\beta,\delta}\) in Proposition 65 gives \[\|\texttt{r}\,\overset{\leftarrow}{\tau}_{n,\beta,\delta}-\texttt{r}\,\tau_{\beta,\delta}\|\to 0,\qquad|\mathfrak{t}_{\overset{\leftarrow}{\tau}_{n,\beta,\delta}}-\mathfrak{t}_{\tau_{\beta,\delta}}|\to 0.\] In order to apply Proposition 43, we need to show that \[\int_{0}^{1}|\mathfrak{a}_{0,n}(s)-\mathfrak{a}_{0}(s)|^{2}ds=\int_{0}^{1}\left|\frac{1}{\sqrt{y_{n}(s)}}-\frac{1}{\sqrt{y(s)}}\right|^{2}ds\to 0, \tag{104}\] where \(y_{n}=\Im z_{n}\) and \(y=\Im z\) are the imaginary parts of the generating paths of \(\overset{\leftarrow}{\tau}_{n,\beta,\delta}\) and \(\tau_{\beta,\delta}\), respectively. By a standard subsequence argument, the path bound (102) allows one to choose a subsequence so that \(\kappa_{n}\) converges to an a.s. finite random variable. The convergence (104) now follows from the point-wise convergence of \(z_{n}\to z\), the path bounds (102), (103) (under the time reversal), and an application of the triangle inequality. This verifies (67) and completes the proof. 

Note that by Proposition 39, the structure function \(\mathcal{E}_{\beta,\delta}\) can also be characterized as \(\mathcal{E}_{\beta,\delta}=\mathcal{H}_{\beta,\delta}(0,z)^{\dagger}\binom{1}{-i}\), where \(\mathcal{H}_{\beta,\delta}(u,z)\) solves the SDE (63). Note that in the coupling of Proposition 65, the path bounds (102), (103) and the point-wise convergence of the generating paths \(z_{n}\to z\) imply that \(\mathfrak{u}_{1}^{(n)}\to\mathfrak{u}_{1}\) a.s. (see also Proposition 62). Then using Theorem 66 and Proposition 44 one obtains the following result on the (edge) scaling limit of the multiplicative perturbed circular Jacobi beta ensemble.

**Corollary 67**.: _For fixed \(r\in[0,1]\) let \(\Lambda_{n}^{[r]}=\{\lambda_{1},\ldots,\lambda_{n}\}\) be the set of eigenvalues of the perturbed matrix \(\mathsf{CJ}_{n,\beta,\delta}^{[r]}\). 
Set \(p_{n,\beta,\delta}^{[r]}(z)=\prod_{i=1}^{n}\frac{z-\lambda_{i}}{1-\lambda_{i}}\) to be the normalized characteristic polynomial, and let \(H_{\beta,\delta}\) be the solution to the ODE (18) of \(\tau_{\beta,\delta}\). Let \(q\sim\Theta(1,\delta)\) be independent of \(H_{\beta,\delta}\). Then under the coupling of Proposition 65, we have almost surely as \(n\to\infty\)_ \[\left|p_{n,\beta,\delta}^{[r]}(e^{iz/n})e^{-iz/2}-\mathcal{E}_{\beta,\delta}^{[r]}(z)\right|\to 0,\quad\text{uniformly on compacts in }\mathbb{C},\] _where \(\mathcal{E}_{\beta,\delta}^{[r]}(z)=H_{\beta,\delta}(1,z)^{\dagger}\binom{1}{-c_{r}}\) with \(c_{r}=\frac{q+i\frac{1-r}{1+r}}{1-iq\frac{1-r}{1+r}}\). In particular, this implies the weak convergence of \(\Lambda_{n}^{[r]}\) under the edge scaling (3) to the zero set of the random analytic function \(\mathcal{E}_{\beta,\delta}^{[r]}\) as \(n\to\infty\)._

Similar to the statement of Theorem 3, we have \(c_{r}=q\) when \(r=1\) and \(c_{r}=i\) when \(r=0\), hence this result shows the connection between the scaling limits of the truncated and the unperturbed circular Jacobi beta ensemble. 

## 8 Appendix

### Overview of some results for \(\beta=2\)

Let \(\Lambda\) be a locally compact Polish space (in this section it will always be \(\mathbb{C}\)). A simple point process on \(\Lambda\) is called _determinantal_ with respect to a reference measure \(\mu\) with kernel function \(K:\Lambda\times\Lambda\to\mathbb{C}\) if for any \(k\geq 1\) the \(k\)th joint intensity function of the process with respect to \(\mu\) is given by \[\rho_{k}(x_{1},\ldots,x_{k})=\det(K(x_{i},x_{j}))_{1\leq i,j\leq k}. \tag{105}\] (See [16] for more on determinantal point processes.) Determinantal processes appear naturally from joint probability densities containing the square of the Vandermonde determinant. In this section we will always assume that all absolute moments of \(\mu\) are finite.

**Proposition 68** ([26, 22]).: _Suppose that \(X_{1},X_{2},\ldots,X_{n}\) are complex valued random variables with joint density given by_ \[\frac{1}{Z_{n,\mu}}\prod_{1\leq i<j\leq n}|z_{i}-z_{j}|^{2} \tag{106}\] _with respect to a product measure \(\mu^{\otimes n}\) (on \(\mathbb{R}^{n}\) or \(\mathbb{C}^{n}\)). (Here \(Z_{n,\mu}\) is a finite normalizing constant.) Let \(\varphi_{k}(z)\) be the degree \(k\) orthonormal polynomials with respect to \(\mu\). Then \(\sum_{k=1}^{n}\delta_{X_{k}}\) is a determinantal point process with respect to \(\mu\) with kernel given by_ \[K(z,w)=\sum_{k=0}^{n-1}\varphi_{k}(z)\bar{\varphi}_{k}(w). \tag{107}\] Proposition 68 together with (1) and (2) immediately implies that both the circular unitary ensemble and the truncated circular unitary ensemble are determinantal. The size \(n\) circular unitary ensemble has kernel function \[K_{\text{Circ}_{2,n}}(z,w)=\sum_{k=0}^{n-1}z^{k}\bar{w}^{k}, \tag{108}\] with respect to the uniform measure on the unit circle. The size \(n\) truncated circular unitary ensemble has a similar kernel function \[K_{\mathsf{Circ}^{\ulcorner}_{2,n}}(z,w)=\sum_{k=0}^{n-1}(k+1)z^{k}\bar{w}^{k}, \tag{109}\] with respect to the uniform measure on the unit disk. Note that [41] also treats more general truncations of Haar unitary matrices. 
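For \(\beta=2\) the relation between truncation and the \(r=0\) multiplicative perturbation can be checked directly: the identity \(\operatorname{spec}(UP)\setminus\{0\}=\operatorname{spec}(PUP)\setminus\{0\}\) for the projection \(P=\mathrm{diag}(0,1,\ldots,1)\) reduces the perturbed spectrum to that of a minor. The following minimal sketch, assuming (as in Definition 54) that the perturbation acts on the first diagonal entry and that the truncation removes the corresponding row and column, illustrates this with a Haar unitary matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

# Haar unitary via QR of a complex Ginibre matrix with phase correction
z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
q, r = np.linalg.qr(z)
U = q * (np.diag(r) / np.abs(np.diag(r)))

# rank-one multiplicative perturbation with r = 0, as in Definition 54
P = np.diag(np.concatenate(([0.0], np.ones(n - 1))))
pert = np.linalg.eigvals(U @ P)
trunc = np.sort_complex(np.linalg.eigvals(U[1:, 1:]))

# the nonzero eigenvalues of U @ P coincide with the truncated spectrum
nonzero = np.sort_complex(pert[np.abs(pert) > 1e-10])
print(np.allclose(nonzero, trunc))
```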
The authors of [41] showed that if we delete the first \(m\) rows and columns of a size \(n+m\) Haar unitary matrix, then the eigenvalues of the resulting submatrix have joint eigenvalue density given by \[\frac{1}{Z_{n,m}}\prod_{1\leq j<k\leq n}|z_{j}-z_{k}|^{2}\prod_{k=1}^{n}(1-|z_{k}|^{2})^{m-1},\qquad z_{j}\in\mathbb{D}, \tag{110}\] with respect to the Lebesgue measure on the unit disk. By Proposition 68 this is also a determinantal point process, with respect to the measure with density \((1-|z|^{2})^{m-1}\) on the unit disk. Determinantal processes have a number of nice analytic features. The following proposition shows that if we understand the scaling limit of the kernels of a sequence of determinantal processes, then we can derive the scaling limit of the processes themselves, and the limit is also determinantal.

**Proposition 69** ([34, 30]).: _Suppose that \(\mathcal{X},\mathcal{X}_{1},\mathcal{X}_{2},\dots\) are determinantal processes on \(\mathbb{C}\) with respect to the common reference measure \(\mu\), with kernel functions \(K,K_{1},K_{2},\dots\). Assume that_ \[K_{n}(z,w)\to K(z,w) \tag{111}\] _uniformly on compacts in \(\mathbb{C}^{2}\). Then \(\mathcal{X}_{n}\) converges in distribution to \(\mathcal{X}\)._

Using some simple transformations one can rewrite the determinantal kernel of the circular unitary ensemble as \[\tilde{K}_{\mathsf{Circ}_{2,n}}(e^{it},e^{is})=D_{n}(t-s),\qquad D_{n}(u)=\frac{\sin(nu/2)}{\sin(u/2)}.\] By taking the limit of this kernel as \(n\to\infty\) we get the celebrated result of Gaudin, Mehta, and Dyson regarding the scaling limit of the circular ensemble.

**Theorem 70** ([2, 26]).: _Let \(\Lambda_{n}\) be the angles in the size \(n\) circular unitary ensemble parametrized in \((-\pi,\pi]\). Then \(n\Lambda_{n}\Rightarrow\Lambda\) where \(\Lambda\) is a determinantal point process on \(\mathbb{R}\) with kernel_ \[K_{\mathrm{Sine}_{2}}(s,t)=\frac{\sin((s-t)/2)}{\pi(s-t)}\] _with respect to the Lebesgue measure._

Taking the limit of the kernel (109) (without any additional scaling) we obtain the point process studied by Peres and Virag in [28].

**Theorem 71** ([28]).: _Let \(\Lambda_{n}\) be the eigenvalues of \(\mathsf{Circ}_{2,n+1}^{\ulcorner}\). Then \(\Lambda_{n}\) converges to a determinantal point process on the unit disk with kernel_ \[K_{\mathrm{Bergman}}(z,w)=\frac{1}{(1-z\bar{w})^{2}} \tag{112}\] _with respect to the uniform measure on the unit disk. The resulting point process has the same distribution as the zero set of the Gaussian analytic function_ \[f_{GAF}(z)=\sum_{n=0}^{\infty}\xi_{n}z^{n}, \tag{113}\] _where \(\xi_{n},n\geq 0\) are i.i.d. standard complex normals._

Note that [23] provides a generalization of this result by connecting the limit of the eigenvalues of rank \(m\) truncation of Haar unitary matrices with the singular points of a matrix valued Gaussian analytic function that generalizes (113). The circular Jacobi beta ensemble (9) for \(\beta=2\) is also determinantal; this case is also called the Hua-Pickrell distribution. The kernel function can be expressed in terms of the orthogonal polynomials with respect to the probability measure \(\mu_{\delta}\) that has probability density function proportional to \[(1-\bar{z})^{\delta}(1-z)^{\bar{\delta}}\] on the unit circle. The Hua-Pickrell distribution can be realized as the joint eigenvalue distribution of the random unitary matrices that have density proportional to \(|\det(1-U)^{\delta}|^{2}\) with respect to the Haar measure. In [25] the authors studied the truncations of these random matrices. 
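The kernel (112) of Theorem 71 predicts, in particular, that the expected number of points of the limiting process in the centered disk of radius \(\rho<1\) equals \(\int_{|z|<\rho}(1-|z|^{2})^{-2}\,\frac{dA(z)}{\pi}=\frac{\rho^{2}}{1-\rho^{2}}\). A quick way to probe this numerically is to count the zeros of a high-degree truncation of \(f_{GAF}\) inside such a disk; the following is a minimal sketch we supply, with hypothetical truncation degree and trial count (the truncation error inside \(|z|<\rho\) is negligible since the tail coefficients are multiplied by \(\rho^{N}\)).

```python
import numpy as np

rng = np.random.default_rng(4)
N, trials, rho = 200, 400, 0.8   # hypothetical truncation degree and trial count

counts = []
for _ in range(trials):
    # i.i.d. complex Gaussian coefficients of f_GAF, truncated at degree N;
    # an overall scaling of the coefficients does not change the zero set
    xi = (rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)) / np.sqrt(2)
    roots = np.roots(xi[::-1])               # np.roots expects highest degree first
    counts.append(np.sum(np.abs(roots) < rho))

print(np.mean(counts), rho**2 / (1 - rho**2))   # both close to 0.64/0.36 ~ 1.78
```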
They showed that the eigenvalues of the rank-\(m\) truncated matrices form a determinantal point process on the unit disk, and derived their kernel function. Moreover, they showed that the scaling limit of the joint eigenvalue distribution (without any additional scaling) leads to the same limits as in the Haar unitary case (as described in [28] and [23]). If one is interested in the (hard) edge scaling limit of the eigenvalues of the truncated matrix \(\mathsf{Circ}_{2,n+1}^{\ulcorner}\), then one needs to transform the kernel (109) according to (3), and take the limit. This was considered in [1], where the following result was shown. **Theorem 72** ([1]).: _Let \(\Lambda_{n}\) be the eigenvalues of \(\mathsf{Circ}_{2,n+1}^{\ulcorner}\). Then the sequence \(-ni\log\Lambda_{n}\) converges to a determinantal point process \(\mathcal{X}_{2}\) supported on the upper half plane \(\mathbb{H}\) with kernel function_ \[K_{\rm edge}(z,w)=f(z-\bar{w}),\quad f(u)=\frac{1}{\pi}\int_{0}^{1}te^{itu}dt, \quad z,w\in\mathbb{H}, \tag{114}\] _with respect to the Lebesgue measure on \(\mathbb{H}\)._ In fact, this is a special case of a more general problem that [1] considers: the product of independent copies of rank-\(m\) truncations of Haar unitary matrices. The point process \(\mathcal{X}_{2}\) also appears in [11] as the scaling limit of the rank-one additive anti-Hermitian perturbation of the Gaussian unitary ensemble. Note that our Theorem 2 for \(\beta=2\) provides a new characterization of the determinantal point process \(\mathcal{X}_{2}\). It would be interesting to see whether the determinantal structure could be proved directly from that result. ### Open problems We end with a few open problems. **Problem 1.** Our results for general \(\beta>0\) consider the models with the edge scaling (3). It would be interesting to explore the limiting behavior under the bulk scaling (i.e. when we do not scale at all). In other words, it would be interesting to see whether one could extend the results of [28] (see Theorem 71) and [25] to general \(\beta\). **Problem 2.** In the \(\beta=2\) case Theorem 71 provides a description of the bulk scaling limit of the truncated circular unitary ensemble as the zero set of a Gaussian analytic function. It would be interesting to see if one could connect this result to some sort of scaling limit of the characteristic polynomial of the truncated circular ensemble under the bulk scaling. **Problem 3.** A simple calculation shows that under the scaling \(z\mapsto c^{-1}z,c\to\infty\) the kernel (114) converges to a transformed version of the kernel (112), where the transformation is the Cayley transform mapping the unit disk \(\mathbb{D}\) to the upper half plane \(\mathbb{H}\). This implies that as \(c\to\infty\) the scaled edge limit process \(c^{-1}\mathcal{X}_{2}\) converges to the image of the bulk limit process of Peres-Virag under the Cayley transform. Note that similar 'edge-to-bulk' limits are known for other random matrix ensembles; see e.g. [35] for a similar transition involving the point process limits of the Gaussian beta ensemble. It would be interesting to see if a similar limit exists for \(\mathcal{X}_{\beta}\) for general \(\beta>0\). Presumably, this could also provide a way to prove the 'edge-to-bulk' transition in the \(\beta=2\) case without using the determinantal process framework.
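For completeness, the simple calculation behind Problem 3 can be sketched as follows (take \(c>0\) real). The kernel of \(c^{-1}\mathcal{X}_{2}\) with respect to the Lebesgue measure is \(c^{2}K_{\rm edge}(cz,cw)=c^{2}f(c(z-\bar{w}))\), and substituting \(s=ct\) in (114) gives \[c^{2}f\big(c(z-\bar{w})\big)=\frac{1}{\pi}\int_{0}^{c}s\,e^{is(z-\bar{w})}\,ds\;\longrightarrow\;\frac{1}{\pi}\int_{0}^{\infty}s\,e^{is(z-\bar{w})}\,ds=-\frac{1}{\pi(z-\bar{w})^{2}},\qquad c\to\infty,\] the integral converging since \(\operatorname{Im}(z-\bar{w})>0\) for \(z,w\in\mathbb{H}\). On the other hand, pushing the kernel (112) forward under the Cayley map \(U(z)=\frac{z-i}{z+i}\), together with the factor \(1/\pi\) coming from the uniform reference measure on \(\mathbb{D}\), one can check that \[\frac{1}{\pi}\,U^{\prime}(z)\overline{U^{\prime}(w)}\,K_{\mathrm{Bergman}}\big(U(z),U(w)\big)=-\frac{1}{\pi(z-\bar{w})^{2}},\] which is the same kernel.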
2308.10761
CoNe: Contrast Your Neighbours for Supervised Image Classification
Image classification is a longstanding problem in computer vision and machine learning research. Most recent works (e.g. SupCon, Triplet, and max-margin) mainly focus on grouping the intra-class samples aggressively and compactly, with the assumption that all intra-class samples should be pulled tightly towards their class centers. However, such an objective is very hard to achieve since it ignores the intra-class variance in the dataset (i.e. different instances from the same class can have significant differences). Thus, such a monotonous objective is not sufficient. To provide a more informative objective, we introduce Contrast Your Neighbours (CoNe) - a simple yet practical learning framework for supervised image classification. Specifically, in CoNe, each sample is not only supervised by its class center but also directly employs the features of its similar neighbors as anchors to generate more adaptive and refined targets. Moreover, to further boost the performance, we propose "distributional consistency" as a more informative regularization to enable similar instances to have a similar probability distribution. Extensive experimental results demonstrate that CoNe achieves state-of-the-art performance across different benchmark datasets, network architectures, and settings. Notably, even without a complicated training recipe, our CoNe achieves 80.8% Top-1 accuracy on ImageNet with ResNet-50, which surpasses the recent Timm training recipe (80.4%). Code and pre-trained models are available at https://github.com/mingkai-zheng/CoNe.
Mingkai Zheng, Shan You, Lang Huang, Xiu Su, Fei Wang, Chen Qian, Xiaogang Wang, Chang Xu
2023-08-21T14:49:37Z
http://arxiv.org/abs/2308.10761v1
# CoNe: Contrast Your Neighbours for Supervised Image Classification ###### Abstract Image classification is a longstanding problem in computer vision and machine learning research. Most recent works (_e.g._ SupCon [1], Triplet [2], and max-margin [3]) mainly focus on grouping the intra-class samples aggressively and compactly, with the assumption that all intra-class samples should be pulled tightly towards their class centers. However, such an objective is very hard to achieve since it ignores the intra-class variance in the dataset (_i.e._, different instances from the same class can have significant differences). Thus, such a monotonous objective is not sufficient. To provide a more informative objective, we introduce Contrast Your Neighbours (CoNe) - a simple yet practical learning framework for supervised image classification. Specifically, in CoNe, each sample is not only supervised by its class center but also directly employs the features of its similar neighbors as anchors to generate more adaptive and refined targets. Moreover, to further boost the performance, we propose "distributional consistency" as a more informative regularization to enable similar instances to have a similar probability distribution. Extensive experimental results demonstrate that CoNe achieves state-of-the-art performance across different benchmark datasets, network architectures, and settings. Notably, even without a complicated training recipe, our CoNe achieves 80.8% Top-1 accuracy on ImageNet with ResNet-50, which surpasses the recent Timm training recipe [4] (80.4%). Code and pre-trained models are available at [https://github.com/mingkai-zheng/CoNe](https://github.com/mingkai-zheng/CoNe). **Keywords: Self-Supervised Learning, Contrastive Learning, Supervised Learning, Image Classification** ## 1 Introduction Deep neural networks have demonstrated superior performance on various visual tasks [5, 6, 7, 8]. In particular, image classification is considered the most fundamental task because of its simplicity and various real-world applications. It also serves as a kind of pre-training task since the learned representations can be easily transferred into various downstream tasks (_e.g._ object detection, video analysis, and semantic segmentation). Thus, improving the image classification performance has become a longstanding problem; an enormous number of regularization methods [9, 10, 11, 12, 13, 14, 15, 16] and training strategies [17, 3, 1] have been proposed to address this issue. A typical image classification algorithm maintains a set of class centers. The training objective aims to maximize the inner product of the feature vectors with their corresponding class center while minimizing the inner product with respect to other class centers. Previous feature-learning-based methods, _e.g._, max-margin [3], directly modify the logits calculation to decouple the magnitude and direction of the feature vectors, and show that the learned features can be more discriminative when the cosine similarity is maximized for intra-class samples and minimized for inter-class samples. Many works [18, 19, 20, 21, 22] extend this idea and show promising performance in various domains (_e.g._, Facial Recognition and Person Re-identification).
More recently, thanks to the success of contrastive learning [23, 24, 25, 26, 27, 28, 29], SupCon has been proposed to bring the InfoNCE loss into a fully supervised setting by allowing an arbitrary number of positive samples in the loss function. Theoretical analysis shows that the \(\mathcal{L}_{out}^{sup}\) in SupCon behaves similarly to the triplet loss [30] and hard positive mining. The objective of these methods is the same during training: to group the intra-class samples aggressively and compactly, with the assumption that all intra-class samples should be pulled tightly toward their class centers. However, such an objective is tough to achieve and goes against the intrinsic characteristics of the data, especially for datasets (_e.g._ ImageNet [5]) with high intra-class variance. As the examples in Figure 1 show, different instances from the same class can have significant differences. In this case, a single monotonous class center loses the adaptivity needed to cover this intra-class variance during training. Intuitively, instead of using a single class center as the target for the intra-class samples, similar positives should also be good targets since they are semantically and visually similar. Thus, to provide more informative supervision, we explicitly apply an additional constraint to keep similar positives close in the embedding space. Fortunately, we found that \(\mathcal{L}_{in}^{sup}\) has exactly the properties we need, although the original SupCon paper claims that it fails to learn a good representation. In this paper, we show that the underlying property of \(\mathcal{L}_{in}^{sup}\) is to encourage the features of training samples to be pulled towards their similar positives adaptively, and that its failure stems from its ignorance of the hard positive samples. Figure 1: Examples showing the high variance among intra-class samples. All images are from ImageNet, and images in the same row belong to the same class. \(1^{st}\) row: peeled corn _vs_. unpeeled corn. \(2^{nd}\) row: ambulance car _vs_. ambulance helicopter. \(3^{rd}\) row: different types of speakers. Based on this observation, we thus propose a new supervised learning framework - Contrast Your Neighbours (CoNe), which utilizes \(\mathcal{L}_{in}^{sup}\) more appropriately. Concretely, we take the classical cross-entropy loss to ensure that the intra-class samples are constrained to have the same targets, and then arm \(\mathcal{L}_{in}^{sup}\) with the nearest neighbors of the training features to construct more semantically aware and refined targets that help the compactness of the intra-class samples. Furthermore, we also propose "distributional consistency" regularization to enhance the compactness between similar samples by encouraging features to have a similar probability distribution as their neighbors. Extensive experimental results on multiple settings show the superiority of CoNe. Our contributions can be summarized as follows. * We propose a novel supervised learning framework (CoNe) that utilizes \(\mathcal{L}_{in}^{sup}\) more properly for the image classification task. * We theoretically analyze the \(\mathcal{L}_{in}^{sup}\) loss and show that the training sample features are pulled towards their similar positive features, and that more similar features have a greater contribution to the gradients.
* We propose a novel distributional consistency regularization that encourages similar samples to predict similar probability distributions to further improve the performance. * Our proposed method achieves state-of-the-art performance for image classification tasks. Experimental results show that our method outperforms the Timm training recipe [4]. For example, with ResNet-50 as the backbone network, CoNe achieves 80.8% Top-1 accuracy on ImageNet. ## 2 Related work ### Training Strategies Many methods have been proposed to improve image classification accuracy. Most of these methods aim to make the training process harder to prevent the network from overfitting. For example, [14, 15, 31] propose a set of rich data augmentation policies to expand the variety of the datasets. Some regional-dropout-based methods [32, 33] show that removing random regions in images is a simple but effective regularization that improves the classification performance. DropBlock [12] further generalizes this idea by removing a random block from the feature map. Stochastic Depth [16] is a similar idea that randomly drops entire layers during training. Mixup [9] adopts a convex combination of pairs of images and uses the correspondingly mixed labels as the targets, improving the classification accuracy and the robustness to adversarial samples. CutMix [10] combines the ideas of Mixup and Cutout: it randomly mixes two patches from a pair of images and generates the targets by linearly interpolating the one-hot labels based on the areas of the two patches. Recent work [4] has explored the optimal combination of these training strategies and significantly improved the baseline results. ### Representation Learning Recently, because of the success of contrastive learning [23, 24, 25, 27, 28, 29, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46], SupCon has been proposed to extend the idea of contrastive learning to the fully-supervised setting, which allows the InfoNCE objective to be applied with an arbitrary number of positives. Experimental results in SupCon show that it improves the performance and robustness of image classification. More recently, some works [47] have started rethinking the performance gaps between supervised and unsupervised pre-training and improving the transferability of the supervised pre-trained model by adding a simple MLP projector. In this paper, we show that CoNe can enhance the performance of both the upstream task (classification) and downstream tasks (detection, segmentation, and downstream classification). ### Deep Metric Learning The contrastive loss is not only widely used in recent self-supervised learning, but is also closely related to deep metric learning (DML). Most of the work in DML focuses on how to use hard/easy negatives/positives to form a better triplet. For example, triplet loss [2, 48] suggests that semi-hard negatives are better than hard negatives. [49] propose a soft margin for the triplet loss and use all possible negatives in a batch to omit complex sampling tricks. Meanwhile, many DML papers [18, 19, 22, 50] also focus on maximizing inter-class margins to get more compact clusters, especially for face verification tasks. In contrast, some works [51, 52, 53] show that keeping intra-class variance helps learn better features. Note that keeping intra-class variance is somewhat similar in spirit to this paper.
However, different from [51, 52, 53], which are proposed for DML tasks, our CoNe is specially designed for classification problems, ensuring the compactness of the positive samples while keeping the intra-class variance. On the other hand, our CoNe is simpler and more adaptive than [51, 52] since we do not need a sophisticated design to assign the intra-class samples to different class centers. Most importantly, our CoNe significantly improves the classification performance on the large-scale ImageNet [5] dataset. ## 3 Methodology In this section, we first revisit the problem formulation for image classification; we then theoretically analyze the underlying property of \(\mathcal{L}_{in}^{sup}\) and introduce our proposed method, CoNe. After that, the algorithm and the implementation details are explained. ### Revisiting Supervised Image Classification We define the image classification problem as follows. Given a batch of samples \(\boldsymbol{x}\), we adopt a convolution-based encoder \(\mathcal{F}(\cdot)\) to obtain the corresponding representation vector, _i.e._, \(\boldsymbol{z}=\mathcal{F}(\boldsymbol{x})\). Let the \(i\)-th input sample \(\boldsymbol{x}_{i}\) have the label \(\boldsymbol{y}_{i}\); then the classical SoftMax with cross-entropy loss can be written as Eq. (1), where \(W\) is the classifier matrix, \(W_{c}\) is the corresponding class center for class \(c\), and \(C\) is the total number of classes. \[\mathcal{L}_{ce}=-\log\frac{\exp(W_{y_{i}}^{T}\cdot\boldsymbol{z}_{i})}{\sum_{c=1}^{C}\exp(W_{c}^{T}\cdot\boldsymbol{z}_{i})} \tag{1}\] SupCon follows the standard contrastive learning approach where each instance is augmented twice, and both augmented views are passed into the network to obtain the feature vectors. Specifically, the \(\mathcal{L}_{in}^{sup}\) in SupCon can be expressed by Eq. (2), where \(\boldsymbol{z}\) is the normalized feature, \(Pos(i)\) and \(Neg(i)\) refer to the anchors that have the same and different ground-truth labels, respectively, and \(\tau\) is the temperature parameter that controls the sharpness of the distribution. \[\mathcal{L}_{in}^{sup}=-\log\frac{\sum\limits_{p\in Pos(i)}\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{p}/\tau)}{\sum\limits_{p\in Pos(i)}\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{p}/\tau)+\sum\limits_{n\in Neg(i)}\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{n}/\tau)} \tag{2}\] ### Analysis of SupCon Objective To better understand the intrinsic property of Eq. (2), we reformulate it as Eq. (3), where \(\max(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{n})\) is the hardest negative, \(\max(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{p})\) is the most similar positive, and \(m\) is the approximation bias of the \(LogSumExp\) operator relative to the \(\max\) operator. As \(\tau\to 0\), we have \(m\to 0\). However, a commonly used value for \(\tau\) is from 0.07 to 0.2 [1, 23, 24], which means Eq. (3) is not a strict and rigorous derivation. When \(\tau\) is fixed, the value of \(m\) is mainly affected by the number of anchors that are close to \(\boldsymbol{z}_{i}\). In other words, the approximation result of \(LogSumExp\) is affected by the most similar (_i.e._, _Top1_ or \(max\)) anchor or the \(TopK\) similar anchors of \(\boldsymbol{z}_{i}\). Thus, the training objective of Eq. (2) can be interpreted as maximizing the similarity of the easy positives and minimizing the similarity of the hard negatives.
Note that the \(max\) operator in Eq. (3) is just for simplicity; it does not affect our interpretation even when \(m\) is relatively large. Please see more discussions in our supplementary materials. \[\mathcal{L}_{in}^{sup}=\log\left[1+\exp(\log\sum\limits_{n\in Neg(i)}\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{n}/\tau)\right.\\ \left.-\log\sum\limits_{p\in Pos(i)}\exp(\boldsymbol{z}_{i}\cdot\boldsymbol{z}_{p}/\tau))\right] \tag{3}\] **Gradient Perspective**. Next, we analyze \(\mathcal{L}_{in}^{sup}\) from the gradient perspective to show that similar positive anchors have a more significant contribution to the optimization direction. We first derive the gradient with respect to \(\boldsymbol{z}_{i}\) as Eq. (4) (see more details in the supplementary materials). Note that we ignore the gradient for the negative samples since it does not affect our conclusion (see more details in Appendix B). \[-\frac{\partial\mathcal{L}_{in}^{sup}}{\partial\,\mathbf{z}_{i}}=\sum_{p\in Pos(i)}\mathbf{z}_{p}\Big{[}\underbrace{\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{p}/\tau)}{S_{p}}-\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{p}/\tau)}{S_{p}+S_{n}}}_{\text{Coefficient $\alpha_{p}$}}\Big{]} \tag{4}\] We define \(\mathcal{S}_{p}\) and \(\mathcal{S}_{n}\) as the summations of the exponential cosine similarities over the positive and negative sets, which can be expressed as \[\mathcal{S}_{p} =\sum_{p\in Pos(i)}\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{p}/\tau) \tag{5}\] \[\mathcal{S}_{n} =\sum_{n\in Neg(i)}\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{n}/\tau)\] Suppose we have two positive samples \(\mathbf{z}_{a}\), \(\mathbf{z}_{b}\) with \(\mathbf{z}_{i}\cdot\mathbf{z}_{a}>\mathbf{z}_{i}\cdot\mathbf{z}_{b}\). To compare the coefficient values, we simply take the difference and obtain Eq. (6): \[\alpha_{a}-\alpha_{b} =\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{a}/\tau)-\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{b}/\tau)}{S_{p}} \tag{6}\] \[\qquad\qquad-\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{a}/\tau)-\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{b}/\tau)}{S_{p}+S_{n}}\] Since \(\mathbf{z}_{i}\cdot\mathbf{z}_{a}-\mathbf{z}_{i}\cdot\mathbf{z}_{b}>0\) and \(S_{p}>0,S_{n}>0\), the difference \(\alpha_{a}-\alpha_{b}\) is always greater than \(0\), which yields \(\alpha_{a}>\alpha_{b}\), showing that a more similar positive anchor has a more significant contribution to the gradient direction, and \(\mathbf{z}_{i}\) tends to be pulled towards more similar positive anchors. The experiments in [1] show that a standalone \(\mathcal{L}_{in}^{sup}\) fails to learn a good representation. The problem can be revealed by Eq. (3) and Eq. (6). Basically, Eq. (6) shows that the feature \(\mathbf{z}\) tends to be optimized towards more similar positives. Thus, once \(\mathbf{z}\) is close to one or some of the positive features, the objective in Eq. (3) can be satisfied and the loss \(\mathcal{L}_{in}^{sup}\) stops optimizing. In other words, a standalone \(\mathcal{L}_{in}^{sup}\) ignores the hard positives and does not encourage all intra-class samples to be grouped together, since it does not require all \(\mathbf{z}_{i}\cdot\mathbf{z}_{p}\) to be greater than all \(\mathbf{z}_{i}\cdot\mathbf{z}_{n}\). ### Contrast Your Neighbours Although a standalone \(\mathcal{L}_{in}^{sup}\) fails to learn decent representations, its underlying property can be incorporated with class-center-based methods to improve the compactness of the intra-class features, as \[\mathcal{L}_{CoNe}=\mathcal{L}_{ce}+\lambda_{sup}\mathcal{L}_{in}^{sup}. \tag{7}\]
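For illustration, a minimal PyTorch-style sketch of \(\mathcal{L}_{in}^{sup}\) is given below (the function and variable names are ours, not the released implementation); the anchor features would come from the memory bank described further below, and the Top-\(N\) neighbour restriction used in the experiments is omitted for brevity.

```python
import torch

def l_in_sup(z, labels, anchor_z, anchor_labels, tau=0.1):
    """Eq. (2): -log( S_p / (S_p + S_n) ), averaged over the batch.

    z:             (B, D) L2-normalized batch features
    labels:        (B,)   ground-truth labels of the batch
    anchor_z:      (K, D) L2-normalized anchor features
    anchor_labels: (K,)   labels of the anchors
    """
    sim = z @ anchor_z.t() / tau                     # (B, K) scaled similarities
    pos = labels[:, None] == anchor_labels[None, :]  # (B, K) positive mask
    # Eq. (2) equals logsumexp over all anchors minus logsumexp over positives,
    # which is exactly the softplus reformulation in Eq. (3).
    lse_all = torch.logsumexp(sim, dim=1)
    lse_pos = torch.logsumexp(sim.masked_fill(~pos, float('-inf')), dim=1)
    # Per Eq. (4), the implicit pull toward each positive anchor carries weight
    # alpha_p = exp(sim_p)/S_p - exp(sim_p)/(S_p + S_n), which is larger for
    # more similar positives, so z_i is dragged toward its nearest neighbours.
    loss = lse_all - lse_pos
    keep = pos.any(dim=1)  # discard samples with no positive anchor
    return loss[keep].mean()
```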
A typical class-center-based framework aims to pull the intra-class samples towards the same targets. However, such a constraint is too restrictive since all the intra-class samples are treated equally and the discrepancy among different instances is ignored. \(\mathcal{L}_{in}^{sup}\) can adaptively find more similar positives (a.k.a. neighbors) and utilize them as more semantically aware and refined targets to help the optimization. Figure 2: **Single Arrow**: One sample is pulled towards another. **Double Arrow**: Two samples are pulled towards each other. **Left**: Typical class-center-based methods where all the intra-class samples are pulled towards the same targets. **Middle**: Supervised contrastive learning (\(\mathcal{L}_{out}^{sup}\)) where the features of the samples are directly pulled towards all the positives. Note that the analysis from [1] shows that a harder positive results in a larger gradient. **Right**: Our proposed method. We adopt the class center to ensure all intra-class samples can be grouped together, and we also encourage the feature to be pulled toward similar samples. We can thus leverage the nearest neighbors as anchor sets for the loss \(\mathcal{L}_{in}^{sup}\). In this way, Eq. (1) ensures all intra-class samples are pulled together, and \(\mathcal{L}_{in}^{sup}\) behaves as an auxiliary term that adaptively pulls the features towards their similar positives as well (see Figure 2). Compared with SupCon, our CoNe has the following two major differences. 1. Our CoNe can be trained end-to-end since the class centers can be directly used for inference, whereas SupCon requires two stages (pre-training and fine-tuning) to obtain the final model. 2. SupCon follows the standard contrastive learning paradigm where each instance is augmented into two views (_i.e._, \(\mathbf{x}^{1}\) and \(\mathbf{x}^{2}\)) to ensure that each sample has at least one positive pair in the batch. In this case, \(\mathbf{z}^{2}\) is very likely to become the most similar feature to \(\mathbf{z}^{1}\) since they are just two different views of the same instance. Following Eq. (3), the objective for \(\mathbf{z}^{1}\) is approximately equal to minimizing \(\max(\mathbf{z}^{1}\cdot\mathbf{z}_{n})-\mathbf{z}^{1}\cdot\mathbf{z}^{2}\). However, since \(\mathbf{z}^{1}\cdot\mathbf{z}^{2}\) is very likely to be a relatively large number, the objective in Eq. (3) can be easily achieved and \(\mathcal{L}_{in}^{sup}\) will have a low contribution to the optimization. In our implementation, we do not include contrastive views and discard the loss term if no positive anchor exists. Using rich augmentation policies [14, 15, 23] to generate the contrastive view might help alleviate the problem. However, we show that excluding the contrastive view already works well under the basic augmentation policy (see more details in the ablation study). **Momentum Update**. Since \(\mathcal{L}_{in}^{sup}\) heavily relies on the anchor points, we need to provide a large number of features to get high-quality anchors. Inspired by [24, 34], we utilize an exponential moving averaged (EMA) network and maintain a large memory buffer to store \(K\) features from past batches. If we denote \(\mathcal{F}_{q}\) as the latest network and \(\mathcal{F}_{k}\) as the EMA network, the update rule for \(\mathcal{F}_{k}\) can be expressed by Eq. (8), where \(m\) is the momentum coefficient: \[\mathcal{F}_{k}\gets m\mathcal{F}_{k}+(1-m)\mathcal{F}_{q} \tag{8}\] All the positive and negative sets are from the memory bank in our implementation.
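A sketch of the momentum update of Eq. (8) together with a simple FIFO feature buffer in the spirit of [24, 34] is given below; the class and method names, as well as the buffer size, are illustrative. Features produced by \(\mathcal{F}_{k}\) are pushed into the buffer each step and then serve as the anchor set for the loss sketched above.

```python
import torch

@torch.no_grad()
def ema_update(f_q, f_k, m=0.996):
    """Eq. (8): F_k <- m * F_k + (1 - m) * F_q, parameter by parameter."""
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.mul_(m).add_(p_q, alpha=1.0 - m)

class FeatureQueue:
    """FIFO memory bank holding the K most recent (feature, label) pairs."""
    def __init__(self, size=4096, dim=256):
        self.z = torch.zeros(size, dim)                       # stored features
        self.y = torch.full((size,), -1, dtype=torch.long)    # -1 marks empty slots
        self.ptr, self.size = 0, size

    @torch.no_grad()
    def push(self, z, y):
        # z, y should come from the EMA network F_k (no gradients required)
        idx = (self.ptr + torch.arange(z.shape[0])) % self.size
        self.z[idx], self.y[idx] = z, y
        self.ptr = (self.ptr + z.shape[0]) % self.size
```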
**Distributional Consistency.** As we have shown, \(\mathcal{L}_{in}^{sup}\) is a feature-level constraint that encourages similar samples to have similar features. To further enhance this property, we generalize it to the perspective of the probability distribution, _i.e._, we also encourage similar samples to have similar class predictions. To this end, we introduce the "distributional consistency" regularization, which encourages the features to have a similar predicted probability distribution as their similar anchors. Concretely, the probability that the \(j^{th}\) anchor in the memory buffer belongs to class \(m\) can be written as Eq. (9), where \(p^{class}\in R^{K\times C}\). \[p_{j,m}^{class}=\frac{\exp(W_{m}^{T}\cdot\mathbf{z}_{j})}{\sum_{c=1}^{C}\exp(W_{c}^{T}\cdot\mathbf{z}_{j})} \tag{9}\] The similarity between \(\mathbf{z}_{i}\) in the current batch and the \(j^{th}\) feature in the memory bank can be expressed by Eq. (10), where \(p_{i}^{instance}\in R^{1\times K}\). \[p_{i,j}^{instance}=\frac{\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{j}/\tau)}{\sum_{k=1}^{K}\exp(\mathbf{z}_{i}\cdot\mathbf{z}_{k}/\tau)} \tag{10}\] To leverage the class information from the anchors, we calculate the sum of the predicted probability distributions weighted by the similarity scores to form a more semantically informative target. We denote this target as \(p_{i}^{dc}\in R^{1\times C}\), which can be expressed as follows: \[p_{i}^{dc}=\sum_{j=1}^{K}p_{i,j}^{instance}\cdot p_{j}^{class} \tag{11}\] Now, we optimize the Kullback-Leibler divergence between \(p_{i}^{class}\) and \(p_{i}^{dc}\), which can be expressed by Eq. (12). In this way, the features are constrained to have a similar predicted probability distribution as their similar anchors, which further enhances the compactness. \[\mathcal{L}_{dc}=D_{KL}(p_{i}^{dc}\mid\mid p_{i}^{class}) \tag{12}\] **Overall Objective.** Finally, the overall training objective for CoNe is given by Eq. (13), where \(\lambda_{sup}\) and \(\lambda_{dc}\) are the balancing factors that control the weights of the two losses. Note that we only use \(p^{class}\) as the prediction during the inference stage. \[\mathcal{L}_{overall}=\mathcal{L}_{ce}+\lambda_{sup}\mathcal{L}_{in}^{sup}+\lambda_{dc}\mathcal{L}_{dc} \tag{13}\]
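Continuing the sketch, Eqs. (9)-(12) amount to a soft label propagation from the memory bank; the illustration below again uses our own names, takes \(W\) as the \(D\times C\) classifier matrix of Eq. (1), and detaches the target (whether gradients should flow through \(p^{dc}\) is a detail we do not specify here).

```python
import torch
import torch.nn.functional as F

def dc_loss(z, logits, bank_z, W, tau_dc=0.07):
    """Eqs. (9)-(12): KL( p_dc || p_class ), averaged over the batch.

    z:      (B, D) normalized batch features
    logits: (B, C) classifier outputs for the batch (p_class = softmax(logits))
    bank_z: (K, D) normalized memory-bank features
    W:      (D, C) classifier weight matrix
    """
    p_class_bank = F.softmax(bank_z @ W, dim=1)              # Eq. (9):  (K, C)
    p_instance = F.softmax(z @ bank_z.t() / tau_dc, dim=1)   # Eq. (10): (B, K)
    p_dc = (p_instance @ p_class_bank).detach()              # Eq. (11): (B, C)
    # Eq. (12): D_KL(p_dc || p_class); F.kl_div expects log-probs as input
    return F.kl_div(F.log_softmax(logits, dim=1), p_dc, reduction='batchmean')

# Eq. (13), assembled with the earlier sketches:
# loss = ce + lambda_sup * l_in_sup(z, y, bank.z, bank.y) \
#        + lambda_dc * dc_loss(z, logits, bank.z, W)
```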
## 4 Experiments ### CIFAR-10 and CIFAR-100 The CIFAR-10 dataset [8] consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. CIFAR-100 [8] is quite similar to CIFAR-10; it has 100 categories containing 600 images each, with 500 training images and 100 testing images per class. **Implementations.** We adopt ResNet-50 [54] as our backbone encoder. Following the common practice for low-resolution datasets, we replace the first 7x7 Conv of stride 2 with a 3x3 Conv of stride 1 and remove the first max pooling operation in ResNet-50. We attach a two-layer MLP head (with ReLU and BN in the hidden layer) to project the backbone feature from 2048-D into a 256-D space. Note that the additional computational cost of the projection head is negligible compared with the entire network, and we do not observe any improvements in terms of performance from it. The model is optimized for 1000 epochs by a standard SGD optimizer with a momentum of 0.9 and a weight decay of 0.0001. We linearly warm up the learning rate for 100 epochs until it reaches 0.1 \(\times\) BatchSize/256 (we use a batch size of 1024 by default), then switch to the cosine decay scheduler [55]. Following the setting in [1], we adopt the SimCLR-based augmentation with color distortion (strength=0.5) and leave out Gaussian blur. For CoNe-related hyper-parameters, we set \(\lambda_{sup}=0.7,\lambda_{dc}=0.4,\tau_{sup}=0.1\), and \(\tau_{dc}=0.07\). (Note that we use \(\tau_{sup}\) and \(\tau_{dc}\) to denote the temperature parameters in Eq. (2) and Eq. (10), respectively.) We adopt a memory bank with 4096 past examples and use the Top-32 features to compute \(\mathcal{L}_{in}^{sup}\). The momentum coefficient is \(m=0.996\) and increases to 1 with a cosine schedule. \begin{table} \begin{tabular}{l c c c c c} \hline \hline DataSet & SimCLR [23] & Cross-Entropy & Max-Margin [3] & SupCon & **CoNe (Ours)** \\ \hline CIFAR-10 [8] & 93.6 & 95.0 & 92.4 & **96.0** & **96.00\(\pm\)0.11** \\ CIFAR-100 [8] & 70.7 & 75.3 & 70.5 & 76.5 & **78.08\(\pm\)0.09** \\ \hline \hline \end{tabular} \end{table} Table 1: Top-1 classification accuracy with ResNet-50 [54] on the CIFAR-10 and CIFAR-100 datasets. We compare cross-entropy, unsupervised representation learning (SimCLR [23]), max-margin classifiers [3], and SupCon [1]. We report the mean and std over 5 runs for CoNe. **Performance on CIFAR Datasets**. Table 1 compares the proposed approach, CoNe, with SimCLR [23], Cross-Entropy, Max-Margin [3], and SupCon [1] on the CIFAR-10 and CIFAR-100 datasets. Both CoNe and SupCon are highly competitive on CIFAR-10, attaining 96.0%. On CIFAR-100, however, CoNe clearly outperforms SupCon, achieving 78.08% and surpassing SupCon's 76.5% by a large margin. **Comparing with PaCo under a Stronger Setting**. PaCo [56] employed a relatively advanced data augmentation methodology for a similar experiment. To conduct a fair comparison with PaCo, we utilized their official codebase1 to align with their experimental conditions. The results, as illustrated in Table 2, demonstrate that our CoNe outperforms PaCo [56] by a margin of 1.1% on this benchmark, underscoring its effectiveness. Footnote 1: [https://github.com/dvlab-research/Parametric-Contrastive-Learning/blob/main/PaCo/LT/paco_cifar.py](https://github.com/dvlab-research/Parametric-Contrastive-Learning/blob/main/PaCo/LT/paco_cifar.py) ### Experiments on ImageNet. We subsequently evaluate our proposed method on the ImageNet-1k dataset [5] to further substantiate its performance. Most hyper-parameters are the same as in our CIFAR experiments, except that we use a larger memory bank containing 65k past features and the Top-512 features to compute \(\mathcal{L}_{in}^{sup}\). In this experiment, different ResNet models were optimized for 100 epochs, including a warm-up phase of 5 epochs, with standard data augmentation employing only RandomResizedCrops and RandomFlip. The results are shown in Table 3. For a fair comparison with the cross-entropy baseline, we report our reproduced results under exactly the same training strategy.
The results show that our proposed method improves the classification accuracy by 0.9% to 1.8% across various ResNet architectures. **Apple-to-Apple Comparison with SupCon.** We conducted an apple-to-apple comparison of our proposed CoNe with SupCon under fair conditions. Specifically, we adhered to the conventional contrastive framework, augmenting each sample twice and forwarding both augmented views through the encoder, and we used the same augmentation strategies and training epochs as SupCon. The other experimental parameters, including temperature, weight decay, and more, were kept consistent with our earlier ImageNet experiment. The outcomes are presented in Table 4. Our CoNe exhibits notable improvements over SupCon of 1.1% and 1.2% for ResNet-50 and ResNet-101, respectively. Notably, when applied to ResNet-200, CoNe surpasses SupCon while demanding significantly fewer training epochs (400 epochs vs. 700 epochs). An important advantage of our approach is that CoNe operates as an end-to-end framework, eliminating the need for the additional 90 epochs of fine-tuning that SupCon requires. \begin{table} \begin{tabular}{l c c c} \hline \hline Objective & Arch & Epochs & Top-1 \\ \hline \hline Cross-Entropy & ResNet-50 & 400 & 77.9 \\ Cross-Entropy + \(\mathcal{L}_{out}^{sup}\) & ResNet-50 & 400 & 78.0 \\ PaCo & ResNet-50 & 400 & 79.1 \\ \hline **CoNe (Ours)** & **ResNet-50** & **400** & **81.2** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison with PaCo [56] on CIFAR100 under the stronger setting. The results for other methods are directly copied from PaCo [56]. \begin{table} \begin{tabular}{l c c c c} \hline \hline Objective & Arch & Aug & Epochs & Top-1 \\ \hline \hline Cross-Entropy & ResNet-18 & Standard & 100 & 70.7 \\ **CoNe (Ours)** & **ResNet-18** & **Standard** & **100** & **72.5** \\ \hline Cross-Entropy & ResNet-34 & Standard & 100 & 74.3 \\ **CoNe (Ours)** & **ResNet-34** & **Standard** & **100** & **75.4** \\ \hline Cross-Entropy & ResNet-50 & Standard & 100 & 76.9 \\ **CoNe (Ours)** & **ResNet-50** & **Standard** & **100** & **78.7** \\ \hline Cross-Entropy & ResNet-101 & Standard & 100 & 78.7 \\ **CoNe (Ours)** & **ResNet-101** & **Standard** & **100** & **79.6** \\ \hline Cross-Entropy & ResNet-152 & Standard & 100 & 79.4 \\ **CoNe (Ours)** & **ResNet-152** & **Standard** & **100** & **80.3** \\ \hline Cross-Entropy & ResNet-200 & Standard & 100 & 79.6 \\ **CoNe (Ours)** & **ResNet-200** & **Standard** & **100** & **80.5** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison with standard Cross-Entropy under the 100-epoch setting on ImageNet. For the baseline, we report our reproduced results under the same training recipe. Moreover, CoNe is robust with the standard SGD optimizer, in contrast to SupCon's sensitivity to optimizer selection (SupCon employs LARS [57] and RMSProp [58] for its pre-training and fine-tuning stages). This robustness in optimization is a key characteristic of CoNe. Compared with the recent PaCo method [56], CoNe attains 0.9% and 0.5% improvements for ResNet-50 and ResNet-101 with slightly fewer training epochs (350 epochs vs. 400 epochs), and matches it on ResNet-200, showcasing the inherent advantages of CoNe.
**Comparing with the State of the Art.** To further explore the limits of our method, we incorporate CoNe with three commonly used training strategies. 1) Label smoothing. 2) Exponential moving average; note that this is a different EMA network from that in Eq. (8): Eq. (8) only averages the model's parameters, whereas this EMA model also averages the buffer statistics, and the final performance is tested on this EMA model. 3) Train/inference resize tuning: we adjust the training resolution to \(176\times 176\) and the inference resolution to \(232\times 232\rightarrow\text{center crop}\ 224\times 224\). We also reduce the weight decay to \(6e-5\), and the rest of the settings are the same as in Table 4. In this experiment, we compare CoNe with the existing state-of-the-art training recipe (Timm [4]). The results are shown in Table 5. Notably, our method surpasses the Timm training recipe by a large margin. We want to emphasize that the Timm training recipe [4] adopts many advanced training strategies (_e.g._ Stochastic Depth [16], CutMix [10], MixUp [9], Repeated Augmentation [59, 60], and the LAMB optimizer [61]). Incorporating more advanced training strategies might potentially improve our results further, but this requires more sophisticated hyper-parameter tuning, which is not the focus of this paper. Since we have already achieved state-of-the-art performance, we leave this problem as future work. **More Experiments with Different Architectures** We further demonstrate the generality of CoNe by training it with various architectures. Specifically, we implement CoNe with RegNet [62], MobileNet v2 [63], ShuffleNet v2 [64], and Vision Transformer [65, 66]. From Table 8, we can see that CoNe consistently improves the performance across different architectures. **Transfer Learning on Classification Tasks.** We also show the transferability of CoNe on various downstream classification datasets. \begin{table} \begin{tabular}{l c c c c} \hline \hline Objective & Arch & Aug & Epochs & Top-1 \\ \hline Cross-Entropy & ResNet-50 & MixUp & 300 & 77.4 \\ Cross-Entropy & ResNet-50 & CutMix & 300 & 78.6 \\ Cross-Entropy & ResNet-50 & AutoAug & 350 & 78.2 \\ SupCon & ResNet-50 & AutoAug & 350 & 78.7 \\ SupCon (w/ MB) & ResNet-50 & AutoAug & 350 & 79.1 \\ PaCo & ResNet-50 & RandAug & 400 & 79.3 \\ **CoNe (Ours)** & **ResNet-50** & **AutoAug** & **350** & **80.2** \\ \hline SupCon & ResNet-101 & StackedAug & 350 & 80.2 \\ PaCo & ResNet-101 & StackedAug & 400 & 80.9 \\ **CoNe (Ours)** & **ResNet-101** & **StackedAug** & **350** & **81.4** \\ \hline Cross-Entropy & ResNet-200 & AutoAug & 700 & 80.6 \\ Cross-Entropy & ResNet-200 & StackedAug & 700 & 80.9 \\ SupCon & ResNet-200 & StackedAug & 700 & 81.4 \\ PaCo & ResNet-200 & StackedAug & 400 & **81.8** \\ **CoNe (Ours)** & **ResNet-200** & **StackedAug** & **400** & **81.8** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of Cross-Entropy, SupCon [1], PaCo [56], and CoNe under stronger augmentation and longer training epochs. The performance of Cross-Entropy and SupCon is directly copied from [1]. (w/ MB) denotes the memory-buffer-based implementation of SupCon.
\begin{table} \begin{tabular}{l c c c} \hline \hline Objective & Arch & Aug & Top-1 \\ \hline Binary-Cross-Entropy & ResNet-18 & Timm A1 & 71.5 \\ **CoNe (Ours)** & **ResNet-18** & **AutoAug** & **74.3** \\ \hline Binary-Cross-Entropy & ResNet-34 & Timm A1 & 76.4 \\ **CoNe (Ours)** & **ResNet-34** & **AutoAug** & **78.0** \\ \hline Binary-Cross-Entropy & ResNet-50 & Timm A1 & 80.4 \\ Cross-Entropy & ResNet-50 & Timm B & 79.4 \\ Cross-Entropy & ResNet-50 & Timm C.1 & 79.8 \\ Cross-Entropy & ResNet-50 & Timm C.2 & 80.0 \\ Binary-Cross-Entropy & ResNet-50 & Timm D & 79.8 \\ **CoNe (Ours)** & **ResNet-50** & **AutoAug** & **80.8** \\ \hline Binary-Cross-Entropy & ResNet-101 & Timm A1 & 81.5 \\ **CoNe (Ours)** & **ResNet-101** & **StackedAug** & **82.1** \\ \hline Binary-Cross-Entropy & ResNet-152 & Timm A1 & 82.0 \\ **CoNe (Ours)** & **ResNet-152** & **StackedAug** & **82.7** \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison with the Timm strategy [4]. Different from Table 4, the CoNe results in this table further include three training strategies - label smoothing, exponential moving average, and train/inference resize tuning. This experiment adopts the pre-trained ResNet-50 model from Table 4. Concretely, we fine-tune our pre-trained ResNet-50 network on CIFAR-10 [8], CIFAR-100 [8], Food101 [67], Cars [68], DTD [69], Oxford-IIIT-Pets [70], Aircraft [71], Oxford-Flowers [72], and Caltech-101 [73]. The results are shown in Table 6; our CoNe achieves the best performance on **8 out of 9** datasets. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Method & CIFAR10 & CIFAR100 & Food & Cars & DTD & Pets & Flowers & Aircraft & Caltech101 & Mean \\ \hline SimCLR [23] & 97.7 & 85.9 & 88.2 & 91.3 & 73.2 & 89.2 & 97.0 & **88.1** & 92.1 & 89.2 \\ CE [23] & 96.5 & 85.1 & 87.4 & 89.6 & 76.9 & 92.4 & 96.9 & 80.8 & 92.3 & 88.7 \\ SupCon & 97.4 & 84.3 & 87.2 & 91.7 & 74.6 & 93.5 & 96.0 & 84.1 & 91.0 & 88.9 \\ \hline **CoNe** & **97.9** & **86.2** & **88.5** & **91.9** & **77.6** & **94.7** & **97.7** & 88.0 & **94.4** & **90.8** \\ \hline \hline \end{tabular} \end{table} Table 6: Transfer learning on downstream classification datasets. The results of other methods are directly copied from [1]. Following the same evaluation protocol from [23, 25], we report Top-1 accuracy except for Pets, Flowers, and Caltech101, for which we report mean per-class accuracy. **Transfer Learning on Detection and Segmentation.** Next, we evaluate the representation quality by transferring the model to object detection and instance segmentation tasks on the COCO dataset [7]. Since many recent unsupervised pre-training methods claim to have surpassed supervised pre-training, we compare our method with both unsupervised and supervised methods. Specifically, the CoNe pre-trained parameters (from Table 4) serve as the initialization for Mask-RCNN [74] with C4 and FPN backbones, following the configurations in [24] and [38]. We fine-tune the model
on the _train2017_ set and evaluate on _val2017_. The schedule is the default 1x in Detectron2 [75]. We report the standard evaluation metrics AP\({}_{50}\), AP, and AP\({}_{75}\) for detection and segmentation. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{COCO detection} & \multicolumn{3}{c}{COCO instance seg.} \\ \hline Method & AP\({}_{50}^{Box}\) & AP\({}^{Box}\) & AP\({}_{75}^{Box}\) & AP\({}_{50}^{Mask}\) & AP\({}^{Mask}\) & AP\({}_{75}^{Mask}\) \\ \hline _ResNet-50 with C4 Backbone_ & & & & & & \\ Cross-Entropy (Supervised) & 58.2 & 38.2 & 41.2 & 54.7 & 33.3 & 35.2 \\ SimCLR [23] & 57.7 & 37.9 & 40.9 & 54.6 & 33.3 & 35.3 \\ MoCo v2 [34] & 58.8 & 39.2 & 42.5 & 55.5 & 34.3 & 36.6 \\ SwAV [27] & 58.6 & 38.4 & 41.3 & 55.2 & 33.8 & 35.9 \\ SimSiam [36] & 59.3 & 39.2 & 42.1 & 56.0 & 34.4 & 36.7 \\ Barlow Twins [26] & 59.0 & 39.2 & 42.5 & 56.0 & 34.3 & 36.5 \\ **CoNe (ours)** & **59.9** & **39.8** & **42.9** & **56.3** & **34.6** & **36.8** \\ \hline _ResNet-50 with FPN_ & & & & & & \\ Cross-Entropy (Supervised) [47] & 61.1 & 40.1 & 43.8 & 57.7 & 35.7 & 38.0 \\ Cross-Entropy w/ CutMix (Supervised) [10] & 60.9 & 40.8 & 44.3 & 57.8 & 36.8 & 39.5 \\ SupCon (Supervised) [1] & 61.2 & 41.0 & 44.7 & 58.2 & 37.0 & 39.6 \\ SL-MLP (Supervised) [47] & **61.8** & 40.7 & 44.2 & 58.4 & 36.1 & 38.5 \\ **CoNe (Ours)** & 61.5 & **41.1** & **45.1** & **58.5** & **37.2** & **40.1** \\ \hline \hline \end{tabular} \end{table} Table 7: Transfer learning on object detection and instance segmentation. \begin{table} \begin{tabular}{l c c c} \hline \hline Objective & Arch & Epochs & Top-1 \\ \hline Cross-Entropy & RegNetX-400MF & 100 & 72.8 \\ **CoNe (Ours)** & **RegNetX-400MF** & **100** & **74.6** \\ \hline Cross-Entropy & MobileNet V2 & 150 & 71.9 \\ **CoNe (Ours)** & **MobileNet V2** & **150** & **73.8** \\ \hline Cross-Entropy & ShuffleNet v2 1.0\(\times\) & 240 & 69.4 \\ **CoNe (Ours)** & **ShuffleNet v2 1.0\(\times\)** & **240** & **72.2** \\ \hline Cross-Entropy & DeiT-Tiny & 300 & 74.4 \\ **CoNe (Ours)** & **DeiT-Tiny** & **300** & **76.0** \\ \hline \hline \end{tabular} \end{table} Table 8: More experiments with different architectures. Table 7 shows that CoNe surpasses prior arts (_e.g._ SimSiam and SL-MLP) on these localization-based tasks and is significantly better than the cross-entropy baseline. **Robustness to Image Corruptions** Next, we test the robustness of CoNe on the ImageNet-C dataset [76], which consists of 15 types of natural corruptions. Following the standard benchmark in [76], we adopt the Mean Corruption Error (mCE) as our metric. The results in Table 9 show that CoNe dramatically improves the robustness of the model. ## 5 Ablation study In this section, we empirically study CoNe under various conditions and show the effect of each component and the hyper-parameter sensitivities of our method. For all experiments in this section, we adopt ResNet-50 as our backbone and train the model on ImageNet for 100 epochs. We use the most standard augmentation policies (_i.e._, RandomResizedCrops and RandomFlip) by default unless otherwise mentioned. **Effect of Each Component**. We first show the effect of each loss function in Table 10. ResNet-50 achieves 76.9% Top-1 accuracy with the vanilla cross-entropy loss, which is our baseline.
Adding a projection head does not affect the performance (\(2^{nd}\) row); we want to emphasize again that this projection head is only used to reduce the dimension since we need to store a large number of features in the memory bank; it introduces only an additional 7M FLOPS, which is negligible compared with the entire network (4.1G). Next, we jointly train the model with the \(\mathcal{L}_{in}^{sup}\) loss as in Eq. (7). The results in the \(3^{rd}\) row show that \(\mathcal{L}_{in}^{sup}\) substantially improves the baseline by \(+1.2\%\). We also tried to replace \(\mathcal{L}_{in}^{sup}\) with \(\mathcal{L}_{out}^{sup}\) and performed an extensive hyper-parameter search on the temperature and loss weight (\(4^{th}\) row). However, the best result we could get is 76.9%, no different from our baseline. Finally, further incorporating the distributional consistency loss brings a \(+0.6\%\) improvement. **Effect of Contrastive View**. We also conduct experiments to ablate the effect of including/excluding the contrastive view in \(\mathcal{L}_{in}^{sup}\). We tried both strong (AutoAug [14], RandAug [15], SimCLR Aug [23]) and weak augmentations in this experiment. We do not include \(\mathcal{L}_{dc}\) and train the model for 100 epochs. The results are in Table 11. When the standard augmentation is adopted, excluding the contrastive view is \(+0.6\%\) better than including it. With stronger augmentation, the performance gap between these two settings is reduced, but excluding the contrastive view is consistently slightly better than including it. Thus, we exclude the contrastive view by default. We believe these experiments further support our analysis in Section 3.3. \begin{table} \begin{tabular}{l c c c} \hline \hline Augmentation & W/ Contra & W/o Contra & Diff \\ \hline Standard & 77.5 & 78.1 & \(+0.6\) \\ AutoAug [14] & 78.1 & 78.3 & \(+0.2\) \\ RandAug [15] & 77.9 & 78.1 & \(+0.2\) \\ SimCLR Aug [23] & 76.9 & 77.2 & \(+0.3\) \\ \hline \hline \end{tabular} \end{table} Table 11: Effect of including/excluding the contrastive view for \(\mathcal{L}_{in}^{sup}\). \begin{table} \begin{tabular}{l c c c} \hline \hline Arch & Cross-Entropy & SupCon & **CoNe (Ours)** \\ \hline ResNet-50 & 68.6 & 67.2 & **52.7** \\ ResNet-200 & 52.4 & 50.6 & **39.9** \\ \hline \hline \end{tabular} \end{table} Table 9: Robustness experiments on ImageNet-C. Note that we report the mCE in this experiment; **lower mCE indicates better performance**. **Comparing with Other Nearest Neighbor Contrast Methods**. The concept of nearest neighbor contrast has been introduced in previous research [39, 77]. However, our CoNe differs fundamentally from the methodologies presented in [39, 77]. Compared with [77], CoNe has the distinct property of yielding a larger gradient for more similar positive instances, which is a key focus of our work. Compared with [39], which exclusively attracts the 1-nearest neighbor (1-NN) while neglecting other positive samples, CoNe accounts for a broader spectrum of positive instances, resulting in a more comprehensive learning process. To substantiate the efficacy of CoNe, we conducted an experiment comparing its performance against that of [77] and [39]. The results of this experiment are presented in Table 12.
The outcomes demonstrate the superior performance of our CoNe method compared to other nearest neighbor contrast techniques in a supervised setting. **EMA-Related Hyper-parameters**. Next, we examine the effect of various EMA-related hyper-parameters. Note that we directly adopt the optimal values of \(\lambda_{sup}\), \(\lambda_{dc}\), \(\tau_{sup}\), and \(\tau_{dc}\) in this experiment. Concretely, we study the effect of the memory bank size, the number Top-\(N\) used for computing \(\mathcal{L}_{in}^{sup}\), and the momentum update coefficient. We show the performance for these factors under different settings in Tables 15, 16, and 17. CoNe is quite robust against these hyper-parameters; the worst result is only about 0.2% lower than the best setting. **Visualizations for the Gradient Coefficient**. To further verify our analysis of the gradient of \(\mathcal{L}_{in}^{sup}\), we randomly select some images and calculate the gradient coefficient \(\alpha\) with respect to their positive samples. We present the visualization results in Figure 3. Clearly, a more semantically similar sample results in a larger \(\alpha\), which contributes more to the optimization direction. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Cross-Entropy & Epoch & Top-1 \\ \hline baseline & & 100 & 76.9 \\ CMSF [77] & & 200 & 76.4 \\ CMSF [77] & ✓ & 100 & 77.0 \\ NNCLR [39] & ✓ & 100 & 77.5 \\ **CoNe (Ours)** & ✓ & **100** & **78.1** \\ \hline \hline \end{tabular} \end{table} Table 12: Comparing with other nearest neighbor contrast methods. \begin{table} \begin{tabular}{c|c c c c} \hline \(\lambda_{sup}\)\(\tau_{sup}\) & 0.05 & 0.07 & 0.1 & 0.2 \\ \hline 0.3 & 77.5 & 77.4 & 77.6 & 77.3 \\ 0.7 & 77.6 & 78.0 & **78.1** & 78.1 \\ 1.0 & 77.7 & 78.0 & **78.1** & 78.0 \\ \end{tabular} \end{table} Table 13: Hyper-parameter sensitivity for \(\lambda_{sup}\) and \(\tau_{sup}\). \begin{table} \begin{tabular}{c|c c c c} \hline \(\lambda_{dc}\)\(\tau_{dc}\) & 0.05 & 0.07 & 0.1 & 0.2 \\ \hline 0.2 & 78.6 & 78.4 & 78.6 & 78.5 \\ 0.4 & 78.5 & **78.7** & **78.7** & 78.4 \\ 0.6 & 78.5 & **78.7** & 78.5 & 78.4 \\ \end{tabular} \end{table} Table 14: Hyper-parameter sensitivity for \(\lambda_{dc}\) and \(\tau_{dc}\). **t-SNE Visualizations**. Finally, we perform a t-SNE visualization [78] of the class-center-based method, SupCon, and our CoNe on intra-class samples. We randomly select two classes from ImageNet and present the visualization results in Figure 4. Note that the features shown in each figure belong to the same class. As can be seen, the class-center-based method and SupCon do not consider the discrepancy among intra-class instances, since the intra-class features are randomly scattered across the figure, whereas our CoNe shows a clear separation among the intra-class features.
We believe these visualization results illustrate the characteristics presented in Figure 2. ## 6 Conclusion In this work, we propose Contrast Your Neighbours (CoNe), a new supervised learning framework for image classification that utilizes \(\mathcal{L}_{in}^{sup}\) in a more proper way. Theoretical analysis shows that the \(\mathcal{L}_{in}^{sup}\) loss pulls the features of training samples toward their similar positives, and that a more similar anchor has a greater contribution to the optimization direction. Incorporating it with the classical cross-entropy loss significantly enhances the power of \(\mathcal{L}_{in}^{sup}\). We also introduce distributional consistency, which further enhances the compactness of similar samples and improves the classification performance. An extensive empirical study shows the effect of each component in our framework. The experiments on large-scale datasets and various architectures demonstrate state-of-the-art performance for image classification problems. The current limitation of this work is that we have not tried combining CoNe with the most advanced training recipes (_e.g._, [4]), since that requires more experiments to find the optimal setting; we leave this problem as future work.
2306.16864
Validity of Markovian modeling for transient memory-dependent epidemic dynamics
The initial transient phase of an emerging epidemic is of critical importance for data-driven model building, model-based prediction of the epidemic trend, and articulation of control/prevention strategies. In principle, quantitative models for real-world epidemics need to be memory-dependent or non-Markovian, but this presents difficulties for data collection, parameter estimation, computation and analyses. In contrast, the difficulties do not arise in the traditional Markovian models. To uncover the conditions under which Markovian and non-Markovian models are equivalent for transient epidemic dynamics is outstanding and of significant current interest. We develop a comprehensive computational and analytic framework to establish that the transient-state equivalence holds when the average generation time matches the average removal time, resulting in minimal Markovian estimation errors in the basic reproduction number, epidemic forecasting, and evaluation of control strategy. Strikingly, the errors depend on the generation-to-removal time ratio but not on the specific values and distributions of these times, and this universality will further facilitate estimation rectification. Overall, our study provides a general criterion for modeling memory-dependent processes using Markovian frameworks.
Mi Feng, Liang Tian, Ying-Cheng Lai, Changsong Zhou
2023-06-29T11:20:43Z
http://arxiv.org/abs/2306.16864v2
# Validity of Markovian modeling for transient memory-dependent epidemic dynamics ###### Abstract The initial transient phase of an emerging epidemic is of critical importance for data-driven model building, model-based prediction of the epidemic trend, and articulation of control/prevention strategies. In principle, quantitative models for real-world epidemics need to be memory-dependent or non-Markovian, but this presents difficulties for data collection, parameter estimation, computation and analyses. In contrast, the difficulties do not arise in the traditional Markovian models. To uncover the conditions under which Markovian and non-Markovian models are equivalent for transient epidemic dynamics is outstanding and of significant current interest. We develop a comprehensive computational and analytic framework to establish that the transient-state equivalence holds when the average generation time matches the average removal time, resulting in minimal Markovian estimation errors in the basic reproduction number, epidemic forecasting, and evaluation of control strategy. Strikingly, the errors depend on the generation-to-removal time ratio but not on the specific values and distributions of these times, and this universality will further facilitate estimation rectification. Overall, our study provides a general criterion for modeling memory-dependent processes using Markovian frameworks. ## Introduction When an epidemic emerges, the initial transient phase of the disease spreading dynamics before a steady state is reached is of paramount importance, for two reasons [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. First, key indicators or parameters characterizing the underlying dynamical process and critical for prediction and articulation of control strategies, such as the generation time, the serial intervals and the basic reproduction number, are required to be estimated when the dynamics have not reached a steady state. Second, it is during the transient phase that control and mitigation strategies can be effectively applied to prevent a large-scale outbreak. Prediction and control depend, of course, on a quantitative model of the epidemic process, which can be constructed based on the key parameters estimated from data collected during the transient phase. In principle, since the dynamical processes underlying real-world epidemics are generally memory-dependent in the sense that the state evolution depends on the history, a rigorous modeling framework needs to be of the non-Markovian type, but this presents great challenges in terms of data collection, parameter estimation, computation and analyses [15; 16; 17]. The difficulties can be significantly alleviated by adopting the traditional memoryless, Markovian framework [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. An outstanding question is, under what conditions will an approximate equivalence between non-Markovian and Markovian dynamics hold during the _transient phase_ of the epidemic? Additionally, another important issue remains: how are the errors of Markovian estimation determined? The purpose of this paper is to provide a comprehensive answer to these questions. The COVID-19 pandemic has highlighted the need and importance of understanding disease spreading and transmission to accurately predict, control and manage future outbreaks through non-pharmacological interventions and vaccine allocation strategies [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11].
To accomplish these goals, accurate mathematical modeling of the disease spreading dynamics is key. In a general population, epidemic transmission occurs via some kind of point process, where individuals become infected at different points in time. It has been known that point processes in the real world are typically non-Markovian with a memory effect in which the distribution of the interevent times is not exponential [3; 26; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. For example, the interevent time distribution arising from virus transmission in COVID-19 is not of the memoryless exponential type but typically exhibits memory-dependent features characterized by the Weibull distribution [3]. Strictly speaking, from a modeling perspective, disease spreading should be described by a non-Markovian process. A non-Markovian approach takes into account the historical memory of disease progression, mathematically resulting in a complex set of integro-differential equations in the form of convolutions. There are significant difficulties with non-Markovian modeling of memory-dependent disease spreading. The foremost is data availability. In particular, while standard epidemic spreading models are available, the model parameters need to be estimated from data. A non-Markovian model often requires detailed and granular data that can be difficult to obtain, especially during the early stage of the epidemic when accurate modeling is most needed [15]. From a theoretical point of view, it is desirable to obtain certain closed-form solutions for key quantities such as the onset and size of the epidemic outbreak, but this is generally impossible for non-Markovian models [48; 18]. Computationally, accommodating memory effects in principle makes the underlying dynamical system infinitely dimensional, in practice requiring the solution of an unusually large number of dynamical variables through a large number of complex integro-differential equations [16; 17]. In contrast, in an idealized Markovian point process, events occur at a fixed rate, leading to an exponential distribution for the interevent time intervals and consequently a memoryless process. If the spreading dynamics were of the Markovian type, the aforementioned difficulties associated with non-Markovian dynamics would no longer exist. In particular, a Markovian spreading process can be described by a small number of ordinary differential equations with a few parameters that can be estimated even from sparse data, and the numerical simulations can be carried out in a computationally extremely efficient manner [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. For these reasons, many recent studies of the COVID-19 pandemic assumed Markovian behaviors to avoid or "escape" from the difficulties associated with non-Markovian modeling [4; 5; 6; 7; 8; 9; 10; 11]. The issue is whether such a simplified approach can be justified. Addressing this issue requires a comprehensive understanding of the extent to which the Markovian approach represents a good approximation for modeling the non-Markovian type of memory-dependent spreading dynamics, and specifically of the conditions under which the Markovian theory can produce accurate results that match those from the non-Markovian model. There were previous studies of the so-called steady-state equivalence between Markovian and non-Markovian modeling for epidemic spreading. 
In particular, when the system has reached a steady state, such an equivalence can be established through a modified definition of the effective infection rate [24; 21; 25]. From a realistic point of view, an equivalence limited only to steady states may not be critical, as the transient phase of the spreading process before any steady state is reached is more relevant and significant. For example, when an epidemic occurs, it is of fundamental interest to estimate the key indicators such as the generation time (the time interval between the infections of the infector and infectee in a transmission chain), the serial interval (the time from illness onset in the primary case to illness onset in the secondary case), and the basic reproduction number (the average number of secondary transmissions from one infected person), but they often need to be estimated when the dynamics have not reached a steady state [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. It is the equivalence in the transient dynamics rather than the steady state that determines whether the transmission features in the early stages of a memory-dependent disease outbreak can be properly measured through Markovian modeling. Moreover, it is only during the transient phase that control and mitigation strategies can be effective for preventing a large-scale outbreak. Discovering when and how a non-Markovian process can be approximated by a Markovian process during the transient state is thus of paramount importance. To our knowledge, such a "transient-state equivalence", where the Markovian and non-Markovian transmission models produce similar behaviors over the entire transient transmission period, has not been established. In fact, the conditions under which the transient equivalence may hold are completely unknown at present. In this paper, we present results from a comprehensive study of how memory effects impact the Markovian estimations in terms of the errors that arise from the Markovian hypothesis. We consider both the steady-state and transient-state equivalences between non-Markovian and Markovian models. We first rigorously show that, in the steady state, a memory-dependent non-Markovian spreading process is always equivalent to certain Markovian (memoryless) ones. We then turn to the transient states and find that an approximate equivalence can still be achieved, but only if the average generation time matches the average removal time in the memory-dependent non-Markovian spreading dynamics. Qualitatively, the equality of the two times gives rise to a memoryless correlation between the infection and removal processes, thereby minimizing the impact of any memory effects. We establish that this equality gives the condition under which the Markovian theory accurately describes memory-dependent transmission. One fundamental quantity underlying an epidemic process is the basic reproduction number \(R_{0}\). Our theoretical analysis indicates that, when the average generation and removal times are equal, the transient-state equivalence between memory-dependent and memoryless transmissions will minimize the error of the Markovian approach in estimating \(R_{0}\) and lead to accurate epidemic forecasting and prevention evaluation. Another finding is that the generation-to-removal time ratio plays a decisive role in the accuracy of the Markovian approximation. 
Specifically, if the average generation time is smaller (greater) than the average removal time, the Markovian approximation will lead to an overestimation (underestimation) of \(R_{0}\) and of the epidemic forecast, as well as errors in the prevention evaluation, which can also be verified based on readily accessible clinical data for four real diseases. Strikingly, the estimation accuracy is largely determined by the time ratio and hardly depends on the particular forms of the time distributions or the specific values of the average generation and removal times. This property is of great practical significance, because it is in general challenging to obtain the detailed distributions of the generation and removal times in the early stages of the epidemic [49], but their average values can be reliably estimated even during the transient phase [50; 51; 52; 53; 54]. Moreover, based on this property, we have developed a semi-empirical mathematical relationship that connects the errors in estimating \(R_{0}\) with the generation-to-removal time ratio. This relationship holds practical value as it can be utilized to rectify errors in real-world scenarios. The rectification of \(R_{0}\) and of epidemic forecasting can be accomplished through our web-based application [55]. Overall, our study establishes a general criterion for modeling memory-dependent processes within the context of Markovian frameworks. Once the condition for the existence of a transient-state equivalence between Markovian and non-Markovian dynamics is fulfilled, epidemic forecasting and prevention evaluation can be carried out using the Markovian model, again based solely on the data collected from the transient phase. ## Results The overall structure of this work is depicted in Fig. 1. The section titled "Model" presents the model building of the age-stratified Susceptible-Infected-Removed (SIR) spreading dynamics (Fig. 1a), highlighting the difference between the Markovian (memoryless) and non-Markovian (memory-dependent) dynamics (Fig. 1b). In the "Dynamical equivalence" section, we demonstrate the equivalence between Markovian (memoryless) and non-Markovian (memory-dependent) dynamics for the steady state and the transient dynamics, which further leads to the accurate description of memory-dependent dynamics by the Markovian theory (Fig. 1c). The section "Markovian approximation of memory-dependent spreading dynamics" analyzes the errors of the Markovian approach in estimating \(R_{0}\), epidemic forecasting, and prevention evaluation (Fig. 1d). ### Model We articulate an age-stratified SIR spreading dynamics model, in which the entire population is partitioned into various age groups with intricate age-specific contact rates among them. The distribution of the population across different age groups is represented by an age distribution vector (\(\mathbf{p}\)), with the age-dependent contact matrix (\(\mathbf{A}\)) quantifying the transmission rates between different age groups. Both \(\mathbf{p}\) and \(\mathbf{A}\) can be constructed from empirical data [56; 57]. For convenience, to distinguish between the actual dynamical process and its theoretical treatment, throughout this paper we use the terms "memory-dependent" and "memoryless" to describe actual spreading processes and Monte Carlo simulations, while in various theoretical analyses, the corresponding terms are "non-Markovian" and "Markovian." 
The mechanism of disease transmission across different age groups and the recovery or death of infected individuals can be described by the SIR model, as illustrated in Fig. 1a, where the individuals are placed into three compartments: susceptible (S), infected (I), and removed (R). Susceptible individuals (S) have not contracted the disease and are at risk of being infected. Infected individuals (I) have contracted the disease and can infect others. Removed individuals (R) have recovered or died from the disease. There are two dynamical processes: (1) infection, during which susceptible individuals become infected by others and transition to the I state, thereby becoming capable of infecting others, as shown in Fig. 1a(i-iii); and (2) removal, during which infected individuals recover or die from the disease and transition to the R state, as shown in Fig. 1a(iv). The ability of an infected individual to infect others can be characterized by the infection time distribution, \(\psi_{\rm inf}(\tau)\), where \(\tau\) denotes the time elapsed since the individual was infected, and the probability of the infection process occurring during the time interval \([\tau,\tau+d\tau)\) is given by \(\psi_{\rm inf}(\tau)d\tau\), as shown in Fig. 1b(i). Likewise, the removal process is described by the removal time distribution, \(\psi_{\rm rem}(\tau)\), where the probability of a removal occurring within the time interval \([\tau,\tau+d\tau)\) is given by \(\psi_{\rm rem}(\tau)d\tau\), as shown in Fig. 1b(ii). The time distributions of the infection and removal processes with memory effects are general, with the exponential distributions associated with the memoryless process being a special case of the memory-dependent process. Figure 1: **Overall structure of this work.** (**a**) SIR model. Each individual belongs to one of the three states: susceptible (S), infected (I), or removed (R). When infected (i), a susceptible individual will switch into the I state (ii) and gain the ability to infect others (iii). An infected individual is removed (through recovery or death) with a probability (iv). (**b**) Non-Markovian versus Markovian process. The infection capacity of an infected individual is characterized by the infection time distribution \(\psi_{\rm inf}(\tau)\) and its removal can be described by the removal time distribution \(\psi_{\rm rem}(\tau)\). For the non-Markovian process, the distributions can assume quite general forms, while the distributions are exponential for a Markovian process. (**c**) Equivalence between non-Markovian and Markovian processes: (i) steady-state equivalence holds under all conditions; (ii) transient-state equivalence holds only when \(T_{\rm gen}\) is equal to \(T_{\rm rem}\). (**d**) Markovian estimation of a memory-dependent process. (i) The initial phase of the Monte Carlo simulation is used to fit the parameters according to the Markovian theory. (ii) Important issues such as the estimation of \(R_{0}\), epidemic forecasting, and the evaluation of the vaccination strategies can be addressed by the theory. (iii) The remaining data generated by the Monte Carlo simulation is used to test the accuracy of the estimated \(R_{0}\), epidemic forecasting, and prevention evaluation. The generic memory-dependent SIR spreading dynamics can be described by a set of deterministic integro-differential equations:
\[\frac{ds_{l}(t)}{dt}=-s_{l}(t)k\sum_{m=1}^{n}A_{lm}p_{m}\int_{0}^{t}\omega_{\rm inf}(t-t^{\prime})\Psi_{\rm rem}(t-t^{\prime})dc_{m}(t^{\prime}), \tag{1}\] \[i_{l}(t)=\int_{0}^{t}\Psi_{\rm rem}(t-t^{\prime})dc_{l}(t^{\prime}), \tag{2}\] \[r_{l}(t)=\int_{0}^{t}\big{[}1-\Psi_{\rm rem}(t-t^{\prime})\big{]}dc_{l}(t^{\prime}), \tag{3}\] where \(s_{l}(t)\), \(i_{l}(t)\), and \(r_{l}(t)\), respectively, denote the fractions of the susceptible, infected, and removed individuals in age group \(l\). The term \(c_{l}(t)=1-s_{l}(t)=i_{l}(t)+r_{l}(t)\) represents the fraction of cumulative infections (including both infections and removals) with respect to the total population in age group \(l\), while \(k\) is a parameter to adjust the overall contacts and \(n\) is the total number of age groups. The quantity \(\omega_{\rm inf}(\tau)\) represents the hazard function of \(\psi_{\rm inf}(\tau)\), meaning the rate at which infection happens at \(\tau\), given that the infection has not occurred before \(\tau\). \(\Psi_{\rm rem}(\tau)\) is the survival function of \(\psi_{\rm rem}(\tau)\), meaning the probability that the removal has not occurred by \(\tau\) (see **Method** for detailed calculations of the hazard and survival functions). When the infection and removal time distributions are known, Eqs. (1-3) provide an accurate description of generic (including memory-dependent and memoryless) SIR spreading Monte Carlo simulations in an age-stratified population-based system. As shown in Figs. 2a-c, the theory is validated by the agreement between the numerical solutions of Eqs. (1-3) and the results from direct Monte Carlo simulations. (See **Method** for a detailed description of the Monte Carlo simulation procedure.) Eqs. (1-3) provide a general framework encompassing both non-Markovian and Markovian descriptions. If the infection and removal time distributions are exponential, \(\psi_{\rm inf}(\tau)=\gamma e^{-\gamma\tau}\) and \(\psi_{\rm rem}(\tau)=\mu e^{-\mu\tau}\), Eqs. (1-3) can be reformulated into a Markovian theory and further simplified into a set of ordinary differential equations with the constant infection and removal rates \(\gamma\) and \(\mu\) (a detailed derivation of these equations is presented in Supplementary Note 1): \[\frac{ds_{l}(t)}{dt}=-s_{l}(t)k\gamma\sum_{m=1}^{n}A_{lm}p_{m}i_{m}(t), \tag{4}\] \[\frac{di_{l}(t)}{dt}=s_{l}(t)k\gamma\sum_{m=1}^{n}A_{lm}p_{m}i_{m}(t)-\mu i_{l}(t), \tag{5}\] \[\frac{dr_{l}(t)}{dt}=\mu i_{l}(t). \tag{6}\] While the non-Markovian and Markovian theories [Eqs. (1-3) and Eqs. (4-6), respectively] describe the memory-dependent and memoryless Monte Carlo simulations, we focus on whether the Markovian theory can accurately capture memory-dependent dynamics and how memory effects influence its accuracy. For this purpose, we seek to establish the equivalence between Markovian and non-Markovian approaches for describing spreading dynamics. ### Dynamical equivalence Eqs. (1-6) provide a basis for studying the steady-state and transient-state equivalence between non-Markovian and Markovian theories, where a steady state characterizes the long-term dynamics of disease spreading and a transient state refers to the short-term behavior before the system has reached the steady state.
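For concreteness, the minimal sketch below numerically integrates the Markovian equations (4-6) with a simple forward-Euler scheme for a toy two-group population. All numerical values (the age distribution `p`, contact matrix `A`, and rates `k`, `gamma`, `mu`) are illustrative assumptions, not parameters taken from this study.

```python
import numpy as np

def markovian_sir(p, A, k, gamma, mu, s0, i0, dt=0.01, t_max=100.0):
    """Forward-Euler integration of the age-stratified Markovian SIR
    model, Eqs. (4-6): ds_l/dt = -s_l * k * gamma * sum_m A_lm p_m i_m, etc."""
    s, i = s0.astype(float), i0.astype(float)
    ts, ss, ii = [], [], []
    for step in range(int(t_max / dt)):
        force = k * gamma * (A @ (p * i))  # force of infection on each age group
        s, i = s - dt * s * force, i + dt * (s * force - mu * i)
        ts.append(step * dt); ss.append(s.copy()); ii.append(i.copy())
    return np.array(ts), np.array(ss), np.array(ii)

# Illustrative two-age-group example (all numbers are assumptions).
p = np.array([0.3, 0.7])                 # age distribution vector
A = np.array([[4.0, 2.0], [2.0, 3.0]])   # age-dependent contact matrix
t, s, i = markovian_sir(p, A, k=1.0, gamma=0.2, mu=0.2,
                        s0=np.array([0.999, 0.999]),
                        i0=np.array([0.001, 0.001]))
r = 1.0 - s - i                          # removed fraction per group, Eq. (6)
```

Replacing the constant rates with the time-since-infection kernels \(\omega_{\rm inf}\) and \(\Psi_{\rm rem}\) of Eqs. (1-3) would require storing the infection history \(c_{m}(t^{\prime})\), which is precisely the computational burden of the non-Markovian description noted in the Introduction.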
As illustrated in Fig. 1c, steady-state equivalence means that the two types of spreading dynamics attain identical steady states [21; 24; 25], whereas transient-state equivalence implies that the two types of dynamics are consistent throughout the entire transmission period. Transient-state equivalence thus implies steady-state equivalence, but not vice versa. Since Eqs. (1-6) also provide a numerical framework for the Monte Carlo simulations, the terms "steady-state equivalence" and "transient-state equivalence" not only describe the connection between non-Markovian and Markovian theories, but also illustrate the relationship between memory-dependent and memoryless processes. Therefore, the equivalence between the two theories implies the equivalence between the two corresponding processes, and vice versa. #### Steady-state equivalence Eqs. (1-3) give the following transcendental equation for determining the steady state (see Supplementary Note 2 for a detailed derivation): \[\tilde{s}_{l}=\hat{s}_{l}e^{-\frac{R_{0}}{\Lambda_{\max}}\sum_{m=0}^{n}kA_{lm}p_{m}(\tilde{r}_{m}-\hat{r}_{m})}, \tag{7}\] where \(\tilde{s}_{l}=\lim\limits_{t\rightarrow+\infty}s_{l}(t)\) and \(\tilde{r}_{l}=\lim\limits_{t\rightarrow+\infty}r_{l}(t)\) denote the fractions of the susceptible and removed individuals in age group \(l\) at the steady state (note that \(\tilde{s}_{l}=1-\tilde{r}_{l}\), because at the steady state no infection exists), while \(\hat{s}_{l}=s_{l}(0)\) and \(\hat{r}_{l}=r_{l}(0)\) represent the initial fractions of the susceptible and removed individuals in this age group. For non-Markovian dynamics, the basic reproduction number \(R_{0}\) can be determined by: \[R_{0}=\Lambda_{\max}\int_{0}^{+\infty}\omega_{\rm inf}(\tau)\Psi_{\rm rem}(\tau)d\tau. \tag{8}\] For Markovian dynamics, \(R_{0}\) is given by \[R_{0}=\frac{\gamma\Lambda_{\max}}{\mu}, \tag{9}\] where \(\Lambda_{\max}\) is the maximum eigenvalue of the matrix \(k\mathbf{A}\circ\mathbf{p}\), and \(\circ\) denotes a row-wise Hadamard product between a matrix and a vector (see Supplementary Note 2 for a detailed description). Since Eq. (7) applies to both non-Markovian and Markovian dynamics, an identical \(R_{0}\) value in the two cases will result in equivalent steady states from the same initial conditions. Consequently, for a given non-Markovian spreading process, there exists an infinite number of Markovian models with the same steady state, as the \(R_{0}\) value is determined only by the ratio of \(\gamma\) to \(\mu\), not by either value individually. As shown in Fig. 2d, memory-dependent and memoryless spreading dynamics that reach the same steady state with the identical \(R_{0}\) value confirm the steady-state equivalence. Fig. 2e demonstrates that, even for \(R_{0}\) ranging from \(0.01\) to \(2\), the equivalent memory-dependent and memoryless spreading dynamics still produce highly consistent steady states that can be calculated from Eq. (7), and share the same critical point of the phase transition at \(R_{0}=1\). #### Transient-state equivalence In the preceding section, we used the basic reproduction number \(R_{0}\), a fundamental metric quantifying the number of secondary infections generated by a single individual, to characterize the steady-state equivalence. Here, we propose to quantify the transient-state equivalence through the average generation time \(T_{\rm gen}\), which measures the "velocity" at which secondary infections occur. 
This time can be calculated as [3; 58], \[T_{\rm gen}=\int_{0}^{+\infty}\tau\psi_{\rm gen}(\tau)d\tau,\] where \[\psi_{\rm gen}(\tau)=\frac{\omega_{\rm inf}(\tau)\Psi_{\rm rem}(\tau)}{\int_{0}^{+\infty}\omega_{\rm inf}(\tau^{\prime})\Psi_{\rm rem}(\tau^{\prime})d\tau^{\prime}} \tag{10}\] is the generation time distribution. Effectively, \(T_{\rm gen}\) measures the average duration of disease transmission from an infected individual to the next generation of individuals. Likewise, the average infection time \(T_{\rm inf}\) and the average removal time \(T_{\rm rem}\) are defined as the mean values of the infection and removal time distributions: \[T_{\rm inf}=\int_{0}^{+\infty}\tau\psi_{\rm inf}(\tau)d\tau,\qquad T_{\rm rem}=\int_{0}^{+\infty}\tau\psi_{\rm rem}(\tau)d\tau.\] Figure 2: **Steady-state and transient-state equivalence between Markovian and non-Markovian dynamics.** **a–c**: The solid brown, blue, and green curves represent the theoretical results of the susceptible, infected, and removed fractions, while the solid orange, red, and purple curves show the corresponding results of 100 independent Monte Carlo simulations. **d**: The red and blue curves, respectively, depict the removed fractions from the memory-dependent and memoryless Monte Carlo simulations of 100 independent realizations with steady-state equivalence. **e**: Red + and blue \(\times\) markers, respectively, represent the steady-state removed fractions of memory-dependent and memoryless Monte Carlo simulations for different values of \(R_{0}\), where each marker is the result of averaging 100 independent simulations. The orange curve shows the numerical calculation from Eq. (7), and the vertical dashed line denotes the critical point \(R_{0}=1\). **f–g**: For \(T_{\rm gen}=T_{\rm rem}\) in the non-Markovian theory: the blue and green curves in **f** denote the susceptible and removed fractions, while the black \(\times\) markers represent the inferred susceptible fractions calculated by substituting the removed fractions into Eq. (11), which agree with the susceptible curve calculated from Eq. (1). The red and blue curves in **g** denote the non-Markovian susceptible and removed fractions, while the orange and purple dashed curves are the corresponding curves of the Markovian transmission obtained from Eqs. (13–14), which agree with the non-Markovian results. (The Euler-Lotka equation assumes exponential growth of a disease outbreak during the initial stage. As a result, the Markovian curves in **g** slightly deviate from the non-Markovian ones as the cumulative infections increase.) **h–i**: For \(T_{\rm gen}\neq T_{\rm rem}\) in the non-Markovian theory, the inferred susceptible curve does not match the numerical result, and the susceptible and infected curves of the Markovian transmission obtained from Eqs. (13–14) do not match the corresponding non-Markovian results. **j**: Five scenarios for the non-Markovian time-distribution setting (within each scenario, \(T_{\rm inf}\) and \(T_{\rm rem}\) are fixed): Weibull, \(T_{\rm inf}=5\), \(T_{\rm rem}=7\) (blue +); Weibull, \(T_{\rm inf}=5\), \(T_{\rm rem}=5\) (red \(\times\)); Weibull, \(T_{\rm inf}=7\), \(T_{\rm rem}=7\) (purple \(\Box\)); log-normal, \(T_{\rm inf}=5\), \(T_{\rm rem}=7\) (green \(\Delta\)); gamma, \(T_{\rm inf}=5\), \(T_{\rm rem}=7\) (orange \(\Diamond\)). The value of \(T_{\rm gen}\) is modified to adjust \(\ln\eta\) for better visualization. 
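Eqs. (8) and (10) can be evaluated numerically for any pair of time distributions. The sketch below is a minimal illustration using SciPy quadrature; the distribution parameters and the value of \(\Lambda_{\max}\) are assumptions chosen for demonstration. For exponential distributions it reproduces Eq. (9) and the memoryless equality \(T_{\rm gen}=T_{\rm rem}=1/\mu\) discussed next.

```python
import numpy as np
from scipy import integrate, stats

def R0_and_Tgen(hazard_inf, surv_rem, lam_max, tau_max=60.0):
    """Evaluate R0 from Eq. (8) and T_gen as the mean of the generation-time
    distribution of Eq. (10); tau_max must cover the distribution support."""
    w = lambda t: hazard_inf(t) * surv_rem(t)
    norm, _ = integrate.quad(w, 0.0, tau_max, limit=200)
    tgen, _ = integrate.quad(lambda t: t * w(t), 0.0, tau_max, limit=200)
    return lam_max * norm, tgen / norm

lam_max = 5.0  # illustrative maximum eigenvalue of k*A∘p (an assumption)

# Memoryless check: constant hazard gamma and Psi_rem(t) = exp(-mu*t)
# recover Eq. (9), R0 = gamma*lam_max/mu, and T_gen = T_rem = 1/mu.
gamma, mu = 0.2, 0.25
print(R0_and_Tgen(lambda t: gamma, lambda t: np.exp(-mu * t), lam_max))

# Memory-dependent example with Weibull infection and removal times.
inf_d = stats.weibull_min(c=2.0, scale=5.0)
rem_d = stats.weibull_min(c=1.5, scale=7.0)
hazard = lambda t: inf_d.pdf(t) / inf_d.sf(t)  # omega_inf = psi_inf / Psi_inf
print(R0_and_Tgen(hazard, rem_d.sf, lam_max))  # compare T_gen with rem_d.mean()
```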
In calculating \(T_{\rm gen}\), the individual's removal is taken into account, while \(T_{\rm inf}\) measures the average time of the first disease transmission of an infectious individual without factoring in removal. In the classical memoryless transmission with exponential distributions \(\psi_{\rm inf}(\tau)\) and \(\psi_{\rm rem}(\tau)\), the equality \(T_{\rm gen}=T_{\rm rem}\) holds. However, for memory-dependent spreading, the possible scenarios are: \(T_{\rm gen}=T_{\rm rem}\), \(T_{\rm gen}<T_{\rm rem}\), or \(T_{\rm gen}>T_{\rm rem}\). Specifically, because \(T_{\rm gen}\), \(T_{\rm inf}\) and \(T_{\rm rem}\) all represent the mean values of distributions, it is possible for \(T_{\rm rem}\) to be shorter than \(T_{\rm gen}\) or \(T_{\rm inf}\) in some situations. Our web-based application demonstrates the impact of the parameters on the time distributions (infection, removal, and generation) as well as on their average times [55]. For a non-Markovian spreading process, if the equality \(T_{\rm gen}=T_{\rm rem}\) holds, in the transient state we have \[s_{l}(t)\simeq\hat{s}_{l}e^{-\frac{R_{0}}{\Lambda_{\rm max}}\sum_{m=0}^{n}kA_{lm}p_{m}[r_{m}(t)-\hat{r}_{m}]}, \tag{11}\] which exhibits a memoryless transmission pattern similar to Markovian dynamics (see Supplementary Note 3 for a detailed analysis). Intuitively, the equality \(T_{\rm gen}=T_{\rm rem}\) signifies that the infection and removal processes occur concurrently, which in turn leads to a memoryless relationship between the two processes, thereby minimizing the memory effects. Furthermore, to determine the parameters \(\gamma\) and \(\mu\) of the Markovian transmission that is equivalent to the non-Markovian dynamics in the transient state, we utilize the Euler-Lotka equation [50; 58; 59]: \[1=R_{0}\int_{0}^{+\infty}e^{-g\tau}\psi_{\rm gen}(\tau)d\tau, \tag{12}\] where \(g\) denotes the growth rate of the non-Markovian dynamics and is another measure of how quickly the epidemic is spreading within a population. Therefore, we can calculate the values of the basic reproduction number, \(R_{0}\), and the growth rate, \(g\), of the non-Markovian dynamics by using Eqs. (8), (10) and (12). Additionally, the Markovian form of \(\psi_{\rm gen}(\tau)\) according to Eq. (10) is \(\mu e^{-\mu\tau}\), and the equivalent Markovian and non-Markovian dynamics in the transient state have the same values of \(R_{0}\) and equal values of \(g\). By substituting \(\psi_{\rm gen}(\tau)=\mu e^{-\mu\tau}\) and the calculated \(R_{0}\) and \(g\) into Eq. (12), we can determine the value of \(\mu\). Furthermore, using Eq. (9), we can find the value of \(\gamma\) based on \(\mu\). Hence, the Markovian parameters \(\gamma\) and \(\mu\) are determined as follows: \[\gamma=\frac{gR_{0}}{\Lambda_{\rm max}(R_{0}-1)}, \tag{13}\] \[\mu=\frac{g}{R_{0}-1}. \tag{14}\] We provide visualizations that illustrate how the values of \(\gamma\) and \(\mu\) are influenced by the distribution parameters in our web-based application [55]. As illustrated in Figs. 2f-g, when the equality \(T_{\rm gen}=T_{\rm rem}\) holds for the non-Markovian dynamics, Eq. (11) holds, which can be seen by comparing the susceptible curve calculated from Eq. (1) to that inferred from Eq. (11), as shown in Fig. 2f. In this case, the Markovian spreading curves deduced from Eqs. (13-14) closely align with the non-Markovian transient curves, as shown in Fig. 2g.
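The procedure just described is straightforward to implement: solve Eq. (12) for the growth rate \(g\) given \(R_{0}\) and \(\psi_{\rm gen}(\tau)\), then apply Eqs. (13-14). In the minimal sketch below, the gamma-distributed \(\psi_{\rm gen}\), the value of \(R_{0}\), and \(\Lambda_{\max}\) are illustrative assumptions; the final printed value checks that the resulting parameters satisfy Eq. (9).

```python
import numpy as np
from scipy import integrate, optimize, stats

def growth_rate(R0, psi_gen, tau_max=60.0):
    """Solve the Euler-Lotka equation, Eq. (12):
    1 = R0 * integral of exp(-g*tau) * psi_gen(tau) dtau, for g."""
    def residual(g):
        val, _ = integrate.quad(lambda t: np.exp(-g * t) * psi_gen(t), 0.0, tau_max)
        return R0 * val - 1.0
    return optimize.brentq(residual, 1e-8, 10.0)  # assumes R0 > 1 (growing outbreak)

def equivalent_markovian(R0, g, lam_max):
    """Markovian rates from Eqs. (13)-(14)."""
    return g * R0 / (lam_max * (R0 - 1.0)), g / (R0 - 1.0)

# Illustrative inputs (assumptions): gamma-distributed generation times.
psi_gen = stats.gamma(a=3.0, scale=2.0).pdf   # mean generation time T_gen = 6
R0, lam_max = 2.5, 5.0
g = growth_rate(R0, psi_gen)
gam, mu = equivalent_markovian(R0, g, lam_max)
print(g, gam, mu, gam * lam_max / mu)  # last value recovers R0, per Eq. (9)
```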
However, as shown in Figs. 2h-i, if the equality does not hold, the equivalence in the transient states breaks down. It is important to note that the Euler-Lotka equation assumes an exponential growth of a disease outbreak and is only reasonable in the initial stage. Consequently, as the cumulative infections increase (Fig. 2g), the Markovian curves will exhibit slight deviations from the non-Markovian counterparts. Meanwhile, because the equivalent dynamics share the same \(R_{0}\), they will ultimately reach the same steady state, ensuring that the deviations diminish as the dynamics approach the steady state. To evaluate the equivalence under different values of the generation-to-removal time ratio \(\eta\equiv T_{\rm gen}/T_{\rm rem}\) for non-Markovian dynamics, we introduce a metric, \(\varepsilon\), to quantify the difference from the corresponding Markovian results calculated from Eqs. (13-14) (see **Method** for the detailed definition of \(\varepsilon\)). Fig. 2j shows, for the non-Markovian numerical calculations, five scenarios under various forms of the time distributions \(\psi_{\rm inf}(\tau)\) and \(\psi_{\rm rem}(\tau)\) constrained by certain average infection and removal times: Weibull, \(T_{\rm inf}=5\), \(T_{\rm rem}=7\); Weibull, \(T_{\rm inf}=7\), \(T_{\rm rem}=7\); Weibull, \(T_{\rm inf}=5\), \(T_{\rm rem}=5\); log-normal, \(T_{\rm inf}=5\), \(T_{\rm rem}=7\); and gamma, \(T_{\rm inf}=5\), \(T_{\rm rem}=7\) (see **Method** for the detailed definitions of the Weibull, log-normal, and gamma distributions). The \(T_{\rm gen}\) value is adjusted to obtain different values of \(\ln\eta\). For \(\ln\eta=0\), i.e., \(T_{\rm gen}=T_{\rm rem}\), the "distance" \(\varepsilon\) between the transient states of the non-Markovian and Markovian dynamics with parameters determined from Eqs. (13-14) is minimal. Otherwise, \(\varepsilon\) increases as \(\ln\eta\) deviates from zero. Remarkably, Fig. 2j shows that \(\varepsilon\) depends only on the ratio of \(T_{\rm gen}\) to \(T_{\rm rem}\), not on the values of \(T_{\rm gen}\), \(T_{\rm inf}\), or \(T_{\rm rem}\), and it is not affected by the specific form of the time distributions. Furthermore, it is important to note that the condition \(T_{\rm gen}=T_{\rm rem}\) guarantees transient-state equivalence between a non-Markovian process and a Markovian one, but according to Eqs. (13-14), it does not imply that the average generation and removal times of the non-Markovian process must be equal to those of the equivalent Markovian one. For instance, if a non-Markovian process satisfies the condition of transient-state equivalence and we keep its average generation and removal times fixed, altering the shape of the corresponding time distributions will change the transmission speed [58]. This change, in turn, affects the infection and removal rates of the equivalent Markovian dynamics, leading to different average generation and removal times for the equivalent Markovian process (see Supplementary Notes 4 and 5 for a detailed analysis). ### Markovian approximation of memory-dependent spreading dynamics As illustrated in Fig. 1d, testing the applicability of the Markovian theory for memory-dependent spreading dynamics requires three steps. The first step is fitting, where the memory-dependent Monte Carlo simulation data are divided into two parts: (a) a short initial period used as the training data for fitting the Markovian parameters in Eqs. (4-6), and (b) the remaining testing data for evaluating the performance of the Markovian model (see **Method** for details of the fitting procedure). 
The second step is to use the fitted Markovian model for tasks such as estimating \(R_{0}\), predicting outbreaks, and assessing the prevention effects of different vaccination strategies. The third step is testing, i.e., evaluating the accuracy of the Markovian model, e.g., by comparing the estimated and actual \(R_{0}\) values, disease outbreaks, and prevention effects. As real-world disease spreading is subject to environmental, social, and political disturbances, for the fitting and testing steps we conduct Monte Carlo simulations of stochastic memory-dependent disease outbreaks to generate the training and testing data. Here, we first analyze the influence of \(\eta\) on the estimation of \(R_{0}\) using the Markovian theory, and use the results to design two tasks to evaluate the applicability of the theory in epidemic forecasting and prevention evaluation of memory-dependent spreading. For comparison, we also generate the corresponding results from the non-Markovian theory in the two tasks. #### Estimation of basic reproduction number Estimating the basic reproduction number \(R_{0}\) is crucial for determining the ultimate prevalence of disease spreading and for assessing the effectiveness of various disease containment measures [50; 58; 59]. When using the Markovian theory to fit the early-stage transmission of a memory-dependent process, a key parameter that can affect the estimation of \(R_{0}\) is the ratio \(\eta\). To develop an analysis, recall the basic principle for estimating \(R_{0}\): disease spreading dynamics can be viewed as a combination of two parallel processes, infection and removal. In particular, the infection process is the reproduction of the disease within each generation, where each infected individual generates an average of \(R_{0}\) newly infected individuals in the subsequent generation after a mean time period \(T_{\rm gen}\). In the removal process, infected individuals are removed from the spreading chain, where each generation takes an average time \(T_{\rm rem}\) to be removed. For a Markovian type of dynamics with constant \(\gamma\) and \(\mu\), the equality \(T_{\rm gen}=T_{\rm rem}\) holds. Consequently, during the Markovian fitting step, the average number of new infections upon the removal of a single infected individual is taken as the value of \(R_{0}\). For memory-dependent spreading, if the equality \(T_{\rm gen}=T_{\rm rem}\) holds, the memory-dependent spreading curves will possess an approximately memoryless feature, so that \(R_{0}\) can still be estimated by counting the number of new infections at the time when the current generation of infections is removed, as shown in Fig. 3a. However, for \(T_{\rm gen}<T_{\rm rem}\), more than one generation is produced by the time the current generation is removed, so the \(R_{0}\) estimated by the Markovian theory will be an overestimate, as shown in Fig. 3b. For \(T_{\rm gen}>T_{\rm rem}\), less than one generation is created during \(T_{\rm rem}\), so the Markovian theory will give an underestimate of \(R_{0}\), as shown in Fig. 3c. Fig. 3d shows fitting curves (training data) from the early stage of memory-dependent spreading simulations that have identical \(R_{0}\) values, for \(T_{\rm gen}=T_{\rm rem}\) (red curves), \(T_{\rm gen}<T_{\rm rem}\) (blue curves), and \(T_{\rm gen}>T_{\rm rem}\) (green curves), all fitted by the Markovian theory. When the equality \(T_{\rm gen}=T_{\rm rem}\) holds, the Markovian theory with fitted parameters accurately predicts the future evolution (red \(\times\) symbols). 
For \(T_{\rm gen}<T_{\rm rem}\), the outbreak in the initial stage is accelerated, resulting in an overestimation by the Markovian theory (blue \(+\) symbols). For \(T_{\rm gen}>T_{\rm rem}\), the initial outbreak is decelerated, leading to an underestimation by the Markovian approach (green \(\triangle\) symbols). The above qualitative insights lead to a semi-empirical relationship between the Markovian-estimated basic reproduction number \(\hat{R}_{0}\) and its actual value \(R_{0}\): \[\hat{R}_{0}=(R_{0})^{\eta^{-a}}, \tag{15}\] where \(a\) is a positive coefficient (see **Method** for a detailed derivation). The coefficient \(a\) is a constant in Eq. (15) that needs to be determined by fitting to the data. Once the constant \(a\) is obtained, the actual value of \(R_{0}\) can be derived by adjusting the estimated \(\hat{R}_{0}\) based on Eq. (15), and a more accurate steady state can be calculated by using Eq. (7). Figure 3: **Estimation of \(R_{0}\)**. **a**–**c**: Mechanism of the \(R_{0}\) estimation. The red arrows represent the infection process of the next generation by the current generation, while the dashed arrows denote the removal of the current generation. The relationship between \(T_{\rm gen}\) and \(T_{\rm rem}\) influences the number of new infections when the current generation of infections is removed. For \(T_{\rm gen}=T_{\rm rem}\), the number of new infections is exactly \(R_{0}\). For \(T_{\rm gen}<T_{\rm rem}\), the number of new infections is greater than \(R_{0}\). For \(T_{\rm gen}>T_{\rm rem}\), the number of new infections is smaller than \(R_{0}\). **d**: Three distinct categories of disease spreading with the same value of \(R_{0}\): \(T_{\rm gen}<T_{\rm rem}\) (blue curves), \(T_{\rm gen}=T_{\rm rem}\) (red curves), and \(T_{\rm gen}>T_{\rm rem}\) (green curves), where the fractions of cumulative infected individuals (i.e., sum of infected and removed fractions) are calculated using 100 independent realizations. The predicted future evolutions of the spreading dynamics by the Markovian theory with the fitted parameters are also shown: \(T_{\rm gen}<T_{\rm rem}\) (blue \(+\) symbols), \(T_{\rm gen}=T_{\rm rem}\) (red \(\times\) symbols), and \(T_{\rm gen}>T_{\rm rem}\) (green \(\triangle\) symbols), where the gray area marks the average cumulative infected fraction for selecting the training data (see **Method** for details). **e**: The relationship between \(\ln{(\ln{R_{0}}/\ln{\hat{R}_{0}})}\) (\(R_{0}\) represents the real basic reproduction number, while \(\hat{R}_{0}\) denotes the estimated one) and \(\ln{\eta}\), where the horizontal and vertical dotted lines show that the equality between \(T_{\rm gen}\) and \(T_{\rm rem}\) results in an accurate estimation of \(R_{0}\), and the dashed line represents a linear fit with slope 1.55. Inset: the relation between \(\ln{\hat{R}_{0}}\) and \(\eta\), with the asymptotic behaviors: for \(\eta\to 0\), \(\hat{R}_{0}\to+\infty\) (\(\ln{\hat{R}_{0}}\to+\infty\)), and for \(\eta\to+\infty\), \(\hat{R}_{0}\to 1\) (\(\ln{\hat{R}_{0}}\to 0\)). Eq. (15) implies the relationship \(\ln\left(\ln R_{0}/\ln\hat{R}_{0}\right)=a\ln\eta\). We use the five scenarios specified in Fig. 2j for the memory-dependent Monte Carlo simulations. Fig. 3e shows the linear relationship between \(\ln\left(\ln R_{0}/\ln\hat{R}_{0}\right)\) and \(\ln\eta\), providing support for our qualitative analysis of the Markovian estimation. 
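Inverting Eq. (15) gives \(R_{0}=\hat{R}_{0}^{\,\eta^{a}}\), so a Markovian estimate can be rectified once \(\eta\) is known. The snippet below is a minimal sketch of this correction, using the coefficient \(a=1.55\) obtained from the fit reported in the next paragraph; the numerical inputs are illustrative assumptions.

```python
def rectify_R0(R0_hat, T_gen, T_rem, a=1.55):
    """Invert Eq. (15), R0_hat = R0**(eta**-a), with eta = T_gen/T_rem,
    to recover the actual R0 from the Markovian estimate."""
    eta = T_gen / T_rem
    return R0_hat ** (eta ** a)

# Example: with T_gen < T_rem the Markovian fit overestimates R0,
# so the rectified value is smaller than the raw estimate.
print(rectify_R0(R0_hat=3.2, T_gen=5.0, T_rem=7.0))  # ~2.0, below 3.2
```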
The estimation of \(R_{0}\) thus also depends on the ratio \(\eta\) and is relatively insensitive to the particular forms of the time distributions or the specific values of \(T_{\rm gen}\), \(T_{\rm inf}\), or \(T_{\rm rem}\). The results in the inset of Fig. 3e further confirm that the estimated \(\hat{R}_{0}\) approaches one when \(T_{\rm gen}\) is much larger than \(T_{\rm rem}\) and tends to \(+\infty\) when \(T_{\rm gen}\) is much smaller than \(T_{\rm rem}\). By fitting the available data, we have determined the value of \(a\) to be \(1.55\). After obtaining the value of \(a\), we developed our web-based application for rectifying \(R_{0}\) and epidemic forecasting [55]. #### Epidemic forecasting As suggested in Fig. 1d, we evaluate the efficacy of the Markovian theory for epidemic forecasting. We use the initial period of the Monte Carlo simulation data to fit parameters under both the Markovian and non-Markovian hypotheses and then to predict future disease outbreaks. The remaining simulation data are leveraged to evaluate the accuracy of the Markovian and non-Markovian forecasting results. Regardless of the type of time distributions in the memory-dependent Monte Carlo simulations (Weibull, log-normal, or gamma), the non-Markovian model fits the training data in a consistent manner, i.e., by selecting the Weibull time distribution. Figs. 4a-c show the evolution of the spreading dynamics from three types of memory-dependent Monte Carlo simulations with Weibull infection and removal distributions, where the shape parameters \(\alpha_{\rm inf}\) and \(\alpha_{\rm rem}\) are selected according to \(\ln\alpha_{\rm inf}=-0.3,\ln\alpha_{\rm rem}=1.2\) (Fig. 4a), \(\ln\alpha_{\rm inf}=0.45,\ln\alpha_{\rm rem}=0.45\) (Fig. 4b), and \(\ln\alpha_{\rm inf}=1.2,\ln\alpha_{\rm rem}=-0.3\) (Fig. 4c), for \(T_{\rm inf}=5\) and \(T_{\rm rem}=7\). For the Weibull distributions, we have \(\alpha_{\rm inf}<\alpha_{\rm rem}\), \(\alpha_{\rm inf}=\alpha_{\rm rem}\) and \(\alpha_{\rm inf}>\alpha_{\rm rem}\), corresponding to \(T_{\rm gen}<T_{\rm rem}\), \(T_{\rm gen}=T_{\rm rem}\), and \(T_{\rm gen}>T_{\rm rem}\), respectively. We compare the simulated cumulative infected fractions to those predicted by the Markovian and non-Markovian theories. In general, the non-Markovian theory provides more accurate predictions than the Markovian theory. For the specific parameter setting \(\ln\alpha_{\rm inf}=0.45,\ln\alpha_{\rm rem}=0.45\) (i.e., \(T_{\rm gen}=T_{\rm rem}\)), both theories yield a high accuracy. The accuracy can be assessed through the forecasting error \(\varepsilon^{+}\), which evaluates whether a theory overestimates or underestimates the steady-state cumulative infection, i.e., quantifies the extent of deviation between the results obtained from the Markovian or non-Markovian theories and those derived from the Monte Carlo simulations (see **Method** for the detailed definition of \(\varepsilon^{+}\)). A positive value of \(\varepsilon^{+}\) indicates overestimation, while a negative value indicates underestimation. We evaluate the accuracy measure \(\varepsilon^{+}\) in the parameter plane of \(\ln\alpha_{\rm inf}\) and \(\ln\alpha_{\rm rem}\), ranging from \(-0.3\) to \(1.2\). Figs. 
4d-e show that the Markovian accuracy is sensitive to parameter changes: the forecast is underestimated if \(\alpha_{\rm inf}\) is greater than \(\alpha_{\rm rem}\) (\(T_{\rm gen}>T_{\rm rem}\)), overestimated when \(\alpha_{\rm inf}\) is smaller than \(\alpha_{\rm rem}\) (\(T_{\rm gen}<T_{\rm rem}\)), and a high forecasting accuracy is achieved only for \(\alpha_{\rm inf}=\alpha_{\rm rem}\) (\(T_{\rm gen}=T_{\rm rem}\)). In contrast, the non-Markovian theory yields highly accurate results in the whole parameter plane, with only a slight underestimation for \(\alpha_{\rm inf}\gg\alpha_{\rm rem}\) (\(T_{\rm gen}\gg T_{\rm rem}\)); this is primarily due to the increased difficulty in fitting the simulation data, as the simulation parameters become increasingly unrealistic. Using the five scenarios specified in Fig. 2j for the memory-dependent Monte Carlo simulations, we obtain the relationship between \(\varepsilon^{+}\) and \(\ln\eta\), as shown in Fig. 4f. It can be seen that, in the Markovian framework, an overestimation arises for \(T_{\rm gen}<T_{\rm rem}\), and an underestimation occurs for \(T_{\rm gen}>T_{\rm rem}\). Only when \(T_{\rm gen}=T_{\rm rem}\) is an accurate estimate achieved. In general, the non-Markovian theory provides much more accurate forecasting than the Markovian theory, especially when \(T_{\rm gen}\) and \(T_{\rm rem}\) are not equal. The results further illustrate that the forms of the time distributions and the specific values of \(T_{\rm gen}\), \(T_{\rm inf}\), or \(T_{\rm rem}\) have little impact on the forecasting accuracy. To establish the relevance of these results to real-world diseases, we obtain the \(\psi_{\rm inf}(\tau)\) and \(\psi_{\rm rem}(\tau)\) relations for four known infectious diseases, including COVID-19, SARS, H1N1 influenza, and smallpox, using the information in Refs. [40; 41; 42; 43; 44; 45; 46; 47]. We then calculate the corresponding values of \(\varepsilon^{+}\) and \(\ln\eta\) based on the Markovian and non-Markovian approaches. As demonstrated in Fig. 4f, the positions of the four diseases in the \((\ln\eta,\,\varepsilon^{+})\) plane are consistent with the results of our estimations. Because the data come from reports of laboratory-confirmed cases, incorporating the effects of quarantine and distancing from susceptible individuals after confirmation of the diagnosis, the \(T_{\rm gen}\) values of the four diseases are all smaller than the corresponding values of \(T_{\rm rem}\), leading to some overestimation in the Markovian forecasting results. #### Evaluation of vaccination strategies In the development and application of a theory for disease spreading, assessing the effects of different vaccination strategies is an important task. Here we consider five prioritization strategies for vaccine distribution [6]: individuals under 20 years (denoted as \(m=1\)), adults between 20 and 49 years (\(m=2\)), adults above 20 years (\(m=3\)), adults above 60 years (\(m=4\)), and all age groups (\(m=5\)), and implement these strategies in Monte Carlo simulations. Fig. 5a shows the results of the epidemic evolution in comparison with those without any vaccination intervention (\(m=0\)), where the shape parameters are chosen according to \(\ln\alpha_{\rm inf}=0.45\) and \(\ln\alpha_{\rm rem}=0.45\) (\(T_{\rm gen}=T_{\rm rem}\)). Figs. 
5b-c show the results from the Markovian and non-Markovian theories, respectively, with the corresponding fitted parameters for the vaccination strategies, in comparison with those without vaccination (see **Method** for the detailed procedure of vaccination in the theoretical calculation). These results indicate that the Markovian and non-Markovian theories yield the correct epidemic evolution and future outbreaks under different vaccination scenarios. To characterize the effectiveness of different vaccination strategies in blocking disease transmission, we introduce a vector, \(\delta\), whose \(m\)-th element quantifies the cumulative infected fraction with the \(m\)-th vaccination strategy in the steady state: \(\delta_{m}=\tilde{c}_{m}\), for \(m=0,\ldots,5\). Fig. 5d shows the \(\delta\) vectors from the Monte Carlo simulation and the Markovian and non-Markovian theories of Figs. 5a-c, respectively. We further introduce a metric, the so-called prevention evaluation error \(\varepsilon^{*}\), that gauges the ability of the Markovian and non-Markovian theories to estimate the total effectiveness of vaccination, i.e., measuring the disparity between the results calculated by the Markovian or non-Markovian theories and those obtained through Monte Carlo simulations considering various vaccination strategies (see **Method** for the detailed definition of \(\varepsilon^{*}\)). Figure 4: **Epidemic Forecasting.** **a-c:** Predicted evolution of the cumulative infected fraction (i.e., sum of infected and removed fractions) by the Markovian (orange dotted curves) and non-Markovian (red dashed curves) theories, in comparison with the Monte Carlo simulations with Weibull time distributions (blue solid curves), for three sets of simulation parameters, respectively. The results are the averages of 100 independent realizations with the standard deviations indicated by the shaded regions. The gray area marks the average cumulative infected fraction for training data selection. **d-e:** The forecasting errors \(\varepsilon^{+}\) of the Markovian and non-Markovian theories with respect to the memory-dependent Monte Carlo simulations in the parameter plane of \(\ln\alpha_{\rm inf}\) and \(\ln\alpha_{\rm rem}\) in the range \([-0.3,1.2]\). The green, blue and red squares mark the parameters of the Monte Carlo simulations in **a-c**, respectively. **f:** The forecasting errors \(\varepsilon^{+}\) from the Markovian and non-Markovian theories for different values of \(\ln\eta\) under five scenarios of time-distribution settings for the Monte Carlo simulations. The corresponding estimations for a number of real-world diseases (COVID-19, SARS, H1N1 and Smallpox) are also included. Figs. 5e-f show the average values of \(\varepsilon^{*}\) of the two theories in the simulation parameter plane using 100 independent realizations, which are similar to those in Figs. 4d-e, indicating that the error mainly comes from the \(R_{0}\) estimation. In general, the Markovian theory performs well only in the diagonal area of the parameter plane where \(\alpha_{\rm inf}=\alpha_{\rm rem}\), as shown in Fig. 5e, and the non-Markovian theory outperforms the Markovian counterpart in most cases, as shown in Fig. 5f. Meanwhile, we assess the ability of both the Markovian and non-Markovian theories to detect the optimal vaccination strategy. 
We also define a quantity, the optimization failure probability \(\hat{\varepsilon}\), to quantify the probability of a theory failing to identify the optimal strategy, i.e., the strategy that leads to the lowest cumulative infection among the five (see **Method** for the detailed definition of \(\hat{\varepsilon}\)). Figs. 5g-h illustrate the results of \(\hat{\varepsilon}\) for the two theories within the parameter plane \((\ln\alpha_{\rm inf},\ln\alpha_{\rm rem})\). While the non-Markovian theory still demonstrates superior performance, the Markovian approach proves capable of identifying the optimal strategy across a significantly larger parameter space than that of the accurate Markovian prevention evaluations depicted in Fig. 5e. We obtain the relationships between \(\varepsilon^{*}\) and \(\ln\eta\), as well as between \(\hat{\varepsilon}\) and \(\ln\eta\), as shown in Figs. 5i-j with the same five time-distribution scenarios as in Fig. 2j. Figure 5: **Evaluation of vaccination strategies.** **a–c**: For simulation parameters chosen according to \(\ln\alpha_{\rm inf}=0.45\) and \(\ln\alpha_{\rm rem}=0.45\), the cumulative infected fraction (i.e., sum of infected and removed fractions) curves from Monte Carlo simulations and the corresponding Markovian and non-Markovian theories with fitting parameters for five vaccination strategies and the case of no vaccination. The average results are obtained from 100 independent realizations with the shaded regions representing the standard deviations. The gray area marks the average cumulative infected fraction for training data selection. **d**: Vector \(\delta\) calculated from the results in Figs. 5a–c. **e–f**: The prevention evaluation errors \(\varepsilon^{*}\) of the Markovian and non-Markovian theories for evaluating the effects of vaccination prevention in the parameter plane \((\ln\alpha_{\rm inf},\ln\alpha_{\rm rem})\). The green squares mark the selected parameters in **a–c**. **g–h**: The optimization failure probabilities \(\hat{\varepsilon}\) arising from the Markovian and non-Markovian theories within the parameter plane \((\ln\alpha_{\rm inf},\ln\alpha_{\rm rem})\). The green squares mark the selected parameters in **a–c**. **i**: The prevention evaluation errors \(\varepsilon^{*}\) from the Markovian and non-Markovian theories versus \(\ln\eta\) under five scenarios of time-distribution settings for the Monte Carlo simulations. The estimated errors for four real diseases (COVID-19, SARS, H1N1 and Smallpox) are also shown. **j**: The optimization failure probabilities \(\hat{\varepsilon}\) from the Markovian and non-Markovian theories against \(\ln\eta\) in five different time-distribution scenarios for the Monte Carlo simulations. The optimization failure probabilities for four real diseases (COVID-19, SARS, H1N1 and Smallpox) are also presented. In all cases of Fig. 5i, \(\varepsilon^{*}\) reaches a minimum for \(\ln\eta=0\) and increases as \(\ln\eta\) deviates from zero. The agreement of the results from the five scenarios further illustrates that the forms of the time distributions or the specific values of \(T_{\rm gen}\), \(T_{\rm inf}\), or \(T_{\rm rem}\) play little role in the errors in vaccination evaluation. Fig. 5i also includes the values of \(\varepsilon^{*}\) for the real-world infectious diseases COVID-19, SARS, H1N1 influenza, and smallpox, which are consistent with those from the non-Markovian and Markovian theories. Regarding the results depicted in Fig. 
5j, it is observed that the non-Markovian theories consistently outperform the Markovian counterparts. On the other hand, within a wide range of \(\ln\eta\) values around 0, the Markovian theories successfully identify the optimal vaccination strategy among the various commonly employed ones. When the value of \(\ln\eta\) deviates significantly from 0, the Markovian theories become ineffective in determining the optimal strategy. (Note that on the left side of Fig. 5j, we only present the failures of the Markovian theories to identify the optimal strategy in the Monte Carlo simulations with the log-normal distribution. This is primarily because the parameters associated with the Weibull and gamma distributions fall outside the acceptable range when we keep \(T_{\rm inf}\) and \(T_{\rm rem}\) fixed to reduce \(\ln\eta\) to a very low value.) Furthermore, we demonstrate that even when employing Markovian approaches, the optimal vaccination strategy can still be determined among the five strategies considered for the four distinct real diseases. Accurate prevention evaluation is a sufficient condition for the successful identification of the optimal strategy. In comparison with the prevention evaluation errors \(\varepsilon^{*}\) of the Markovian theories, the optimization failure probability \(\hat{\varepsilon}\) remains at its lowest value over a wider range of \(\ln\eta\). The primary reason is the lack of mathematical continuity among the five strategies: there is no smooth transition or mathematical relationship connecting them, so the ranking of the strategies does not change promptly when the value of \(\ln\eta\) deviates from 0. Therefore, only significant errors from the Markovian theories can result in a failure to detect the optimal strategy. Based on this analysis, the extent to which \(\ln\eta\) must deviate from 0 before the Markovian theories fail, as well as whether such failure will occur at all, depends on the selection of the tested strategies. ## Discussion The COVID-19 pandemic has emphasized the importance of investigating disease transmission in human society through modeling. Empirical observations have consistently demonstrated strong memory effects in real-world transmission phenomena. The initial transient stage of an epidemic is critical for data collection, prediction, and articulation of control strategies, but an accurate non-Markovian model presents difficulties. In contrast, a Markovian model offers great advantages in parameter estimation, computation, and analyses. Uncovering the conditions under which Markovian modeling is suitable for transient epidemic dynamics is thus necessary. We have developed a comprehensive mathematical framework for both Markovian and non-Markovian compartmentalized SIR disease transmissions in an age-stratified population, which allows us to identify two types of equivalence between Markovian and non-Markovian dynamics: in the steady state and in the transient phase of the epidemic. Our theoretical analysis reveals that, in the steady state, non-Markovian (memory-dependent) transmissions are always equivalent to Markovian (memoryless) dynamics. However, transient-state equivalence is approximate and holds when the average generation and removal times match each other. 
In particular, when the average generation time is approximately equal to the average removal time, the disease transmission and removal of an infected individual exhibit a memoryless correlation, thereby minimizing the memory effects of the dynamical process. This yields highly accurate results from the Markovian theory, which captures the characteristics of memory-dependent transmission based solely on the early epidemic curves. Our analysis also suggests that the Markovian accuracy is mainly determined by the value of the generation-to-removal time ratio in disease transmission, where a larger-than-one (smaller-than-one) ratio can lead to underestimation (overestimation) of the basic reproduction number and of the epidemic forecast, as well as errors in the evaluation of control or prevention measures. The estimation accuracy primarily depends on this ratio and is not significantly affected by the specific values and distribution forms of the various times associated with the epidemic. This property is of substantial practical importance, because the average generation and removal times can be readily assessed based on sparse data collected from the transient phase of the epidemic, whereas estimating their distributions from only sparse data is infeasible. These results provide deeper quantitative insights into the influence of memory effects on epidemic transmissions, leading to a better understanding of the connection and interplay between Markovian and non-Markovian dynamics. There were previous studies of the equivalence between Markovian and non-Markovian transmission in the SIS model [21; 24; 25]. However, these studies addressed the steady-state equivalence rather than the transient-state equivalence. To our knowledge, our work is the first to investigate the transient-state equivalence of the SIR model. In addition, previous studies mainly examined the impact of the average generation time on the transmission dynamics, such as how the shape of the generation time distribution affects the estimation of \(R_{0}\) [58] or the use of serial time distributions in estimating \(R_{0}\) during an epidemic [50]. There was a gap in the literature regarding how generation times affect the accuracy of different models. Our paper fills this gap by providing a criterion for using Markovian frameworks to model memory-dependent transmission based on the relationship between the average generation and removal times. From an application perspective, our study suggests that the impact of the time distribution forms on the Markovian estimation accuracy is minimal, making it easier to select between Markovian and non-Markovian models in the initial outbreak of an epidemic based only on the generation-to-removal time ratio. This insight is especially useful since detailed time distribution forms are often harder to detect than their corresponding mean values. In addition, we note that previous studies observed that, in various scenarios, serial intervals, albeit with larger variances, are expected to have the same mean value as the average generation time and are more straightforward to measure [3; 50; 51; 52; 53; 54]. 
Given the practical difficulties in observing the generation time, our finding of minimal impact from the distribution forms suggests that the average serial interval can be used as a substitute for the average generation time to determine the applicability of the Markovian theories for modeling purposes without compromising accuracy, although numerous studies have indicated that replacing the generation time distribution with the serial interval distribution may affect the analysis of transmission dynamics [49; 50; 60]. Meanwhile, based on Eq. (15), once the generation-to-removal time ratio is determined, the estimate of \(R_{0}\) obtained through the Markovian approach can be adjusted to approximate the true value, and our web-based application demonstrates this rectification of \(R_{0}\) and the resulting epidemic forecasting [55]. Our study highlights the critical importance of accurately quantifying \(R_{0}\) for achieving precise epidemic forecasting and prevention evaluation. A previous work [59] revealed that the value of \(R_{0}\) depends on three key components: the duration of the infectious period (e.g., \(\psi_{\rm rem}(\tau)\)), the probability of infection resulting from a single contact between an infected individual and a susceptible one (e.g., \(\psi_{\rm inf}(\tau)\)), and the number of new susceptible individuals contacted per unit of time. However, given the practical limitations inherent in obtaining all three components, numerous methods have been developed for estimating \(R_{0}\). Although our work presents a specific approach, which fits the parameters of exponential or non-exponential time distributions using the initial outbreak curves, it is not the only one available. For example, when contact patterns are unknown, \(R_{0}\) can be estimated by fitting the growth rate \(g\) and the generation time distribution \(\psi_{\rm gen}(\tau)\), and then applying them in the Euler-Lotka equation [50; 58; 59]. However, since the focus of our work is on epidemic forecasting and the evaluation of prevention measures, \(R_{0}\) can be directly calculated once \(\psi_{\rm inf}(\tau)\) and \(\psi_{\rm rem}(\tau)\) are fitted, without requiring the fitting of any additional quantity. The estimation of \(R_{0}\) can also be achieved by using data in the steady state, such as the final size of an epidemic or equilibrium conditions [59]. However, this method is not suitable for the transient phase where only early-stage curves are available; the approach delineated in this paper is practically more appropriate for estimating \(R_{0}\) there. While our study focused on transmission within the SIR framework, extension to SEIR or SIS models is feasible. Although we emphasized the significance of the transient-state equivalence in disease transmission, transient dynamics are at least as relevant as, and often more crucial than, the steady state in nonlinear dynamical systems generally [61]. For example, in ecological systems, transient dynamics play a vital role in empirical observations and are therefore a key force driving natural evolution [62; 63; 64; 65; 66; 67]. In neural dynamics, transient changes in neural activity can mediate synaptic plasticity, a crucial mechanism for learning and memory [68; 69; 70]. Therefore, the identification of suitable conditions for choosing between Markovian and non-Markovian dynamics may not be limited to transmission dynamics alone and may serve as a valuable reference for other fields as well.
Taken together, our study establishes an approximate equivalence between Markovian and non-Markovian dynamics in the transient state, assuming that time distributions follow Weibull forms (see Supplementary Note 3 for details). While the applicability of our findings to most synthetic and empirical distributions has been analyzed qualitatively, a quantitative analysis requires further studies. For extreme cases with non-Weibull distributions, the transmission should be evaluated using other specific methods. While we have provided a qualitative analysis of the mechanism underlying why time distribution forms have minimal impacts on the errors of Markovian estimations, a more rigorous theoretical analysis is needed and requires further exploration. In addition, due to the complexity of the nonlinear transmission, our study has produced a semi-empirical relationship to estimate the overestimation and underestimation of Markovian methods. Further research is required to develop a rigorous formula that can accurately predict these effects.

## Methods

### Monte Carlo simulation

In the simulation, we classify \(N\) individuals into \(n\) subgroups based on the age distribution \(p\). The index of the subgroup to which an individual belongs is denoted by \(l\) (where \(0\leq l\leq n-1\)), and the index of the individual within the subgroup is denoted by \(u\) (where \(0\leq u\leq p_{l}N\)). The state of the \(u\)-th individual in the \(l\)-th subgroup is represented by \(X_{lu}\), which includes the states \(S\) (susceptible), \(I\) (infected), \(W\) (recovered), and \(D\) (dead), where \(W\) and \(D\) both represent \(R\) (removed). For each individual, we also record the absolute times of infection and removal using two variables, \(T_{lu}^{\inf}\) and \(T_{lu}^{\text{rem}}\), respectively. The absolute time of the system is denoted by \(T\), and we implement the full spreading simulation step by step using a finite time step \(\Delta T\) as follows: 1. Initialization: set \(T=0\) and set \(X_{lu}\) to \(S\) for every individual. 2. Set infection seeds: choose a set of individuals as the infection seeds; the corresponding \(X_{lu}\) are set to \(I\), the corresponding \(T_{lu}^{\inf}\) are set to \(0\), and \(T_{lu}^{\text{rem}}\) are set to random values following the removal time distribution \(\psi_{\text{rem}}(\tau)\). 3. Infection in one step: calculate the infection rate, \(\hat{\omega}_{lu}^{\inf}(T)\), of infected individual \(u\) in age group \(l\) during the current time step by \[\hat{\omega}_{lu}^{\inf}(T)=1-\Psi_{\inf}(T-T_{lu}^{\inf}+\Delta T)/\Psi_{\inf}(T-T_{lu}^{\inf}).\] The probability \(\bar{\omega}_{l}^{\inf}(T)\) of each susceptible individual in age group \(l\) being infected can be calculated by \[\bar{\omega}_{l}^{\inf}(T)=\sum_{m=1}^{n}p_{m}[1-(1-\frac{\sum_{v\in\mathcal{I}_{m}}\hat{\omega}_{mv}^{\inf}(T)}{p_{m}N})^{kA_{lm}}],\] where \(\mathcal{I}_{m}\) is the index set of the infected individuals in age group \(m\). The number of susceptible individuals becoming infected in an age group follows a binomial distribution \(B(s_{l}(T)p_{l}N,\bar{\omega}_{l}^{\inf}(T))\), where \(s_{l}(T)\) denotes the fraction of susceptible individuals in age group \(l\) at time \(T\). We then generate a random number \(N_{l}(T)\) following this binomial distribution and set \(N_{l}(T)\) susceptible individuals of the group to the \(I\) state.
The corresponding \(T_{lu}^{\inf}\) of the newly infected individuals are set to the current \(T\), and \(T_{lu}^{\text{rem}}\) are set to \(T_{\text{rem}}+T\), where the random \(T_{\text{rem}}\) follows the removal time distribution \(\psi_{\text{rem}}(\tau)\). 4. Check whether \(T_{lu}^{\text{rem}}\) of each infected individual falls within the current time step. If this condition is satisfied, set the individual's state to \(D\) with the probability of death and to \(W\) otherwise. Then let \(T=T+\Delta T\). 5. Repeat steps 3) and 4) until no individual in state \(I\) remains. (A code sketch of the infection step is given after the time-distribution definitions below.)

### Time distributions

In the Monte Carlo simulations, we employ three types of time distributions, i.e., Weibull, log-normal, and gamma, to describe the memory-dependent transmission process. The Weibull time distribution follows: \[\psi(\tau)=\frac{\alpha}{\beta}(\frac{\tau}{\beta})^{\alpha-1}e^{-(\frac{\tau}{\beta})^{\alpha}},\] where \(\alpha\) and \(\beta\) denote the shape and scale parameters, respectively. The log-normal time distribution is defined as follows: \[\psi(\tau)=\frac{1}{\tau\beta\sqrt{2\pi}}\exp(-\frac{(\ln\tau-\alpha)^{2}}{2\beta^{2}}).\] The gamma time distribution is expressed as follows: \[\psi(\tau)=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\tau^{\alpha-1}e^{-\frac{\tau}{\beta}},\] where \(\Gamma(\cdot)\) denotes the gamma function, while \(\alpha\) and \(\beta\) represent the shape and scale parameters, respectively. For each type of time distribution, denoted \(\psi(\tau)\), it can be either \(\psi_{\rm inf}(\tau)\) or \(\psi_{\rm rem}(\tau)\). Similarly, the parameter \(\alpha\) can take either \(\alpha_{\rm inf}\) or \(\alpha_{\rm rem}\), and the parameter \(\beta\) can be either \(\beta_{\rm inf}\) or \(\beta_{\rm rem}\). Additionally, the survival function can be calculated as \[\Psi(\tau)=\int_{\tau}^{+\infty}\psi(\tau^{\prime})d\tau^{\prime},\] while the hazard function can be deduced as \[\omega(\tau)=\frac{\psi(\tau)}{\int_{\tau}^{+\infty}\psi(\tau^{\prime})d\tau^{\prime}}.\] The survival function \(\Psi(\tau)\) can be either \(\Psi_{\rm inf}(\tau)\) or \(\Psi_{\rm rem}(\tau)\), and the hazard function \(\omega(\tau)\) can take either \(\omega_{\rm inf}(\tau)\) or \(\omega_{\rm rem}(\tau)\).
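To make the stepping rule concrete, the following is a minimal NumPy sketch of the infection step (step 3 above) for the Weibull case. It is our own illustrative rendering, not code from the released repository; the function and variable names (`infection_step`, `Psi_inf`, the contact matrix `A`, the mean contact number `k`) are assumptions that mirror the notation above.

```python
import numpy as np

def Psi_inf(tau, alpha, beta):
    # Weibull survival function Psi(tau) = exp(-(tau / beta)^alpha).
    return np.exp(-(np.maximum(tau, 0.0) / beta) ** alpha)

def infection_step(T, dT, T_inf, s, p, A, k, N, alpha, beta, rng):
    # T_inf[m]: array of infection times of currently infected
    # individuals in age group m; s, p: susceptible fractions and age
    # distribution (length-n arrays); A: n x n contact matrix.
    n = len(p)
    # Per-individual infection rate over [T, T + dT).
    w_hat = [1.0 - Psi_inf(T - t + dT, alpha, beta)
                   / Psi_inf(T - t, alpha, beta) for t in T_inf]
    # Probability that a susceptible individual in group l is infected.
    w_bar = np.array([
        sum(p[m] * (1.0 - (1.0 - w_hat[m].sum() / (p[m] * N)) ** (k * A[l, m]))
            for m in range(n))
        for l in range(n)])
    # Binomial draw of new infections per age group.
    return rng.binomial(np.round(s * p * N).astype(int), w_bar)
```

The caller would then assign the drawn number of new infections in each group, sample their removal times from \(\psi_{\text{rem}}(\tau)\), and advance \(T\) by \(\Delta T\), as in steps 3)-5).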
### Derivation of semi-empirical estimation of basic reproduction number

Intuitively, the period \(T_{\rm rem}\) can accommodate \(T_{\rm rem}/T_{\rm gen}=1/\eta\) time intervals of length \(T_{\rm gen}\), corresponding to \(1/\eta\) generations of infections. This can lead to an exponential increase in the number of infections during \(T_{\rm rem}\). This intuition suggests a relationship between the fitted basic reproduction number \(\hat{R}_{0}\) and the actual \(R_{0}\), which can be expressed as an exponential function: \[\hat{R}_{0}=R_{0}^{f(1/\eta)},\] where \(f(\cdot)\) is a monotonically increasing function that satisfies three conditions. First, \(f(1)=1\), indicating that \(\hat{R}_{0}\) can be accurately estimated when \(T_{\rm gen}=T_{\rm rem}\). Second, \(f(0)=0\), meaning that if \(T_{\rm rem}\) is an extremely small fraction of \(T_{\rm gen}\), the transmission will take a long time to reach the steady state, causing the curve to be flat in the initial stage and potentially causing the Markovian fitting to produce the estimate \(\hat{R}_{0}=1\). Third, \(f(+\infty)=+\infty\), implying that if \(T_{\rm rem}\) is extremely large compared to \(T_{\rm gen}\), the transmission will quickly reach the final prevalence, causing the Markovian fitting to give an extremely large estimate of \(\hat{R}_{0}\). Because the actual transmission process involves many complicated nonlinear relationships, identifying the specific form of the function \(f(\cdot)\) is a challenging task. We thus assume \[f(x)=x^{a},\] where \(a\) is an unknown positive coefficient. This leads to Eq. (15).

### Definition of errors

The difference \(\varepsilon\) between the non-Markovian and corresponding Markovian results calculated from Eqs. (13-14) is defined as \[\varepsilon=\sum_{x,x^{*}\in\{(s,s^{*}),(i,i^{*}),(r,r^{*})\}}\frac{\|x^{*}-x\|_{2,T}}{\|x\|_{2,T}},\] where the pairs \((s,s^{*})\), \((i,i^{*})\) and \((r,r^{*})\) correspond to the non-Markovian and Markovian susceptible, infected and removed curves, respectively. The 2-norm \(\|\cdot\|_{2,T}\) over the time duration \(T\) ensures that \(\varepsilon\) measures the "distance" between the non-Markovian and Markovian transient states. It is not appropriate to set \(T\) as the total transmission period, because the cumulative infected fraction approaches its final value asymptotically, making it difficult to determine the exact time point of the steady state. To address this issue, we choose \(T\) as \([0,\tilde{t}_{\theta}]\), where \(\tilde{t}_{\theta}\) is the time when the cumulative infected fraction \(c(t)\) reaches the \(\theta\) percentile point within the range spanning from its initial value to its final value. The value of \(\theta\) in Fig. 2j is selected as 50 (see Supplementary Note 6 for further choices of \(\theta\) and detailed analysis). The forecasting error \(\varepsilon^{+}\), which evaluates whether a theory overestimates or underestimates the steady-state cumulative infected fraction, is defined as \[\varepsilon^{+}=\frac{c(\tilde{t})-\hat{c}(\tilde{t})}{\hat{c}(\tilde{t})},\] where \(\tilde{t}\) denotes the time when the stochastic simulation reaches the steady state, at which no infection occurs in the population, and \(c(\tilde{t})\) and \(\hat{c}(\tilde{t})\) are the cumulative infected fractions from theory and simulation, respectively. A positive value of \(\varepsilon^{+}\) indicates overestimation, whereas a negative value indicates underestimation. The prevention evaluation error \(\varepsilon^{*}\), which gauges the ability of the Markovian and non-Markovian theories to estimate the total effectiveness of vaccination, is defined as \[\varepsilon^{*}=\frac{\|\hat{\delta}-\delta\|_{2}}{\|\hat{\delta}\|_{2}},\] where \(\hat{\delta}\) is the result from Monte Carlo simulation and \(\|\cdot\|_{2}\) is the 2-norm of a vector. The optimization failure probability \(\hat{\varepsilon}\), which measures the probability that a theory fails to identify the optimal vaccination strategy, is defined as \[\hat{\varepsilon}=\frac{\sum_{l=1}^{z}\xi^{(l)}}{z},\] where \[\xi^{(l)}=\begin{cases}0&\text{if }\operatorname{argmin}\hat{\sigma}^{(l)}=\operatorname{argmin}\sigma^{(l)}\\ 1&\text{otherwise}\end{cases},\] \(\hat{\sigma}^{(l)}\) and \(\sigma^{(l)}\) represent the vectors \(\hat{\sigma}\) and \(\sigma\) for the \(l\)-th experiment, respectively, and \(z\) denotes the total number of experiments (in this paper, \(z\) is set to 100). Consequently, \(\hat{\varepsilon}\) quantifies the fraction of experiments in which a theory fails to identify the optimal vaccination strategy, and serves as a measure of the probability of failure in optimizing the vaccination strategy.
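A minimal sketch of these error metrics for curves sampled on a common time grid follows; it is our own illustrative implementation, and the array layout is an assumption.

```python
import numpy as np

def transient_error(curves):
    # curves: iterable of (x, x_star) pairs, the non-Markovian and
    # Markovian (s, i, r) curves restricted to [0, t_theta].
    return sum(np.linalg.norm(x_star - x) / np.linalg.norm(x)
               for x, x_star in curves)

def forecasting_error(c_theory, c_sim):
    # Relative over-/underestimation of the steady-state cumulative
    # infected fraction (positive: overestimation).
    return (c_theory - c_sim) / c_sim

def optimization_failure_probability(sigma_hat, sigma):
    # sigma_hat, sigma: (z, #strategies) arrays of predicted and true
    # strategy outcomes; an experiment fails when the argmin strategies
    # disagree.
    xi = np.argmin(sigma_hat, axis=1) != np.argmin(sigma, axis=1)
    return xi.mean()
```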
### Fitting method

Because the removal process is independent of the infection process, we divide the fitting method into two parts: removal parameter fitting and infection parameter fitting. Specifically, we use \(c^{*}_{\text{init}}\) and \(r^{*}_{\text{init}}\) to denote the cumulative infected and removed fractions in the initial stage of a Monte Carlo simulation. These two types of data are substituted into Eq. (3) to fit the parameters of \(\psi_{\rm rem}(\tau)\). Likewise, we use \(c^{*}_{l,\rm init}\) to denote the cumulative infected fraction of age group \(l\) in the initial stage of a Monte Carlo simulation, and \(s^{*}_{l,\rm init}=1-c^{*}_{l,\rm init}\) represents the corresponding susceptible fraction. After obtaining the removal parameters, these two types of data are put into Eq. (1) to fit the infection time distribution parameters. In our study, we selected as training data the curves of all states prior to the time point at which the cumulative infected fraction reached a specific percentile (e.g., 20%) between the initial and steady cumulative infected fractions. Choosing a fixed time period as the training data would not be appropriate: some instances of fast transmission would already have reached the steady state, yielding an overabundance of data points for fitting, while some instances of slow transmission would not yet have spread out, yielding too few data points.
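As a rough illustration of the removal-parameter fitting described above, the sketch below fits Weibull removal parameters to early-stage curves with SciPy's `curve_fit`. The discrete convolution used here is our own stand-in for the role of Eq. (3) (which is not reproduced in this excerpt); the names, initial guesses, and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def removed_fraction(t, alpha, beta, c_init, dt):
    # Weibull CDF of removal times: probability of removal within lag t.
    F = 1.0 - np.exp(-(t / beta) ** alpha)
    # New infections per unit time, convolved with the removal CDF.
    dc = np.gradient(c_init, dt)
    return np.convolve(dc, F)[: len(t)] * dt

def fit_removal_params(t_init, c_init, r_init, dt):
    # t_init, c_init, r_init: early-stage time grid and cumulative
    # infected/removed fractions from a Monte Carlo run.
    f = lambda t, a, b: removed_fraction(t, a, b, c_init, dt)
    (a, b), _ = curve_fit(f, t_init, r_init, p0=(2.0, 5.0),
                          bounds=([0.1, 0.1], [10.0, 50.0]))
    return a, b
```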
### Vaccination method

We assume that vaccinated individuals build sufficient immune protection from the disease \(\kappa\) days after vaccination with probability \(\rho\). In the Monte Carlo simulations, a susceptible individual vaccinated at absolute time \(T_{\rm vac}\) acquires protection with probability \(\rho\) once the absolute time reaches \(T_{\rm vac}+\kappa\); if this individual has not been infected by then, he/she is set to a protected state, indicating that this individual is protected from the disease. When a fraction of individuals in age group \(l\) gets vaccinated, the vaccination fraction \(v_{l}\), the vaccination time \(t_{\rm vac}\), and the fraction of susceptible individuals \(s_{l}(t_{\rm vac})\) are recorded. When the absolute time reaches \(T_{\rm vac}+\kappa\), the corresponding value of \(s_{l}(t_{\rm vac}+\kappa)\) is set to \(s_{l}(t_{\rm vac}+\kappa)\big{(}1-\rho\,\frac{v_{l}}{s_{l}(t_{\rm vac})}\big{)}\).

## Data availability

All relevant data are available at https://github.com/fengmi9312/Validity-of-Markovian-for-Memory/tree/main/FigureData.

## Code availability

The web-based application can be visited at https://cns.hkbu.edu.hk/toolbox/Validity-of-Markovian-for-Memory/main.html. The GitHub repository, which includes the source code for all the figure results, the web-based application, and an additional Python application, can be accessed at https://github.com/fengmi9312/Validity-of-Markovian-for-Memory.git.

## Acknowledgments

This work was supported by the Hong Kong Baptist University (HKBU) Strategic Development Fund. This research was conducted using the resources of the High-Performance Computing Cluster Centre at HKBU, which receives funding from the Hong Kong Research Grant Council and the HKBU. Y.-C.L. was supported by the Office of Naval Research through Grant No. N00014-21-1-2323.

## Author contributions

M.F., L.T. and C.-S.Z. designed research; M.F. performed research; L.T. and C.-S.Z. contributed analytic tools; M.F., L.T. and C.-S.Z. analysed data; M.F., L.T., Y.-C.L. and C.-S.Z. discussed the results and wrote the paper.

## Competing interests

The authors declare no competing interests.

## Correspondence

To whom correspondence should be addressed: [email protected], [email protected]
2310.19269
Stress and Geometry for Isotropic Singularities
We develop the mathematics needed to treat the interaction of geometry and stress at any isotropic spacetime singularity. This enables us to handle the Einstein equations at the initial singularity and characterize allowed general relativistic stress-energy tensors. Their leading behaviors are dictated by an initial hypersurface conformal embedding. We also show that an isotropic Big Bang determines a canonical non-singular metric on and about the initial hypersurface as well as a cosmological time. This assigns a volume and energy to the initial point singularity.
A. Rod Gover, Jarosław Kopiński, Andrew Waldron
2023-10-30T05:05:39Z
http://arxiv.org/abs/2310.19269v2
# Stress and Geometry for Isotropic Singularities ###### Abstract We develop the mathematics needed to treat the interaction of geometry and stress at any isotropic spacetime singularity. This allows us to handle the Einstein equations at the initial singularity and characterize allowed general relativistic stress-energy tensors. Their leading behaviors are dictated by an initial hypersurface conformal embedding. We also show that an isotropic Big Bang determines a canonical non-singular metric on and about the initial hypersurface as well as a cosmological time. This assigns a volume and energy to the initial point singularity. ## I Introduction Is the causal structure of our universe singular at the Big Bang? This question is of physical import since, as we shall show, a well-defined causal structure at the initial Big Bang singularity imposes strong constraints on the matter content of the early universe. We perform our analysis in the context of isotropic singularities, meaning that when multiplied by some power of a timelike coordinate, the observed physical metric becomes non-singular on an initial hypersurface along which the conformal structure is well-defined. Such spacetimes are physically relevant in light of Penrose's Weyl Curvature hypothesis [1] which asserts that, at any physically relevant initial singularity, the Weyl tensor is finite even when the Ricci curvature is singular [2]. Isotropic singularities subject to various underlying matter model assumptions have been analyzed in detail [7; 8; 9; 10; 11; 12; 13; 14; 15]. Our analysis applies to generic, compatible stress. Because the metric along an isotropic singularity is degenerate but the conformal structure is still well-defined, early universe physics is dictated by the mathematics of conformally embedded hypersurfaces. Conformal submanifold embeddings are crucial to the theory of observables in the AdS/CFT correspondence [16; 17]. This machinery can be fruitfully applied to cosmology. ## II Conformal Geometry For simplicity we focus on generic, dimension 4 [18], causal structures given by the data of a Lorentzian conformal geometry \((M,\mathbf{g})\)[19], where \(\mathbf{g}\) denotes a conformal class of metrics \(g\) with equivalence given by rescalings \(\Omega^{2}g\sim g\) for \(0<\Omega\in C^{\infty}M\). Parallelism determined by the Levi-Civita connection \(\nabla\) is a central mathematical construct of general relativity. Its conformal geometry generalization, known as the tractor connection \(\mathbf{\nabla}\), promotes the tangent bundle \(TM\) to a "tractor bundle" \(\mathbf{T}M\) with dimension six fibers [20; 21]. Indeed \(\mathbf{g}\) contains a metric solving the vacuum Einstein equations precisely when there is a parallel tractor vector field \(I\in\mathbf{T}M\)[21; 22], _viz_ \[\mathbf{\nabla}I=0\,. \tag{1}\] Given a choice of metric \(g\in\mathbf{g}\), a tractor \(I\) is a triple \[I\stackrel{{ g}}{{=}}\begin{pmatrix}\sigma\\ n\\ \rho\end{pmatrix}\stackrel{{\Omega^{2}g}}{{=}}\begin{pmatrix}\Omega \sigma\\ \Omega(n+\sigma d\log\Omega)\\ \Omega^{-1}(\rho-\mathcal{L}_{n}\log\Omega-\frac{\sigma}{2}|d\log\Omega|_{g} ^{2})\end{pmatrix}, \tag{2}\] where \(\sigma\), \(\rho\) are scalars, \(\mathcal{L}_{n}\) is the Lie derivative along the vector \(n\), and the above gauge transformation is valued in a parabolic subgroup of the spacetime conformal group \(SO(4,2)\). Tractors are basic objects for theories incorporating local conformal transformations and diffeomorphisms [23]. 
The tractor connection is given by \[\mathbf{\nabla}_{a}I=\begin{pmatrix}\nabla_{a}\mathbf{\sigma}-n_{a}\\ \nabla_{a}n^{b}+\sigma P_{a}{}^{b}+\rho\delta_{a}{}^{b}\\ \nabla_{a}\rho-n_{c}P_{a}{}^{c}\end{pmatrix}. \tag{3}\] In the above \(\mathbf{\sigma}\) denotes a conformal density of weight 1. A weight \(w\) conformal density [24] is a section of \(\left[(\wedge^{4}TM)^{2}\right]^{\frac{w}{8}}=:\mathcal{E}M[w]\). This is a power of a tensor density, so the Levi-Civita connection is well-defined acting upon it. A density \(\mathbf{\sigma}\) may also be understood as an equivalence class of metric-function pairs \((g,\sigma)\sim(\Omega^{2}g,\Omega\sigma)\). Indices are raised and lowered using \(\mathbf{g}\) and \(\nabla\) is the Levi-Civita connection (see [23]). Also \(P\) denotes the Schouten tensor and \(J\) its trace. The standard tractor bundle \(\mathbf{T}M\) comes equipped with a parallel "tractor metric" \(h\) and a canonical tractor vector field \(X\in\mathbf{T}M[1]\) [21; 23; 25]. Indeed \(h(I,X)=\mathbf{\sigma}\in\mathcal{E}M[1]\). Moreover, when Eq. (1) holds, it follows that the vacuum cosmological constant \(\Lambda_{\text{vac}}\) obeys \[-\tfrac{1}{3}\Lambda_{\text{vac}}=2\mathbf{\sigma}\rho+|n|_{\mathbf{g}}^{2}=h(I,I)=:I\cdot I=:I^{2}\,.\] To incorporate matter, we must couple the stress-energy tensor to the right hand side of Eq. (1). On the other hand, when the function \(\sigma\) is a good coordinate for a hypersurface \(\Sigma\), the local conformal embedding data \(\Sigma\hookrightarrow(M,\mathbf{g})\) is encoded by \(\mathbf{\nabla}I\in T^{*}M\otimes\mathbf{T}M\). The latter gives a conformal geometric analog of extrinsic curvatures. It follows that there is a natural correspondence between stress and local conformal embedding data.

## III Isotropic singularities

An isotropic singularity is a spacelike hypersurface \(\Sigma\) in a spacetime \(M\) with a degenerate physical metric \(\tilde{g}\) such that, for \(\alpha<0\) and any defining function \(\tau\) [26], \[g=\tau^{2\alpha}\tilde{g} \tag{4}\] extends to a smooth metric across \(\Sigma\). The degree of metric singularity for the (zero \(\Lambda\)) conformally flat Friedmann-Lemaître-Robertson-Walker spacetime with perfect fluid pressure-to-density ratio \(\kappa\) is \[\alpha_{\text{FLRW}}=-\tfrac{2}{3\kappa+1}\,.\] Even this simplest of cosmological scenarios allows non-integer \(\alpha\). This parameter controls both the smoothness of the physical metric \(\tilde{g}\) and the volume expansion rate. Another _bona fide_ metric \(g^{\prime}=\Omega^{2}g\) corresponds to a rescaled defining function \(\tau^{\prime}=\Omega^{\frac{1}{\alpha}}\tau\). Thus we may write (4) as \[\tilde{g}=\mathbf{\tau}^{-2\alpha}\mathbf{g}\,,\] with \(\mathbf{\tau}\in\mathcal{E}M[\tfrac{1}{\alpha}]\), where no particular choice of metric in the conformal class \(\mathbf{g}\) has been made. Indeed the causal structure of \(\mathbf{g}\) is well-defined across the initial singularity.
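As an illustrative check of the FLRW value quoted above (our own computation, assuming the standard flat, zero-\(\Lambda\) FLRW scale factor \(a(t)\propto t^{q}\) with \(q=\tfrac{2}{3(1+\kappa)}\)), write \(\tilde{g}=-dt^{2}+t^{2q}h\) with \(h\) a fixed flat spatial metric. Setting \(\tau:=t^{1-q}\) gives \(dt=\tfrac{1}{1-q}\,t^{q}\,d\tau\), whence \[\tilde{g}=t^{2q}\Big(-\tfrac{d\tau^{2}}{(1-q)^{2}}+h\Big)=\tau^{\frac{2q}{1-q}}\Big(-\tfrac{d\tau^{2}}{(1-q)^{2}}+h\Big)\,.\] Comparing with \(\tilde{g}=\tau^{-2\alpha}g\) for a metric \(g\) regular at \(\tau=0\) yields \(\alpha=-\tfrac{q}{1-q}=-\tfrac{2}{3\kappa+1}\), reproducing \(\alpha_{\text{FLRW}}\).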
Our aim is to analyze the Einstein field equations \[\tilde{G}+\Lambda\tilde{g}=\tilde{T}\,, \tag{5}\] where \(\tilde{T}\) is the stress-energy tensor of a universe with cosmological constant \(\Lambda\) and degenerate physical metric \(\tilde{g}\). For this we use maps from weight 1 scalar densities to weight 0 tractors and from tractor-valued one-forms to weight 1 symmetric trace-free tensors [27; 23] \[\mathbf{\mu}\overset{I}{\longmapsto}\begin{pmatrix}\mathbf{\mu}\\ \nabla^{b}\mathbf{\mu}\\ -\frac{1}{4}(\Box+J)\mathbf{\mu}\end{pmatrix},\qquad\begin{pmatrix}0\\ \mathring{x}_{ab}\\ -\frac{1}{3}\nabla^{b}\mathring{x}_{ab}\end{pmatrix}\overset{q^{*}}{\longmapsto}\mathring{x}_{ab}\,. \tag{6}\] When \(\mathbf{\mu}\) is non-vanishing almost everywhere, \(I_{\mathbf{\mu}}\) is termed a _scale tractor_. For _any_ weight one density \(\mathbf{\mu}\), the definitions of \(\mathbf{\nabla}\) and \(I_{\mathbf{\mu}}\) in Equations (3) and (6) imply vanishing of the top slot of \(\mathbf{\nabla}I_{\mathbf{\mu}}\). So by virtue of Eq. (2) its middle slot is a trace-free, conformally covariant, rank two tensor equaling \(q^{*}\mathbf{\nabla}I_{\mathbf{\mu}}\). Indeed, \(2\mathbf{\mu}^{-1}q^{*}\mathbf{\nabla}I_{\mathbf{\mu}}\) is precisely the trace-free Einstein tensor for the metric \(\mathbf{\mu}^{-2}\mathbf{g}\), wherever this is defined, and Equation (5) now reads \[q^{*}\mathbf{\nabla}I_{\mathbf{\tau}^{\alpha}}=\tfrac{\mathbf{\tau}^{\alpha}}{2}\mathring{\tilde{T}}\,, \tag{7}\] \[I_{\mathbf{\tau}^{\alpha}}^{2}=\tfrac{1}{12}\tilde{T}_{a}{}^{a}-\tfrac{1}{3}\Lambda\,. \tag{8}\] Scale tractors are potentials for Einstein's equations. The trace of the stress tensor for a spacetime with an isotropic singularity cannot vanish along \(\Sigma\) unless \(\alpha=-1\), as the leading behavior of \(I_{\mathbf{\tau}^{\alpha}}^{2}\) is \(\tfrac{\alpha(\alpha+1)}{2}\mathbf{\tau}^{2\alpha-2}|\nabla\mathbf{\tau}|_{g}^{2}\).

## IV Traversing the singularity

Spacetimes whose singularities are isotropic admit a global causal structure. Hence, even though the Einstein tensor is singular across an isotropic singularity, there are a number of well-defined geometric quantities, invariants of the causal structure, that constrain matter. The Weyl tensor \(W_{ab}{}^{c}{}_{d}\) is defined independently of any choice of \(g\in\mathbf{g}\) but, unlike the Bach tensor \(B_{ab}\), it is not related to stress by a local differential operator. Let \(\mathsf{P}\) be the conformally covariant _partially massless wave-operator_ defined, acting on a weight 1 trace-free symmetric tensor \(\mathring{x}_{ab}\), by \[\mathsf{P}\,\mathring{x}_{ab}:=\Box\mathring{x}_{ab}-\nabla_{c}\nabla_{(a}\mathring{x}^{c}{}_{b)_{\circ}}-\tfrac{1}{3}\nabla_{(a}\nabla_{|c|}\mathring{x}^{c}{}_{b)_{\circ}}+W_{a}{}^{c}{}_{b}{}^{d}\mathring{x}_{cd}\,.\] Partially massless excitations measure Bach flat fluctuations around an Einstein metric [28; 29]. For any non-vanishing weight one density \(\mathbf{\mu}\) [30], \[\mathbf{\mu}\,B:=\mathsf{P}q^{*}\mathbf{\nabla}I_{\mathbf{\mu}}\,. \tag{9}\] Given a causal structure \(\mathbf{g}\), the Bach tensor is non-singular, so the above implies that the physical stress obeys a d'Alembert-type equation [31] \[B=\tfrac{1}{\mathbf{\tau}^{\alpha}}\mathsf{P}\left(\tfrac{\mathbf{\tau}^{\alpha}}{2}\mathring{\tilde{T}}\right). \tag{10}\]

### Singularity Geometry

So far the conformal embedding data \(\Sigma\hookrightarrow(M,\mathbf{g})\) has not been used. This data uniquely determines the local asymptotics of a metric \(g_{+}\) whose scalar curvature obeys \[R^{g_{+}}=12+\mathcal{O}(\sigma^{4})\,, \tag{11}\] where \(\sigma\) is any defining function for \(\Sigma\). The initial hypersurface \(\Sigma\) is a conformal infinity of this _singular Yamabe metric_ \(g_{+}\).
An all order "singular Yamabe problem" [32; 33; 34; 35] solution amounts to finding \(\mathbf{\sigma}=[g,\sigma]\in\mathcal{E}M[1]\) such that \(I_{\mathbf{\sigma}}^{2}=-1\). The expansion coefficient of the \(\mathbf{\sigma}^{4}\) term in \(I_{\mathbf{\sigma}}^{2}+1\), along \(\Sigma\), is a weight \(-4\) conformal hypersurface invariant [36; 37; 35] equaling the variation of an energy functional \(E_{\Sigma}\) [38; 39; 40]. This energy is the anomaly in the renormalized volume of \((M,g_{+})\) and an invariant of the initial singularity \[E_{\Sigma}=\int_{\Sigma}\mathring{K}_{ab}\mathring{F}^{ab}\,dV_{g_{\Sigma}}\,.\] The above integral is over any metric in the conformal class of metrics \(\mathbf{g}_{\Sigma}\) induced along \(\Sigma\) by \(\mathbf{g}\). It is invariantly defined because the contraction of the trace-free extrinsic curvature \(\mathring{K}\) with the _Fialkow tensor_ [41; 42], \[F_{ab}:=\hat{n}^{c}\hat{n}^{d}W_{cabd}-\mathring{K}_{ac}\mathring{K}_{b}{}^{c}+\tfrac{1}{4}\mathring{K}_{cd}\mathring{K}^{cd}\overline{g}_{ab}\in\odot^{2}T^{*}\Sigma\,,\] defines a conformal density of weight \(-3\). The extrinsic curvatures \((K,F)\) have transverse orders \((1,2)\) and are termed second and third fundamental forms [43]. They give the first two elements in a sequence of trace-free conformal hypersurface invariants defined along \(\Sigma\) and termed _conformal fundamental forms_ [44]. These are conformally invariant obstructions to the problem of finding an asymptotically de Sitter (dS) metric with conformal infinity \(\Sigma\). They probe derivatives of \(\mathbf{g}\) off of \(\Sigma\) in the direction of the (future-pointing) timelike unit normal \(\hat{n}\in TM[-1]|_{\Sigma}\). The extrinsic curvature \(K\) measures the difference between the Levi-Civita connections \(\nabla\) and \(\bar{\nabla}\) of \(M\) and \(\Sigma\) respectively, while the Fialkow tensor measures that of the respective tractor connections \(\mathbf{\nabla}\) and \(\bar{\mathbf{\nabla}}\) of \(\mathbf{g}\) and \(\mathbf{g}_{\Sigma}\) [42]. The _fourth conformal fundamental form_ [44; 45] takes three normal derivatives of \(\mathbf{g}\) [46], \[\mathring{L}_{ab}:=\big{(}\hat{n}^{c}C_{c(ab)}\big{)}^{\top}+H\hat{n}^{c}\hat{n}^{d}W_{acbd}-\bar{\nabla}^{c}(\hat{n}^{d}W_{d(ab)c})^{\top}\in\hat{\otimes}^{2}T^{*}\Sigma[-1]\,.\] Here \(C_{abc}:=2\nabla_{[a}P_{b]c}\) is the Cotton tensor and \(\hat{\otimes}\) denotes the trace-free symmetric product of one forms. The tensor \(\mathring{L}\) is distinguished in the context of dS\({}_{4}\) metrics as it extracts the second piece of boundary data (the first being \(\mathbf{g}_{\Sigma}\)). The locally determined singular Yamabe asymptotics of (11) terminate at order four, so the fourth conformal fundamental form \(\mathring{L}\) is the last tensor determined this way. Before using the geometric triple \((\mathring{K},\mathring{F},\mathring{L})\) to constrain \(\tilde{T}\), we study further (pseudo-)Riemannian data determined by the isotropic singularity.

### The Big Bang Metric

Remarkably there is a canonical Riemannian metric along the Big Bang hypersurface \(\Sigma\).
It is constructed from the solution \(\mathbf{\sigma}\) to the singular Yamabe problem: The weight \(1\) density \(\mathbf{\gamma}\) defined by \[\mathbf{\gamma}^{\frac{1}{\alpha}-1}:=\frac{\mathbf{\tau}}{\mathbf{\sigma}} \tag{12}\] is nowhere vanishing by our smoothness assumptions and therefore defines a Lorentzian metric \[g_{\mathbf{\gamma}}:=\mathbf{\gamma}^{-2}\mathbf{g}\,,\] and in particular a Riemannian metric \(g_{\Sigma}\) on the spacelike isotropic singularity \(\Sigma\) [47]. Thus the Riemannian three manifold \((\Sigma,g_{\Sigma})\) is an invariant of the Big Bang, as is its volume \(V_{\Sigma}=\int_{\Sigma}dV_{g_{\Sigma}}\). Because the "Big Bang metric" \(g_{\mathbf{\gamma}}\) is an everywhere smooth element of the conformal class \(\mathbf{g}\), it determines the triple of conformal fundamental forms \((\mathring{K},\mathring{F},\mathring{L})\) by the formulae above. Importantly the metric \(g_{\mathbf{\gamma}}\) also furnishes the early universe with a canonical cosmological time coordinate \[t=\frac{\mathbf{\sigma}}{\mathbf{\gamma}}\,.\]

## V Big Bang Stress-Energy Tensor

The behavior of stress at an isotropic singularity can be studied using the potentials \(I_{\mathbf{\tau}^{\alpha}}\), \(I_{\mathbf{\sigma}}\) and \(I_{\mathbf{\gamma}}\). By virtue of Eq. (12), these must be related: \[I_{\mathbf{\tau}^{\alpha}}=t^{\alpha-2}\Big{[}-\tfrac{\alpha(\alpha-1)}{4\mathbf{\gamma}}I_{\mathbf{\sigma}}^{2}X+t\big{(}\alpha I_{\mathbf{\sigma}}+\tfrac{\alpha(\alpha-1)}{2\mathbf{\gamma}}I_{\mathbf{\sigma}}\cdot I_{\mathbf{\gamma}}X\big{)}+t^{2}\big{(}(1-\alpha)I_{\mathbf{\gamma}}-\tfrac{\alpha(\alpha-1)}{4\mathbf{\gamma}}I_{\mathbf{\gamma}}^{2}X\big{)}\Big{]}\,. \tag{13}\]

### Trace of Stress-Energy Tensor

The square of a scale tractor measures scalar curvature/trace of stress (see Eq. (8)) so Eq. (13) implies \[\tfrac{1}{4}\tilde{T}_{a}{}^{a}-\Lambda=\tfrac{3}{2}t^{2(\alpha-1)}\big{[}-\alpha\left(\alpha+1\right)+2t\alpha\left(\alpha-1\right)H^{\rm ext}-\tfrac{t^{2}}{12}\left(\alpha-1\right)\left(\alpha-2\right)R^{g_{\mathbf{\gamma}}}+\mathcal{O}(t^{4})\big{]}\;. \tag{14}\] In the above \(H^{\rm ext}:=-I_{\mathbf{\sigma}}\cdot I_{\mathbf{\gamma}}\) canonically extends the mean curvature of \(\Sigma\hookrightarrow(M,g_{\mathbf{\gamma}})\). Given only the trace of stress and the causal structure for a spacetime with isotropic singularity, can we recover the physical metric \(\tilde{g}\)? Remarkably there exists a "solution generating algebra" that addresses this question: The operator \(I\) acting on \(\mathbf{\sigma}\) (of Section III) is both conformally invariant and second order. It is an example of a more general _Thomas \(D\)-operator_ mapping tractors to tractors [21; 25]. Given the data of a weight \(w^{\prime}\neq 0,-1\) density \(\mathbf{\mu}\), this yields a conformally invariant "d'Alembert-Robin" operator [48] \[\mathbf{L}_{\mathbf{\mu}}:=-w^{\prime}\mathbf{\mu}\left(\Box+wJ\right)+2\left(w+1\right)\nabla_{a}\mathbf{\mu}\nabla^{a}-\tfrac{w(w+1)}{w^{\prime}+1}\left(\Box\mathbf{\mu}+w^{\prime}J\mathbf{\mu}\right)\,, \tag{15}\] mapping weight \(w\) densities to weight \(w+w^{\prime}-2\) densities. When \(g_{\mathbf{\mu}}:=\mathbf{\mu}^{-\frac{2}{w^{\prime}}}\mathbf{g}\) is a metric, this gives a d'Alembert operator \(\Box^{g_{\mathbf{\mu}}}+\frac{w(w+w^{\prime}+2)}{6(w^{\prime}+1)}R^{g_{\mathbf{\mu}}}\).
Specializing \(\mathbf{\mu}\) to the singular Yamabe defining density \(\mathbf{\sigma}\), the operator \(\mathbf{L}_{\mathbf{\sigma}}\) yields a conformally invariant, Robin-type, boundary operator [51] \[\delta_{\mathrm{R}}\stackrel{{\Sigma}}{{=}}\nabla_{\hat{n}}-wH\,.\] The crucial point now is that, calling \(\mathcal{S}_{\mathbf{\mu}}:=\mathbf{L}_{\mathbf{\mu}}\mathbf{\mu}\), there is an \(\mathfrak{sl}(2)=\langle x,[x,y],y\rangle\) algebra generated by \[(x,y):=\left(\mathbf{\mu},-\tfrac{2(w^{\prime}+1)}{w^{\prime}\mathcal{S}_{\mathbf{\mu}}}\mathbf{L}_{\mathbf{\mu}}\right).\] In fact Eq. (8) now becomes \[\mathbf{L}_{\mathbf{\tau}^{\alpha}}\mathbf{\tau}^{\alpha}=\tfrac{1}{3}\tilde{T}_{a}{}^{a}-\tfrac{4}{3}\Lambda\,.\] The formal asymptotics for the traced Einstein equations can be solved iteratively using the solution generating \(\mathfrak{sl}(2)\) algebra, _cf._ [50].

### Conformal Fundamental Forms and Stress-Energy Tensor

We now analyze the trace-free part of the matter coupled Einstein system in terms of conformal embedding geometry. Eq. (7) implies that we must study the tractor gradient of Eq. (13) relating the various scale tractors. Acting with \(q^{*}\) and multiplying by \(t^{-\alpha}\) gives \[t^{-\alpha}q^{*}\mathbf{\nabla}I_{\mathbf{\tau}^{\alpha}}=\tfrac{\alpha(\alpha-1)}{t^{2}}\,\mathbf{\gamma}\,dt\,\mathbf{\odot}\,dt+\tfrac{\alpha}{t}\,q^{*}\mathbf{\nabla}I_{\mathbf{\sigma}}+(1-\alpha)\,q^{*}\mathbf{\nabla}I_{\mathbf{\gamma}}\,. \tag{16}\] Multiplying by an overall factor \(2/\mathbf{\gamma}\), each (trace-free) term above has a physical interpretation: The left hand side is the physical stress tensor \(\mathring{\tilde{T}}\). The first summand is the stress of a perfect fluid. The second is \(\alpha\) times the stress tensor of the singular Yamabe metric. It captures the embedding data. The last is \(1-\alpha\) times the stress tensor of the Big Bang metric \(g_{\mathbf{\gamma}}\). Hence we learn the asymptotics of \(\mathring{\tilde{T}}\): \[\mathring{\tilde{T}}=\frac{\alpha(\alpha-1)\hat{T}_{\mathrm{fluid}}}{t^{2}}+\frac{\alpha\hat{T}_{\mathrm{Bach}}}{t}-(\alpha-1)\,\hat{T}_{\mathrm{Big Bang}}\,, \tag{17}\] where \(\hat{T}_{\mathrm{fluid}}:=2dt\mathbf{\odot}dt\). Note that Eq. (9) implies \[\mathsf{P}q^{*}\mathbf{\nabla}I_{\mathbf{\sigma}}=\mathbf{\sigma}B\stackrel{{\Sigma}}{{=}}0\,, \tag{18}\] so the (transverse order 2) partially massless operator acting on \(\tfrac{\mathbf{\gamma}}{2}\hat{T}_{\mathrm{Bach}}\) returns \(\mathbf{\sigma}B\). As advertised, Equation (17) characterizes allowed stress at an isotropic singularity. As we next show, the coefficients of terms that diverge as \(t\to 0\) are local invariants of the boundary. This is reminiscent of a similar phenomenon in the AdS/CFT correspondence relating bulk geometry and boundary renormalization group flows [52; 53; 54]. We want to study the first four orders of the early time (\(t\sim 0\)) asymptotics of physical stress. Both the fluid and Big Bang terms in Eq. (17) are completely determined to this order so we focus on the Bach term. Conformally invariant transverse jets of \(q^{*}\mathbf{\nabla}I_{\mathbf{\sigma}}\) generate the second and third but not fourth conformal fundamental forms (see Eq. (18)). There is a notion of a fifth fundamental form, _viz_ the projected Bach tensor \(B^{\top}|_{\Sigma}\).
However the Bach-to-stress Equation (10) determines the conformal structure \(\mathbf{g}\) given initial data of the first through fourth fundamental forms, so we focus on these. First note the second fundamental form here obeys \[\mathring{K}=q^{*}\mathbf{\nabla}I_{\mathbf{\sigma}}|_{\Sigma}=\tfrac{\mathbf{\gamma}}{2}\,\hat{T}_{\mathrm{Bach}}\big{|}_{\Sigma}\,.\] To study the next order term, we use the tractor analog of the d'Alembert-Robin operator \(\mathbf{L}_{\mathbf{\sigma}}\) of Eq. (15) to make a transverse order 1 operator [44], again called \(\delta_{\mathrm{R}}\), \[\hat{\otimes}^{2}T^{*}M[w]\ni\mathring{x}_{ab}\stackrel{{\delta_{\mathrm{R}}}}{{\longmapsto}}\left[(\nabla_{\hat{n}}+(2-w)H)\mathring{x}_{ab}+\tfrac{2}{w-2}\bar{\nabla}_{(a}\mathring{x}^{\top}_{b)\hat{n}}\right]^{\top,\circ}\in\hat{\otimes}^{2}T^{*}\Sigma[w-1]\,.\] The trace-free Fialkow tensor is then \[\mathring{F}=\delta_{\mathrm{R}}q^{*}\mathbf{\nabla}I_{\mathbf{\sigma}}=\delta_{\mathrm{R}}\big{(}\tfrac{\mathbf{\gamma}}{2}\,\hat{T}_{\mathrm{Bach}}\big{)}\,.\] Because we cannot extract \(\mathring{L}\) from a conformally invariant second normal derivative of \(q^{*}\mathbf{\nabla}I_{\mathbf{\sigma}}\) to relate the fourth fundamental form to stress, we instead consider one normal derivative of the Big Bang stress \(\hat{T}_{\mathrm{Big Bang}}\). For this we employ the identity [55] \[\delta_{\mathrm{R}}\,q^{*}\mathbf{\nabla}I_{\mathbf{\gamma}}=\mathbf{\gamma}\mathring{L}+\delta^{(2)}\mathbf{\gamma}\,.\] This yields the last line of Figure 1 summarizing the relations between geometry and the stress-energy tensor.

## VI Example: Poincaré-Einstein Conformal Cyclic Cosmology

Models where the present universe is seeded by pre-Big Bang data [57; 58; 59; 60] dovetail with the above results. One approach [61] employs an asymptotically dS pre-Big Bang metric \(\hat{g}\) and a physical metric \(\tilde{g}\) with isotropic singularity: \[\hat{g}=\frac{-d\hat{t}^{2}+\hat{h}(\hat{t})}{\hat{t}^{2}}\,,\quad\tilde{g}=\check{t}^{-2\alpha}\big{(}-d\check{t}^{2}+\check{h}(\check{t})\big{)}\,.\] The conformal infinity/initial singularity hypersurface \(\Sigma\) is at \(\hat{t}=0=\check{t}\). The pre-Big Bang spatial metric \(\hat{h}\) is defined by a Fefferman-Graham-type expansion [62] about the conformal infinity of \(\hat{g}\), obtained by solving Einstein's equations with non-vanishing stress for suitable late time \(\hat{t}\to 0_{-}\) matter content. Conformal fundamental forms are covariant analogs of Fefferman-Graham expansion coefficients [44] and are determined by \(\hat{h}(\hat{t})\). They can be matched [45] to those of the Big Bang model and thus its stress. Schematically, \[\hat{T}\mapsto\text{conformal fundamental forms}\mapsto\tilde{T}\,.\] Just as for stellar models where interior and exterior solutions are matched using fundamental forms [63; 64; 65], cyclic cosmological matching is via conformal fundamental forms.

###### Acknowledgements.

We thank Pawel Nurowski for useful discussions. A.R.G. and A.W. acknowledge support from the Royal Society of New Zealand via Marsden Grant 19-UOA-008. J.K. acknowledges funding received from the Norwegian Financial Mechanism 2014-2021, project registration number UMO-2019/34/H/ST1/00636. A.W. was also supported by Simons Foundation Collaboration Grant for Mathematicians ID 686131. J.K. and A.W. thank the University of Auckland for warm hospitality.
2310.01426
REMEDI: REinforcement learning-driven adaptive MEtabolism modeling of primary sclerosing cholangitis DIsease progression
Primary sclerosing cholangitis (PSC) is a rare disease wherein altered bile acid metabolism contributes to sustained liver injury. This paper introduces REMEDI, a framework that captures bile acid dynamics and the body's adaptive response during PSC progression that can assist in exploring treatments. REMEDI merges a differential equation (DE)-based mechanistic model that describes bile acid metabolism with reinforcement learning (RL) to emulate the body's adaptations to PSC continuously. An objective of adaptation is to maintain homeostasis by regulating enzymes involved in bile acid metabolism. These enzymes correspond to the parameters of the DEs. REMEDI leverages RL to approximate adaptations in PSC, treating homeostasis as a reward signal and the adjustment of the DE parameters as the corresponding actions. On real-world data, REMEDI generated bile acid dynamics and parameter adjustments consistent with published findings. Also, our results support discussions in the literature that early administration of drugs that suppress bile acid synthesis may be effective in PSC treatment.
Chang Hu, Krishnakant V. Saboo, Ahmad H. Ali, Brian D. Juran, Konstantinos N. Lazaridis, Ravishankar K. Iyer
2023-10-02T21:46:01Z
http://arxiv.org/abs/2310.01426v1
REMEDI: REinforcement learning-driven adaptive MEtabolism modeling of primary sclerosing cholangitis DIsease progression ###### Abstract Primary sclerosing cholangitis (PSC) is a rare disease wherein altered bile acid metabolism contributes to sustained liver injury. This paper introduces REMEDI, a framework that captures bile acid dynamics and the body's adaptive response during PSC progression that can assist in exploring treatments. REMEDI merges a differential equation (DE)-based mechanistic model that describes bile acid metabolism with reinforcement learning (RL) to emulate the body's adaptations to PSC continuously. An objective of adaptation is to maintain homeostasis by regulating enzymes involved in bile acid metabolism. These enzymes correspond to the parameters of the DEs. REMEDI leverages RL to approximate adaptations in PSC, treating homeostasis as a reward signal and the adjustment of the DE parameters as the corresponding actions. On real-world data, REMEDI generated bile acid dynamics and parameter adjustments consistent with published findings. Also, our results support discussions in the literature that early administration of drugs that suppress bile acid synthesis may be effective in PSC treatment. Reinforcement learning, Disease progression, Differential equation, Adaptation ## 1 Introduction Primary sclerosing cholangitis (PSC) is a rare, complex liver disease in which altered bile acid metabolism contributes to liver injury (Bertolini et al., 2022). There are no effective medications, and liver transplantation is often necessary (Vesterhus and Karlsen, 2020). A critical hurdle in exploring therapeutics is the lack of a model capturing the relevant disease dynamics, the body's response to the disease, and the effects of treatments. We aim to develop a machine learning (ML) based PSC progression model with a focus on bile acid metabolism dynamics and its bidirectional interactions with the body over time. Such a model could facilitate treatment evaluations and accelerate drug discovery or repurposing. Examples of computational models guiding interventions already exist for prostate cancer (Zhang et al., 2017) and HIV (Xiao et al., 2013). There were three main challenges in developing the proposed progression model: (1) the absence of a bile acid metabolism model during PSC; (2) limited insight into the body's adaptive response to the disease; and (3) the lack of data from affected organs and a dearth of longitudinal data. (1) While prior studies have proposed differential equation (DE)-based bile acid metabolism models for healthy individuals (Sips et al., 2018), they do not capture bile duct obstruction, the pathophysiological hallmark of PSC (Chapman et al., 2010), and its impact on bile acid metabolism. (2) Over the course of the disease, the body responds to changing bile acid levels by continually adapting and altering bile acid metabolism, which plays a central role in keeping PSC patients asymptomatic for many years (Jansen et al., 2017). However, the specific adaptations during PSC progression are not well understood (Milkiewicz et al., 2016), making them difficult to model. (3) Despite the liver and the bile ducts being central to PSC, direct bile acid measurements in these organs are infeasible. We are limited to bile acid data in the blood. Moreover, these data are cross-sectional, i.e., taken only at a single time point, further complicating the modeling of longitudinal disease progression. 
We introduce REMEDI to model PSC progression by extending existing bile acid metabolism DEs with PSC pathophysiology and incorporating a reinforcement learning (RL) agent to approximate the body's adaptations. REMEDI addresses the above challenges with the following key innovations: (1) We developed a reduced-order bile acid metabolism model to capture dynamics pertinent to PSC, based on an existing DE model for healthy individuals. We extended the reduced model with clinical domain knowledge to capture bile duct obstruction. (2) We used RL to emulate the body's adaptive response to the disease. We assume the body is a smart agent that, through evolution, has learned to adapt itself to maintain homeostasis of critical metabolic events (Giordano, 2013). In PSC, the body regulates bile acid metabolism enzymes to maintain homeostasis. These enzymes are naturally represented as parameters in the reduced-order bile acid DE model. During disease progression, the RL agent constantly updates these parameters to maximize a reward function that promotes homeostasis and the generation of close-to-reality bile acid profiles. (3) Because PSC patients can have a prolonged, partially successful adaptation period (Jansen et al., 2017), we assume the real-world cross-sectional data were collected during this "stable" period, and we encourage the RL agent to generate stabilized trajectories that are close to the data. The main assumption of REMEDI is that the goal of homeostasis drives biological actions, achieved by smart regulation of body enzymes (Savir et al., 2017; Billman, 2020). Thus, we treat adaptation as a sequential optimization problem with "homeostasis" as the objective function and the sequential regulation of enzymes as the optimization arguments. These enzymes are represented as parameters in the DEs. RL offers a framework to solve this sequential optimization problem. In PSC, the bile acid DEs constitute the environment, "homeostasis" is the reward function, and the modulation of DE parameters is the actions. Therefore, RL, in combination with the DEs, approximates the body's adaptation to disease and enables dynamic modeling of PSC progression. We validated REMEDI against findings from the literature and real-world clinical data. REMEDI produced biologically realistic results. (1) The reduced-order bile acid model captured the relevant dynamics of a more detailed model from the literature with drastically reduced computational cost. (2) Incorporation of bile duct obstruction mimicked PSC pathophysiology observed in clinical and animal studies (Jansen et al., 2017; Boyd et al., 1966; Popper and Schaffner, 1970). (3) On real-world PSC data, REMEDI generated bile acid dynamics and parameter adaptations consistent with the literature (Jansen et al., 2017). (4) We evaluated _in silico_ the effects of two PSC drugs in clinical trials (Vesterhus and Karlsen, 2020) and found REMEDI has the potential to explain the drugs' biologically observed behaviors (Boyer, 2007; Caballero-Camino et al., 2023). Our contributions include: (1) developing the first mathematical model of bile acid metabolism in PSC, based on clinical domain knowledge, and (2) providing an _in silico_ testbed to evaluate the effects of bile acid modulating therapies. Our approach can be leveraged to determine optimal interventions for PSC in combination with comprehensive clinical data. 
In principle, REMEDI's approach of using RL to estimate time-varying DE parameters can be extended to other diseases where DE-based models with time-varying parameters have been proposed, such as HIV (Liang et al., 2010). Moreover, the innovative strategy of REMEDI that leverages RL to emulate adaptive behaviors holds promise for modeling a variety of homeostatic biological systems.

## 2 REMEDI Framework

Our approach has three parts (Figure 1): (1) a reduced-order bile acid metabolism model for healthy individuals (Section 2.1), (2) a domain knowledge-based extension to depict the pathophysiology of PSC (Section 2.2), and (3) RL that captures the body's adaptation to the pathophysiology (Section 2.3).

Figure 1: Overview of the reinforcement learning formulation of bile acid metabolism adaptation in PSC.

### Model of healthy bile acid metabolism

Several species of bile acids circulate in multiple organs of the human body in a process called _entero-hepatic circulation_ (Hofmann, 2009). In the liver (li), cholesterol is metabolized into the unconjugated primary bile acids cholic acid and chenodeoxycholic acid (uCA and uCDCA), which are transformed into conjugated forms (cCA and cCDCA) and secreted into the bile ducts and gallbladder (bd). These bile acids are then released into the intestines, where bacteria in the ileum (il) and colon (co) convert them into secondary bile acids (SBA). Active and passive uptake reabsorb intestinal bile acids back to the liver, with a small portion escaping into plasma (pl). Unreabsorbed bile acids are excreted with feces (fe). We adopt the approach of Sips et al. (2018) and model bile acid metabolism with a series of DEs. Based on their relevance to PSC, we merge several bile acid species and do not distinguish among certain organ segments (see Appendix A), resulting in a reduced-order model of bile acid \(BA\in\) {cCA, cCDCA, cSBA, uCA, uCDCA, uSBA} in organ \(OG\in\) {li, bd, il, co, pl, fe}. For each BA in each OG, we (1) model its influxes/outfluxes from relevant biochemical and physical processes and (2) combine the fluxes into one DE to describe how the BA level in OG varies with time. See Appendix A for all processes being modeled and Appendix B for all corresponding DEs. As an example, Figure 2 shows the fluxes that affect the liver cCA level: (1) \(r_{\rm syn}^{\rm cCA}\) is de novo synthesis in liver cells; (2) \(r_{\rm li\;from\;gut}^{\rm cCA}\) is active and passive uptake from the gut; (3) \(r_{\rm li\;from\;pl}^{\rm cCA}\) is influx from systemic blood circulation; and (4) \(r_{\rm li\;to\;bd}^{\rm cCA}\) is outflux to the gallbladder and bile ducts.

Figure 2: Fluxes affecting liver cCA level.

Hence, the liver cCA level \(x_{\rm li}^{\rm cCA}\) varies with time (\(\mu\)mol/min) according to: \[\frac{dx_{\rm li}^{\rm cCA}}{dt}=r_{\rm syn}^{\rm cCA}+r_{\rm li\;from\;gut}^{\rm cCA}+r_{\rm li\;from\;pl}^{\rm cCA}-r_{\rm li\;to\;bd}^{\rm cCA} \tag{1}\] \[r_{\rm syn}^{\rm cCA}=p[{\rm synthesis}]\cdot p[{\rm syn\_frac\_CA}] \tag{2}\] \[r_{\rm li\;to\;bd}^{\rm cCA}=p[{\rm li\_to\_bd\_freq}]\cdot x_{\rm li}^{\rm cCA} \tag{3}\] We model all fluxes with zero- or first-order dynamics. For example, we model the cCA synthesis flux \(r_{\rm syn}^{\rm cCA}\) as the product of two constant parameters, making \(r_{\rm syn}^{\rm cCA}\) also a constant (zero-order). Here, \(p[{\rm synthesis}]\) denotes the total bile acid synthesis rate, and \(p[{\rm syn\_frac\_CA}]\) describes the fraction of cholic acid among the newly synthesized bile acids. The liver-to-bile duct transit rate of cCA, \(r_{\rm li\;to\;bd}^{\rm cCA}\), exemplifies first-order dynamics: \(x_{\rm li}^{\rm cCA}\) represents the current liver cCA level, and the constant parameter \(p[{\rm li\_to\_bd\_freq}]\) characterizes the first-order dynamics.
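For illustration, here is a minimal Python sketch of the liver-cCA balance in Eqs. (1-3). The gut and plasma influxes are written as generic first-order terms; `gut_uptake_freq` and `pl_to_li_freq` are our hypothetical placeholder names, not parameters from the paper.

```python
def liver_cCA_rhs(x, p):
    # x maps (bile acid, organ) pairs to current levels; p holds the
    # model parameters.
    r_syn = p["synthesis"] * p["syn_frac_CA"]            # Eq. (2), zero-order
    r_li_to_bd = p["li_to_bd_freq"] * x["cCA", "li"]     # Eq. (3), first-order
    # Placeholder first-order sketches of the remaining influxes.
    r_li_from_gut = p["gut_uptake_freq"] * x["cCA", "il"]
    r_li_from_pl = p["pl_to_li_freq"] * x["cCA", "pl"]
    return r_syn + r_li_from_gut + r_li_from_pl - r_li_to_bd  # Eq. (1)
```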
### Introducing PSC pathophysiology

Drawing on clinical domain knowledge, we extend the reduced-order bile acid metabolism model with PSC pathophysiology (Figure 3) by (1) implementing an obstruction of bile flow in the bile ducts and (2) introducing bile acid backflow to the liver following excessive bile acid buildup in the bile ducts.

Figure 3: Idealized modeling of PSC pathophysiology.

Obstructed bile flow in the bile ducts: In PSC, chronic inflammation causes scarring and narrowing of the bile ducts (Karlsen et al., 2017), impeding the normal bile flow into the small intestine (Figure 3(b)). The extent of the obstruction determines the reduction of bile flow. We introduce a parameter \(p[\text{bd\_max\_flow}]\) to denote the maximum amount of bile acids allowed to flow through, in proportion to the degree of obstruction. Consequently, if \(r_{\text{bd to il}}\) calculated from its first-order dynamics exceeds \(p[\text{bd\_max\_flow}]\), we cap it at \(p[\text{bd\_max\_flow}]\): \[r_{\text{bd to il}}=\min(r_{\text{bd to il}},p[\text{bd\_max\_flow}]) \tag{4}\] Bile acid backflow to the liver: The bile ducts (and gallbladder) have a limited storage capacity for bile acids. In PSC, bile duct obstruction can result in bile acid buildup exceeding the duct's capacity, leading to regurgitation and backflow of excessive bile acids to the liver (Popper and Schaffner, 1970), as depicted in Figure 3(c). To represent this, we introduce a parameter \(p[\text{bd\_max\_ba}]\) denoting the bile acid holding capacity of the bile duct, and define \(r_{\text{bd to li}}\) as the excessive bile acids backflowing from the bile duct to the liver: \[r_{\text{bd to li}}=\max(x_{\text{bd}}+r_{\text{bd from li}}-r_{\text{bd to il}}-p[\text{bd\_max\_ba}],0) \tag{5}\] Here \(x_{\text{bd}}\) is the current bile acid level in the bile duct, \(r_{\text{bd from li}}\) is the influx from the liver, and \(r_{\text{bd to il}}\) is the outflux to the ileum. See Appendix B for details; a direct code transcription of Eqs. (4-5) is shown below.
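A direct transcription of the two PSC modifications (Eqs. 4-5) into code, assuming the flux values for the current step have already been computed from their first-order dynamics (the function name is our own):

```python
def psc_bile_duct_fluxes(x_bd, r_bd_from_li, r_bd_to_il, p):
    # Eq. (4): obstruction caps the bile-duct-to-ileum flow.
    r_bd_to_il = min(r_bd_to_il, p["bd_max_flow"])
    # Eq. (5): excess beyond the duct's holding capacity backflows
    # to the liver.
    r_bd_to_li = max(x_bd + r_bd_from_li - r_bd_to_il - p["bd_max_ba"], 0.0)
    return r_bd_to_il, r_bd_to_li
```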
At every RL step, for each parameter in \(\mathbf{p}_{adapt}\), any of three actions can be taken: up-regulation or down-regulation (with a prespecified fold change or absolute difference), or remaining unchanged. \(p[{\rm synthesis}]\) and \(p[{\rm syn\_frac\_CA}]\) were introduced in Section 2.1. \(p[{\rm hep\_extract\_ratio\_conj\_tri}]\) denotes the fraction of reabsorbed bile acid extracted by the liver (in contrast to going into systemic blood) for cCA, and \(p[{\rm hep\_extract\_ratio\_conj\_di}]\) for cCDCA and cSBA. \(p[{\rm max\_asbt\_rate}]\) is the rate parameter in the first-order dynamics of active uptake.

Environment: The RL environment simulates bile acid dynamics following the introduction of PSC pathophysiology to healthy conditions. We use the reduced-order DEs extended with bile duct obstruction as the simulator. At every RL step, the RL agent modifies \(\mathbf{p}_{adapt}\) in the DEs and updates the state vector. The environment takes a step forward via numerical integration of the DEs for a fixed duration. The resultant bile acid levels update \(\mathbf{x}_{bile\ acids}\) in the state vector. A reward is computed in the environment step and sent to the RL agent to determine the next action. The simulation terminates when a prespecified time period is reached or when any state variable exceeds physiological ranges.

Reward: Our reward function comprises several terms to guide the RL agent towards meaningful adaptations that (1) sustain physiological functions (including minimizing liver toxicity, facilitating fat digestion, and maintaining cholesterol elimination), (2) resemble real-world patient data, and (3) conform with ranges and values reported in the literature.

Minimizing toxicity. One of the main goals of adaptation is to limit liver exposure (\(LE\)) to toxic bile acids (Boyer, 2007). We set a negative reward for excessive bile acid exposure in the liver to minimize liver toxicity. We calculate \(LE\) as the cumulative liver bile acid level over one day. If current \(LE\) exceeds \(LE\) under healthy conditions, we set the negative reward to be the normalized excessive exposure.

\[-\max\left(\frac{\text{current }LE-\text{healthy }LE}{\text{maximum possible }LE},0\right) \tag{6}\]

Facilitating digestion. Adaptation requires preserving digestive functions under disease conditions (Tappenden, 2014). Because bile acids in the ileum are necessary for fat digestion, we promote ileum access (\(IA\)) to bile acids with a reward term defined as the ratio of current \(IA\) to \(IA\) under healthy conditions. \(IA\) is calculated as the cumulative ileum bile acid level over one day. The reward is capped at 1, offering no additional benefit beyond healthy levels.

\[\min\left(\frac{\text{current }IA}{\text{healthy }IA},1\right) \tag{7}\]

Maintaining cholesterol elimination. Synthesizing bile acids from cholesterol is one of the main pathways for eliminating cholesterol from the body (Wang et al., 2018). Insufficient elimination of cholesterol increases the risks of multiple diseases. We reward sufficient cholesterol elimination, represented by the ratio of the current bile acid synthesis (\(BAS\)) rate to the \(BAS\) rate under healthy conditions. The reward is capped at 1, offering no additional benefit from excessive cholesterol elimination.

\[\min\left(\frac{\text{current }BAS\text{ rate}}{\text{healthy }BAS\text{ rate}},1\right) \tag{8}\]
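As a minimal sketch, the three physiological terms of Equations (6)-(8) can be rendered as below. The healthy baselines and the maximum possible exposure are illustrative constants, and combining the terms by simple summation is an assumption here, not necessarily the exact weighting used in our experiments.

```python
# Sketch of the physiological reward terms, Eqs. (6)-(8).
# Baselines are illustrative constants; summation of terms is an assumption.
HEALTHY_LE, MAX_LE = 100.0, 1000.0  # healthy / max possible liver exposure, assumed
HEALTHY_IA = 50.0                   # healthy ileum access, assumed
HEALTHY_BAS = 1.2                   # healthy bile acid synthesis rate, assumed

def physiological_reward(current_le, current_ia, current_bas):
    r_toxicity = -max((current_le - HEALTHY_LE) / MAX_LE, 0.0)   # Eq. (6)
    r_digestion = min(current_ia / HEALTHY_IA, 1.0)              # Eq. (7)
    r_cholesterol = min(current_bas / HEALTHY_BAS, 1.0)          # Eq. (8)
    return r_toxicity + r_digestion + r_cholesterol
```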
Resembling real-world patient data. The way to adapt and sustain physiological functions might not be unique. To obtain an RL agent that mirrors adaptation in humans, we set a reward term to promote RL solutions that resemble real-world plasma data from PSC patients. We select a representative patient from our cohort (see Section 3) and penalize the difference between the patient's data \(BA_{data}\) and the respective RL states \(BA_{RL}\). This difference is divided by the corresponding bile acid's standard deviation. The negative sum of the squared weighted difference is multiplied by a coefficient \(\lambda\in[0,1]\) to match other reward terms' range.

\[-\lambda\min\left(\Sigma_{BA}\left(\frac{BA_{RL}-BA_{data}}{\text{std }BA_{data}}\right)^{2},\text{CAP}\right) \tag{9}\]

Conforming with values from the literature. We design additional reward terms to ensure the RL agent generates physiologically plausible bile acid levels while keeping the regulatable parameters close to their values under healthy conditions. See Appendix C for further details.

### Implementation of REMEDI

Degree of pathophysiology: To determine the extent of bile duct obstruction, we ran a grid search for the parameter \(p\)[bd_max_flow] across values of 1, 2, 3, 5, 10, 20, and 40 \(\mu\)mol/min, simulating scenarios from near-complete to partial obstruction. Corresponding RL agents were trained independently.

RL timeframes: At each step, the RL agent modified \(\mathbf{p}_{adapt}\) to simulate the next 24 hours. Considering adaptation unfolds over days to weeks, this daily cycle offered sufficient opportunities for meaningful modulations. We restricted the simulation to a maximum of 240 days to adequately encapsulate the initial adaptation phase (Georgiev et al., 2008).

Adaptation amplitudes: We chose relatively large parameter adaptation amplitudes to match the day-long RL steps. For parameters describing rates, i.e., \(p\)[synthesis] and \(p\)[max_asbt_rate], we simulated up- or down-regulation with a 25% higher or lower fold change; for parameters describing fractions, i.e., \(p\)[syn_frac_CA], \(p\)[hep_extract_ratio_conj_tri], and \(p\)[hep_extract_ratio_conj_di], we applied a 10% addition or subtraction. We also prespecified physiologically plausible ranges for each parameter, and up- or down-regulated values exceeding the ranges were clipped. The RL agent generated stochastic actions during training and evaluation.

Figure 3: Idealized modeling of PSC pathophysiology.

Real-world patient data: Our dataset includes plasma bile acid measurements of 222 PSC patients from our hospital partner. We selected five representative patients whose measurements minimized the sum of the distances to all patients. Patient-specific models were trained using their respective data, resulting in five models for the five patients. Since PSC is chronic, we assume patient data were collected during the stabilized adaptation phase. Hence, the sum of squared errors to encourage resemblance to patient data was only introduced after the initial four weeks.

RL training: We trained the RL agent for 4,000,000 environment steps using the model-free Proximal Policy Optimization (PPO) algorithm implemented in the Python package Stable Baselines3. More details can be found in Appendix C.

## 3 Results

### Healthy bile acid dynamics

We derived the DE parameters and initial bile acid values from the well-calibrated model by Sips et al. (2018). We simulated our DEs over 60 days, reaching a steady state at which bile acid dynamics repeated every 24 hours. The Runge-Kutta 45 numerical solver was used to integrate the DEs in our experiments.
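For concreteness, a sketch of such a simulation using SciPy's RK45 solver is shown below; `bile_acid_rhs` is a stand-in for the full 30-variable right-hand side of Appendix B, and its dynamics here are purely illustrative.

```python
# Sketch: integrating the reduced-order DEs with a Runge-Kutta 45 solver.
import numpy as np
from scipy.integrate import solve_ivp

def bile_acid_rhs(t, x):
    """Stand-in for the full 30-variable right-hand side (Appendix B)."""
    return -0.01 * x  # illustrative first-order decay only

x0 = np.ones(30)                 # initial bile acid levels, assumed
t_span = (0.0, 60 * 24 * 60.0)   # 60 days, in minutes
sol = solve_ivp(bile_acid_rhs, t_span, x0, method="RK45")
print(sol.y.shape)               # states at the solver's internal time points
```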
We validated our reduced-order DEs against the original DEs proposed by Sips et al. (2018) (see Appendix A for details). The reduced-order DEs completed a 60-day simulation in 2.99 s, over ten times faster than the 32.5 s of the original DEs, while retaining similar bile acid trends (Appendix A Figure 7). The steady-state (day 60) bile acid levels of the reduced-order DEs were also consistent with real-world plasma data of 302 healthy individuals from our hospital partner (Appendix A Figure 8 and Figure 9).

### PSC bile acid dynamics without RL

For this analysis, we employed the reduced-order DEs extended with PSC pathophysiology, excluding RL adaptation. The parameter representing the degree of bile duct obstruction, \(p\)[bd_max_flow], was set to 3 \(\mu\)mol/min (in contrast to 75 \(\mu\)mol/min when unobstructed). Other DE parameters and bile acid levels were initialized with their steady-state values under healthy conditions. We ran the simulation for 60 days, which, as shown by animal models, was enough time to establish adaptation (Georgiev et al., 2008).

Figure 4(a) shows the 60-day cCDCA bile acid dynamics following the introduction of PSC pathophysiology (see Appendix D for dynamics of other bile acids). We observed a 230% surge in bile duct bile acid levels, a direct result of impaired bile flow to the ileum and the subsequent accumulation in the bile ducts. A corresponding decrease in ileum bile acids was also observed. Around day five, bile duct bile acids reached a saturation point, causing excess bile acids to flow back into the liver. These observations align with the biologically expected changes (Boyd et al., 1966; Popper and Schaffner, 1970). However, once bile backflow starts, it continues at a near-constant rate throughout the remaining simulation, generating unrealistically high liver bile acid levels. The cCDCA level rose to 27,148 \(\mu\)mol by day 60, in stark contrast to the 45 \(\mu\)mol under healthy conditions. The simulation also indicated decreased conjugated bile acid levels in the plasma, conflicting with data showing elevated levels in PSC patients. These discrepancies arise from the flawed assumption that bile acid metabolism parameters remain unchanged, neglecting the dynamic adaptation occurring in PSC.

### PSC bile acid dynamics with REMEDI

We tested REMEDI upon introducing PSC pathophysiology to healthy conditions. We evaluated a range of \(p\)[bd_max_flow] values and chose the case with \(p\)[bd_max_flow] = 3 \(\mu\)mol/min for further analysis, as it yielded plasma bile acid levels closest to real data and therefore was more likely to reflect the real-world disease conditions (see Appendix C). Upon introducing PSC pathophysiology in Patient 1's model, we observed an initial surge of bile acid levels in the bile ducts and a decrease in the downstream intestines (Figure 4(b)), as observed in animal studies of PSC (Boyd et al., 1966). Around day five, liver bile acids started to accumulate following saturation in the bile ducts. Importantly, REMEDI was able to adjust and stabilize the liver bile acid levels by week four, avoiding the unrealistic continuous rise seen without RL adaptation (Figure 4(a)). Furthermore, REMEDI showed an increase in conjugated bile acids in the plasma and a decrease in unconjugated forms (Appendix D Figure 13), better aligning with the real-world PSC patient data than without RL adaptation (Figure 4(c)). Similar results were seen for models trained on Patients 2-5 (Appendix E).
To quantitatively assess the improvement from RL, we compared Patient 1's measurements with bile acid predictions from models trained on data from four other patients. The average error with RL was smaller than the error without RL (Table 1). We adopted this approach for validation as we only had one cross-sectional measurement for each patient. Overall, REMEDI with RL-based adaptation captured key disease dynamics and generated a more faithful representation of real-world data.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Model/Plasma** & **cCA** & **cCDCA** & **cSBA** & **uCA** & **uCDCA** & **uSBA** \\ \hline REMEDI without RL & -12.01 & -10.37 & -2.21 & -0.08 & -0.18 & -0.35 \\ REMEDI & -9.91 & -6.68 & 0.25 & 0.08 & 0.10 & 0.14 \\ \hline \hline \end{tabular} \end{table} Table 1: Error between the model and PSC Patient 1’s data (\(\mu\)mol, day 50 – 60 averaged).

### Trajectories of adaptive parameters

Analyzing parameter adjustments by REMEDI and linking them to the underlying enzymes can shed light on possible adaptive mechanisms driving bile acid metabolism in PSC. Notably, bile acid synthesis went through a sharp decline following the bile duct obstruction and remained low throughout the simulation period (Figure 5), in line with the known down-regulation of the bile acid synthesis enzyme CYP7A1 in cholestasis (a condition in which the bile flow from the liver stops or slows) (Jansen et al., 2017). The model also predicted fluctuations in the CA:CDCA ratio among newly synthesized bile acids (\(p\)[syn_frac_CA]). The CA:CDCA ratio depends on the activities of the classical and alternative bile acid synthesis pathways, which are regulated by enzymes such as CYP8B1 (Li and Chiang, 2012). How the activities of these enzymes change in PSC is unclear, but our real-world PSC patient data also showed an altered plasma CA:CDCA ratio, warranting further study of potential enzymatic shifts.

Figure 4: Simulated 60-day (panels (a)–(b)) and 240-day ((c)–(e)) bile acid dynamics after introduction of PSC pathophysiology (\(p\)[bd_max_flow] = 3 \(\mu\)mol/min) to healthy conditions at day 0. (a) Without adaptation; (b)–(c) with adaptive DE parameters obtained from a trained RL agent; (d) with complete reduction of active uptake; (e) with 50% reduction of active uptake. Results shown here were derived from the model trained with Patient 1’s data. Only cCDCA values at 8 AM of each day are plotted. See Appendix D and Appendix E for the complete dynamics from all patients.

We observed reduced liver extraction of reabsorbed intestinal bile acids, with more bile acids entering systemic blood (\(p\)[hep_extract_ratio_conj_tri] and \(p\)[hep_extract_ratio_conj_di]), potentially explaining the elevated plasma bile acid levels commonly seen in PSC patients. This trend aligns with cholestasis-related down-regulation of NTCP, a key enzyme in liver bile acid extraction (Donner et al., 2007). Finally, there was a temporary drop in the ileum bile acid active uptake efficiency in the initial adaptation phase (\(p\)[max_asbt_rate]), corroborating the down-regulation of the bile acid uptake mediating enzyme ASBT in animal studies (Hruz et al., 2006). Trends in Patients 2-5 were similar (Appendix E).
### In silico evaluation of bile acid therapies

Using the trained REMEDI model, we assessed two types of experimental therapies targeting bile acid metabolism _in silico_ (Boyer, 2007): (1) suppression of bile acid synthesis, which we studied through analyzing bile acid synthesis rate (\(p\)[synthesis]) adjustments during adaptation, and (2) reduction of bile acid active uptake, which we simulated by decreasing efficiency of intestinal active uptake (\(p\)[max_asbt_rate]). Our analysis suggested partial reduction of bile acid active uptake might protect the liver, highlighting the value of REMEDI in evaluating therapies _in silico_.

**Suppressing synthesis:** Drugs suppressing the bile acid synthesis enzyme CYP7A1 are in clinical trials for PSC and other cholestatic conditions (Chiang and Ferrell, 2020). Interestingly, REMEDI suggested the liver may naturally inhibit bile acid synthesis (\(p\)[synthesis]) as an adaptive response (Figure 4(c)), implying the therapeutic window for such drugs may be limited to the early stages of the disease, before endogenous adaptations establish.

**Reducing active uptake:** ASBT, a central enzyme in ileum bile acid active uptake, has been the target of multiple drugs (Vesterhus and Karlsen, 2020). We simulated two strategies targeting this mechanism by adjusting \(p\)[max_asbt_rate]: (1) complete reduction, down-regulating \(p\)[max_asbt_rate] until it reaches its physiological lower bound, and (2) 50% reduction, down-regulating \(p\)[max_asbt_rate] until it reaches or falls below half of the lower bound.

_Complete reduction:_ REMEDI predicted that full inhibition of active uptake would result in physiologically implausible spikes in plasma and intestine bile acids (Figure 4(d)), which led to premature termination of the simulation, implying complete reduction is likely an unrealistic strategy.

_50% reduction:_ Partial reduction limited the occurrence of liver bile acid spikes (Figure 4(e)), potentially protecting the liver from excessive bile acid accumulation, in line with animal studies showing a liver-protection effect from an ASBT inhibitor (Caballero-Camino et al., 2023). In contrast to complete reduction, 50% reduction yielded plasma and intestine bile acid levels within realistic ranges, making partial reduction a more viable strategy.

## 4 Related Works

Several studies have proposed DE-based mechanistic models of bile acid metabolism (Hofmann et al., 1983; Molino et al., 1986; Sips et al., 2018; Baier et al., 2019; Voronova et al., 2020), albeit for healthy or non-PSC conditions. Moreover, the adaptive responses of the body in pathological conditions were usually not considered. A notable exception is Voronova et al. (2020), which explicitly modeled the FXR-FGF19 bile acid self-regulation pathway (Eloranta and Kullak-Ublick, 2008). Our approach is unique for using RL to simultaneously consider multiple regulation pathways without explicitly modeling their mechanisms.

A previous study combined DEs with RL to model Alzheimer's disease progression (Saboo et al., 2021). They used RL to estimate DE variable values that maximize cognition and minimize energetic cost, whereas we estimate DE parameters representing enzyme levels that promote homeostasis.

Figure 5: DE parameter adaptations in REMEDI trained on Patient 1’s data.

## 5 Limitations

First, we made several assumptions due to the current limited understanding of PSC, including focusing on bile acid metabolism, modeling bile duct obstruction as a sudden blockage, and in setting adaptation goals.
Refining these assumptions as new insights into PSC emerge will lead to a more realistic model. Second, our bile acid trajectory prediction was only compared to a single time point because we only had access to cross-sectional data. Future collection of longitudinal data and data from multiple organs will be crucial for further validating REMEDI.

## 6 Conclusion

We developed REMEDI, a novel model of PSC progression that combines bile acid metabolism DEs with an RL agent capturing the body's adaptation. REMEDI captured key bile acid trends in disease progression consistent with the literature and predicted therapy responses _in silico_.
2302.01637
Khayyam-Pascal Determinantal Arrays, Star of David Rule and Log-Concavity
In this paper we develop a new geometric method to answer the log-concavity questions related to a nice class of combinatorial sequences arising from the Pascal triangle.
Hossein Teimoori Faal, Hasan Khodakarami
2023-02-03T10:13:21Z
http://arxiv.org/abs/2302.01637v1
# Khayyam-Pascal determinantal arrays, star of David rule and log-concavity

###### Abstract.

In this paper we develop a new geometric method to answer the log-concavity questions related to a nice class of combinatorial sequences arising from the Pascal triangle.

## 1. Introduction

One of the important tasks in _enumerative combinatorics_ is to determine the _log-concavity_ of a combinatorial sequence.

**Definition 1.1**.: A sequence \(a_{0},a_{1},\ldots,a_{n}\) of real numbers is said to be _concave_ if \(\frac{a_{i-1}+a_{i+1}}{2}\leq a_{i}\) for all \(1\leq i\leq n-1\), and _logarithmically concave_ (or log-concave for short) if \(a_{i-1}a_{i+1}\leq a_{i}^{2}\) for all \(1\leq i\leq n-1\).

**Definition 1.2**.: The sequence \(a_{0},a_{1},\ldots,a_{n}\) is called _symmetric_ if \(a_{i}=a_{n-i}\) for \(0\leq i\leq n\).

**Definition 1.3**.: We say that a polynomial \(a_{0}+a_{1}q+\cdots+a_{n}q^{n}\) has a certain property (such as log-concave or symmetric) if its sequence \(a_{0},a_{1},\ldots,a_{n}\) of coefficients has the property.

There are many ways to prove the log-concavity of a combinatorial sequence. One of the classic methods of proof is a direct combinatorial approach, which is of significant interest to combinatorialists.

**Example 1.1**.: _The best-known log-concave sequence is the \(n\)-th row of Khayyam-Pascal's triangle:_

\[\binom{n}{0},\binom{n}{1},\binom{n}{2},\ldots,\binom{n}{n}.\]

_Here, the log-concavity is easy to show directly because of the explicit formula \(\binom{n}{k}=\frac{n!}{k!(n-k)!}\). Indeed,_

\[\frac{\binom{n}{k}^{2}}{\binom{n}{k-1}\binom{n}{k+1}}=\frac{(k+1)(n-k+1)}{k(n-k)}>1,\]

_which is equivalent to \(n>-1\) (or \(n\geq 0\)), as required._

**Example 1.2**.: _For the sequence of the \(n\)-th diagonal of the Khayyam-Pascal triangle:_

\[\binom{n}{0},\binom{n+1}{1},\binom{n+2}{2},\ldots,\binom{n+k}{k},\ldots,\]

_a similar direct computation gives \(\frac{\binom{n+k}{k}^{2}}{\binom{n+k-1}{k-1}\binom{n+k+1}{k+1}}=\frac{(k+1)(n+k)}{k(n+k+1)}>1\), so this sequence is log-concave as well._

Despite the geometric idea behind the definition of the log-concavity of a sequence, to the best of our knowledge, there is no geometric approach to tackle this issue. In this paper, we develop a new geometric method to answer the log-concavity questions related to a nice class of combinatorial sequences arising from the Khayyam-Pascal triangle.

## 2. Khayyam-Pascal Array and Parallelepiped Determinantal Identities

Consider a \(45^{\circ}\) rotation of the Khayyam-Pascal triangle, which we call the _Khayyam-Pascal squared array_ [1]. Now, we construct a parallelepiped with two triangles as its bases, which is shown with six entries of this array and the corresponding edges in Figure 1. Then, we have the following determinantal identities, which are a direct consequence of the recurrence relation for the Khayyam-Pascal array.

Figure 1: A Determinantal Parallelepiped.
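As a quick numerical illustration (a sketch only), one can generate this squared array as \(P_{i,j}=\binom{i+j}{i}\) and verify the determinant equalities stated in Proposition 2.1 below over many parallelepipeds; the correspondence between the entries \(u,v,w\) (and their primes) and positions in the array is our reading of Figure 1's labeling.

```python
# Sketch: checking the parallelepiped determinantal identities of
# Proposition 2.1 on the Khayyam-Pascal squared array P(i, j) = C(i+j, i).
from math import comb

def P(i, j):
    return comb(i + j, i)

def det2(a, b, c, d):         # | a b |
    return a * d - b * c      # | c d |

for i in range(1, 8):
    for j in range(1, 8):
        # u = v + w and u' = v' + w' hold by the Pascal recurrence
        u, v, w = P(i, j), P(i - 1, j), P(i, j - 1)
        up, vp, wp = P(i + 1, j), P(i, j), P(i + 1, j - 1)
        assert det2(u, v, up, vp) == det2(w, v, wp, vp)  # identity (i)
        assert det2(w, v, wp, vp) == det2(w, u, wp, up)  # identity (ii)
```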
**Proposition 2.1**.: _(Parallelepiped Determinantal Identities)_

\[i)\quad\left|\begin{array}{cc}u&v\\ u^{\prime}&v^{\prime}\end{array}\right|=\left|\begin{array}{cc}w&v\\ w^{\prime}&v^{\prime}\end{array}\right|,\qquad ii)\quad\left|\begin{array}{cc}w&v\\ w^{\prime}&v^{\prime}\end{array}\right|=\left|\begin{array}{cc}w&u\\ w^{\prime}&u^{\prime}\end{array}\right|.\]

_In other words, the determinants formed by three faces of the parallelepiped \(uvwu^{\prime}v^{\prime}w^{\prime}\) in Figure 1 are equal._

Proof.: By the rule of the Khayyam-Pascal array, we have

\[u=v+w,\qquad u^{\prime}=v^{\prime}+w^{\prime}.\]

Now, multiplying the above equalities by \(v^{\prime}\) and \(v\), respectively, we get

\[uv^{\prime}=vv^{\prime}+wv^{\prime},\qquad u^{\prime}v=vv^{\prime}+w^{\prime}v.\]

Subtracting the above equalities, we obtain

\[uv^{\prime}-u^{\prime}v=wv^{\prime}-w^{\prime}v,\]

or equivalently

\[\left|\begin{array}{cc}u&v\\ u^{\prime}&v^{\prime}\end{array}\right|=\left|\begin{array}{cc}w&v\\ w^{\prime}&v^{\prime}\end{array}\right|,\]

which is the first determinantal identity. The second one can be proved in a similar way and is left to the reader as a simple exercise.

**Proposition 2.2**.: _Every diagonal of the Khayyam-Pascal triangle is log-concave._

Proof.: First of all, note that the diagonals of the Khayyam-Pascal triangle correspond to the columns (rows) of the Khayyam-Pascal squared array. Now, we use the previous determinantal identities in their special cases to give a new geometric proof of the log-concavity of the diagonals of the Khayyam-Pascal triangle. To this end, consider three consecutive terms \(a_{k-1},a_{k},a_{k+1}\) in any arbitrary column of the Khayyam-Pascal squared array, as shown in Figure 2. We consider a parallelepiped in its special case where two antipodal vertices (\(u\) and \(w^{\prime}\) in Figure 1) coincide. Here, those vertices correspond to two equal entries \(a_{k}\). By Proposition 2.1, we have

\[\left|\begin{array}{cc}a_{k}&a_{k+1}\\ a_{k-1}&a_{k}\end{array}\right|=\left|\begin{array}{cc}b_{k}&a_{k}\\ b_{k+1}&a_{k+1}\end{array}\right|.\]

But, we already know that the 2-by-2 determinant on the right-hand side of the above identity is a Narayana number [4]. Therefore, we obtain

\[\left|\begin{array}{cc}a_{k}&a_{k+1}\\ a_{k-1}&a_{k}\end{array}\right|\geq 0,\]

and this completes the proof.

Figure 2: Log-Concavity of Diagonals of the Khayyam-Pascal Triangle Array.

Next, we prove the log-concavity of the rows of the Khayyam-Pascal triangle, using the same technique.

**Proposition 2.3**.: _Every row of the Khayyam-Pascal triangle is log-concave._

Proof.: We note that the rows of the Khayyam-Pascal triangle correspond to the diagonals of the Khayyam-Pascal squared array. Consider a special parallelepiped \(vuwv^{\prime}u^{\prime}v\), as shown in Figure 3. Then, we have

\[\left|\begin{array}{cc}v&w\\ v^{\prime}&v\end{array}\right|=\left|\begin{array}{cc}w^{\prime}&w\\ u^{\prime}&u\end{array}\right|.\]

On the other hand, from the parallelepiped \(w^{\prime}uwu^{\prime}u^{\prime\prime}u\) we get

\[\left|\begin{array}{cc}w^{\prime}&w\\ u^{\prime}&u\end{array}\right|=\left|\begin{array}{cc}w^{\prime}&u\\ u^{\prime}&u^{\prime\prime}\end{array}\right|.\]

Figure 3: Log-Concavity of Rows of the Khayyam-Pascal Triangle Array.
Therefore, we conclude that

\[v^{2}-wv^{\prime}=\left|\begin{array}{cc}w^{\prime}&u\\ u^{\prime}&u^{\prime\prime}\end{array}\right|.\]

But, again, the last determinant in the above equality is a Narayana number and hence a non-negative integer. This completes the proof.

**Definition 2.1**.: We call an array a _row log-concave_ (_diagonal log-concave_) array if every row (diagonal) of this array is log-concave.

As in the paper of McNamara and Sagan [2], for every array \(A=(a_{ij})_{i,j\geq 0}\), we will call the determinants \(\left|\begin{array}{cc}a_{i,j}&a_{i,j+1}\\ a_{i+1,j}&a_{i+1,j+1}\end{array}\right|\) its _adjacent minors_. From the proofs of the two previous propositions, we get the following interesting result.

**Corollary 2.4**.: _Every diagonal log-concave array with non-negative adjacent minors is also a row log-concave array._

## 3. Khayyam-Pascal Determinantal Arrays

In this section, we introduce an infinite class of arrays of numbers as a generalization of the standard Khayyam-Pascal squared array. We will denote the entries of the Khayyam-Pascal squared array by \(P=\big{(}P_{i,j}=\binom{i+j}{i}\big{)}_{i,j\geq 0}\). Our main goal here is to prove that the members of this new class of arrays are diagonal and row log-concave, again using geometric ideas.

**Definition 3.1**.: A Khayyam-Pascal determinantal array of order \(k\), \(k\geq 1\), is an infinite array \(PD_{k}=\big{(}P_{i,j}^{(k)}\big{)}_{i,j\geq 0}\) in which \(P_{i,j}^{(k)}\) is the determinant of the \(k\)-by-\(k\) subarray of the Khayyam-Pascal squared array starting from the \((i,j)\)-entry. Namely,

\[P_{i,j}^{(k)}:=\left|\begin{array}{ccc}P_{i,j}&\ldots&P_{i,j+k-1}\\ \vdots&\ddots&\vdots\\ P_{i+k-1,j}&\ldots&P_{i+k-1,j+k-1}\end{array}\right|.\]

**Example 3.1**.: _The Khayyam-Pascal determinantal array of order 2 is shown in Figure 4. This is a well-known array, which is the squared form of the so-called Narayana triangular array (see A001263 in [3])._

Figure 4: Khayyam-Pascal Determinantal array of order 2.

In [4], the authors have shown that if we define the weight of any arbitrary rectangle whose vertices are the entries of the Khayyam-Pascal determinantal array of order \(k\), as shown in Figure 5, by

\[W:=\frac{P_{i+m,j+l}^{(k)}\cdot P_{i,j}^{(k)}}{P_{i+m,j}^{(k)}\cdot P_{i,j+l}^{(k)}},\]

then when we move the anchor, the circled vertex, along the diagonal of the Khayyam-Pascal determinantal array (indicated by the arrow \(d\) in Figure 5), the weights remain unchanged. They called this property the _weighted version of the Star of David Rule_.

Figure 5: Weighted Version of Star of David.

As they have shown in another paper [5], the weighted version of the Star of David Rule can also be used to prove the following interesting property of this new class of arrays.

**Proposition 3.1**.: _In any Khayyam-Pascal determinantal array, the ratio of any pair of \(r\)-by-\(r\) minors along any arbitrary diagonal \(x+y=d\) of the array is the same as the ratio of the products of the entries appearing in their back diagonals parallel to \(d\) (see Figure 6)._
_In other words, we have_

\[\frac{\left|\begin{array}{ccc}P_{i,j}^{(k)}&\ldots&P_{i,j+r-1}^{(k)}\\ \vdots&\ddots&\vdots\\ P_{i+r-1,j}^{(k)}&\ldots&P_{i+r-1,j+r-1}^{(k)}\end{array}\right|}{\left|\begin{array}{ccc}P_{i^{\prime},j^{\prime}}^{(k)}&\ldots&P_{i^{\prime},j^{\prime}+r-1}^{(k)}\\ \vdots&\ddots&\vdots\\ P_{i^{\prime}+r-1,j^{\prime}}^{(k)}&\ldots&P_{i^{\prime}+r-1,j^{\prime}+r-1}^{(k)}\end{array}\right|}=\frac{P_{i,j+r-1}^{(k)}\cdots P_{i+r-1,j}^{(k)}}{P_{i^{\prime},j^{\prime}+r-1}^{(k)}\cdots P_{i^{\prime}+r-1,j^{\prime}}^{(k)}}.\]

Figure 6: Ratio of Determinants in Khayyam-Pascal Determinantal array.

The following lemma is the key in the proof of diagonal log-concavity of the Khayyam-Pascal determinantal arrays.

**Lemma 3.2**.: _For every integer \(n\geq 1\), every positive log-concave sequence \(\{a_{i}\}_{i\geq 1}\) satisfies the following inequality:_

\[\frac{a_{2}a_{n+1}}{a_{1}a_{n+2}}\geq 1.\]

Proof.: We use induction on \(n\). The base case, \(n=1\), is just the definition of the log-concavity of the sequence \(\{a_{i}\}_{i\geq 1}\). Now, let us assume by the induction hypothesis that the assertion is true for \(n-1\). Hence, we have

\[1\leq\frac{a_{2}a_{n}}{a_{1}a_{n+1}}=\left(\frac{a_{2}a_{n}}{a_{1}a_{n+1}}\right)\left(\frac{a_{n+1}a_{n+2}}{a_{n+1}a_{n+2}}\right)=\left(\frac{a_{2}a_{n+1}}{a_{1}a_{n+2}}\right)\left(\frac{a_{n}a_{n+2}}{a_{n+1}^{2}}\right).\]

Thus, we get

\[\frac{a_{2}a_{n+1}}{a_{1}a_{n+2}}\geq\frac{a_{n+1}^{2}}{a_{n}a_{n+2}}\geq 1.\]

The latter inequality holds because of the definition of the log-concavity of the sequence \(\{a_{i}\}_{i\geq 1}\). This completes the proof by induction.

Now, we are in a position to state the main result of this section.

**Theorem 3.3**.: _For every integer \(k\geq 1\), the Khayyam-Pascal determinantal array of order \(k\) is diagonal log-concave._

Proof.: Assume that \(\alpha\), \(\beta\), \(\theta\), \(\gamma\) are four entries of the Khayyam-Pascal determinantal array of order \(k\) such that \(\beta\), \(\theta\), \(\gamma\) are three consecutive diagonal entries, as shown in Figure 7.

Figure 7: Four Entries of a Diagonal of the Khayyam-Pascal Determinantal Array.

Clearly, the back diagonal entries of these four entries of the Khayyam-Pascal determinantal array of order \(k\), as the four \(k\)-by-\(k\) minors of the Khayyam-Pascal squared array, lie in some diagonal of the Khayyam-Pascal squared array. For simplicity of arguments, we will denote their entries from south-west to north-east by \(\beta_{1},\beta_{2},\ldots,\beta_{k}\), \(\theta_{1},\theta_{2},\ldots,\theta_{k}\), \(\gamma_{1},\gamma_{2},\ldots,\gamma_{k}\), and \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\), respectively. It is not hard to see that we have the following relations among their entries:

\[\beta_{2}=\theta_{1},\ \beta_{3}=\theta_{2},\ \ldots,\ \beta_{k}=\theta_{k-1},\]
\[\theta_{2}=\gamma_{1},\ \theta_{3}=\gamma_{2},\ \ldots,\ \theta_{k}=\gamma_{k-1}.\]

To prove the log-concavity, it suffices to show that \(\theta^{2}-\beta\gamma\geq 0\).
But, using the determinant-ratio Proposition 3.1 and the above relations, we have

\[\theta^{2}-\beta\gamma=\left(\frac{\alpha}{\alpha_{1}\cdots\alpha_{k-1}\alpha_{k}}\right)^{2}\left[(\theta_{1}\cdots\theta_{k-1}\theta_{k})^{2}-(\beta_{1}\cdots\beta_{k-1}\beta_{k})(\gamma_{1}\cdots\gamma_{k-1}\gamma_{k})\right]\]
\[=\left(\frac{\alpha}{\alpha_{1}\cdots\alpha_{k-1}\alpha_{k}}\right)^{2}\left[(\beta_{2}\beta_{3}^{2}\cdots\beta_{k}^{2}\gamma_{k-1})(\beta_{2}\gamma_{k-1}-\beta_{1}\gamma_{k})\right].\]

Therefore, we need to prove that \(\frac{\beta_{2}\gamma_{k-1}}{\beta_{1}\gamma_{k}}\geq 1\), which is nothing more than the inequality of the key lemma, Lemma 3.2, by the _renaming technique_.

Next, we prove the row log-concavity of the Khayyam-Pascal determinantal array.

**Theorem 3.4**.: _For every integer \(k\geq 1\), the Khayyam-Pascal determinantal array of order \(k\) is a row log-concave array._

Proof.: Using Corollary 2.4, it suffices to prove that every adjacent minor of the Khayyam-Pascal determinantal array of order \(k\) is non-negative. Now, by Proposition 3.1 about the ratio of determinants along the diagonal \(x+y=d\), we get

\[\frac{\left|\begin{array}{cc}P_{i,j}^{(k)}&P_{i,j+1}^{(k)}\\ P_{i+1,j}^{(k)}&P_{i+1,j+1}^{(k)}\end{array}\right|}{\left|\begin{array}{cc}1&P_{i+j,1}^{(k)}\\ 1&P_{i+j+1,1}^{(k)}\end{array}\right|}=\frac{P_{i+1,j}^{(k)}P_{i,j+1}^{(k)}}{P_{i+j,1}^{(k)}},\]

which is clearly a positive integer. Thus, to prove that the adjacent minor \(\left|\begin{array}{cc}P_{i,j}^{(k)}&P_{i,j+1}^{(k)}\\ P_{i+1,j}^{(k)}&P_{i+1,j+1}^{(k)}\end{array}\right|\) is a non-negative integer, we only need to show that \(\left|\begin{array}{cc}1&P_{i+j,1}^{(k)}\\ 1&P_{i+j+1,1}^{(k)}\end{array}\right|\) is positive for every \(i,j\geq 0\), which is equivalent to showing that the first column, starting from \(0\), of the Khayyam-Pascal determinantal array of order \(k\) is an increasing sequence. It is not hard to see that this first column is indeed the \(k\)th column of the Khayyam-Pascal squared array [1]. Finally, we need to show that for every \(l\geq 0\), we have

\[\frac{\binom{l+k}{k}}{\binom{(l-1)+k}{k}}>1,\]

which is equivalent to the inequality \(k>0\) (or \(k\geq 1\)), as required.
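As a small sanity check of Theorems 3.3 and 3.4, the following sketch builds \(P^{(k)}_{i,j}\) directly from Definition 3.1 and numerically tests log-concavity along rows and columns (the triangle's diagonals correspond to columns of the squared array, as in Proposition 2.2) for small orders \(k\):

```python
# Sketch: numerically testing row/column log-concavity of the
# Khayyam-Pascal determinantal arrays of small order k (Definition 3.1).
from math import comb

def det(m):
    """Laplace-expansion determinant (exact for integer matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] *
               det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def PD(i, j, k):
    """Entry P^(k)_{i,j}: determinant of the k-by-k subarray at (i, j)."""
    return det([[comb(a + b, a) for b in range(j, j + k)]
                for a in range(i, i + k)])

for k in (1, 2, 3):
    for i in range(1, 6):
        for j in range(1, 6):
            # row log-concavity (Theorem 3.4)
            assert PD(i, j, k) ** 2 >= PD(i, j - 1, k) * PD(i, j + 1, k)
            # column log-concavity (diagonals of the triangle, cf. Prop. 2.2)
            assert PD(i, j, k) ** 2 >= PD(i - 1, j, k) * PD(i + 1, j, k)
```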
2310.11635
Break-up and Recovery of Harmony between Direct and Indirect Pathways in The Basal Ganglia; Huntington's Disease and Treatment
The basal ganglia (BG) in the brain exhibit diverse functions for motor, cognition, and emotion. Such BG functions could be made via competitive harmony between the two competing pathways, direct pathway (DP) (facilitating movement) and indirect pathway (IP) (suppressing movement). As a result of break-up of harmony between DP and IP, there appear pathological states with disorder for movement, cognition, and psychiatry. In this paper, we are concerned about the Huntington's disease (HD), which is a genetic neurodegenerative disorder causing involuntary movement and severe cognitive and psychiatric symptoms. For the HD, the number of D2 SPNs ($N_{\rm D2}$) is decreased due to degenerative loss, and hence, by decreasing $x_{\rm D2}$ (fraction of $N_{\rm D2}$), we investigate break-up of harmony between DP and IP in terms of their competition degree ${\cal C}_d$, given by the ratio of strength of DP (${\cal S}_{DP}$) to strength of IP (${\cal S}_{IP}$) (i.e., ${\cal C}_d = {\cal S}_{DP} / {\cal S}_{IP}$). In the case of HD, the IP is under-active, in contrast to the case of Parkinson's disease with over-active IP, which results in increase in ${\cal C}_d$ (from the normal value). Thus, hyperkinetic dyskinesia such as chorea (involuntary jerky movement) occurs. We also investigate treatment of HD, based on optogenetics and GP ablation, by increasing strength of IP, resulting in recovery of harmony between DP and IP. Finally, we study effect of loss of healthy synapses of all the BG cells on HD. Due to loss of healthy synapses, disharmony between DP and IP increases, leading to worsen symptoms of the HD.
Sang-Yoon Kim, Woochang Lim
2023-10-18T00:03:30Z
http://arxiv.org/abs/2310.11635v2
Break-up and Recovery of Harmony between Direct and Indirect Pathways in The Basal Ganglia; Huntington's Disease and Treatment

###### Abstract

The basal ganglia (BG) in the brain exhibit diverse functions for motor, cognition, and emotion. Such BG functions could be made via competitive harmony between the two competing pathways, direct pathway (DP) (facilitating movement) and indirect pathway (IP) (suppressing movement). As a result of break-up of harmony between DP and IP, there appear pathological states with disorder for movement, cognition, and psychiatry. In this paper, we are concerned about the Huntington's disease (HD), which is a genetic neurodegenerative disorder causing involuntary movement and severe cognitive and psychiatric symptoms. For the HD, the number of D2 SPNs (\(N_{\rm D2}\)) is decreased due to degenerative loss, and hence, by decreasing \(x_{\rm D2}\) (fraction of \(N_{\rm D2}\)), we investigate break-up of harmony between DP and IP in terms of their competition degree \(\mathcal{C}_{d}\), given by the ratio of strength of DP (\(\mathcal{S}_{DP}\)) to strength of IP (\(\mathcal{S}_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)). In the case of HD, the IP is under-active, in contrast to the case of Parkinson's disease with over-active IP, which results in increase in \(\mathcal{C}_{d}\) (from the normal value). Thus, hyperkinetic dyskinesia such as chorea (involuntary jerky movement) occurs. Treatment of HD, based on optogenetics, is also investigated through recovery of harmony between DP and IP. Finally, we study the effect of loss of healthy synapses of all the BG cells on HD.

Basal ganglia, Huntington's disease, Direct pathway (DP), Indirect pathway (IP), Harmony between DP and IP, Competition degree

pacs: 87.19.lj, 87.19.lu, 87.19.rs

## I Introduction

The basal ganglia (BG) (called the dark basement of the brain) are a group of subcortical deep-lying nuclei, receiving excitatory cortical input from most areas of cortex, and they provide inhibitory output to the thalamus and brainstem [1; 2; 3; 4]. The BG exhibit a variety of functions for motor control and regulation of cognitive and emotional processes [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Dysfunction in the BG is related to movement disorders [e.g., Parkinson's disease (PD) and Huntington's disease (HD)] and cognitive and psychiatric disorders [1; 2; 3; 4].

In this paper, we are concerned about the HD. It is a rare hereditary neurodegenerative disease with severe motor, cognitive, and emotional symptoms [11; 12; 13; 14; 15; 16; 17]. As is well known, patients with HD show hyperkinetic dyskinesia such as chorea (involuntary jerky dance-like movement) as well as cognitive (e.g., dementia) and psychiatric (e.g., depression and anxiety) disorders. In contrast, patients with PD show hypokinetic disorders such as slowed movement (bradykinesia) [18; 19; 20; 21; 22; 23]. Thus, if PD lies at one end of the spectrum of movement disorders in the BG, HD lies at the other end.

We note that HD is caused by a mutated huntingtin (HTT) gene on chromosome 4 [24; 25]. As a result of mutation in the HTT gene, the defective HTT gene has abnormal excessive repeats of a three-base (CAG) DNA sequence; in the mutant gene, the repeat occurs over and over again, from 40 times to more than 80. The greater the number of CAG repeats, the earlier the onset and the greater the severity of HD.
This kind of trinucleotide repeat expansion results in production of abnormal HTT protein that accumulates, creating toxic HTT protein aggregates that damage neurons (e.g., death of striatal cells in the BG). Thus, the primary pathological feature of HD is the appearance of toxic HTT protein aggregates, causing the characteristic neurodegeneration seen in HD, in contrast to the case of PD, where DA deficiency is a major cause.

In our previous work for the PD in the BG [26], we developed a spiking neural network (SNN) for the BG, based on anatomical and physiological data derived from rat-based studies [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. Here, we use rat-brain terminology throughout. The BG receive excitatory cortical input from most regions of cortex via the input nuclei [striatum and subthalamic nucleus (STN)] and project inhibitory output via the output nucleus [substantia nigra pars reticulata (SNr)], through the thalamus to the motor area of the cortex [7; 10]. We also note that the principal input nucleus, striatum, is the primary recipient of dopamine (DA), arising from the substantia nigra pars compacta (SNc). Within the striatum, spine projection neurons (SPNs), comprising up to 95 % of the whole striatal population, are the only primary output neurons [51; 52]. There are two types of SPNs with D1 and D2 receptors for the DA. The DA modulates firing activity of the D1 and D2 SPNs in different ways [53; 54; 55]. In the early stage of HD, degenerative loss of D2 SPNs occurs due to mutation in the HTT gene, while the DA level in the striatum is nearly normal [56; 57; 58; 59].

There are two competing pathways, direct pathway (DP) and indirect pathway (IP), in the BG [60; 61; 62; 63]. D1 SPNs in the striatum make direct inhibitory projection to the output nucleus, SNr, through the DP, and then the thalamus becomes disinhibited. Consequently, movement facilitation occurs. In contrast, D2 SPNs are connected to the SNr through the IP, crossing the intermediate control nucleus, GP (globus pallidus), and the STN. In the case of IP, the firing activity of the SNr becomes enhanced mainly because of excitatory input from the STN. As a result, firing activity of the thalamus becomes decreased, resulting in movement suppression.

Diverse functions of the BG could be made via "balance" of DP and IP. So far, a variety of subjects for the BG have been investigated in many computational works [52; 53; 54; 55; 56; 57; 58; 59; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79]. However, no quantitative analysis of the balance between DP and IP was made. For the first time, in our recent work [26], we made a quantitative analysis of competitive harmony (i.e., competition and cooperative interplay) between DP and IP by introducing their competition degree \(\mathcal{C}_{d}\), given by the ratio of strength of DP (\(\mathcal{S}_{DP}\)) to strength of IP (\(\mathcal{S}_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)); \(\mathcal{S}_{DP}\) (\(\mathcal{S}_{IP}\)) is given by the magnitude of the total time-averaged synaptic current into the output nucleus, SNr, through DP (IP).

In this paper, we take into consideration degenerative loss of D2 SPNs for the HD; \(N_{\rm D2}\) (number of D2 SPNs) \(=N_{\rm D2}^{*}\) (normal value) \(\cdot\,x_{\rm D2}\) [\(1>x_{\rm D2}\) (fraction of number of D2 SPNs) \(\geq 0\)] [56; 57; 58; 59; 90; 91].
By decreasing \(x_{\rm D2}\) from 1, we investigate break-up of harmony between DP and IP for the HD by employing the competition degree \(\mathcal{C}_{d}\) in the case of normal DA level (\(\phi=0.3\)). Due to degenerative loss of D2 SPNs, the IP becomes under-active (i.e., weakened), leading to an increase in \(\mathcal{C}_{d}\) from its normal value. Thus, hyperkinetic dyskinesia such as chorea occurs, in contrast to the case of PD with reduced \(\mathcal{C}_{d}\), causing hypokinetic disorder. Next, based on optogenetics [92; 93], treatment of HD is also studied via recovery of harmony between DP and IP. Through activation of D2 SPNs (or STN neurons) and deactivation of GP neurons, the IP becomes strengthened, and thus harmony between DP and IP may be recovered. Finally, we investigate the effect of loss of healthy synapses of all the BG cells on HD [94; 95; 96; 97; 98; 99].

This paper is organized as follows. In Sec. II, we describe our SNN for the BG. Then, in the main Sec. III, we make quantitative analysis of break-up and recovery of harmony between DP and IP for the HD. Finally, we give summary and discussion in Sec. IV.

## II BG Spiking Neural Network

In this section, we describe our SNN for the BG. In our recent work [26], we developed our BG SNN, composed of D1/D2 SPNs, STN neurons, GP neurons, and SNr neurons, based on the anatomical and the physiological properties of the BG [27; 28; 29; 30]. In our BG SNN, we also consider the modulation effect of DA on D1/D2 SPNs and afferent synapses into the D1/D2 SPNs, the STN, and the GP [53; 54; 55]. Here, for the sake of completeness and the understanding of readers, we provide a description of our BG SNN; for more details, refer to Sec. II in Ref. [26].

### Framework of Our SNN of The BG

Figure 1 shows a box diagram of major neurons and synaptic connections in our BG SNN. Our BG SNN consists of striatum and STN (input nuclei), SNr (output nucleus), and GP (intermediate controller). Here, all the BG neurons except the excitatory STN neuron are inhibitory ones.

Both striatum and STN receive cortical inputs from most regions of the cortex. Here, we model cortical inputs in terms of 1,000 independent Poisson spike trains with firing rate \(f_{i}\) (\(i=1,\cdots,1000\)). Tonic cortical input in the resting state has \(f=3\) Hz. On the other hand, the phasic cortical input in the phasically-active state has \(f=10\) Hz, independently of \(i\) [7; 52; 55; 78; 100; 101; 102; 103; 104].

The striatum is also the primary recipient of the DA (arising from the SNc). Two kinds of SPNs with D1 and D2 receptors for the DA exist within the striatum. They comprise up to 95 % of the whole striatal population [51; 52]. We note that these D1 and D2 SPNs show different firing behaviors because of DA modulation [53; 54; 55].

Figure 1: Box diagram of our spiking neural network for the basal ganglia (BG). Excitatory and inhibitory connections are denoted by lines with triangles and circles, respectively, and dopamine-modulated cells and connections are represented in blue color. Striatum and STN (subthalamic nucleus), receiving the excitatory cortical input, are two input nuclei to the BG. In the striatum, there are two kinds of inhibitory spine projection neurons (SPNs): SPNs with the D1 receptors (D1 SPNs) and SPNs with D2 receptors (D2 SPNs). The D1 SPNs make direct inhibitory projection to the output nucleus SNr (substantia nigra pars reticulata) through the direct pathway (DP; green color).
In contrast, the D2 SPNs are connected to the SNr through the indirect pathway (IP; red color), crossing the GP (globus pallidus) and the STN. The inhibitory output from the SNr to the thalamus/brain stem is controlled through competition between the DP and IP.

There are two competing pathways, DP and IP, in the BG [60; 61; 62; 63]. The D1 SPNs make direct inhibitory projection to the output nucleus, SNr, via the DP (green color in Fig. 1). Then, the thalamus is disinhibited, resulting in movement facilitation. On the other hand, the D2 SPNs are connected to the SNr via the IP (red color in Fig. 1), crossing the GP and the STN. Here, the GP plays the role of an intermediate controller to control the firing behavior of the STN. In the case of IP, the firing activity of the SNr becomes increased mainly due to excitatory input from the STN. Consequently, firing activity of the thalamus becomes decreased, leading to movement suppression. Thus, the firing activity of the output nucleus, SNr, is controlled through competition between DP (green) and IP (red).

We choose the numbers of the striatal neurons, the STN neurons, the GP neurons, and the SNr neurons in the BG, based on the anatomical information [29]. We develop a scaled-down SNN in which the total number of striatal neurons is \(2,791\), corresponding to \(\frac{1}{1000}\) of the \(2,791\cdot 10^{3}\) striatal cells found in the rat BG. In this way, we scale down with ratio \(10^{-3}\) for all the BG neurons [75; 81]: \(N_{\rm D1}=N_{\rm D2}=1325\), \(N_{\rm STN}=14\), \(N_{\rm GP}=46\), and \(N_{\rm SNr}=26\) (\(N_{X}\) is the number of neurons in the \(X\) population). The major subpopulation of D1/D2 SPNs corresponds to 90-97 % of the whole striatal population [75]; in our BG SNN, we choose 95 %. The remaining 5 % subpopulation of fast-spiking interneurons is not considered in our SNN.

The cortex (Ctx) provides the external excitatory inputs randomly to the D1/D2 SPNs and the STN neurons with the connection probabilities \(p_{c}^{\rm(SPN,Ctx)}=0.084\) (8.4 %) and \(p_{c}^{\rm(STN,Ctx)}=0.03\) (3 %), respectively [55]. Here, we consider random synaptic connections between BG cells in Fig. 1. The synaptic connection probabilities \(p_{c}^{(T,S)}\) from a presynaptic neuron in the source population (\(S\)) to a postsynaptic neuron in the target population (\(T\)) in the BG are as follows [78]: \(p_{c}^{\rm(SNr,D1)}=0.033\) (3.3 %), \(p_{c}^{\rm(GP,D2)}=0.033\) (3.3 %), \(p_{c}^{\rm(GP,STN)}=0.3\) (30 %), \(p_{c}^{\rm(GP,GP)}=0.1\) (10 %), \(p_{c}^{\rm(STN,GP)}=0.1\) (10 %), \(p_{c}^{\rm(SNr,STN)}=0.3\) (30 %), and \(p_{c}^{\rm(SNr,GP)}=0.1066\) (10.66 %).

### Izhikevich Spiking Neuron Models and DA Effects in Our BG SNN

The Izhikevich spiking neuron model (which is not only biologically plausible but also computationally efficient) is used as the element of our BG SNN [105; 106; 107; 108]. Unlike the Hodgkin-Huxley-type conductance-based models, the Izhikevich model matches neuronal dynamics by tuning its parameters. Our BG SNN consists of 5 populations of D1 SPNs, D2 SPNs, STN neurons, GP neurons, and SNr neurons.
The following equations govern the evolution of dynamical states of individual neurons in the \(X\) population [\(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr]:

\[C_{X}\frac{dv_{i}^{(X)}}{dt}=k_{X}(v_{i}^{(X)}-v_{r}^{(X)})(v_{i}^{(X)}-v_{t}^{(X)})-u_{i}^{(X)}+I_{i}^{(X)}, \tag{1}\]

\[\frac{du_{i}^{(X)}}{dt}=a_{X}\left\{b_{X}(v_{i}^{(X)}-v_{r}^{(X)})-u_{i}^{(X)}\right\};\qquad i=1,...,N_{X}, \tag{2}\]

with the auxiliary after-spike resetting:

\[\text{if }v_{i}^{(X)}\geq v_{peak}^{(X)},\text{ then }v_{i}^{(X)}\gets c_{X}\text{ and }u_{i}^{(X)}\gets u_{i}^{(X)}+d_{X}. \tag{3}\]

Here, \(N_{X}\) and \(I_{i}^{(X)}(t)\) are the total number of neurons and the current into the \(i\)th neuron in the \(X\) population, respectively. The dynamical state of the \(i\)th neuron in the \(X\) population at a time \(t\) (msec) is given by its membrane potential \(v_{i}^{(X)}(t)\) (mV) and the slow recovery variable \(u_{i}^{(X)}(t)\) (pA). As the membrane potential \(v_{i}^{(X)}(t)\) arrives at its apex \(v_{peak}^{(X)}\), the neuron fires, and then both the membrane potential \(v_{i}^{(X)}\) and the recovery variable \(u_{i}^{(X)}\) are reset according to the rules of Eq. (3).

The Izhikevich neuron model has 9 intrinsic parameters in each \(X\) population: \(C_{X}\) (membrane capacitance, pF), \(v_{r}^{(X)}\) (resting membrane potential, mV), \(v_{t}^{(X)}\) (instantaneous threshold potential, mV), \(k_{X}\) (parameter associated with the neuron's rheobase, nS/mV), \(a_{X}\) (recovery time constant, msec\({}^{-1}\)), \(b_{X}\) (parameter associated with the input resistance, nS), \(c_{X}\) (after-spike reset value of \(v_{i}^{(X)}\), mV), \(d_{X}\) (after-spike jump value of \(u_{i}^{(X)}\), pA), and \(v_{peak}^{(X)}\) (spike cutoff value, mV). The 9 intrinsic parameter values of D1 SPN, D2 SPN, STN, GP, and SNr are shown in Table 1. We obtain the parameter values of the STN neuron, GP neuron, and SNr neuron, based on the work in [55], in addition to the parameter values of the D1/D2 SPNs given in [53; 54]. These parameter values are based on physiological properties of the D1/D2 SPNs, the STN neurons, the GP neurons, and the SNr neurons [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Parameters & D1/D2 SPN & STN & GP & SNr \\ \hline \(C_{X}\) & 16.1 & 23.0 & 68.0 & 172.1 \\ \hline \(v_{r}^{(X)}\) & -80.0 & -56.2 & -53.0 & -64.58 \\ \hline \(v_{t}^{(X)}\) & -29.3 & -41.4 & -44.0 & -51.8 \\ \hline \(k_{X}\) & 1 & 0.439 & 0.943 & 0.7836 \\ \hline \(a_{X}\) & 0.01 & 0.021 & 0.0045 & 0.113 \\ \hline \(b_{X}\) & -20 & 4 & 3.895 & 11.057 \\ \hline \(c_{X}\) & -55 & -47.7 & -58.36 & -62.7 \\ \hline \(d_{X}\) & 84.2 & 17.1 & 0.353 & 138.4 \\ \hline \(v_{peak}^{(X)}\) & 40 & 15.4 & 25 & 9.8 \\ \hline \end{tabular} \end{table} Table 1: Intrinsic parameter values for each BG cell in the \(X\) [= D1 (SPN), D2 (SPN), STN, GP, SNr] population.

Time evolution of the dynamical state [\(v_{i}^{(X)}(t)\), \(u_{i}^{(X)}(t)\)] is governed by the current \(I_{i}^{(X)}(t)\) into the \(i\)th neuron in the \(X\) population in Eq. (1), given by:

\[I_{i}^{(X)}(t)=I_{ext,i}^{(X)}(t)-I_{syn,i}^{(X)}(t)+I_{stim}^{(X)}(t), \tag{4}\]

where \(I_{ext,i}^{(X)}\), \(I_{syn,i}^{(X)}(t)\), and \(I_{stim}^{(X)}(t)\) represent the external current from the external background region (not considered in the modeling), the synaptic current, and the injected stimulation current, respectively. Here, we consider the case of no injected stimulation DC current (i.e., \(I_{stim}=0\)).
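As a minimal illustration (a sketch, not our production code), a forward-Euler update of Eqs. (1)-(3) for a single neuron with the D1/D2 SPN parameters of Table 1 reads:

```python
# Sketch: forward-Euler update of one Izhikevich neuron, Eqs. (1)-(3),
# with the D1/D2 SPN parameters of Table 1. The time step is assumed.
C, v_r, v_t, k = 16.1, -80.0, -29.3, 1.0
a, b, c, d, v_peak = 0.01, -20.0, -55.0, 84.2, 40.0
dt = 0.1  # integration time step (msec), assumed

def step(v, u, I):
    """Advance (v, u) by one time step under input current I (pA)."""
    dv = (k * (v - v_r) * (v - v_t) - u + I) / C
    du = a * (b * (v - v_r) - u)
    v, u = v + dt * dv, u + dt * du
    if v >= v_peak:              # after-spike resetting, Eq. (3)
        return c, u + d, True    # a spike occurred
    return v, u, False
```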
We model the external current \(I_{ext,i}^{(X)}(t)\) in terms of its time average, \(I_{spon,i}^{(X)}\) (spontaneous current for spontaneous firing activity), and the fluctuation from the time average, \(I_{back,i}^{(X)}(t)\) (random background input). Here, \(I_{spon}^{(X)}\) (independent of \(i\)) is just the spontaneous in-vivo current, \(I_{vivo}^{(X)}\), to obtain the spontaneous in-vivo firing rate \(f_{vivo}^{(X)}\) in the presence of synaptic inputs in the tonic cortical resting state. We also model the random background current \(I_{back,i}^{(X)}(t)\) as follows:

\[I_{back,i}^{(X)}(t)=D_{X}\cdot\xi_{i}^{(X)}(t). \tag{5}\]

Here, \(D_{X}\) is the noise intensity parameter and \(\xi_{i}^{(X)}\) is the Gaussian white noise, which satisfies the zero mean and the unit variance [109; 110; 111]:

\[\langle\xi_{i}^{(X)}(t)\rangle=0\text{ and }\langle\xi_{i}^{(X)}(t)\xi_{j}^{(X)}(t^{\prime})\rangle=\delta_{ij}\delta(t-t^{\prime}). \tag{6}\]

Finally, we consider the effects of DA modulation on the D1 and D2 SPNs [53; 54; 55]. As a result of D1 receptor activation, two opposing effects on intrinsic ion channels occur. Enhancement of the inward-rectifying potassium current (KIR) results in hyperpolarization of the D1 SPN. On the other hand, decrease in the activation threshold of the L-type Ca\({}^{2+}\) current leads to depolarization of the D1 SPN. These two competing hyperpolarization and depolarization effects may be modeled through changes in intrinsic parameters of the D1 SPN:

\[v_{r}(1+\beta_{1}^{(\text{D1})}\phi_{1})\quad\text{and}\quad d(1-\beta_{2}^{(\text{D1})}\phi_{1}), \tag{7}\]

where \(\beta_{1}^{(\text{D1})}=0.0289\), \(\beta_{2}^{(\text{D1})}=0.331\), and \(\phi_{1}\) is the DA level (i.e., fraction of active DA receptors) for the D1 SPNs. Here, the hyperpolarizing effect of the increasing KIR is modeled by upscaling \(v_{r}\) in Eq. (7). On the other hand, the enhanced depolarizing effect of the L-type Ca\({}^{2+}\) current is modeled by downscaling \(d\) in Eq. (7). The parameters \(\beta_{1}^{(\text{D1})}\) and \(\beta_{2}^{(\text{D1})}\) denote the amplitudes of their respective effects.

In contrast, a small inhibitory effect on the slow A-type potassium current through D2 receptor activation results in decrease in the neuron's rheobase current. We model the depolarizing effect by downscaling the parameter \(k\):

\[k(1-\beta^{(\text{D2})}\phi_{2}), \tag{8}\]

where \(\phi_{2}\) is the DA level for the D2 SPNs, and the parameter \(\beta^{(\text{D2})}\) (\(=0.032\)) denotes the downscaling degree in \(k\). In this paper, we consider the case of normal DA level throughout; \(\phi_{1}=\phi_{2}=\phi=0.3\).

In-vivo firing activities of BG neurons in the awake resting state with tonic cortical input (3 Hz) are shown in Table 2; the spontaneous in-vivo currents \(I_{vivo}^{(X)}\), in-vivo firing rates \(f_{vivo}^{(X)}\), and random background inputs \(D_{X}^{*}\) in each \(X\) population [\(X\) = D1 (SPN), D2 (SPN), STN, GP, and SNr] are given.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Parameters & D1/D2 SPN & STN & GP & SNr \\ \hline \(I_{vivo}^{(X)}\) & 0 & 56.5 & 84.0 & 292.0 \\ \hline \(f_{vivo}^{(X)}\) & 1 & 9.9 & 29.9 & 25.5 \\ \hline \(D_{X}^{*}\) & 246 & 11.9 & 274 & 942 \\ \hline \end{tabular} \end{table} Table 2: Spontaneous in-vivo currents \(I_{vivo}^{(X)}\), in-vivo firing rates \(f_{vivo}^{(X)}\), and random background inputs \(D_{X}^{*}\) for in-vivo firing activities of BG cells in the awake resting state with tonic cortical input (3 Hz) for the normal DA level of \(\phi=0.3\); \(X\) = D1 (SPN), D2 (SPN), STN, GP, and SNr.

### Synaptic Currents and DA Effects in Our BG SNN

We consider the synaptic current \(I_{syn,i}^{(X)}(t)\) into the \(i\)th neuron in the \(X\) population in Eq. (4). As in our previous works [112; 113; 114; 115; 116; 117], we follow the "canonical" formalism for the synaptic currents. There are 3 kinds of synaptic currents, \(I_{\text{AMPA},i}^{(X,Y)}(t)\), \(I_{\text{NMDA},i}^{(X,Y)}(t)\), and \(I_{\text{GABA},i}^{(X,Z)}(t)\).
Here, \(I_{\text{AMPA},i}^{(X,Y)}(t)\) and \(I_{\text{NMDA},i}^{(X,Y)}(t)\) are the excitatory AMPA (\(\alpha\)-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor-mediated and NMDA (\(N\)-methyl-\(D\)-aspartate) receptor-mediated currents from the presynaptic source \(Y\) population to the postsynaptic \(i\)th neuron in the target \(X\) population. In contrast, \(I_{\text{GABA},i}^{(X,Z)}(t)\) is the inhibitory GABA\({}_{\text{A}}\) (\(\gamma\)-aminobutyric acid type A) receptor-mediated current from the presynaptic source \(Z\) population to the postsynaptic \(i\)th neuron in the target \(X\) population.

The \(R\) (= AMPA, NMDA, or GABA) receptor-mediated synaptic current \(I_{R,i}^{(T,S)}(t)\) from the presynaptic source \(S\) population to the \(i\)th postsynaptic neuron in the target \(T\) population is given by:

\[I_{R,i}^{(T,S)}(t)=g_{R,i}^{(T,S)}(t)\ (v_{i}^{(T)}(t)-V_{R}^{(S)}). \tag{9}\]

Here, \(g_{R,i}^{(T,S)}(t)\) and \(V_{R}^{(S)}\) are the synaptic conductance and the synaptic reversal potential, respectively. We get the synaptic conductance \(g_{R,i}^{(T,S)}(t)\) from:

\[g_{R,i}^{(T,S)}(t)=\widetilde{g}_{max,R}^{(T,S)}\sum_{j=1}^{N_{S}}w_{ij}^{(T,S)}\ s_{j}^{(T,S)}(t). \tag{10}\]

Here, \(\widetilde{g}_{max,R}^{(T,S)}\) is the maximum synaptic conductance and \(N_{S}\) is the number of neurons in the source population. The interpopulation synaptic connection from the source \(S\) population to the target \(T\) population is given by the connection weight matrix \(W^{(T,S)}\) (\(=\{w_{ij}^{(T,S)}\}\)), where \(w_{ij}^{(T,S)}=1\) in the presence of a synaptic connection from the \(j\)th neuron in the source \(S\) population to the \(i\)th neuron in the target \(T\) population; otherwise, in the absence of synaptic connection, \(w_{ij}^{(T,S)}=0\).

The fraction of open postsynaptic ion channels (opened via binding of neurotransmitters emitted from the source \(S\) population) is represented by \(s^{(T,S)}(t)\) in Eq. (10). Time evolution of \(s_{j}^{(T,S)}(t)\) is given by a sum of exponential-decay functions \(E_{R}^{(T,S)}(t-t_{f}^{(j)}-\tau_{R,l}^{(T,S)})\):

\[s_{j}^{(T,S)}(t)=\sum_{f=1}^{F_{j}^{(S)}}E_{R}^{(T,S)}(t-t_{f}^{(j)}-\tau_{R,l}^{(T,S)}). \tag{11}\]

Here, \(t_{f}^{(j)}\) and \(F_{j}^{(S)}\) are the \(f\)th spike time and the total number of spikes of the \(j\)th cell in the source \(S\) population, respectively, and \(\tau_{R,l}^{(T,S)}\) is the synaptic latency time constant for the \(R\)-mediated synaptic current. As in our previous works in the cerebellum [112; 113], the exponential-decay function \(E_{R}^{(T,S)}(t)\) (corresponding to the contribution of a presynaptic spike occurring at \(t=0\) in the absence of synaptic latency) is given by:

\[E_{R}^{(T,S)}(t)=e^{-t/\tau_{R,d}^{(T,S)}}\cdot\Theta(t). \tag{12}\]

Here, \(\tau_{R,d}^{(T,S)}\) is the synaptic decay time constant, and \(\Theta(t)\) is the Heaviside step function: \(\Theta(t)=1\) for \(t\geq 0\) and \(0\) for \(t<0\).

In the NMDA-receptor case, some of the postsynaptic NMDA channels are known to be blocked by the positive magnesium ion Mg\({}^{2+}\). In this case, the fraction of NMDA channels that are not blocked by the Mg\({}^{2+}\) ion is given by a sigmoidal function \(f(v^{(T)})\) [55; 118; 53],

\[f(v^{(T)}(t))=\frac{1}{1+0.28\cdot[\text{Mg}^{2+}]\cdot e^{-0.062v^{(T)}(t)}}. \tag{13}\]

Here, \(v^{(T)}\) is the membrane potential of a neuron in the target population \(T\) and [Mg\({}^{2+}\)] is the equilibrium concentration of magnesium ions ([Mg\({}^{2+}\)] = 1 mM). Thus, the synaptic current into the \(i\)th neuron in the target \(X\) population becomes

\[I_{syn,i}^{(X)}(t)=I_{\text{AMPA},i}^{(X,Y)}(t)+f(v_{i}^{(X)}(t))\cdot I_{\text{NMDA},i}^{(X,Y)}(t)+I_{\text{GABA},i}^{(X,Z)}(t). \tag{14}\]

Synaptic parameters of the synaptic currents from the source population \(S\) to the target population \(T\): maximum synaptic conductance \(\tilde{g}_{max,R}^{(T,S)}\), synaptic decay time \(\tau_{R,d}^{(T,S)}\), synaptic delay time \(\tau_{R,l}^{(T,S)}\), and synaptic reversal potential \(V_{R}^{(S)}\) are shown in Table 3. These parameter values are also based on the physiological properties of the relevant neurons [42; 43; 44; 45; 46; 47; 48; 49; 50; 54; 55; 57; 32].

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(S\to T\) & \(R\) & \(\tilde{g}_{max,R}^{(T,S)}\) & \(\tau_{R,d}^{(T,S)}\) & \(\tau_{R,l}^{(T,S)}\) & \(V_{R}^{(S)}\) \\ \hline \multirow{2}{*}{Ctx \(\rightarrow\) D1/D2 SPN} & AMPA & 0.6 & 6 & 10 & 0 \\ \cline{2-6} & NMDA & 0.3 & 160 & 10 & 0 \\ \hline \multirow{2}{*}{Ctx \(\rightarrow\) STN} & AMPA & 0.388 & 2 & 2.5 & 0 \\ \cline{2-6} & NMDA & 0.233 & 100 & 2.5 & 0 \\ \hline D1 SPN \(\rightarrow\) SNr & GABA & 4.5 & 5.2 & 4 & -80 \\ \hline D2 SPN \(\rightarrow\) GP & GABA & 3.0 & 6 & 5 & -65 \\ \hline \multirow{2}{*}{STN \(\rightarrow\) GP} & AMPA & 1.29 & 2 & 2 & 0 \\ \cline{2-6} & NMDA & 0.4644 & 100 & 2 & 0 \\ \hline GP \(\leftrightarrow\) GP & GABA & 0.765 & 5 & 1 & -65 \\ \hline GP \(\rightarrow\) STN & GABA & 0.518 & 8 & 4 & -84 \\ \hline \multirow{2}{*}{STN \(\rightarrow\) SNr} & AMPA & 12 & 2 & 1.5 & 0 \\ \cline{2-6} & NMDA & 5.04 & 100 & 1.5 & 0 \\ \hline GP \(\rightarrow\) SNr & GABA & 73 & 2.1 & 3 & -80 \\ \hline \end{tabular} \end{table} Table 3: Parameters for the synaptic currents from the source population (\(S\)) to the target population (\(T\)): maximum synaptic conductances \(\tilde{g}_{max,R}^{(T,S)}\), synaptic decay times \(\tau_{R,d}^{(T,S)}\), synaptic delay times \(\tau_{R,l}^{(T,S)}\), and synaptic reversal potentials \(V_{R}^{(S)}\).

Finally, we consider the effect of DA modulation on the synaptic currents into the D1 SPN, D2 SPN, STN, and GP neurons in Fig. 1 [53; 54; 55]. In the case of synaptic currents into the D1 SPNs, the DA modulation effect is modeled by up-scaling the NMDA receptor-mediated current \(I_{\text{NMDA}}\) with the factor \(\beta^{(\text{D1})}\):

\[I_{\text{NMDA}}(1+\beta^{(\text{D1})}\phi_{1}), \tag{15}\]

where \(\beta^{(\text{D1})}=0.5\) and \(\phi_{1}\) is the DA level for the D1 SPNs. (In the case of D1 SPNs, there is no effect of DA modulation on the AMPA receptor-mediated current \(I_{\text{AMPA}}\).) On the other hand, in the case of synaptic currents into the D2 SPNs, the DA modulation effect is modeled by down-scaling the AMPA receptor-mediated current \(I_{\text{AMPA}}\) with the factor \(\beta^{(\text{D2})}\):

\[I_{\text{AMPA}}(1-\beta^{(\text{D2})}\phi_{2}), \tag{16}\]
In the NMDA-receptor case, some of the postsynaptic NMDA channels are known to be blocked by the positive magnesium ion Mg\({}^{2+}\). In this case, fraction of NMDA channels that are not blocked by the Mg\({}^{2+}\) ion is given by a sigmoidal function \(f(v^{(T)})\)[55; 118; 53], \[f(v^{(T)}(t))=\frac{1}{1+0.28\cdot[\text{Mg}^{2+}]\cdot e^{-0.062v^{(T)}(t)}}. \tag{13}\] Here, \(v^{(T)}\) is the membrane potential of a neuron in the target population \(T\) and [Mg\({}^{2+}\)] is the equilibrium concentration of magnesium ions ([Mg\({}^{2+}\)] = 1 mM). Thus, the synaptic current into the \(i\)th neuron in the target \(X\) population becomes \[I_{syn,i}^{(X)}(t)=I_{\text{AMPA},i}^{(X,Y)}(t)+f(v_{i}^{(X)}(t))\cdot I_{ \text{NMDA},i}^{(X,Y)}(t)+I_{\text{GABA},i}^{(X,Z)}(t). \tag{14}\] Synaptic parameters of the synaptic currents from the source population \(S\) to the target population \(T\): maximum synaptic conductance \(\tilde{g}_{max,R}^{(T,S)}\), synaptic decay time \(\tau_{R,d}^{(T,S)}\), synaptic delay time \(\tau_{R,l}^{(T,S)}\), and synaptic reversal potential \(V_{R}^{(S)}\) are shown in Table 3. These parameter values are also based on the physiological properties of the relevant neurons [42; 43; 44; 45; 46; 47; 48; 49; 50; 54; 55; 57; 32]. Finally, we consider the effect of DA modulation on the synaptic currents into D1 SPN, D2 SPN, STN, and GP neurons in Fig. 1[53; 54; 55]. In the case of synaptic currents into the D1 SPNs, DA modulation effect is modeled by up-scaling the NMDA receptor-mediated current \(I_{\text{NMDA}}\) with the factor \(\beta^{(\text{D1})}\): \[I_{\text{NMDA}}(1+\beta^{(\text{D1})}\phi_{1}), \tag{15}\] where \(\beta^{(\text{D1})}=0.5\) and \(\phi_{1}\) is the DA level for the D1 SPNs. (In the case of D1 SPNs, there is no effect of DA modulation on the AMPA receptor-mediated current \(I_{\text{AMPA}}\).) On the other hand, in the case of synaptic currents into the D2 SPNs, DA modulation effect is modeled by down-scaling the AMPA receptor-mediated current \(I_{\text{AMPA}}\) with the factor \(\beta^{(\text{D2})}\): \[I_{\text{AMPA}}(1-\beta^{(\text{D2})}\phi_{2}), \tag{16}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(S\to T\) & \(R\) & \(\tilde{g}_{max,R}^{(T,S)}\) & \(\tau_{R,d}^{(T,S)}\) & \(\tau_{R,l}^{(T,S)}\) & \(V_{R}^{(S)}\) \\ \hline \multirow{2}{*}{Ctx \(\rightarrow\) D1/D2 SPN} & AMPA & 0.6 & 6 & 10 & 0 \\ \cline{2-6} & NMDA & 0.3 & 160 & 10 & 0 \\ \hline \multirow{2}{*}{Ctx \(\rightarrow\) STN} & AMPA & 0.388 & 2 & 2.5 & 0 \\ \cline{2-6} & NMDA & 0.233 & 100 & 2.5 & 0 \\ \hline D1 SPN \(\rightarrow\) SNr & GABA & 4.5 & 5.2 & 4 & -80 \\ \hline D2 SPN \(\rightarrow\) GP & GABA & 3.0 & 6 & 5 & -65 \\ \hline \multirow{2}{*}{STN \(\rightarrow\) GP} & AMPA & 1.29 & 2 & 2 & 0 \\ \cline{2-6} & NMDA & 0.4644 & 100 & 2 & 0 \\ \hline GP \(\leftrightarrow\) GP & GABA & 0.765 & 5 & 1 & -65 \\ \hline GP \(\rightarrow\) STN & GABA & 0.518 & 8 & 4 & -84 \\ \hline \multirow{2}{*}{STN \(\rightarrow\) SNr} & AMPA & 12 & 2 & 1.5 & 0 \\ \cline{2-6} & NMDA & 5.04 & 100 & 1.5 & 0 \\ \hline GP \(\rightarrow\) SNr & GABA & 73 & 2.1 & 3 & -80 \\ \hline \end{tabular} \end{table} Table 3: Parameters for the synaptic currents from the source population (\(S\)) to the target population (\(T\)): Maximum synaptic conductances \(\tilde{g}_{max,R}^{(T,S)}\), synaptic decay times \(\tau_{R,d}^{(T,S)}\), synaptic delay times \(\tau_{R,l}^{(T,S)}\), and synaptic reversal potential \(V_{R}^{(S)}\). 
where \(\beta^{\rm(D2)}\) = 0.3 and \(\phi_{2}\) is the DA level for the D2 SPNs. (There is no DA effect on \(I_{\rm NMDA}\) for the D2 SPNs.) Unlike the above cases of D1 and D2 SPNs, there are DA effects on all the synaptic currents, \(I_{\rm AMPA}\), \(I_{\rm NMDA}\), and \(I_{\rm GABA}\), into the STN neurons and GP neurons. In these cases, all excitatory and inhibitory synaptic currents are down-scaled with their scaling factors, depending on \(\phi_{2}\) (DA level for the D2 SPNs): \[(I_{AMPA}+f(v)\cdot I_{NMDA})(1-\beta_{1}^{\rm(STN)}\phi_{2})\] \[+I_{GABA}(1-\beta_{2}^{\rm(STN)}\phi_{2}), \tag{17}\] \[(I_{AMPA}+f(v)\cdot I_{NMDA})(1-\beta_{1}^{\rm(GP)}\phi_{2})\] \[+I_{GABA}(1-\beta_{2}^{\rm(GP)}\phi_{2}). \tag{18}\] Here, the scaling factors are \(\beta_{1}^{\rm(STN)}\) = \(\beta_{2}^{\rm(STN)}\) = 0.5 and \(\beta_{1}^{\rm(GP)}\) = \(\beta_{2}^{\rm(GP)}\) = 0.5.

## III Quantitative analysis of break-up and recovery of harmony between DP and IP for the HD

In this section, we quantitatively analyze the competitive harmony (i.e., competition and cooperative interplay) between DP and IP in terms of the competition degree \({\cal C}_{d}\) between them, introduced in our previous work [26]. \({\cal C}_{d}\) is given by the ratio of the strength of DP (\({\cal S}_{DP}\)) to the strength of IP (\({\cal S}_{IP}\)) (i.e., \({\cal C}_{d}\) = \({\cal S}_{DP}\)/\({\cal S}_{IP}\)). We consider the early stage of HD, where neurodegenerative loss of D2 SPNs occurs; \(N_{\rm D2}\) (number of D2 SPNs) \(=N_{\rm D2}^{*}\,x_{\rm D2}\), where \(N_{\rm D2}^{*}\) (\(=1325\)) is the normal value and \(x_{\rm D2}\) (\(1>x_{\rm D2}\geq 0\)) is the fraction of the number of D2 SPNs [56; 57; 58; 59; 90; 91]. By decreasing \(x_{\rm D2}\) from 1, we investigate the break-up of harmony between DP and IP in both cases of tonic cortical input (3 Hz) in the resting state and phasic cortical input (10 Hz) in the phasically active state. In these cases, the IP becomes weakened, and thus \({\cal C}_{d}\) becomes larger than the normal values. Consequently, involuntary jerky movement and abnormal hyperkinetic movement occur in the tonic and phasic cases, respectively. Next, we study treatment of HD through recovery of harmony between DP and IP.

Figure 2: Involuntary jerky movement due to degenerative loss of D2 SPNs in the tonic pathological state for the tonic cortical input (3 Hz) in the resting state. Colors: parts related to DP (green) and parts associated with IP (red). (a) Raster plot of spikes and IPSR (instantaneous population spike rate) \(R_{\rm D1}(t)\) of D1 SPNs. Raster plots of spikes and IPSRs \(R_{\rm D2}(t)\) of D2 SPNs for (b1) \(x_{\rm D2}\) = 1.0, (b2) 0.8, (b3) 0.5, and (b4) 0.2. (c) Plot of population-averaged MFR (mean firing rate) \(\langle f_{i}^{\rm(D2)}\rangle\) of D2 SPNs versus \(x_{\rm D2}\). Raster plots of spikes and IPSRs \(R_{\rm STN}(t)\) of STN neurons for (d1) \(x_{\rm D2}\) = 1.0, (d2) 0.8, (d3) 0.5, and (d4) 0.2. Raster plots of spikes and IPSRs \(R_{\rm GP}(t)\) of GP neurons for (e1) \(x_{\rm D2}\) = 1.0, (e2) 0.8, (e3) 0.5, and (e4) 0.2. Plots of population-averaged MFRs of (f) STN neurons \(\langle f_{i}^{\rm(STN)}\rangle\) and (g) GP neurons \(\langle f_{i}^{\rm(GP)}\rangle\) versus \(x_{\rm D2}\). (h) Plots of strengths of DP \({\cal S}_{DP}\) and IP \({\cal S}_{IP}\) versus \(x_{\rm D2}\). (i) Plot of the competition degree \({\cal C}_{d}\) versus \(x_{\rm D2}\). Raster plot of spikes and IPSR \(R_{\rm SNr}(t)\) of SNr neurons for (j1) \(x_{\rm D2}\) = 1.0, (j2) 0.8, (j3) 0.5, and (j4) 0.2. (k) Plot of population-averaged MFR \(\langle f_{i}^{\rm(SNr)}\rangle\) of SNr neurons versus \(x_{\rm D2}\).
We strengthen the IP via activation of D2 SPNs and STN neurons and deactivation of GP neurons, based on optogenetics [92; 93]. Consequently, harmony between DP and IP becomes recovered, leading to normal movement. Finally, we also investigate the effect of loss of healthy synapses in the BG neurons on the HD.

### Break-up of Harmony between DP and IP for the HD

In the early stage of HD, we consider the case of normal DA level of \(\phi_{1}=\phi_{2}=\phi=0.3\) for the D1 and D2 SPNs. As explained in Sec. II.1, cortical inputs are modeled in terms of 1,000 independent Poisson spike trains with firing rate \(f\). We first consider the case of tonic cortical input with \(f=3\) Hz in the resting state [7; 52; 55; 78; 100; 101; 102; 103; 104]. Population firing behavior of BG neurons can be well visualized in the raster plot of spikes, corresponding to a collection of spike trains of individual BG neurons. Figure 2(a) shows the raster plot of spikes for D1 SPNs, associated with DP (green color). In contrast to the case of D1 SPNs, degenerative loss of D2 SPNs occurs. With decreasing \(x_{\rm D2}\) (i.e., fraction of the number of D2 SPNs) from 1, we also get the raster plots of spikes of D2 SPNs [Figs. 2(b1)-2(b4)], the STN neurons [Figs. 2(d1)-2(d4)], and the GP neurons [Figs. 2(e1)-2(e4)], related to the IP (red color), for \(x_{\rm D2}=\) 1.0, 0.8, 0.5, and 0.2.

As a collective quantity showing population behaviors, we employ an IPSR (instantaneous population spike rate), which may be obtained from the raster plot of spikes [119; 120; 121; 122; 123]. Each spike in the raster plot is convoluted with a kernel function \(K_{h}(t)\) to get a smooth estimate of the IPSR \(R_{X}(t)\) [124]: \[R_{X}(t)=\frac{1}{N_{X}}\sum_{i=1}^{N_{X}}\sum_{s=1}^{n_{i}^{(X)}}K_{h}(t-t_{s, i}^{(X)}). \tag{19}\] Here, \(N_{X}\) is the number of the neurons in the \(X\) population, and \(t_{s,i}^{(X)}\) and \(n_{i}^{(X)}\) are the \(s\)th spiking time and the total number of spikes for the \(i\)th neuron, respectively. We use a Gaussian kernel function of bandwidth \(h\): \[K_{h}(t)=\frac{1}{\sqrt{2\pi}h}e^{-t^{2}/2h^{2}},\ \ \ \ -\infty<t<\infty, \tag{20}\] where the bandwidth \(h\) of \(K_{h}(t)\) is 20 msec. The IPSRs \(R_{X}(t)\) for \(X=\) D1 (SPN), D2 (SPN), STN, GP, and SNr are also shown below their respective raster plots of spikes. Here, the case of \(x_{\rm D2}=1\) corresponds to the normal one without degenerative loss of D2 SPNs. With decreasing \(x_{\rm D2}\) from 1, the population firing activities of the D2 SPNs, the STN neurons, and the GP neurons, associated with IP (red), are changed, while that of the D1 SPNs, related to DP (green), is unchanged.

We also study the population-averaged mean firing rate (MFR) of the neurons \(\langle f_{i}^{(X)}\rangle\) in the \(X\) population [\(X=\) D1 (SPN), D2 (SPN), STN, and GP]; \(f_{i}^{(X)}\) is the MFR of the \(i\)th neuron in the \(X\) population, and \(\langle\cdots\rangle\) represents a population average over all the neurons. For the D1 and D2 SPNs, \(\langle f_{i}^{(\rm D1)}\rangle=1.03\) Hz and \(\langle f_{i}^{(\rm D2)}\rangle=0.97\) Hz, independently of \(x_{\rm D2}\), because there is no change in cortical inputs to the D1/D2 SPNs; see Fig. 2(c) for the D2 SPNs.
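The IPSR of Eqs. (19)-(20) is straightforward to compute from the spike rasters. A minimal sketch (ours), with all times in msec so the returned rate is in spikes/msec (multiply by 1000 to get Hz):

```python
import numpy as np

def ipsr(spikes_per_neuron, t_grid, h=20.0):
    """R_X(t) of Eq. (19): population average of spike trains
    convolved with the Gaussian kernel K_h of Eq. (20)."""
    N = len(spikes_per_neuron)
    R = np.zeros_like(t_grid, dtype=float)
    for spikes in spikes_per_neuron:
        for t_s in spikes:
            R += np.exp(-(t_grid - t_s) ** 2 / (2.0 * h ** 2))
    return R / (N * np.sqrt(2.0 * np.pi) * h)
```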
As \(x_{\rm D2}\) is decreased from 1, \(\langle f_{i}^{(\rm GP)}\rangle\) of the GP neurons is increased from 29.9 to 38.8 Hz, due to decrease in inhibitory projection from the D2 SPNs, as shown in Fig. 2(g). In contrast, because of increased inhibitory projection from the GP, \(\langle f_{i}^{(\rm STN)}\rangle\) of the STN neurons is decreased from 9.9 to 6.5 Hz [see Fig. 2(f)]. We note that, there are two types of synaptic currents into the (output) SNr neurons, \(I_{DP}\) and \(I_{IP}\), via DP (green) and IP (red) in Fig. 1, respectively. The DP current, \(I_{DP}(t)\), is just the (inhibitory) synaptic current from the D1 SPNs to the SNr neurons: \[I_{DP}(t)=-I_{syn}^{(\rm SNr,D1)}(t). \tag{21}\] There is no change in \(I_{DP}(t)\), independently of \(x_{\rm D2}\). The IP current, \(I_{IP}(t)\), is composed of the excitatory component, \(I_{IP}^{(E)}(t)\), and the inhibitory component, \(I_{IP}^{(I)}(t):\) \[I_{IP}(t)=I_{IP}^{(E)}(t)+I_{IP}^{(I)}(t). \tag{22}\] Here, \(I_{IP}^{(E)}(t)\) [\(I_{IP}^{(I)}(t)\)] is just the synaptic current from the STN (GP) to the SNr: \[I_{IP}^{(E)}(t)=-I_{syn}^{(\rm SNr,STN)}(t)\ \ \ {\rm and}\ \ \ I_{IP}^{(I)}(t)=-I_{syn}^{(\rm SNr,GP)}(t). \tag{23}\] Unlike the case of \(I_{DP}(t)\), with decreasing \(x_{\rm D2}\) from 1, \(I_{IP}(t)\) becomes decreased due to decrease in \(I_{IP}^{(E)}(t)\) and increase in \(|I_{IP}^{(I)}(t)|\) (\(|\cdots|\): absolute magnitude). Firing activity of the (output) SNr neurons is determined through competition between \(I_{DP}(t)\) (DP current) and \(I_{IP}(t)\) (IP current) into the SNr. The strengths of DP and IP, \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), are given by the magnitudes of their respective time-averaged synaptic currents: \[\mathcal{S}_{DP}=|\overline{I_{DP}(t)}|\ \ \ {\rm and}\ \ \ \mathcal{S}_{IP}=| \overline{I_{IP}(t)}|, \tag{24}\] where the overline denotes the time averaging and \(|\cdots|\) represents the absolute magnitude. Then, the competition degree \(\mathcal{C}_{d}\) between DP and IP (given by the ratio of \(\mathcal{S}_{DP}\) to \(\mathcal{S}_{IP}\)) was introduced in [26]: \[\mathcal{C}_{d}=\frac{\mathcal{S}_{DP}}{\mathcal{S}_{IP}}. \tag{25}\] For \(x_{\rm D2}=1\) (without degenerative loss of D2 SPNs), \(\mathcal{S}_{DP}=23.1\) and \(\mathcal{S}_{IP}=23.4\), and hence DP and IP become nearly balanced (i.e., \(\mathcal{C}_{d}=0.99\)). In this non-degenerative case, the SNr neurons fire very actively with \(\langle f_{i}^{(\rm SNr)}\rangle=25.5\) Hz. Due to strong inhibitory projection from the SNr, the thalamic cells become silent, resulting in no movement (i.e., the BG door to the thalamus is locked in the normal tonic default state). But, with decreasing \(x_{\rm D2}\) from 1 (degenerative case), as shown in Fig. 2(h), \(\mathcal{S}_{IP}\) is rapidly decreased from 23.4 to 1.4, while there is no change in \(\mathcal{S}_{DP}\) (= 23.1). In this way, IP for the HD becomes weakened. Thus, as \(x_{\rm D2}\) is decreased from 1, the competition degree \(\mathcal{C}_{d}\) between DP and IP is found to increase from 0.99 to 16.5 [see Fig. 2(i)]. Thus, balance between DP and IP becomes broken up in the degenerative tonic case. Figures 2(j1)-2(j4) show raster plots of spikes and IPSRs \(R_{\rm SNr}(t)\) of the (output) SNr neurons for \(x_{\rm D2}\) = 1.0, 0.8, 0.5, and 0.2, respectively. We note that, firing activity of the SNr neurons becomes reduced with decreasing \(x_{\rm D2}\) because of weakened IP. 
As a result of the decrease in \(\mathcal{S}_{IP}\) (strength of IP), the population-averaged MFR \(\langle f_{i}^{\rm(SNr)}\rangle\) is found to decrease from 25.5 to 6.9 Hz with decreasing \(x_{\rm D2}\) from 1, as shown in Fig. 2(k). Thus, the BG gate to the thalamus becomes opened even in the case of tonic cortical input (3 Hz) in the resting state via break-up of balance between DP and IP. Consequently, a tonic pathological state with involuntary jerky movement occurs, in contrast to the tonic default state without movement.

Next, we consider the case of phasic cortical input (10 Hz) in the phasically active state [7; 52; 55; 78; 100; 101; 102; 103; 104], which is shown in Fig. 3. Population firing behavior of D1 SPNs, associated with DP (green color), is shown in their raster plot of spikes and the IPSR \(R_{\rm D1}(t)\) in Fig. 3(a). In comparison to the tonic case with the population-averaged MFR \(\langle f_{i}^{\rm(D1)}\rangle\) = 1.03 Hz in Fig. 2(a), firing activity of the D1 SPNs becomes very active with \(\langle f_{i}^{\rm(D1)}\rangle\) = 30.7 Hz, independently of \(x_{\rm D2}\). But, due to degenerative loss of D2 SPNs, population firing activities of the D2 SPNs, the STN neurons, and the GP neurons [related to the IP (red color)] are changed with decreasing \(x_{\rm D2}\), as shown in their raster plots of spikes and IPSRs in Fig. 3. The population-averaged MFRs of the D2 SPNs, the STN neurons, and the GP neurons are also shown in Figs. 3(c), 3(f), and 3(g), respectively. For the D2 SPNs, \(\langle f_{i}^{\rm(D2)}\rangle\) = 24.1 Hz [much larger than that (0.97 Hz) in the tonic case], independently of \(x_{\rm D2}\), because there is no change in cortical input to the D2 SPNs.

Figure 3: Abnormal hyperkinetic movement due to degenerative loss of D2 SPNs in the phasic pathological state for the phasic cortical input (10 Hz) in the phasically-active state. Colors: parts related to DP (green) and parts associated with IP (red). (a) Raster plot of spikes and IPSR (instantaneous population spike rate) \(R_{\rm D1}(t)\) of D1 SPNs. Raster plots of spikes and IPSRs \(R_{\rm D2}(t)\) of D2 SPNs for (b1) \(x_{\rm D2}\) = 1.0, (b2) 0.8, (b3) 0.5, and (b4) 0.2. (c) Plot of population-averaged MFR (mean firing rate) \(\langle f_{i}^{\rm(D2)}\rangle\) of D2 SPNs versus \(x_{\rm D2}\). Raster plots of spikes and IPSRs \(R_{\rm STN}(t)\) of STN neurons for (d1) \(x_{\rm D2}\) = 1.0, (d2) 0.8, (d3) 0.5, and (d4) 0.2. Raster plots of spikes and IPSRs \(R_{\rm GP}(t)\) of GP neurons for (e1) \(x_{\rm D2}\) = 1.0, (e2) 0.8, (e3) 0.5, and (e4) 0.2. Plots of population-averaged MFRs of (f) STN neurons \(\langle f_{i}^{\rm(STN)}\rangle\) and (g) GP neurons \(\langle f_{i}^{\rm(GP)}\rangle\) versus \(x_{\rm D2}\). (h) Plots of strengths of DP \(\mathcal{S}_{DP}\) and IP \(\mathcal{S}_{IP}\) versus \(x_{\rm D2}\). (i) Plot of the competition degree \(\mathcal{C}_{d}\) versus \(x_{\rm D2}\). Raster plot of spikes and IPSR \(R_{\rm SNr}(t)\) of SNr neurons for (j1) \(x_{\rm D2}\) = 1.0, (j2) 0.8, (j3) 0.5, and (j4) 0.2. (k) Plot of population-averaged MFR \(\langle f_{i}^{\rm(SNr)}\rangle\) of SNr neurons versus \(x_{\rm D2}\).

As a result of decreased inhibitory projection from the D2 SPNs, \(\langle f_{i}^{\rm(GP)}\rangle\) of the GP neurons is rapidly increased from 7.3 to 66.1 Hz with decreasing \(x_{\rm D2}\) from 1; the increasing rate is higher than in the tonic case.
On the other hand, due to the increase in inhibitory projection from the GP, \(\langle f_{i}^{\rm(STN)}\rangle\) of the STN neurons decreases from 39.8 to 17.6 Hz; the decreasing rate is also larger than that in the tonic case.

We consider the case of \(x_{\rm D2}=1\) without degeneration. In this non-degenerative case, \({\cal S}_{DP}=2309.7\) and \({\cal S}_{IP}=815.6\). Thus, the competition degree becomes \({\cal C}_{d}=2.82\) [i.e., \({\cal S}_{DP}\) (strength of DP) is 2.82 times larger than \({\cal S}_{IP}\) (strength of IP)]. In this case, \(\langle f_{i}^{\rm(SNr)}\rangle\) of the (output) SNr neurons is decreased to 5.5 Hz (cf. 25.5 Hz in the tonic case). Consequently, the BG door to the thalamus becomes opened, leading to normal movement. This phasic healthy state with \({\cal C}_{d}=2.82\) is in contrast to the tonic healthy state with \({\cal C}_{d}\simeq 1.0\), resulting in no movement. However, as \(x_{\rm D2}\) is decreased from 1 (degenerative case), \({\cal S}_{IP}\) is rapidly decreased from 815.6 to 92.3, while there is no change in \({\cal S}_{DP}\) (= 2309.7) [see Fig. 3(h)]. Thus, IP becomes rapidly weakened. Due to such under-activity of IP, the competition degree \({\cal C}_{d}\) increases from 2.82 (healthy state) to 25.0, as shown in Fig. 3(i). Consequently, harmony between DP and IP becomes broken up in the degenerative case with \(x_{\rm D2}<1\), and then a phasic pathological state with abnormal hyperkinetic movement appears, in contrast to the phasic healthy state with normal movement. Raster plots of spikes and IPSRs \(R_{\rm SNr}(t)\) of the (output) SNr neurons for \(x_{\rm D2}=1.0\), 0.8, 0.5, and 0.2 are shown in Figs. 3(j1)-3(j4), respectively. Due to under-activity of IP, firing activity of the SNr neurons becomes decreased with decreasing \(x_{\rm D2}\) from 1. Due to the decreased \({\cal S}_{IP}\) (strength of IP), the population-averaged MFR \(\langle f_{i}^{\rm(SNr)}\rangle\) decreases from 5.5 (healthy state) to 0.7 Hz with decreasing \(x_{\rm D2}\) from 1 [see Fig. 3(k)]. In this phasic pathological state with \({\cal C}_{d}>2.82\) (where harmony between DP and IP is broken up), abnormal hyperkinetic movement disorder occurs, in contrast to the normal movement for the phasic healthy state with \({\cal C}_{d}=2.82\) (where there is harmony between DP and IP).

To sum up the above results briefly, for the HD, pathological states (where harmony between DP and IP is broken up) appear due to degenerative loss of D2 SPNs in the cases of both tonic and phasic cortical inputs. On the other hand, for the PD, pathological states appear because of DA deficiency [18; 19; 20; 21; 22; 23; 26]. In the case of HD, IP is under-active, in contrast to the case of PD with over-active IP. Thus, patients with HD exhibit abnormal hyperkinetic movement disorder, while patients with PD show abnormal hypokinetic movement disorder. Consequently, HD lies at one end of the spectrum of movement disorders in the BG, while PD lies at the other end.

### Treatment of HD via Recovery of Harmony between DP and IP

For the pathological state in the HD, IP is under-active due to degenerative loss of D2 SPNs, in comparison to the healthy state. Thus, harmony between DP and IP is broken up (i.e., disharmony between DP and IP occurs), leading to abnormal hyperkinetic movement disorder.
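Before turning to treatment, we note that the diagnostics used above reduce to a few lines of code. A sketch (ours) of Eqs. (21)-(25), assuming the synaptic currents into the SNr have been recorded as arrays over time:

```python
import numpy as np

def competition_degree(I_syn_snr_d1, I_syn_snr_stn, I_syn_snr_gp):
    """C_d = S_DP / S_IP of Eq. (25), where S_DP and S_IP are the
    absolute time averages of the DP and IP currents [Eq. (24)]."""
    I_dp = -np.asarray(I_syn_snr_d1)                              # Eq. (21)
    I_ip = -np.asarray(I_syn_snr_stn) - np.asarray(I_syn_snr_gp)  # Eqs. (22)-(23)
    return abs(I_dp.mean()) / abs(I_ip.mean())
```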
Here, we investigate treatment of the pathological state with enhanced competition degree \(\mathcal{C}_{d}\) (compared to the normal one for the healthy state) in both cases of tonic and phasic cortical inputs via recovery of harmony between DP and IP. Activation and deactivation of the target neurons via optogenetics were studied in [26]. When light-sensitive proteins (called opsins) are activated through light stimulation, a variation in the intrinsic ionic currents of the neurons in the target population \(X\), \(\Delta I_{ion}^{(X)}\), takes place. If \(\Delta I_{ion}^{(X)}\) is positive (negative), firing activity of the target neurons is increased (decreased), leading to their activation (deactivation) [92; 93]. As discussed in [26], we simulate the effects of optogenetics by adding \(\Delta I_{ion}^{(X)}\) in Eq. (1), in addition to the current \(I_{i}^{(X)}\), into the target \(X\) population. With increasing intensity of light stimulation, the magnitude of \(\Delta I_{ion}^{(X)}\) also increases.

We first consider tonic pathological states with enhanced competition degree \(\mathcal{C}_{d}\) [larger than that (1) for the tonic healthy state (with balanced DP and IP)], occurring due to degenerative loss of D2 SPNs, in the case of tonic cortical input (3 Hz) (see Fig. 2). As an example, we consider the tonic pathological case of \(x_{\rm D2}=0.5\) with \(\mathcal{C}_{d}=1.53\). In this pathological case, IP is under-active in comparison to the tonic healthy case (with balanced DP and IP); firing activity of D2 SPNs is under-active, leading to over-activity of GP neurons, which then results in under-activity of the STN neurons. Hence, for recovery of balance between DP and IP, we try to strengthen the IP via activation of D2 SPNs and STN neurons and deactivation of GP neurons.

We first strengthen the IP through activation of the target (under-active) D2 SPNs. Figure 4(a1) shows plots of \(\mathcal{S}_{IP}\) (strength of IP) and \(\mathcal{S}_{DP}\) (strength of DP) versus \(\Delta I_{ion}^{(\rm D2)}\). As \(\Delta I_{ion}^{(\rm D2)}\) is increased from 0, \(\mathcal{S}_{IP}\) (red) increases from 15.1, while \(\mathcal{S}_{DP}\) (green) remains unchanged (i.e., 23.1). As a result of the increase in \(\mathcal{S}_{IP}\), the competition degree \(\mathcal{C}_{d}\) between DP and IP is found to decrease from 1.53 [Fig. 4(a2)]. Also, the population-averaged MFR of the output SNr neurons, \(\langle f_{i}^{(\rm SNr)}\rangle\), is found to increase from 16.1 Hz [Fig. 4(a3)]. We note that, as \(\Delta I_{ion}^{(\rm D2)}\) passes a threshold \(\Delta I_{ion}^{(\rm D2)*}\) (\(=\) 262 pA), \(\mathcal{C}_{d}\) \(=\) \(\mathcal{C}_{d}^{*}\) (\(=\) 1.0) and \(\langle f_{i}^{(\rm SNr)}\rangle\) \(=\) \(\langle f_{i}^{(\rm SNr)*}\rangle\) (\(=\) 25.5 Hz); \(\mathcal{C}_{d}^{*}\) and \(\langle f_{i}^{(\rm SNr)*}\rangle\) are those for the tonic healthy state, and they are represented by the horizontal dashed lines in Figs. 4(a2) and 4(a3). Thus, for \(x_{\rm D2}=0.5\), the pathological state with \(\mathcal{C}_{d}=1.53\) may recover \(\mathcal{C}_{d}^{*}\) (\(=\) 1.0) via activation of D2 SPNs at the threshold \(\Delta I_{ion}^{(\rm D2)*}\) (\(=\) 262 pA); DP and IP become balanced, as in the case of the tonic healthy state. In this way, balance between DP and IP is recovered for \(\Delta I_{ion}^{(\rm D2)*}\) = 262 pA. Figure 4(b) shows the plot of \(\Delta I_{ion}^{(\rm D2)*}\) versus \(x_{\rm D2}\).
As \(x_{\rm D2}\) is decreased from 1, the threshold \(\Delta I_{ion}^{(\rm D2)*}\) is increased; with decreasing \(x_{\rm D2}\), more \(\Delta I_{ion}^{(\rm D2)*}\) is necessary for recovery of balance between DP and IP.

We also strengthen the IP via activation of the target (under-active) STN neurons, which is shown in Figs. 4(c1)-4(c3) for \(x_{\rm D2}=0.5\). All the behaviors are qualitatively the same as those in the case of activation of D2 SPNs. With increasing \(\Delta I_{ion}^{(\rm STN)}\) from 0, \(\mathcal{S}_{IP}\) (strength of IP) increases, leading to a decrease in the competition degree \(\mathcal{C}_{d}\), and the population-averaged MFR of the output SNr neurons, \(\langle f_{i}^{(\rm SNr)}\rangle\), also increases. But the threshold \(\Delta I_{ion}^{(\rm STN)*}\) (\(=\) 14 pA), where balance between DP and IP is recovered (i.e., \(\mathcal{C}_{d}=\) 1 and \(\langle f_{i}^{(\rm SNr)}\rangle=\) 25.5 Hz, as in the case of the tonic healthy state), is smaller than that (262 pA) in the case of activation of D2 SPNs. The mono-synaptic effect of STN neurons on the output SNr neurons is more direct than the bi- or tri-synaptic effect of D2 SPNs, which could result in the smaller threshold \(\Delta I_{ion}^{(\rm STN)*}\) in the case of STN neurons. Figure 4(d) shows the plot of \(\Delta I_{ion}^{(\rm STN)*}\) versus \(x_{\rm D2}\). With decreasing \(x_{\rm D2}\) from 1, the threshold \(\Delta I_{ion}^{(\rm STN)*}\) increases, as shown in Fig. 4(d); as \(x_{\rm D2}\) is decreased, more \(\Delta I_{ion}^{(\rm STN)*}\) is necessary for recovery of balance between DP and IP.

Unlike the cases of activation of (under-active) D2 SPNs and STN neurons, IP may be strengthened via deactivation of (over-active) GP neurons; in the case of deactivation, \(\Delta I_{ion}^{(\rm GP)}\) is negative, in contrast to the case of activation with \(\Delta I_{ion}^{(X)}>\) 0 [\(X\) = D2 (SPN) and STN]. Figures 4(e1)-4(e3) and 4(f) show the case of deactivation of GP neurons. As the magnitude of \(\Delta I_{ion}^{(\rm GP)}\) is increased (i.e., more negative), the strength of IP, \(\mathcal{S}_{IP}\) (red), is found to increase from 15.1, while \(\mathcal{S}_{DP}\) (green) remains constant (\(=\) 23.1). Thus, when passing a threshold \(\Delta I_{ion}^{(\rm GP)*}\) (\(=-28\) pA), balance between DP and IP becomes recovered (i.e., the competition degree \(\mathcal{C}_{d}\) becomes 1 and the population-averaged MFR of output SNr neurons \(\langle f_{i}^{(\rm SNr)}\rangle\) becomes 25.5 Hz) [see Figs. 4(e2) and 4(e3)]. As shown in Fig. 4(f), with decreasing \(x_{\rm D2}\) from 1, the threshold \(\Delta I_{ion}^{\rm(GP)*}\) is decreased (i.e., its magnitude increases); as \(x_{\rm D2}\) is decreased from 1, a more negative \(\Delta I_{ion}^{\rm(GP)*}\) is required for recovery of balance between DP and IP.

Instead of the above deactivation of GP neurons via optogenetics, we also consider ablation of (over-active) GP neurons in the pathological state for \(x_{\rm D2}=0.5\) to reduce the over-activity of GP neurons. In the case of ablation, the number of GP neurons, \(N_{\rm GP}\), is reduced to \(N_{\rm GP}^{(n)}\)\(x_{\rm GP}\) (\(1>x_{\rm GP}>0\)), where \(N_{\rm GP}^{(n)}\) (\(=46\)) is the normal number of GP neurons and \(x_{\rm GP}\) is the fraction of the number of GP neurons. As shown in Figs. 4(g1)-4(g3) and 4(h), the effect of decreasing \(x_{\rm GP}\) via ablation is similar to that of deactivation of GP neurons via optogenetics.
As \(x_{\rm GP}\) is decreased from 1, the strength of IP, \(\mathcal{S}_{IP}\) (red), is found to increase from 15.1 (i.e., IP becomes strengthened) [see Fig. 4(g1)]. When passing a threshold, \(x_{\rm GP}^{*}\) (\(\simeq 0.78\)), balance between DP and IP becomes recovered (i.e., \(\mathcal{C}_{d}\) = 1.0 and \(\langle f_{i}^{\rm(SNr)}\rangle\) = 25.5 Hz), as shown in Figs. 4(g2)-4(g3). Figure 4(h) shows the plot of \(x_{\rm GP}^{*}\) versus \(x_{\rm D2}\). With decreasing \(x_{\rm D2}\) from 1, \(x_{\rm GP}^{*}\) decreases; more ablation (i.e., smaller \(x_{\rm GP}\)) is necessary for balance between DP and IP.

Next, we consider phasic pathological states with enhanced competition degree \(\mathcal{C}_{d}\) [larger than that (2.82) for the phasic healthy state (with harmony between DP and IP)], occurring due to degenerative loss of D2 SPNs, in the case of phasic cortical input (10 Hz) (see Fig. 3). As an example, we consider the pathological case of \(x_{\rm D2}=0.5\) with \(\mathcal{C}_{d}=7.19\). In this phasic pathological case, IP is under-active in comparison to the case of the phasic healthy state. For the phasic healthy state with \(\mathcal{C}_{d}^{*}\) = 2.82 (i.e., harmony between DP and IP), the population-averaged MFR of output SNr neurons, \(\langle f_{i}^{\rm(SNr)*}\rangle\), is much reduced to 5.5 Hz, leading to normal movement, in contrast to the case of the tonic healthy state with \(\mathcal{C}_{d}\simeq 1.0\) and \(\langle f_{i}^{\rm(SNr)}\rangle\) = 25.5 Hz without movement. As in the above tonic pathological state, firing activity of D2 SPNs is under-active, resulting in over-activity of GP neurons, which then leads to under-activity of the STN neurons. Hence, for recovery of harmony between DP and IP, we strengthen the IP through activation of D2 SPNs and STN neurons and deactivation of GP neurons by employing the optogenetic technique, and also via ablation of GP neurons.

Figure 5 shows treatment of the phasic pathological state for \(x_{\rm D2}=0.5\) with \(\mathcal{C}_{d}=7.19\); (1) activation of D2 SPNs, (2) activation of STN neurons, (3) deactivation of GP neurons, and (4) ablation of GP neurons. The overall results of these treatments are qualitatively the same as those in the above case of the tonic pathological state in Fig. 4. Only the corresponding thresholds are quantitatively different; (1) \(\Delta I_{ion}^{\rm(D2)*}\) = 1,636 pA, (2) \(\Delta I_{ion}^{\rm(STN)*}\) = 405 pA, (3) \(\Delta I_{ion}^{\rm(GP)*}=-540\) pA, and (4) \(x_{\rm GP}^{*}\) (\(\simeq 0.52\)). When passing a threshold for each treatment, harmony between DP and IP becomes recovered (i.e., \(\mathcal{C}_{d}\) = 2.82 and \(\langle f_{i}^{\rm(SNr)}\rangle\) = 5.5 Hz), resulting in normal movement. Finally, we note that, with decreasing \(x_{\rm D2}\), the thresholds \(\Delta I_{ion}^{\rm(D2)*}\) and \(\Delta I_{ion}^{\rm(STN)*}\) for activation of D2 SPNs and STN neurons are increased (i.e., more positive), and the threshold \(\Delta I_{ion}^{\rm(GP)*}\) for deactivation of GP neurons becomes more negative, as shown in Figs. 5(b), 5(d), and 5(f). Thus, as \(x_{\rm D2}\) is decreased, more light stimulation for activation and deactivation is necessary for recovery of harmony between DP and IP. Also, in the case of ablation of GP neurons, with decreasing \(x_{\rm D2}\), more ablation is required to get harmony between DP and IP.
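In practice, thresholds such as \(\Delta I_{ion}^{(\rm D2)*}\) can be located by a one-dimensional search on the stimulation current. The sketch below (ours) assumes a hypothetical function `simulate_cd(delta_I)` that runs the full BG network with \(\Delta I_{ion}\) added to the target population and returns \(\mathcal{C}_{d}\); it also assumes \(\mathcal{C}_{d}\) decreases monotonically with \(\Delta I\) on the bracketing interval (for GP deactivation one would search over \(-\Delta I\)):

```python
def find_threshold(simulate_cd, cd_target, lo, hi, tol=1.0):
    """Bisection for the Delta_I at which C_d reaches cd_target
    (e.g., 1.0 in the tonic case, 2.82 in the phasic case).
    `simulate_cd` is a placeholder for a full network simulation."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate_cd(mid) > cd_target:  # C_d still too large: IP too weak
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```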
### Effect of Loss of Healthy Synapses on the HD

In the HD, loss of healthy synapses occurs not only in the striatum, but also in other regions of the BG, including the STN, GP, and SNr [94; 95; 96; 97; 98; 99]. Such loss of synapses in the BG neurons is an important feature of HD, and it is thought to contribute to the motor and cognitive symptoms of the disease. Here, we study the effect of loss of healthy synapses of all the BG neurons on the HD. As examples, we consider pathological states for \(x_{\rm D2}\) = 0.5 in both cases of tonic (3 Hz) and phasic (10 Hz) cortical inputs. Loss of synapses in the BG neurons is modeled in terms of a decreased synaptic connection probability, \(p_{c}=p_{c}^{(n)}\)\(x_{c}\); \(p_{c}^{(n)}\) is the normal synaptic connection probability (depending on the type of BG neurons and given in Subsection II.1) and \(x_{c}\) represents the fraction in \(p_{c}\) (\(1>x_{c}>0\)).

We first consider a tonic pathological state in Figs. 6(a)-6(c). As a result of the loss of synapses, decreased cortical inputs into D1 SPNs lead to a reduction in their firing activity \(\langle f_{i}^{\rm(D1)}\rangle\). Then, the strength of DP, \(\mathcal{S}_{DP}\), becomes decreased. As shown in Fig. 6(a), \(\mathcal{S}_{DP}\) (green color) is found to monotonically decrease from 23.1 with decreasing \(x_{c}\) from 1. Also, due to reduced cortical synaptic inputs into D2 SPNs, firing activity of D2 SPNs, \(\langle f_{i}^{\rm(D2)}\rangle\), becomes decreased, leading to increased firing activity of GP neurons (\(\langle f_{i}^{\rm(GP)}\rangle\)), which then results in a decrease in the firing activity of STN neurons (\(\langle f_{i}^{\rm(STN)}\rangle\)). Consequently, the strength of IP, \(\mathcal{S}_{IP}\), becomes decreased. In the case of IP, with decreasing \(x_{c}\) from 1, \(\mathcal{S}_{IP}\) (red color) is found to decrease from 15.1 more rapidly than in the case of DP. Then, the competition degree \(\mathcal{C}_{d}\) between DP and IP increases rapidly from 1.53 with decreasing \(x_{c}\) from 1 [see Fig. 6(b)]. Thus, as \(x_{c}\) is decreased from 1, the population-averaged MFR of the output SNr neurons (\(\langle f_{i}^{\rm(SNr)}\rangle\)) decreases from 16.1 Hz. In this way, with decreasing \(x_{c}\), the degree of disharmony between DP and IP becomes increased, resulting in more severe involuntary jerky movement in the tonic pathological case.

Next, we consider a phasic pathological state in Figs. 6(d)-6(f). With decreasing \(x_{c}\), the tendency in the phasic pathological case is qualitatively the same as that in the above tonic pathological case. Based on the same reasoning given in the tonic pathological case, as \(x_{c}\) is decreased from 1, the strength of IP (\(\mathcal{S}_{IP}\); red color) is found to decrease much more rapidly than the strength of DP (\(\mathcal{S}_{DP}\); green color), as shown in Fig. 6(d). Then, the competition degree \(\mathcal{C}_{d}\) increases from 7.19 with decreasing \(x_{c}\) from 1 [see Fig. 6(e)]. Consequently, firing activity of the output SNr neurons (\(\langle f_{i}^{\rm(SNr)}\rangle\)) decreases from 2.7 Hz with decreasing \(x_{c}\) from 1, as shown in Fig. 6(f). In this way, as \(x_{c}\) is decreased from 1, the broken-up degree of harmony between DP and IP becomes increased, leading to more severe abnormal hyperkinetic movement disorder in the phasic pathological case.
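The synapse-loss model amounts to thinning the random interpopulation connectivity. A minimal sketch (ours) of drawing the 0-1 connection weights \(w_{ij}\) with the reduced probability \(p_{c}=p_{c}^{(n)}x_{c}\):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def connection_weights(n_target, n_source, p_c_normal, x_c=1.0):
    """w_ij in {0, 1} with P(w_ij = 1) = p_c_normal * x_c,
    where x_c < 1 models loss of healthy synapses."""
    return (rng.random((n_target, n_source)) < p_c_normal * x_c).astype(int)
```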
Overall, in both tonic and phasic pathological cases, as a result of loss of healthy synapses in the BG neurons, symptoms of the HD become more severe with decreasing \(x_{c}\), because the degree of disharmony between DP and IP increases.

## IV Summary and Discussion

The BG exhibit a variety of functions related to motor control, cognition, and emotion. Dysfunction in the BG is associated with movement disorders (e.g., HD and PD) and cognitive and psychiatric disorders. There are two competing pathways in the BG, DP (facilitating movement) and IP (suppressing movement) [60; 61; 62; 63].

Figure 6: Effect of loss of healthy synapses in all the BG neurons on the HD for \(x_{\rm D2}\) = 0.5. Colors: parts related to DP (green) and parts associated with IP (red). (1) Tonic pathological state: Plots of (a) \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), (b) \(\mathcal{C}_{d}\), and (c) \(\langle f_{i}^{\rm(SNr)}\rangle\) versus \(x_{c}\). (2) Phasic pathological state: Plots of (d) \(\mathcal{S}_{DP}\) and \(\mathcal{S}_{IP}\), (e) \(\mathcal{C}_{d}\), and (f) \(\langle f_{i}^{\rm(SNr)}\rangle\) versus \(x_{c}\).

In our recent work [26], for the first time, we made a quantitative analysis of competitive harmony (i.e., competition and cooperative interplay) between DP and IP in the default tonic state and the phasic healthy and pathological states by introducing the competition degree \(\mathcal{C}_{d}\) between DP and IP, given by the ratio of the strength of DP (\(\mathcal{S}_{DP}\)) to the strength of IP (\(\mathcal{S}_{IP}\)) (i.e., \(\mathcal{C}_{d}=\mathcal{S}_{DP}/\mathcal{S}_{IP}\)). In the case of normal DA level, a healthy state with normal movement was found to appear, while in the case of lower DA level, a pathological state (PD) with reduced competition degree was found to occur. In PD, DP is under-active, while IP is over-active, resulting in abnormal hypokinetic movement.

In this paper, we are concerned with the HD, which is a genetic neurodegenerative disease. As a result of the mutant HTT gene, toxic HTT protein aggregates appear, causing the characteristic neurodegeneration seen in HD. We considered degenerative loss of D2 SPNs in the case of normal DA level. By decreasing \(x_{\text{D2}}\) (i.e., the fraction of the number of D2 SPNs) from 1, we quantitatively analyzed the break-up of harmony between DP and IP. IP was found to be under-active (i.e., weakened), in contrast to the case of PD with over-active IP. Thus, the competition degree \(\mathcal{C}_{d}\) becomes larger than the normal one. Consequently, abnormal hyperkinetic movement such as chorea occurs, in contrast to the case of PD with hypokinetic disorder.

Unfortunately, at present there is no cure for HD. The available treatments for HD primarily aim to control and alleviate its symptoms, resulting from the weakened IP: medication to reduce symptoms [125; 126; 127; 128], deep brain stimulation in research and clinical trials [129; 130; 131], and experimental surgery [132]. We studied treatment of HD via recovery of harmony between DP and IP by activating D2 SPNs and STN neurons and deactivating GP neurons, based on optogenetics [92; 93]. Through the treatment process, the IP becomes strengthened, and thus harmony between DP and IP may be regained. We also studied the effects of loss of healthy synapses of the BG cells on the HD. As a result of such synaptic loss, the HD status was found to become more severe.

Finally, we discuss limitations of our present work and future work.
In the present work, we considered the early stage of HD, where degenerative loss of D2 SPNs occurs at a nearly normal DA level. But, in the late stage of HD, degenerative loss of D1 SPNs also occurs along with a decrease in DA level, leading to hypokinetic disorder (e.g., rigidity and bradykinesia) due to weakened DP, as in the case of PD [133]. Moreover, in addition to the death of D1/D2 SPNs, degeneration of cortical pyramidal cells occurs [134; 135]. Hence, as a future work, it would be interesting to investigate the consequences of degeneration of D1 SPNs and cortical pyramidal cells, in addition to degenerative loss of D2 SPNs. Next, we would like to consider a more realistic striatal circuit in the BG. In our present striatal circuit, we considered only the D1/D2 SPNs (the 95 % major population). But the minor population of fast-spiking interneurons (FSIs) in the striatum is known to exert strong effects on the firing activities of the D1/D2 SPNs [136; 52]. Hence, in the future, it would be worthwhile to include the FSIs in our BG SNN. In our present BG SNN, cortical inputs were modeled by Poisson spike trains. Such an SNN could be extended to the cortico-BG-thalamo-cortical (CBGTC) loop by including cortical and thalamic neurons for a more complete computational work [137; 66].

We also discuss the application of optogenetics to human patients for treatment of a pathological state [138; 139]. In the case of HD, harmony between DP and IP is broken up due to the under-active IP. As shown in Sec. III.2, harmony between DP and IP could be recovered by strengthening the IP. To this end, optogenetic techniques may be employed. Activation of D2 SPNs and STN neurons via optogenetics results in strengthening the IP. We hope that, in the near future, safe clinical application of optogenetics to human patients with HD will become available via collaboration between researchers and clinicians. This would be a substantial step forward in the treatment of HD.

###### Acknowledgements.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 20162007688).
2302.06158
Commuting upper triangular binary morphisms
A morphism $g$ from the free monoid $X^*$ into itself is called upper triangular if the matrix of $g$ is upper triangular. We characterize all upper triangular binary morphisms $g_1$ and $g_2$ such that $g_1g_2=g_2g_1$.
Juha Honkala
2023-02-13T07:40:31Z
http://arxiv.org/abs/2302.06158v4
# Commuting upper triangular binary morphisms

###### Abstract

A morphism \(g\) from the free monoid \(X^{*}\) into itself is called upper triangular if the matrix of \(g\) is upper triangular. We characterize all upper triangular binary morphisms \(g_{1}\) and \(g_{2}\) such that \(g_{1}g_{2}=g_{2}g_{1}\).

**Keywords:** Free monoid morphism; Commutativity; Combinatorics on morphisms

## 1 Introduction

The free monoid morphisms play an important role in many areas of mathematics and theoretical computer science (see [1, 8, 9, 11, 12, 13]). On the other hand, many questions concerning combinatorics on morphisms appear to be rather difficult. It is instructive to consider the problem of commutativity. If \(u\) and \(v\) are words, the equation \(uv=vu\) holds if and only if there is a word \(w\) such that \(u\) and \(v\) are powers of \(w\) (see [8]). For free monoid morphisms the situation is more complicated. For two morphisms \(g_{1}\) and \(g_{2}\), the equation \(g_{1}g_{2}=g_{2}g_{1}\) does not imply that \(g_{1}\) and \(g_{2}\) are powers of a third morphism (see, however, [10]).

In this paper we study commuting upper triangular binary morphisms. Let \(X=\{a,b\}\) be a binary alphabet. A morphism \(g\) from the free monoid \(X^{*}\) into itself is called upper triangular if the matrix of \(g\) is upper triangular. If \(a\) is the first letter of \(X\), this means that there is a nonnegative integer \(s\) such that \(g(a)=a^{s}\). We will characterize all upper triangular binary morphisms \(g_{1}\) and \(g_{2}\) such that \(g_{1}g_{2}=g_{2}g_{1}\).

We now outline the contents of this paper. In Section 2 we recall the basic definitions. In Section 3 we discuss the connections between freeness and commutativity. In Section 4 we give examples of commuting morphisms. In Section 5 we study infinite words generated by morphisms. While the morphisms we study are not uniform, it turns out to be possible to use results concerning automatic sequences. In particular, we will apply the theorem of Cobham characterizing those sequences which are automatic in two multiplicatively independent bases (see [1]). In Sections 6, 7, 8, and 9 we characterize all upper triangular binary morphisms \(g_{1}\) and \(g_{2}\) such that \(g_{1}g_{2}=g_{2}g_{1}\). Assume that \(a\) is the first letter and \(b\) is the second letter of the binary alphabet. In Section 6 we consider nonsingular morphisms such that both \(g_{1}(b)\) and \(g_{2}(b)\) contain at least two occurrences of \(b\). We have two cases depending on whether the numbers \(|g_{1}(b)|_{b}\) and \(|g_{2}(b)|_{b}\) are multiplicatively independent or not. The remaining cases are easier and are discussed in Sections 7, 8, and 9.

We assume that the reader is familiar with the basics of free monoid morphisms, infinite words, automatic sequences and combinatorics on words (see [1, 8, 9, 11, 12, 13]). For previous results concerning combinatorics on morphisms see e.g. [4, 5, 6, 7, 10].

## 2 Definitions

We use standard notation and terminology concerning free monoids and their morphisms (see [1, 8, 9, 11, 12]). If \(X\) is a finite nonempty set, \(X^{*}\) is the _free monoid_ generated by \(X\). The identity element of \(X^{*}\) is the _empty word_ denoted by \(\varepsilon\). If \(u,v,w\) are words such that \(uv=w\), we denote \(v=u^{-1}w\). If \(w\) is a word and \(a\) is a letter, then \(|w|_{a}\) is the number of occurrences of \(a\) in \(w\). The _length_ of a word \(w\), denoted by \(|w|\), is the total number of letters in \(w\). Let \(X\) and \(Y\) be finite nonempty alphabets.
A mapping \(h:X^{*}\to Y^{*}\) is a _morphism_ if \[h(uv)=h(u)h(v)\] for all \(u,v\in X^{*}\). The set of all morphisms from \(X^{*}\) to \(X^{*}\) is denoted by \(\operatorname{Hom}(X^{*})\). \(\operatorname{Hom}(X^{*})\) is a monoid with respect to the usual product of morphisms. If \(h\in\operatorname{Hom}(X^{*})\) and the letters of \(X\) are \(x_{1},\dots,x_{d}\) in a fixed order, then the _matrix_\(M_{h}\) of \(h\) is defined by \[M_{h}=\left(\begin{array}{cccc}|h(x_{1})|_{x_{1}}&|h(x_{2})|_{x_{1}}&\dots& |h(x_{d})|_{x_{1}}\\ |h(x_{1})|_{x_{2}}&|h(x_{2})|_{x_{2}}&\dots&|h(x_{d})|_{x_{2}}\\ \vdots&\vdots&&\vdots\\ |h(x_{1})|_{x_{d}}&|h(x_{2})|_{x_{d}}&\dots&|h(x_{d})|_{x_{d}}\end{array} \right).\] A morphism \(h\in\operatorname{Hom}(X^{*})\) is _upper triangular_ if its matrix \(M_{h}\) is upper triangular. The set of upper triangular morphisms from \(X^{*}\) to \(X^{*}\) is denoted by \(\operatorname{Tri}(X^{*})\). A morphism \(h\in\operatorname{Hom}(X^{*})\) is _nonsingular_ if its matrix is nonsingular. Let now \(X\) be a finite alphabet and let \(h\in\operatorname{Hom}(X^{*})\). If \(w\in X^{*}\) is a word such that \(w\) is a prefix of \(h(w)\) and \(\lim_{n\to\infty}|h^{n}(w)|=\infty\), we say that \(h\) is _prolongable_ on \(w\) and define the infinite word \(h^{\omega}(w)\) by \[h^{\omega}(w)=\lim_{n\to\infty}h^{n}(w).\] Hence, \(h^{\omega}(w)\) is the unique infinite word \(u\) such that \(h^{n}(w)\) is a prefix of \(u\) for all \(n\in\mathbb{N}\). ## 3 Connections between freeness and commutativity A nonempty subset \(Y\) of a semigroup \(S\) is called _free_ if every element of the subsemigroup of \(S\) generated by \(Y\) can be written uniquely as a product of elements of \(Y\). In other words, a set \(Y\) is free if for all positive integers \(m\) and \(n\) and \(u_{1},\ldots,u_{m},v_{1},\ldots,v_{n}\in Y\), the equation \[u_{1}u_{2}\cdots u_{m}=v_{1}v_{2}\cdots v_{n}\] implies that \[m=n\ \ \mbox{ and }\ \ u_{i}=v_{i}\ \ \mbox{ for }\ \ i=1,\ldots,m.\] For an excellent introduction to freeness problems over semigroups we refer to [2]. If a set contains two elements which commute, then the set is not free. If \(u,v\in X^{*}\) and \(u\neq v\), then \(\{u,v\}\) is free if and only if \(u\) and \(v\) do not commute (see [8]). We recall some related results for upper triangular morphisms. First, let \(X=\{a,b\}\) be a binary alphabet. Let \(g_{1},g_{2}\in\mbox{Tri}(X^{*})\). We say that \(\{g_{1},g_{2}\}\) is a _special pair_ if \(g_{1}(b)\) and \(g_{2}(b)\) belong to \(a^{*}ba^{*}\) and exactly one of \(g_{1}(a)\) and \(g_{2}(a)\) equals \(a\). The following result is from [7]. **Theorem 1**: _Let \(X=\{a,b\}\) and let \(g_{1},g_{2}\in\mbox{Tri}(X^{*})\) be nonsingular upper triangular morphisms. Assume that \(g_{1}\neq g_{2}\). Assume that \(\{g_{1},g_{2}\}\) is not a special pair. If \(\{g_{1},g_{2}\}\) is not free, then \(g_{1}g_{2}=g_{2}g_{1}\)._ For larger alphabets we have the following result (see [6]). **Theorem 2**: _Let \(X\) be an arbitrary alphabet. Let \(g_{1},g_{2}\in\mbox{Tri}(X^{*})\) and let \(M_{i}\) be the matrix of \(g_{i}\) for \(i=1,2\). Assume \(g_{1}\neq g_{2}\). Assume that all diagonal entries of \(M_{i}\) are at least two for \(i=1,2\). If \(\{g_{1},g_{2}\}\) is not free, then \(g_{1}g_{2}=g_{2}g_{1}\)._ Theorems 1 and 2 imply the following lemma. **Lemma 3**: _Assume that the morphisms \(g_{1}\) and \(g_{2}\) satisfy the assumptions of Theorem 1 or Theorem 2. Let \(m\) and \(n\) be positive integers. 
If \(g_{1}^{m}\) and \(g_{2}^{n}\) commute, then \(g_{1}\) and \(g_{2}\) commute._ **Proof.** Assume that \(g_{1}^{m}g_{2}^{n}=g_{2}^{n}g_{1}^{m}\). Then the pair \(\{g_{1},g_{2}\}\) is not free and the claim follows by Theorem 1 or by Theorem 2. \(\Box\) Examples In this section we give examples of commuting morphisms. The morphisms considered in Example 1 can be regarded as direct sums of unary morphisms. **Example 1**: Let \(X=\{x_{1},\ldots,x_{k}\}\) be an alphabet having \(k\) letters. Let \((m_{1},\ldots,m_{k})\) and \((n_{1},\ldots,n_{k})\) be \(k\)-tuples of nonnegative integers. Define the morphisms \(g_{1},g_{2}\in\mathrm{Tri}(X^{*})\) by \[g_{1}(x_{i})=x_{i}^{m_{i}}\quad\text{ and }\quad g_{2}(x_{i})=x_{i}^{n_{i}}\] for \(i=1,\ldots,k\). Then \[g_{1}g_{2}(x_{i})=g_{1}(x_{i}^{n_{i}})=x_{i}^{m_{i}n_{i}}\] and \[g_{2}g_{1}(x_{i})=g_{2}(x_{i}^{m_{i}})=x_{i}^{m_{i}n_{i}}\] for \(i=1,\ldots,k\). Hence \[g_{1}g_{2}=g_{2}g_{1}.\] **Example 2**: Let \(X=\{a,b\}\) and define the morphisms \(g_{1},g_{2}\in\mathrm{Tri}(X^{*})\) by \[g_{1}(a)=a,\ \ g_{1}(b)=b^{2}\] and \[g_{2}(a)=a^{2},\ \ g_{2}(b)=b.\] By Example 1 the morphisms \(g_{1}\) and \(g_{2}\) commute. However, there do not exist positive integers \(m\), \(n\) and a morphism \(g\in\mathrm{Hom}(X^{*})\) such that \(g_{1}=g^{m}\) and \(g_{2}=g^{n}\). To see this, assume that such \(g\), \(m\) and \(n\) exist. Then neither \(g(a)\) nor \(g(b)\) is the empty word. Furthermore, either \(|g(a)|=1\) or \(|g(b)|=1\) but not both. Without loss of generality assume that \(|g(a)|=1\). Then \(g(a)=a\) or \(g(a)=b\). The first alternative is not possible since \(g^{n}(a)=a^{2}\). The second alternative is not possible since it would imply that the only word of length one in \(g(X^{*})\) is \(b\). **Example 3**: Let \(X=\{a,b\}\) and let \(p\) and \(q\) be positive integers. Let \(\alpha\) be a nonnegative integer. Define the morphisms \(g_{1},g_{2}\in\mathrm{Tri}(X^{*})\) by \(g_{1}(a)=g_{2}(a)=a\), \(g_{1}(b)=(ba^{\alpha})^{p-1}b\), \(g_{2}(b)=(ba^{\alpha})^{q-1}b\). To prove the equation \(g_{1}g_{2}=g_{2}g_{1}\), let \(z=ba^{\alpha}\). Then \(g_{1}(z)=z^{p}\) and \(g_{2}(z)=z^{q}\). Hence \(g_{1}g_{2}(z)=g_{2}g_{1}(z)\). Therefore \(g_{1}g_{2}(b)a^{\alpha}=g_{2}g_{1}(b)a^{\alpha}\), which implies that \(g_{1}g_{2}(b)=g_{2}g_{1}(b)\). Trivially \(g_{1}g_{2}(a)=g_{2}g_{1}(a)\). **Example 4**: Let \(X=\{a,b\}\) and let \(g_{1},g_{2}\in\mathrm{Tri}(X^{*})\) be nonsingular upper triangular binary morphisms. Assume that there exist positive integers \(m\) and \(n\) such that \(g_{1}^{m}=g_{2}^{n}\). Assume \(g_{1}\neq g_{2}\). Then \(\{g_{1},g_{2}\}\) is not a special pair. Indeed, if one of \(g_{1}(a)\) and \(g_{2}(a)\) equals \(a\), then both do. Hence Theorem 1 implies that \(g_{1}g_{2}=g_{2}g_{1}\). Let \(u\) and \(v\) be words over the binary alphabet \(X=\{a,b\}\). We say that \(u\) and \(v\) are \(a\)_-conjugates_ if there exist nonnegative integers \(p,q,r,s\) and a word \(w\) such that \[u=a^{p}wa^{q},\ \ \ v=a^{r}wa^{s}\ \ \ \mbox{and}\ \ \ p+q=r+s.\] **Example 5**: Let \(X=\{a,b\}\) and let \(g_{1},g_{2}\in\mbox{Tri}(X^{*})\) be nonsingular upper triangular morphisms. Assume that \(g_{1}(a)=g_{2}(a)=a\). Assume that there are positive integers \(m\) and \(n\) such that \(g_{1}^{n}(b)\) and \(g_{2}^{m}(b)\) are \(a\)-conjugates. We show that these conditions imply that \(g_{1}\) and \(g_{2}\) commute. By Lemma 3 it is enough to show that \(g_{1}^{n}\) and \(g_{2}^{m}\) commute. 
By assumption, there exist nonnegative integers \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2},\alpha_{1},\ldots,\alpha_{p-1}\) such that \(g_{1}^{n}(b)=a^{\gamma_{1}}za^{\gamma_{2}}\) and \(g_{2}^{m}(b)=a^{\delta_{1}}za^{\delta_{2}}\), where \(z=ba^{\alpha_{1}}ba^{\alpha_{2}}b\cdots ba^{\alpha_{p-1}}b\) and \(\gamma_{1}+\gamma_{2}=\delta_{1}+\delta_{2}\). Then \[g_{1}^{n}g_{2}^{m}(b) = a^{\delta_{1}}g_{1}^{n}(z)a^{\delta_{2}}\] \[= a^{\delta_{1}+\gamma_{1}}za^{\gamma_{2}+\alpha_{1}+\gamma_{1}}za ^{\gamma_{2}+\alpha_{2}+\gamma_{1}}za^{\gamma_{2}}\cdots a^{\gamma_{1}}za^{ \gamma_{2}+\alpha_{p-1}+\gamma_{1}}za^{\gamma_{2}+\delta_{2}}\] and \[g_{2}^{m}g_{1}^{n}(b) = a^{\gamma_{1}}g_{2}^{m}(z)a^{\gamma_{2}}\] \[= a^{\gamma_{1}+\delta_{1}}za^{\delta_{2}+\alpha_{1}+\delta_{1}}za ^{\delta_{2}+\alpha_{2}+\delta_{1}}za^{\delta_{2}}\cdots a^{\delta_{1}}za^{ \delta_{2}+\alpha_{p-1}+\delta_{1}}za^{\delta_{2}+\gamma_{2}}.\] Therefore \(g_{1}^{n}g_{2}^{m}(b)=g_{2}^{m}g_{1}^{n}(b)\). Hence \(g_{1}^{n}g_{2}^{m}=g_{2}^{m}g_{1}^{n}\). We conclude this section by two examples involving singular morphisms. **Example 6**: Let \(X=\{a,b\}\). Define the morphisms \(g_{1},g_{2}\in\mbox{Tri}(X^{*})\) by \[g_{1}(a)=g_{2}(a)=\varepsilon,\ \ \ g_{1}(b)=w^{i},\ \ \ g_{2}(b)=w^{j}\] where \(w\in X^{*}\) and \(i\) and \(j\) are nonnegative integers. Then \[g_{1}g_{2}(b)=g_{1}(w^{j})=w^{ij|w|_{b}}\ \ \mbox{and}\ \ \ g_{2}g_{1}(b)=g_{2}(w^{i})=w^{ij|w|_{b}}.\] Hence \(g_{1}g_{2}=g_{2}g_{1}\). **Example 7**: Let \(X=\{a,b\}\). Define the morphisms \(g_{1},g_{2}\in\mbox{Tri}(X^{*})\) by \[g_{1}(a)=\varepsilon,\ \ \ g_{1}(b)=(a^{\alpha}ba^{\beta})^{i}\] and \[g_{2}(a)=a,\ \ \ g_{2}(b)=(ba^{\alpha+\beta})^{j}b\] where \(\alpha,\beta,i,j\) are nonnegative integers. Then \[g_{1}g_{2}(b)=g_{1}((ba^{\alpha+\beta})^{j}b)=g_{1}(b^{j+1})=(a^{\alpha}ba^{ \beta})^{i(j+1)}\] and \[g_{2}g_{1}(b)=g_{2}((a^{\alpha}ba^{\beta})^{i})=(a^{\alpha}(ba^{\alpha+\beta} )^{j}ba^{\beta})^{i}=(a^{\alpha}ba^{\beta})^{i(j+1)}.\] Hence \(g_{1}g_{2}=g_{2}g_{1}\). Properties of infinite words generated by upper triangular binary morphisms Let \(X=\{a,b\}\) be a binary alphabet. Regard \(a\) as the first letter of \(X\) and \(b\) as the second letter of \(X\). Let \(h\in{\rm Tri}(X^{*})\). Assume that \(h\) is nonsingular. Then there exist a non-negative integer \(\gamma\) and a word \(v\) such that \(h(b)=a^{\gamma}bv\). Let \(c\) be a new letter and let \(Y=X\cup\{c\}\). Regard \(c\) as the third letter of \(Y\). Define the morphism \({\bf RIGHT}(h)\in{\rm Tri}(Y^{*})\) by \[{\bf RIGHT}(h)(x)=h(x),\quad\mbox{ if }x\in X,\quad\quad{\bf RIGHT}(h)(c)=cv.\] Assume that \(v\neq\varepsilon\). Then we define the infinite word \(\omega(h)\) by \[\omega(h)=bc^{-1}{\bf RIGHT}(h)^{\omega}(c).\] In other words, the infinite word \(\omega(h)\) is obtained from \({\bf RIGHT}(h)^{\omega}(c)\) by replacing its first letter \(c\) by \(b\). Hence, if \(n\) is any positive integer, the word obtained from \(h^{n}(b)\) by deleting all occurrences of \(a\) preceding the first occurrence of \(b\) is a prefix of \(\omega(h)\). For the proof of the following lemma see [6]. **Lemma 4**: _Let \(g_{1},g_{2}\in{\rm Tri}(X^{*})\) be nonsingular morphisms. Let \(h_{i}={\bf RIGHT}(g_{i})\) for \(i=1,2\). Assume that \(h_{i}(c)\neq c\) for \(i=1,2\). If \(g_{1}g_{2}=g_{2}g_{1}\), then \(\omega(g_{1})=\omega(g_{2})\)._ We will now study some properties of the infinite words defined above. Let \(w\) be an infinite word over \(X\) having infinitely many occurrences of \(b\). 
For \(i\geq 1\), let \(A_{w}(i)\) be the number of occurrences of the letter \(a\) in \(w\) between the \(i\)th and the \((i+1)\)th occurrences of \(b\) in \(w\). The following lemma gives a formula for \(A_{w}(i)\), which will be used repeatedly. **Lemma 5**: _Let \(h\in{\rm Tri}(X^{*})\) be the morphism defined by_ \[h(a)=a^{s}\quad\mbox{ and }\quad h(b)=a^{\gamma_{1}}ba^{\alpha_{1}}ba^{\alpha_{ 2}}b\cdots ba^{\alpha_{p-1}}ba^{\gamma_{2}},\] _where \(s\geq 1\), \(p\geq 2\) and \(\gamma_{1},\gamma_{2},\alpha_{1},\ldots,\alpha_{p-1}\geq 0\). Let \(w=\omega(h)\). Then_ _(i) \(A_{w}(i+pn)=\alpha_{i}\) if \(i\in\{1,\ldots,p-1\}\) and \(n\geq 0\)._ _(ii) \(A_{w}(pi)=sA_{w}(i)+\gamma_{1}+\gamma_{2}\) if \(i\geq 1\)._ _(iii) If \(m\geq 1\), \(k\geq 0\), \(d_{m},\ldots,d_{m+k}\in\{0,1,\ldots,p-1\}\) and \(d_{m}\neq 0\), then_ \[A_{w}(d_{m}p^{m}+d_{m+1}p^{m+1}+\cdots+d_{m+k}p^{m+k})=\alpha_{d_{m}}s^{m}+( \gamma_{1}+\gamma_{2})(1+s+\cdots+s^{m-1}).\] **Proof.** The infinite word \(w\) belongs to \(a^{-\gamma_{1}}h(b)\{a,h(b)\}^{\omega}\) and \(|h(b)|_{b}=p\). This implies (i). To prove (ii), let \[w=w_{1}ba^{j}b\cdots,\] where \(|w_{1}b|_{b}=i\) and \(j=A_{w}(i)\). Then \[w=a^{-\gamma_{1}}\,h(w_{1})\,h(b)\,a^{js}\,h(b)\,\cdots,\] where \(|h(w_{1})h(b)|_{b}=p|w_{1}b|_{b}=pi\). Hence \[A_{w}(pi)=\gamma_{2}+js+\gamma_{1}=sA_{w}(i)+\gamma_{1}+\gamma_{2}.\] This proves (ii). If \(m=1\), (iii) is a consequence of (i) and (ii). Assume inductively that (iii) holds for \(m\geq 1\). Assume that \(k\geq 0\), \(e_{m+1},\ldots,e_{m+1+k}\in\{0,1,\ldots,p-1\}\) and \(e_{m+1}\neq 0\). Then \[A_{w}(e_{m+1}p^{m+1}+e_{m+2}p^{m+2}+\cdots+e_{m+k+1}p^{m+k+1})\] \[=sA_{w}(e_{m+1}p^{m}+e_{m+2}p^{m+1}+\cdots+e_{m+k+1}p^{m+k})+\gamma_{1}+\gamma_ {2}\] \[=s(\alpha_{e_{m+1}}s^{m}+(\gamma_{1}+\gamma_{2})(1+s+\cdots+s^{m-1}))+\gamma_ {1}+\gamma_{2}\] \[=\alpha_{e_{m+1}}s^{m+1}+(\gamma_{1}+\gamma_{2})(1+s+\cdots+s^{m}).\] Here the first equation follows by (ii) and the second equation by the inductive hypothesis. This proves (iii). \(\Box\) The final lemma of this section studies the case of eventually periodic words. **Lemma 6**: _Let \(h\) be as in Lemma 5. Assume that \(w=\omega(h)\) is eventually periodic. Then \(\gamma_{1}=\gamma_{2}=0\) and \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{p-1}\)._ **Proof.** Since \(w\) is eventually periodic, also the sequence \((A_{w}(i))_{i\geq 1}\) is eventually periodic. In particular, this sequence takes only finitely many different values. Hence, by Lemma 5, we have \(\gamma_{1}=\gamma_{2}=0\). If \(\alpha_{1}=\cdots=\alpha_{p-1}=0\), the claim of the lemma holds. Assume that some \(\alpha_{i}\) is nonzero. Then Lemma 5 implies that \(s=1\). Assume \[A_{w}(i)=A_{w}(i+d)\quad\mbox{ for all }i\geq i_{0},\] where \(i_{0}\) is an integer and \(d=ep^{m}+fp^{m+1}\) for some \(m\geq 0\), \(e\in\{1,\ldots,p-1\}\) and \(f\geq 0\). Choose an integer \(n\) such that \(n>m\) and \(p^{n}\geq i_{0}\). Then \[A_{w}(jp^{n})=A_{w}(ep^{m}+fp^{m+1}+jp^{n})\] for \(j=1,\ldots,p-1\). Now Lemma 5 implies that \[\alpha_{j}=\alpha_{e}\] for \(j=1,\ldots,p-1\). This implies the claim. \(\Box\) Commuting nonsingular morphisms \(g_{1}\) and \(g_{2}\) such that both \(g_{1}(b)\) and \(g_{2}(b)\) have at least two occurrences of \(b\) In this section \(X=\{a,b\}\). We will consider nonsingular morphisms \(g_{1},g_{2}\in{\rm Tri}(X^{*})\) such that \(|g_{i}(b)|_{b}\geq 2\) for \(i=1,2\). 
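Before treating the two cases, we remark that the quantities \(A_{w}(i)\) of Lemma 5 are easy to examine experimentally. The following Python sketch (our illustration; the morphism is an arbitrary sample with \(s=1\), \(p=3\), \(\gamma_{1}=\gamma_{2}=1\), \(\alpha_{1}=2\), \(\alpha_{2}=0\)) iterates \(h\) on \(b\) and checks property (ii) of Lemma 5 on a prefix of \(\omega(h)\):

```python
def apply(h, w):
    """Extend the morphism h (a dict on letters) to a word w."""
    return "".join(h[x] for x in w)

# Sample morphism of Lemma 5: h(a) = a (s = 1) and h(b) = a b aa b b a,
# i.e. gamma1 = 1, alpha1 = 2, alpha2 = 0, gamma2 = 1, p = 3.
h = {"a": "a", "b": "abaabba"}

w = "b"
for _ in range(6):
    w = apply(h, w)
w = w.lstrip("a")  # a prefix of omega(h)

# A_w(i): number of a's between the i-th and (i+1)-th occurrence of b.
A = [len(run) for run in w.split("b")[1:-1]]

s, p, gamma = 1, 3, 2  # gamma = gamma1 + gamma2
assert all(A[p * i - 1] == s * A[i - 1] + gamma
           for i in range(1, len(A) // p + 1))
```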
We have two different cases to consider according to whether \(|g_{1}(b)|_{b}\) and \(|g_{2}(b)|_{b}\) are multiplicatively independent or not. Recall that two integers \(p\geq 2\) and \(q\geq 2\) are _multiplicatively dependent_ if there are positive integers \(r,m,n\) such that \(p=r^{m}\) and \(q=r^{n}\) (see [1]). ### The numbers of occurrences of \(b\) are multiplicatively independent **Lemma 7**: _Let \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), be morphisms such that_ \[g_{1}(a)=a^{s}\quad\mbox{ and }\quad g_{1}(b)=a^{\gamma_{1}}ba^{\alpha_{1}}ba^{ \alpha_{2}}b\cdots ba^{\alpha_{p-1}}ba^{\gamma_{2}}\] _and_ \[g_{2}(a)=a^{t}\quad\mbox{ and }\quad g_{2}(b)=a^{\delta_{1}}ba^{\beta_{1}}ba^{ \beta_{2}}b\cdots ba^{\beta_{q-1}}ba^{\delta_{2}}\] _where \(s,t\geq 1\), \(p,q\geq 2\) and \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2},\alpha_{1},\ldots,\alpha_{p-1}, \beta_{1},\ldots,\beta_{q-1}\geq 0\). Assume that \(p\) and \(q\) are multiplicatively independent. Assume that \(g_{1}(b)\not\in b^{*}\). Assume that \(\omega(g_{1})=\omega(g_{2})\). Then \(s=t=1\) and \(\gamma_{1}=\gamma_{2}=\delta_{1}=\delta_{2}=0\)._ **Proof.** Let \(z\) be the least positive integer such that \(\beta_{z}=\max\{\beta_{i}\mid i=1,2,\ldots,q-1\}\). Then \(\beta_{z}\geq 0\) but it is possible that \(\beta_{z}=0\). By Lemma 5 we have \[A_{\omega(g_{2})}(i)\leq A_{\omega(g_{2})}(zq^{n}) \tag{1}\] for \(n\geq 1\) and \(i<zq^{n}\). Consider the numbers \(zq^{n}\), \(n\geq 1\). For \(n\geq 1\), let \[zq^{n}=p^{\tau(n)}(i_{n}+pj_{n}),\] where \(\tau(n),j_{n}\geq 0\) and \(i_{n}\in\{1,\ldots,p-1\}\). Now, the set \(\{j_{n}\mid n\geq 1\}\) is infinite. To see this, assume on the contrary that it is finite. Then there are integers \(m\) and \(n\) such that \(i_{m}+pj_{m}=i_{n}+pj_{n}\) and \(m<n\). This implies that \[\frac{zq^{m}}{p^{\tau(m)}}=\frac{zq^{n}}{p^{\tau(n)}}\.\] Hence \(p^{\tau(n)-\tau(m)}=q^{n-m}\), which contradicts the assumption. It follows that the set \(\{j_{n}\mid n\geq 1\}\) is infinite. Therefore there is an integer \(n\geq 1\) such that \[zq^{n}=p^{\tau(n)}(i_{n}+x_{1}p+\cdots+x_{k}p^{k})\] where \(k\geq 2\) and \(x_{k}\neq 0\). Next, let \(y\) be an integer such that \(\alpha_{y}=\max\{\alpha_{i}\ |\ i=1,\ldots,p-1\}\) and consider the numbers \[K_{1}=yp^{\tau(n)+k-1}\quad\mbox{and}\quad K_{2}=zq^{n}=p^{\tau(n)}(i_{n}+x_{1}p +\cdots+x_{k}p^{k}).\] Then we have \(K_{1}<K_{2}\). Therefore (1) implies that \[A_{\omega(g_{1})}(K_{1})=A_{\omega(g_{2})}(K_{1})\leq A_{\omega(g_{2})}(K_{2}) =A_{\omega(g_{1})}(K_{2}).\] On the other hand, Lemma 5 implies that \[A_{\omega(g_{1})}(K_{1})=\alpha_{y}s^{\tau(n)+k-1}+(\gamma_{1}+\gamma_{2})(1+s +\cdots+s^{\tau(n)+k-2})\] and \[A_{\omega(g_{1})}(K_{2})=\alpha_{i_{n}}s^{\tau(n)}+(\gamma_{1}+\gamma_{2})(1+ s+\cdots+s^{\tau(n)-1}).\] Since \(A_{\omega(g_{1})}(K_{1})\leq A_{\omega(g_{1})}(K_{2})\), we have \(\gamma_{1}=\gamma_{2}=0\). If \(\alpha_{y}=0\), we would have \(g_{1}(b)\in b^{*}\) which contradicts our assumption. Hence \(\alpha_{y}\neq 0\) and \(s=1\). Since \(\gamma_{1}=\gamma_{2}=0\) and \(s=1\), we have \(A_{\omega(g_{1})}(i)\in\{\alpha_{1},\ldots,\alpha_{p-1}\}\) for all \(i\geq 1\). Now the equality \(\omega(g_{1})=\omega(g_{2})\) implies that \(\delta_{1}=\delta_{2}=0\) and \(t=1\). \(\Box\) The next theorem gives all nonsingular morphisms \(g_{i}\in\mbox{Tri}(X^{*})\), \(i=1,2\), such that \(g_{1}g_{2}=g_{2}g_{1}\) and the numbers \(|g_{1}(b)|_{b}\) and \(|g_{2}(b)|_{b}\) are multiplicatively independent integers larger than one. 
In the proof we use automatic sequences and Cobham's theorem characterizing sequences which are \(p\)-automatic and \(q\)-automatic for multiplicatively independent integers \(p\) and \(q\) (see [1]). **Theorem 8**: _Let \(g_{i}\in\mbox{Tri}(X^{*})\), \(i=1,2\), be morphisms such that_ \[g_{1}(a)=a^{s}\quad\mbox{ and }\quad g_{1}(b)=a^{\gamma_{1}}ba^{\alpha_{1}}ba^{ \alpha_{2}}b\cdots ba^{\alpha_{p-1}}ba^{\gamma_{2}}\] _and_ \[g_{2}(a)=a^{t}\quad\mbox{ and }\quad g_{2}(b)=a^{\delta_{1}}ba^{\beta_{1}}ba^{ \beta_{2}}b\cdots ba^{\beta_{q-1}}ba^{\delta_{2}}\] _where \(s,t\geq 1\), \(p,q\geq 2\) and \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2},\alpha_{1},\ldots,\alpha_{p-1}, \beta_{1},\ldots,\beta_{q-1}\geq 0\). Assume that \(p\) and \(q\) are multiplicatively independent. Then \(g_{1}g_{2}=g_{2}g_{1}\) if and only if at least one of the following conditions holds:_ _(i) \(g_{i}(b)\in b^{*}\) for \(i=1,2\),_ _(ii) \(g_{1}(a)=g_{2}(a)=a\), \(g_{1}(b)=(ba^{\alpha})^{p-1}b\) and \(g_{2}(b)=(ba^{\alpha})^{q-1}b\), where \(\alpha=\alpha_{1}\)._ **Proof.** If (i) or (ii) holds, then \(g_{1}g_{2}=g_{2}g_{1}\) (see Examples 1 and 3). Assume that \(g_{1}g_{2}=g_{2}g_{1}\). By Lemma 4 we have \(\omega(g_{1})=\omega(g_{2})\). Hence, if \(g_{1}(b)\in b^{*}\), also \(g_{2}(b)\in b^{*}\) and (i) holds. Assume that \(g_{1}(b)\not\in b^{*}\) and \(g_{2}(b)\not\in b^{*}\). Now Lemma 7 implies that \(s=t=1\) and \(\gamma_{1}=\gamma_{2}=\delta_{1}=\delta_{2}=0\). By Lemma 5, the sequence \((A_{\omega(g_{1})}(i))_{i\geq 1}\) is \(p\)-automatic and the sequence \((A_{\omega(g_{2})}(i))_{i\geq 1}\) is \(q\)-automatic. Since these sequences are equal and the numbers \(p\) and \(q\) are multiplicatively independent, \((A_{\omega(g_{1})}(i))_{i\geq 1}\) is eventually periodic. Hence \(\omega(g_{1})\) is eventually periodic. Now Lemma 6 implies that \(\alpha_{1}=\alpha_{2}=\cdots=\alpha_{p-1}\). Hence \(g_{1}(b)=(ba^{\alpha})^{p-1}b\) where \(\alpha=\alpha_{1}\). A similar argument shows that \(g_{2}(b)=(ba^{\beta})^{q-1}b\), where \(\beta=\beta_{1}\). Since \(\omega(g_{1})=\omega(g_{2})\), we have \(\alpha_{1}=\beta_{1}\). Hence (ii) holds. \(\Box\) ### The numbers of occurrences of \(b\) are multiplicatively dependent In this subsection we first consider the case that \(|g_{1}(b)|_{b}\) and \(|g_{2}(b)|_{b}\) are equal. **Lemma 9**: _Let \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), be morphisms such that_ \[g_{1}(a)=a^{s}\quad\mbox{ and }\quad g_{1}(b)=a^{\gamma_{1}}ba^{\alpha_{1}}ba^{ \alpha_{2}}b\cdots ba^{\alpha_{p-1}}ba^{\gamma_{2}}\] _and_ \[g_{2}(a)=a^{t}\quad\mbox{ and }\quad g_{2}(b)=a^{\delta_{1}}ba^{\beta_{1}}ba^{ \beta_{2}}b\cdots ba^{\beta_{q-1}}ba^{\delta_{2}}\] _where \(s,t\geq 1\), \(p,q\geq 2\) and \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2},\alpha_{1},\ldots,\alpha_{p-1}, \beta_{1},\ldots,\beta_{q-1}\geq 0\). Assume that \(p=q\). If \(g_{1}g_{2}=g_{2}g_{1}\), then at least one of the following conditions holds:_ _(i) \(g_{1}=g_{2}\),_ _(ii) \(g_{i}(b)\in b^{*}\) for \(i=1,2\),_ _(iii) \(g_{1}(a)=g_{2}(a)=a\) and the words \(g_{1}(b)\) and \(g_{2}(b)\) are \(a\)-conjugates._ **Proof.** Assume that \(g_{1}g_{2}=g_{2}g_{1}\). Let \(w_{i}=\omega(g_{i})\) for \(i=1,2\). By Lemma 4 we have \(w_{1}=w_{2}\). Since \(p=q\), the equation \(w_{1}=w_{2}\) implies that \(\alpha_{i}=\beta_{i}\) for \(i=1,\ldots,p-1\). If \(g_{1}(b)\in b^{*}\), we have \(w_{1}=b^{\omega}\). This implies that \(g_{2}(b)\in b^{*}\) and hence (ii) holds. Assume that \(g_{1}(b)\not\in b^{*}\) and \(g_{2}(b)\not\in b^{*}\). 
Next, assume that \(A_{w_{1}}(i)\) takes only finitely many different values. Then Lemma 5 implies that \(\gamma_{1}=\gamma_{2}=0\). Since \(g_{1}(b)\not\in b^{*}\), some \(\alpha_{i}\) is nonzero. This implies that \(s=1\). By a similar reasoning it is seen that \(\delta_{1}=\delta_{2}=0\) and \(t=1\). Then \(g_{1}(a)=g_{2}(a)\) and \(g_{1}(b)=g_{2}(b)\) and condition (i) holds. Assume then that \(A_{w_{1}}(i)\) takes infinitely many values. Since \(A_{w_{1}}(pi)=A_{w_{2}}(pi)\) for all \(i\geq 1\), Lemma 5 implies that \[sA_{w_{1}}(i)+\gamma_{1}+\gamma_{2}=tA_{w_{2}}(i)+\delta_{1}+\delta_{2}\] for all \(i\geq 1\). Because this equation holds for infinitely many different values of \(A_{w_{1}}(i)=A_{w_{2}}(i)\), it follows that \(s=t\) and \(\gamma_{1}+\gamma_{2}=\delta_{1}+\delta_{2}\). If now \(s=t=1\), we have (iii). Assume that \(s=t>1\). By counting the number of occurrences of \(a\) before the first occurrence of \(b\) in \(g_{1}g_{2}(b)=g_{2}g_{1}(b)\), we see that \[s\delta_{1}+\gamma_{1}=t\gamma_{1}+\delta_{1}.\] By counting the number of occurrences of \(a\) after the last occurrence of \(b\) in \(g_{1}g_{2}(b)=g_{2}g_{1}(b)\), we see that \[\gamma_{2}+s\delta_{2}=\delta_{2}+t\gamma_{2}.\] Hence \((s-1)\delta_{1}=(t-1)\gamma_{1}\) and \((s-1)\delta_{2}=(t-1)\gamma_{2}\). Since \(s=t>1\) we have \(\gamma_{1}=\delta_{1}\) and \(\gamma_{2}=\delta_{2}\). Therefore condition (i) holds. \(\Box\) The next theorem gives all nonsingular morphisms \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), such that \(g_{1}g_{2}=g_{2}g_{1}\) and the numbers \(|g_{1}(b)|_{b}\) and \(|g_{2}(b)|_{b}\) are multiplicatively dependent integers larger than one. **Theorem 10**: _Let \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), be morphisms such that_ \[g_{1}(a)=a^{s}\quad\mbox{ and }\quad g_{1}(b)=a^{\gamma_{1}}ba^{\alpha_{1}}ba^{ \alpha_{2}}b\cdots ba^{\alpha_{p-1}}ba^{\gamma_{2}}\] _and_ \[g_{2}(a)=a^{t}\quad\mbox{ and }\quad g_{2}(b)=a^{\delta_{1}}ba^{\beta_{1}}ba^{ \beta_{2}}b\cdots ba^{\beta_{q-1}}ba^{\delta_{2}}\] _where \(s,t\geq 1\), \(p,q\geq 2\) and \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2},\alpha_{1},\ldots,\alpha_{p-1}, \beta_{1},\ldots,\beta_{q-1}\geq 0\). Assume that \(p=r^{m}\) and \(q=r^{n}\) where \(m,n,r\) are positive integers. Then \(g_{1}g_{2}=g_{2}g_{1}\) if and only if at least one of the following conditions holds:_ _(i) \(g_{1}^{n}=g_{2}^{m}\),_ _(ii) \(g_{i}(b)\in b^{*}\) for \(i=1,2\),_ _(iii) \(g_{1}(a)=g_{2}(a)=a\) and the words \(g_{1}^{n}(b)\) and \(g_{2}^{m}(b)\) are \(a\)-conjugates._ **Proof.** If at least one of the conditions (i), (ii) or (iii) holds, then \(g_{1}g_{2}=g_{2}g_{1}\) (see Examples 1, 4 and 5). Assume \(g_{1}g_{2}=g_{2}g_{1}\). Let \(h_{1}=g_{1}^{n}\) and \(h_{2}=g_{2}^{m}\). Then \(|h_{1}(a)|_{a}=s^{n}\), \(|h_{2}(a)|_{a}=t^{m}\) and \(|h_{1}(b)|_{b}=p^{n}=r^{mn}=q^{m}=|h_{2}(b)|_{b}\). Since \(h_{1}h_{2}=h_{2}h_{1}\), Lemma 9 implies that at least one of the following conditions holds: (1) \(h_{1}=h_{2}\), (2) \(h_{i}(b)\in b^{*}\) for \(i=1,2\), (3) \(h_{1}(a)=h_{2}(a)=a\) and the words \(h_{1}(b)\) and \(h_{2}(b)\) are \(a\)-conjugates. Now (1) implies (i), (2) implies (ii) and (3) implies (iii). \(\Box\) Commuting nonsingular morphisms \(g_{1}\) and \(g_{2}\) such that \(|g_{1}(b)|_{b}=1\) and \(|g_{2}(b)|_{b}\geq 2\) Let \(X=\{a,b\}\). The following theorem gives all commuting nonsingular morphisms \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), such that \(|g_{1}(b)|_{b}=1\) and \(|g_{2}(b)|_{b}\geq 2\). 
**Theorem 11**: _Let \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), be morphisms such that_ \[g_{1}(a)=a^{s}\quad\mbox{ and }\quad g_{1}(b)=a^{\gamma_{1}}ba^{\gamma_{2}}\] _and_ \[g_{2}(a)=a^{t}\quad\mbox{ and }\quad g_{2}(b)=a^{\delta_{1}}ba^{\beta_{1}}ba^{ \beta_{2}}b\cdots ba^{\beta_{q-1}}ba^{\delta_{2}}\] _where \(s,t\geq 1\), \(q\geq 2\) and \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2},\beta_{1},\ldots,\beta_{q-1}\geq 0\). Then \(g_{1}g_{2}=g_{2}g_{1}\) if and only if at least one of the following conditions holds:_ _(i) \(g_{1}(x)=x\) for all \(x\in\{a,b\}\),_ _(ii) \(g_{i}(b)\in b^{*}\) for \(i=1,2\)._ **Proof.** If (i) or (ii) holds, then \(g_{1}g_{2}=g_{2}g_{1}\). Assume then that \(g_{1}g_{2}=g_{2}g_{1}\). Let \(h_{1}=g_{1}g_{2}\) and \(h_{2}=g_{2}\). Then \(h_{1}h_{2}=h_{2}h_{1}\). Since \(|h_{1}(b)|_{b}=|h_{2}(b)|_{b}\geq 2\), Lemma 9 implies that at least one of the following conditions holds: (1) \(h_{1}=h_{2}\), (2) \(h_{i}(b)\in b^{*}\) for \(i=1,2\), (3) \(h_{1}(a)=h_{2}(a)=a\) and the words \(h_{1}(b)\) and \(h_{2}(b)\) are \(a\)-conjugates. Now (1) implies (i), (2) implies (ii) and (3) implies (i). \(\Box\) Commuting nonsingular morphisms \(g_{1}\) and \(g_{2}\) such that \(|g_{1}(b)|_{b}=|g_{2}(b)|_{b}=1\) Let \(X=\{a,b\}\). In this section we give all commuting nonsingular morphisms \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), such that \(|g_{1}(b)|_{b}=|g_{2}(b)|_{b}=1\). **Proposition 12**: _Let \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), be morphisms such that_ \[g_{1}(a)=a^{s}\quad\mbox{ and }\quad g_{1}(b)=a^{\gamma_{1}}ba^{\gamma_{2}}\] _and_ \[g_{2}(a)=a^{t}\quad\mbox{ and }\quad g_{2}(b)=a^{\delta_{1}}ba^{\delta_{2}}\] _where \(s,t\geq 1\) and \(\gamma_{1},\gamma_{2},\delta_{1},\delta_{2}\geq 0\). Then \(g_{1}g_{2}=g_{2}g_{1}\) if and only if_ \[(s-1)\delta_{i}=(t-1)\gamma_{i}\quad\mbox{ for }\quad i=1,2.\] **Proof.** We have \[g_{1}g_{2}(b)=a^{s\delta_{1}+\gamma_{1}}ba^{s\delta_{2}+\gamma_{2}}\quad\mbox {and}\quad g_{2}g_{1}(b)=a^{t\gamma_{1}+\delta_{1}}ba^{t\gamma_{2}+\delta_{2}}.\] Hence \(g_{1}g_{2}=g_{2}g_{1}\) if and only if \(s\delta_{1}+\gamma_{1}=t\gamma_{1}+\delta_{1}\) and \(s\delta_{2}+\gamma_{2}=t\gamma_{2}+\delta_{2}\). This implies the claim. \(\Box\) ## 9 Commuting morphisms \(g_{1}\) and \(g_{2}\) such that \(g_{1}\) or \(g_{2}\) is singular Let \(X=\{a,b\}\) and let \(h\in{\rm Tri}(X^{*})\). If \(h\) is singular, then \(h(a)=\varepsilon\) or \(h(b)\in a^{*}\). In this section we give all commuting morphisms \(g_{i}\in{\rm Tri}(X^{*})\), \(i=1,2\), such that \(g_{1}\) or \(g_{2}\) is singular. **Proposition 13**: _Let \(g_{i}\in{\rm Tri}(X^{*})\) for \(i=1,2\). Assume that \(g_{1}(b)\in a^{*}\). Then \(g_{1}g_{2}=g_{2}g_{1}\) if and only if \(|g_{1}g_{2}(b)|=|g_{2}g_{1}(b)|\) or, equivalently,_ \[|g_{1}(a)||g_{2}(b)|_{a}+|g_{1}(b)||g_{2}(b)|_{b}=|g_{2}(a)||g_{1}(b)|.\] **Proof.** Since \(g_{2}g_{1}(b)\in a^{*}\) and \(g_{1}g_{2}(b)\in a^{*}\), the claim holds. \(\Box\) **Proposition 14**: _Let \(g_{1},g_{2}\in{\rm Tri}(X^{*})\) be morphisms such that_ \[g_{1}(a)=\varepsilon,\ \ g_{1}(b)=u\] \[g_{2}(a)=a^{t},\ \ g_{2}(b)=v,\] _where \(t\geq 0\) and both \(u\) and \(v\) have at least one occurrence of \(b\). 
Then \(g_{1}g_{2}=g_{2}g_{1}\) if and only if at least one of the following conditions holds:_ _(i) \(g_{1}=g_{2}\),_ _(ii) \(g_{2}(x)=x\) for all \(x\in X\),_ _(iii) \(t=0\) and \(uv=vu\),_ _(iv) \(g_{i}(b)\in b^{*}\) for \(i=1,2\),_ _(v) \(t=1\) and there exist nonnegative integers \(\alpha,\beta,i\) and \(j\) such that_ \[g_{1}(b)=(a^{\alpha}ba^{\beta})^{i},\ \ g_{2}(b)=(ba^{\alpha+\beta})^{j}b.\] **Proof.** If (i), (ii), (iii), (iv) or (v) holds, then \(g_{1}g_{2}=g_{2}g_{1}\) (see Examples 1, 6, 7). Assume that \(g_{1}g_{2}=g_{2}g_{1}\). Then \[g_{2}(u)=g_{2}g_{1}(b)=g_{1}g_{2}(b)=g_{1}(v)=u^{|v|_{b}}.\] If \(u\in b^{*}\), this equation implies that \(v\in b^{*}\). Hence (iv) holds. Assume that \(u\not\in b^{*}\). Next, assume that \(t=0\). Then \(u^{|v|_{b}}=g_{2}(u)=v^{|u|_{b}}\), which shows that (iii) holds. Assume then that \(t\neq 0\). By assumption, \(|v|_{b}=1\) or \(|v|_{b}\geq 2\). Assume first that \(|v|_{b}=1\). Then the equation \(g_{2}(u)=u\) shows that (ii) holds. Assume finally that \(|v|_{b}\geq 2\). Then \(\omega(g_{2})\) is defined. Since \(\omega(g_{2})\) is obtained from \(g_{2}^{\omega}(u)=u^{\omega}\) by deleting the occurrences of \(a\) preceding the first occurrence of \(b\), the word \(\omega(g_{2})\) is eventually periodic. Hence \(t=1\). By Lemma 6 there are nonnegative integers \(\gamma\) and \(j\) such that \(g_{2}(b)=(ba^{\gamma})^{j}b\). Since \(\omega(g_{2})=(ba^{\gamma})^{\omega}\), there exist nonnegative integers \(\alpha,\beta\) and \(i\) such that \(g_{1}(b)=(a^{\alpha}ba^{\beta})^{i}\) and \(\gamma=\alpha+\beta\). Hence (v) holds. \(\Box\) For a systematic study of the equation \(h(w)=w^{n}\), \(n\geq 2\), for binary morphisms, see [3].
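The commutation criteria above are easy to check mechanically: since a morphism of the free monoid \(X^{*}\) is determined by its images of the generators, \(g_{1}g_{2}=g_{2}g_{1}\) holds if and only if the two compositions agree on \(a\) and \(b\). The following Python sketch (not part of the original text; the helper names are ours) verifies an instance of Example 7 and the criterion of Proposition 12.

```python
# Morphisms on {a,b}* represented as dicts mapping each letter to its image.

def apply(g, word):
    """Apply the morphism g to a word, letter by letter."""
    return "".join(g[x] for x in word)

def commute(g1, g2):
    """g1*g2 == g2*g1 iff the compositions agree on the generators."""
    return all(apply(g1, g2[x]) == apply(g2, g1[x]) for x in "ab")

# Example 7 with alpha = 1, beta = 2, i = 2, j = 1:
g1 = {"a": "", "b": ("a" + "b" + "aa") * 2}    # g1(b) = (a^alpha b a^beta)^i
g2 = {"a": "a", "b": ("b" + "aaa") * 1 + "b"}  # g2(b) = (b a^(alpha+beta))^j b
assert commute(g1, g2)

# Proposition 12: for |g_i(b)|_b = 1, commutation holds iff
# (s-1)*delta_i == (t-1)*gamma_i for i = 1, 2.
s, gamma1, gamma2 = 3, 1, 2     # g1(a) = a^s, g1(b) = a^gamma1 b a^gamma2
t, delta1, delta2 = 5, 2, 4     # g2(a) = a^t, g2(b) = a^delta1 b a^delta2
g1 = {"a": "a" * s, "b": "a" * gamma1 + "b" + "a" * gamma2}
g2 = {"a": "a" * t, "b": "a" * delta1 + "b" + "a" * delta2}
assert commute(g1, g2) == ((s - 1) * delta1 == (t - 1) * gamma1
                           and (s - 1) * delta2 == (t - 1) * gamma2)
```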
2303.09142
Estimation of anisotropic bending rigidities and spontaneous curvatures of crescent curvature-inducing proteins from tethered-vesicle experimental data
The Bin/amphiphysin/Rvs (BAR) superfamily proteins have a crescent binding domain and bend biomembranes along the domain axis. However, their anisotropic bending rigidities and spontaneous curvatures have not been experimentally determined. Here, we estimated these values from the bound protein densities on tethered vesicles using a mean-field theory of anisotropic bending energy and orientation-dependent excluded volume. The dependence curves of the protein density on the membrane curvature are fitted to the experimental data for the I-BAR and N-BAR domains reported by C. Prevost et al. Nat. Commun. 6, 8529 (2015) and F.-C. Tsai et al. Soft Matter 17, 4254 (2021), respectively. For the I-BAR domain, all three density curves of different chemical potentials exhibit excellent fits with a single parameter set of anisotropic bending energy. When the classical isotropic bending energy is used instead, one of the curves can be fitted well, but the others exhibit large deviations. In contrast, for the N-BAR domain, two curves are not well-fitted simultaneously using the anisotropic model, although it is significantly improved compared to the isotropic model. This deviation likely suggests a cluster formation of the N-BAR domains.
Hiroshi Noguchi, Nikhil Walani, Marino Arroyo
2023-03-16T08:18:38Z
http://arxiv.org/abs/2303.09142v2
Estimation of anisotropic bending rigidity and spontaneous curvature of crescent curvature-inducing proteins from tethered-vesicle experimental data ###### Abstract The Bin/amphiphysin/Rvs (BAR) superfamily proteins have a crescent binding domain and bend biomembranes along the domain axis. However, their anisotropic bending rigidities and spontaneous curvatures have not been experimentally determined. Here, we estimated these values from the bound protein densities on tethered vesicles using a mean-field theory of anisotropic bending energy and orientation-dependent excluded volume. The dependence curves of the protein density on the membrane curvature are fitted to the experimental data for the I-BAR and N-BAR domains reported by C. Prevost et al. Nat. Commun. **6**, 8529 (2015) and F.-C. Tsai et al. Soft Matter **17**, 4254 (2021), respectively. For the I-BAR domain, all three density curves of different chemical potentials exhibit excellent fits with a single parameter set of anisotropic bending energy. When the classical isotropic bending energy is used instead, one of the curves can be fitted well, but the others exhibit large deviations. In contrast, for the N-BAR domain, two curves are not well-fitted simultaneously using the anisotropic model, although it is significantly improved from the isotropic model. This deviation likely suggests a cluster formation of the N-BAR domains. ## I Introduction In living cells, membrane morphology is regulated by the binding and unbinding of curvature-inducing proteins [1; 2; 3; 4; 5; 6; 7; 8]. Some types of these proteins bend a membrane in a laterally isotropic manner and generate spherical membrane buds [3; 4; 5; 6]. In contrast, the Bin/amphiphysin/Rvs (BAR) superfamily proteins have a crescent binding domain (BAR domain) and bend membranes along the BAR domain axis, generating cylindrical membrane tubes [1; 2; 3; 9; 10; 11; 12]. Several types of BAR domains are known: N-BARs and F-BARs bend membranes outward, but I-BARs bend them inward. These curvature-inducing proteins can sense membrane curvature; that is, their binding onto membranes depends on the local membrane curvatures. Tethered vesicles have been widely used to observe the curvature sensing experimentally [7; 8; 13; 14; 15; 16; 17; 18; 19; 20; 21]. A vesicle is pulled by optical tweezers and a micropipette to form a narrow membrane tube (tether). The tube radius can be controlled by adjusting the position of the optical tweezers. The curvature sensing of BAR proteins [7; 8; 13; 14; 15], G-protein coupled receptors [16], ion channels [17; 18], dynamin [19], annexins [20], and Ras proteins [21] have been reported. Additionally, the curvature sensing has been detected by the protein binding onto different sizes of spherical vesicles [22; 23; 21]. Evaluating the mechanical properties of these proteins is crucial for quantitatively understanding their curvature generation and sensing. The aim of this study is to determine the anisotropic bending rigidity and spontaneous curvature of BAR proteins from experimental data of tethered vesicles. In previous studies [13; 14; 15], the bending rigidity and spontaneous curvature of BAR proteins have been estimated using the Canham-Helfrich theory [24; 25]. However, this theory is formulated for laterally isotropic fluid membranes; thus, the anisotropy of the proteins is not considered. Recently, we developed a mean-field model for anisotropic bending energy and entropic interactions [26; 27; 28]. 
Orientational fluctuations are included based on Nascimento's theory for three-dimensional liquid crystals [29]. The first- and second-order transitions between isotropic and nematic phases are obtained with increasing protein density in narrow membrane tubes [28]. In the present study, we use this theoretical model to estimate the anisotropic bending rigidity and spontaneous curvature. The experimental data for the I-BAR domain of IRSp53 and the N-BAR domain of amphiphysin 1 reported in Refs. [14] and [15], respectively, are used for the estimation. In theoretical studies, a protein is often assumed to be a rigid body [30; 31; 32; 33]. The interaction of two rigid proteins qualitatively reproduces that of two flexible proteins obtained by meshless membrane simulations, but the amplitude is overestimated [34]. Hence, the estimation of the bending rigidity is also important for evaluating the interaction between proteins. The mean-field theory and fitting method are described in Sec. II. Secs. III and IV present and discuss the fitting results for the I-BAR and N-BAR domains, respectively. Additionally, the results of the isotropic and anisotropic bending models are compared. Sec. V concludes the paper.

## II Theory

A cylindrical membrane tube (tether) protrudes from the spherical vesicle, as depicted in Fig. 1(a). The tether length \(L_{\rm cy}\) and radius \(R_{\rm cy}\) are controlled by force \(f_{\rm ex}\) generated by optical tweezers and a micropipette. The membrane is in a fluid phase and is homogeneous. The radius \(R_{\rm v}\) of the spherical region is on a \(\mu\)m scale; thus, the membrane can be approximated as flat. The subscripts v and cy represent the quantities in the spherical and tether regions, respectively. The total membrane area \(A\) is fixed. The tether area \(A_{\rm cy}=2\pi R_{\rm cy}L_{\rm cy}\) is approximated as a constant, since the tube volume is negligibly small [35; 36]. The protein density \(\phi\) is the local area fraction covered by the bound proteins (\(\phi_{\rm v}\) and \(\phi_{\rm cy}\) represent the densities in the spherical and tether regions, respectively). Proteins bind onto the outer or inner surface (see Fig. 1(b)). Here, the curvature direction is defined as outward following the membrane curvature. Hence, the proteins binding to the inner surface have the opposite sign of curvature from the protein viewpoint (see I-BAR in Fig. 1(b)).

Figure 1: Schematic of a tethered vesicle and protein binding. (a) Experimental setup of the tethered vesicle. The proteins are bound in the tether and spherical vesicle regions with bound densities of \(\phi_{\rm cy}\) and \(\phi_{\rm v}\), respectively. The angles between the nematic direction \({\bf S}\), the azimuthal direction, and/or the protein axis are shown in the bottom panel. (b) Binding and unbinding of BAR domains. N-BAR and I-BAR domains bind onto the outer and inner surfaces of the vesicle, respectively. (c) Excluded-volume interactions between proteins. A perpendicular protein pair has a larger excluded area (represented by thick dashed lines) than a parallel pair (compare the left and right panels).

### Isotropic proteins

First, we describe the mean-field theory of proteins that bend membranes isotropically (no preferred lateral direction). The bending energy is given as follows: [37; 38]
\[F_{\rm cv} = 4\pi\bar{\kappa}_{\rm d}(1-g_{\rm ves})+\int{\rm d}A\,\Bigl{\{}2\kappa_{\rm d}H^{2}(1-\phi)+\frac{\kappa_{\rm pi}}{2}(2H-C_{0})^{2}\phi+(\bar{\kappa}_{\rm pi}-\bar{\kappa}_{\rm d})K\phi\Bigr{\}}, \tag{1}\]
where \(g_{\rm ves}\) represents the genus of the vesicle (\(g_{\rm ves}=0\) for tethered vesicles). \(H=(C_{1}+C_{2})/2\) and \(K=C_{1}C_{2}\) represent the mean and Gaussian curvatures at each position, respectively, with \(C_{1}\) and \(C_{2}\) being the principal curvatures. The bare (protein-unbound) membrane has a bending rigidity of \(\kappa_{\rm d}\), zero spontaneous curvature, and saddle-splay modulus of \(\bar{\kappa}_{\rm d}\) (also called the Gaussian modulus) in the Canham-Helfrich theory [24; 25; 39]. The bound membrane has a bending rigidity of \(\kappa_{\rm pi}\), finite spontaneous curvature \(C_{0}\), and saddle-splay modulus of \(\bar{\kappa}_{\rm pi}\). The first term of Eq. (1) represents the integral over the Gaussian curvature \(K\).
Note that the curvature mismatch model [8; 14; 15; 16] and spontaneous curvature model [8; 40; 41; 15] are subsets of the present model for \(\kappa_{\rm pi}>\kappa_{\rm d}\) and \(\kappa_{\rm pi}=\kappa_{\rm d}\), respectively [37]. For \(\kappa_{\rm pi}<\kappa_{\rm d}\), the proteins exhibit curvature sensing but do not have a curvature-generation capability [38]. The membrane free energy \(F\) consists of the binding energy and mixing entropy in addition to the bending energy \(F_{\rm cv}\),
\[F=F_{\rm cv}+\int{\rm d}A\,\Bigl{\{}-\frac{\mu}{a_{\rm p}}\phi+\frac{k_{\rm B}T}{a_{\rm p}}[\phi\ln(\phi)+(1-\phi)\ln(1-\phi)]\Bigr{\}}, \tag{2}\]
where \(a_{\rm p}\) represents the area covered by one protein and \(k_{\rm B}T\) represents the thermal energy. The maximum number of bound proteins is \(A/a_{\rm p}\). The first and second terms in the integral of Eq. (2) represent the protein-binding energy with the chemical potential \(\mu\) and the mixing entropy of bound proteins, respectively. Here, we neglect the inter-protein interaction energy (\(\sim\phi^{2}\)) [36; 37; 38], since we consider low protein densities in this study. In thermal equilibrium, the protein density \(\phi\) is locally determined for each membrane curvature: [37; 38]
\[\phi = \frac{1}{1+\exp(w_{\rm b})}, \tag{3}\]
\[w_{\rm b} = -\frac{\mu}{k_{\rm B}T}+\frac{a_{\rm p}}{k_{\rm B}T}\Bigl{(}2\kappa_{\rm dif}H^{2}+\bar{\kappa}_{\rm dif}K-2\kappa_{\rm pi}C_{0}H+\frac{\kappa_{\rm pi}C_{0}^{2}}{2}\Bigr{)}, \tag{4}\]
where \(\kappa_{\rm dif}=\kappa_{\rm pi}-\kappa_{\rm d}\) and \(\bar{\kappa}_{\rm dif}=\bar{\kappa}_{\rm pi}-\bar{\kappa}_{\rm d}\). Since the curvature of the spherical region of the tethered vesicles is approximated as \(H=K=0\), the protein density in the spherical region is given as \(\phi_{\rm v}=1/\{1+\exp[(-\mu+a_{\rm p}\kappa_{\rm pi}C_{0}^{2}/2)/k_{\rm B}T]\}\). Hence, the protein density \(\phi_{\rm cy}\) in the tether region is given as
\[\phi_{\rm cy}=\frac{1}{1+\frac{1-\phi_{\rm v}}{\phi_{\rm v}}\exp\big{[}\frac{a_{\rm p}}{k_{\rm B}T}\big{(}\frac{\kappa_{\rm dif}}{2R_{\rm cy}^{2}}-\frac{\kappa_{\rm pi}C_{0}}{R_{\rm cy}}\big{)}\big{]}}. \tag{5}\]
When the membrane has the sensing curvature \(C_{\rm s}=\kappa_{\rm pi}C_{0}/\kappa_{\rm dif}\), \(\phi_{\rm cy}\) is maximized. For \(\kappa_{\rm dif}\neq 0\), the numerator of the second term in parentheses can be replaced with \(\kappa_{\rm dif}C_{\rm s}\), so that \(\kappa_{\rm dif}\) and \(C_{\rm s}\) can be used as fitting parameters. Here, the protein density of tethered vesicles is independent of \(\bar{\kappa}_{\rm dif}\), although strong dependence on \(\bar{\kappa}_{\rm dif}\) is obtained in the comparison of small vesicles and tethers of the same mean curvature [38].
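As a quick numerical illustration of Eq. (5), the short Python sketch below evaluates \(\phi_{\rm cy}\) as a function of the tube curvature and locates its maximum, which falls at the sensing curvature \(C_{\rm s}\). The parameter values are arbitrary round numbers chosen for illustration, not fitted values from this work.

```python
import numpy as np

# Illustrative evaluation of Eq. (5); all parameter values are placeholders.
kB_T = 1.0                  # energies measured in units of k_B T
a_p = 50.0                  # protein area (nm^2)
kappa_dif = 30.0 * kB_T     # kappa_pi - kappa_d
C_s = 0.05                  # sensing curvature kappa_pi*C0/kappa_dif (nm^-1)
phi_v = 0.02                # bound density on the (flat) vesicle region

curv = np.linspace(1e-4, 0.15, 400)   # tube curvature 1/R_cy (nm^-1)
# exponent of Eq. (5), written with kappa_pi*C0 = kappa_dif*C_s
w = (a_p / kB_T) * (0.5 * kappa_dif * curv**2 - kappa_dif * C_s * curv)
phi_cy = 1.0 / (1.0 + (1.0 - phi_v) / phi_v * np.exp(w))

# the maximum of phi_cy sits at the sensing curvature C_s
print("phi_cy is maximal at 1/R_cy =", curv[np.argmax(phi_cy)])
```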
For the low-density limit (\(\phi_{\rm cy}\ll 1\)), the density ratio is expressed by the exponential function [36; 14]
\[\frac{\phi_{\rm cy}}{\phi_{\rm v}}=\exp\Big{[}-\frac{a_{\rm p}}{k_{\rm B}T}\Big{(}\frac{\kappa_{\rm dif}}{2{R_{\rm cy}}^{2}}-\frac{\kappa_{\rm pi}C_{0}}{R_{\rm cy}}\Big{)}\Big{]}. \tag{6}\]
Details regarding isotropic-protein binding on tethered vesicles are described in Ref. [36].

### Anisotropic proteins

The lateral shape of a bound protein is approximated as an ellipse with major and minor axis lengths of \(\ell_{1}\) and \(\ell_{2}\), respectively. The aspect ratio is \(d_{\rm el}=\ell_{1}/\ell_{2}\), and the area is \(a_{\rm p}=\pi\ell_{1}\ell_{2}/4\). These proteins have an orientation-dependent excluded-volume interaction and can align on the membrane surface. When neighboring proteins have a perpendicular orientation, the excluded area \(A_{\rm exc}\) between them is larger than that for parallel pairs, as shown in Fig. 1(c). This area \(A_{\rm exc}\) is approximated as a function of the angle \(\theta_{\rm pp}\) between the major axes of the two proteins [28]: \(A_{\rm exc}=a_{\rm p}[4-b_{\rm exc}(\cos^{2}(\theta_{\rm pp})-1)]\). The effective excluded area is \(A_{\rm eff}=\lambda A_{\rm exc}\). Although \(\lambda\) decreases slightly with an increase in the protein density, we use a constant value \(\lambda=1/3\) for simplicity [26; 27; 28]. The bending energy of a bound protein is given as follows:
\[U_{\rm p} = \frac{\kappa_{\rm p}a_{\rm p}}{2}(C_{\ell 1}-C_{\rm p})^{2}+\frac{\kappa_{\rm side}a_{\rm p}}{2}(C_{\ell 2}-C_{\rm side})^{2}, \tag{7}\]
\[C_{\ell 1} = C_{1}\cos^{2}(\theta_{\rm pc})+C_{2}\sin^{2}(\theta_{\rm pc}), \tag{8}\]
\[C_{\ell 2} = C_{1}\sin^{2}(\theta_{\rm pc})+C_{2}\cos^{2}(\theta_{\rm pc}), \tag{9}\]
where \(C_{\ell 1}\) and \(C_{\ell 2}\) represent the curvatures along the major and minor axes of the protein, respectively, and \(\theta_{\rm pc}\) represents the angle between the major protein axis and the membrane principal direction (the azimuthal direction of the cylindrical tube), as shown in Fig. 1(a). The proteins can have different values of the bending rigidity and spontaneous curvature along the major and minor protein axes: \(\kappa_{\rm p}\) and \(C_{\rm p}\) along the major protein axis, and \(\kappa_{\rm side}\) and \(C_{\rm side}\) along the minor axis (side direction). The free energy \(F_{\rm p}\) of the bound proteins is expressed as follows:
\[F_{\rm p} = \int f_{\rm p}\ {\rm d}A, \tag{10}\]
\[f_{\rm p} = \frac{\phi k_{\rm B}T}{a_{\rm p}}\Big{[}\ln(\phi)+\frac{S\Psi}{2}-\ln\Big{(}\int_{-\pi}^{\pi}w(\theta_{\rm ps})\ {\rm d}\theta_{\rm ps}\Big{)}\Big{]}, \tag{11}\]
\[w(\theta_{\rm ps}) = g\exp\Big{[}\Psi s_{\rm p}(\theta_{\rm ps})+\bar{\Psi}\sin(\theta_{\rm ps})\cos(\theta_{\rm ps})-\frac{U_{\rm p}}{k_{\rm B}T}\Big{]}\Theta(g), \tag{12}\]
\[g = 1-\phi(b_{0}-b_{2}Ss_{\rm p}(\theta_{\rm ps})), \tag{13}\]
where \(\Theta(x)\) denotes the unit step function and \(s_{\rm p}(\theta_{\rm ps})=\cos^{2}(\theta_{\rm ps})-1/2\). The proteins are ordered as \(S=2\langle s_{\rm p}(\theta_{\rm ps})\rangle\), where \(\theta_{\rm ps}\) represents the angle between the major protein axis and the ordered direction and \(\langle...\rangle\) denotes the ensemble average (see Fig. 1). The factor \(g\) expresses the effect of the orientation-dependent excluded volume, where \(b_{0}=(4+b_{\rm exc}/2)\lambda\) and \(b_{2}=b_{\rm exc}\lambda\). At \(d_{\rm el}=2\), 3, 4, and 6, \(b_{\rm exc}=0.840\), 1.98, 3.44, and 6.14, respectively. Non-overlapped states exist at \(g>0\). The ensemble average of a protein quantity \(\chi\) is given as
\[\langle\chi\rangle=\frac{\int_{-\pi}^{\pi}\chi w(\theta_{\rm ps})\ {\rm d}\theta_{\rm ps}}{\int_{-\pi}^{\pi}w(\theta_{\rm ps})\ {\rm d}\theta_{\rm ps}}. \tag{14}\]
The quantities \(\Psi\) and \(\bar{\Psi}\) are the symmetric and asymmetric components of the nematic tensor, respectively, and are determined using \(S\) and \(\langle\sin(\theta_{\rm ps})\cos(\theta_{\rm ps})\rangle=0\) via Eq. (14). When the nematic order is parallel to one of the directions of the membrane principal curvatures (\(\theta_{\rm sc}=0\) or \(\pi/2\)), \(\bar{\Psi}=0\). In this study, the integral is performed in the range of \(-\pi<\theta_{\rm ps}\leq\pi\). Since the shape is rotationally symmetric, the range \(-\pi/2<\theta_{\rm ps}\leq\pi/2\) can be used alternatively, in which the chemical potential is shifted by \(\Delta\mu=k_{\rm B}T\ln(2)\) [42]. Note that the separate integrals for the bending energy and other terms used in Ref. [43] are not applicable, since the orientational fluctuations of proteins are significantly large [26; 28].

Since an external force \(f_{\rm ex}\) is imposed, the free energy of the membrane tether is given as \(F=F_{\rm p}+U_{\rm mb}-f_{\rm ex}L_{\rm cy}\), where the energy of the bare (unbound) membrane is \(U_{\rm mb}=\kappa_{\rm d}A/2{R_{\rm cy}}^{2}\). This force \(f_{\rm ex}\) is balanced with the membrane axial force and is obtained using \(\partial F/\partial L_{\rm cy}|_{\phi}=0\), as follows:
\[f_{\rm ex}=2\pi\frac{\partial f_{\rm p}}{\partial(1/R_{\rm cy})}\bigg{|}_{\phi}+f_{\rm mb}. \tag{15}\]
Here, the last term \(f_{\rm mb}\) represents the force of the bare membrane tube: \(f_{\rm mb}=2\pi\kappa_{\rm d}/R_{\rm cy}\). The equilibrium of binding and unbinding is obtained by minimizing \(F-\mu N_{\rm p}\), where \(\mu\) represents the binding chemical potential. Thus, the protein density is balanced at \(\mu=a_{\rm p}\partial f_{\rm p}/\partial\phi\). Details are described in Refs. [26; 28]. Since we consider a low density of bound proteins, the proteins in the spherical vesicle region are randomly oriented, that is, \(S=\Psi=\bar{\Psi}=0\). Hence, the free energy density \(f_{\rm p,v}\) of the spherical vesicle region is given as
\[f_{\rm p,v}=\frac{\phi_{\rm v}}{a_{\rm p}}\Big{\{}k_{\rm B}T\big{[}\ln(\phi_{\rm v})-\ln(1-b_{0}\phi_{\rm v})-\ln(2\pi)\big{]}+U_{\rm p,v}\Big{\}}, \tag{16}\]
where \(U_{\rm p,v}=(\kappa_{\rm p}C_{\rm p}^{\ 2}+\kappa_{\rm side}C_{\rm side}^{\ 2})a_{\rm p}/2\). Hence, \(\mu\) for the density \(\phi_{\rm v}\) is obtained as
\[\frac{\mu-U_{\rm p,v}}{k_{\rm B}T}=\ln\left(\frac{\phi_{\rm v}}{1-b_{0}\phi_{\rm v}}\right)+\frac{b_{0}\phi_{\rm v}}{1-b_{0}\phi_{\rm v}}-\ln(2\pi)+1. \tag{17}\]
For the I-BAR and N-BAR domains, we use \(d_{\rm el}=6\) and 3, respectively. For both proteins, we use \(a_{\rm p}=50\,{\rm nm}^{2}\) in accordance with Refs. [14] and [15]. We also calculate the orientational order \(S_{z}\) along the tube (\(z\)) axis, since it is more easily measured than \(S\) in experiments. When the orientational order is along the azimuthal and axial directions (\(\theta_{\rm sc}=0\) and \(\pi/2\)), \(S_{z}=-S\) and \(S_{z}=S\), respectively. At a high protein density and small tube radius (\(1/R_{\rm cy}>C_{\rm p}\)), the orientational order can deviate from the azimuthal or axial direction (\(0<\theta_{\rm sc}<\pi/2\)).
However, in this study, the fitted results remain in the range of \(\theta_{\rm sc}=0\) and \(\pi/2\), since the protein densities are sufficiently low. Figure 2: Fitting of the density–curvature curves for I-BAR-domain binding based on the isotropic protein model. The density \(\phi_{\rm cy}\) is normalized by \(\phi_{\rm v}\) for the curvature \(1/R_{\rm cy}\) of the tethered membrane. Circles, triangles, and squares indicate the experimental data for \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\), respectively (reproduced from Ref. [14]). The solid lines are given by Eq. (5) with the fitting parameters \(\kappa_{\rm dif}\) and \(C_{\rm s}\); from top to bottom, \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\). (a) Data at \(\phi_{\rm v}=0.01\) are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=45.9\) and \(C_{\rm s}=0.0530\,{\rm nm}^{-1}\). (b) Data at \(\phi_{\rm v}=0.02\) are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=33.6\) and \(C_{\rm s}=0.0578\,{\rm nm}^{-1}\). (c) Data at \(\phi_{\rm v}=0.05\) are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=29.0\) and \(C_{\rm s}=0.0540\,{\rm nm}^{-1}\). (d) All data are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=27.5\) and \(C_{\rm s}=0.0629\,{\rm nm}^{-1}\). ### Fitting The experimental data of the bound protein density on the membrane tether are used for the fitting. We employ a least-squares method and search the conditions for minimizing the mean squared deviation: \[\Lambda=\frac{1}{{\phi_{\mathrm{m}}}^{2}N}\sum_{i}^{N}(\phi_{i}-\phi_{\mathrm{ theory}})^{2}, \tag{18}\] where \(N\) represents the number of experimental data, and \(\phi_{i}\) and \(\phi_{\mathrm{theory}}\) represent the experimental and theoretical values of the protein density of the tether region, respectively. This fit deviation is normalized by the mean value \(\phi_{\mathrm{m}}=(1/N)\sum_{i}\phi_{i}\) of the experimental data. If no normalization is applied, the obtained values of \(\Lambda\) depend on the choice of units (\(\phi_{\mathrm{cy}}\) or \(\phi_{\mathrm{cy}}/\phi_{\mathrm{v}}\)). For the I-BAR domain of IRSp53, the experimental data reported in Ref. [14] are used. The fit deviations for \(\phi_{\mathrm{v}}=0.01\), \(0.02\), and \(0.05\) are represented by \(\Lambda_{1}\), \(\Lambda_{2}\), and \(\Lambda_{3}\), respectively. For the N-BAR domain of amphiphysin 1, the experimental data from Ref. [15] are used. We assume that the average densities for \(n_{\mathrm{v}}<50\,\mu\mathrm{m}^{-2}\), \(50\,\mu\mathrm{m}^{-2}<n_{\mathrm{v}}<120\,\mu\mathrm{m}^{-2}\), and \(120\,\mu\mathrm{m}^{-2}<n_{\mathrm{v}}<500\,\mu\mathrm{m}^{-2}\) are \(\phi_{\mathrm{v}}=0.0013\), \(0.0043\), and \(0.016\), respectively, where \(n_{\mathrm{v}}\) represents the number density of proteins in the spherical vesicle region (\(\phi_{\mathrm{v}}=n_{\mathrm{v}}a_{\mathrm{p}}\)). The fit deviations for \(\phi_{\mathrm{v}}=0.0013\) and \(0.0043\) are represented by \(\Lambda_{1}\) and \(\Lambda_{2}\), respectively. The data of higher densities, i.e., \(120\,\mu\mathrm{m}^{-2}<n_{\mathrm{v}}<500\,\mu\mathrm{m}^{-2}\), are not used for the fitting, because they are widely distributed from \(\phi_{\mathrm{cy}}/\phi_{\mathrm{v}}\simeq 3\) to \(20\) for a narrow range of the tether curvature (\(0.04\,\mathrm{nm}^{-1}\lesssim 1/R_{\mathrm{cy}}\lesssim 0.11\,\mathrm{nm}^{-1}\)). We compare only the mean value; when the density ratio at \(1/R_{\mathrm{cy}}=0.07\,\mathrm{nm}^{-1}\) is in the range \(10\lesssim\phi_{\mathrm{cy}}/\phi_{\mathrm{v}}\lesssim 15\), we consider the fit to be good for \(\phi_{\mathrm{v}}=0.016\). 
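The grid-based least-squares minimization of Eq. (18) is straightforward to implement. The Python sketch below is a minimal illustration for the isotropic model (Eqs. (5) and (18)); the data arrays are placeholders rather than the experimental values of Refs. [14; 15], and the grid ranges and step sizes are ours, loosely mirroring the discrete parameter scans described in this work.

```python
import numpy as np

# Placeholder "measurements" (NOT the experimental data of Refs. [14; 15]).
curv_data = np.array([0.02, 0.04, 0.06, 0.08])   # 1/R_cy (nm^-1)
phi_data  = np.array([0.05, 0.11, 0.13, 0.12])   # measured phi_cy
phi_v, a_p = 0.02, 50.0                          # k_B T = 1

def phi_theory(curv, kappa_dif, C_s):
    """Eq. (5) with kappa_pi*C0 replaced by kappa_dif*C_s."""
    w = a_p * (0.5 * kappa_dif * curv**2 - kappa_dif * C_s * curv)
    return 1.0 / (1.0 + (1.0 - phi_v) / phi_v * np.exp(w))

phi_m = phi_data.mean()
# Eq. (18): mean squared deviation, normalized by the data mean,
# minimized over a discrete (kappa_dif, C_s) grid.
best = min(
    (np.mean((phi_data - phi_theory(curv_data, k, c))**2) / phi_m**2, k, c)
    for k in np.arange(10.0, 60.0, 0.5)        # kappa_dif / k_B T
    for c in np.arange(0.02, 0.10, 0.0005)     # C_s (nm^-1)
)
print("Lambda = %.4f at kappa_dif = %.1f, C_s = %.4f" % best)
```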
## III Binding of I-BAR domains

### Isotropic protein model

Before examining the anisotropic protein model, we fit the experimental data of I-BAR domains using the isotropic protein model with Eq. (5). The rigidity difference \(\kappa_{\mathrm{dif}}\) and sensing curvature \(C_{\mathrm{s}}\) are fitted. Figure 2(a), (b), and (c) show the best fits for \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\) to minimize \(\Lambda_{1}\), \(\Lambda_{2}\), and \(\Lambda_{3}\), respectively (see Figs. 3 and 4). The target density data are fitted very well, but the other two datasets exhibit large deviations. When the sum \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\) is minimized, the obtained curves are close to the results of the fit to the middle data, i.e., \(\Lambda_{2}\) (compare Fig. 2(d) and (b)). Therefore, not all of the data can be reproduced together using the isotropic protein model.

Figure 4: Two-dimensional map of the fit deviations \(\Lambda_{1}\), \(\Lambda_{2}\), \(\Lambda_{3}\), and \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\) for I-BAR-domain binding based on the isotropic protein model. The solid lines represent the valleys connecting the values of \(C_{\mathrm{s}}\) for the lowest \(\Lambda\) values at fixed \(\kappa_{\mathrm{dif}}\). The circles represent the minima and the dashed lines represent the contours of values exceeding the minima by \(10\%\).

As \(\kappa_{\rm dif}\) increases, the lowest value of \(\Lambda\) is obtained at lower \(C_{\rm s}\) (see Fig. 3). This is because \(\kappa_{\rm dif}C_{\rm s}\) (\(=\kappa_{\rm pi}C_{0}\)) is a factor of the major term in the exponent of Eq. (5). Thus, \(\Lambda\) values close to the minimum are obtained in the long narrow regions along the solid curves in Fig. 4. When a statistical error is considered to be 10% of \(\Lambda\), \(\kappa_{\rm dif}\) and \(C_{\rm s}\) are in the regions of the long ellipses in Fig. 4. The anisotropic model exhibits a similar dependence for \(\kappa_{\rm p}\) and \(C_{\rm p}\), as described later.

Equation (6) for \(\phi_{\rm cy}\ll 1\) has been used instead of Eq. (5) in previous studies [14; 15; 16; 18]. We compared the results of these two equations for \(\Lambda_{1}\), \(\Lambda_{2}\), and \(\Lambda_{3}\) using the best-fit parameters at \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\), respectively. We found that the low-density limit approximation overestimates the densities by approximately 20%, 30%, and 40%, respectively. A bound protein prevents the binding of other proteins to the same position (the binding rate is proportional to \((1-\phi)\)) [44]. Although this effect is negligible in the limit \(\phi_{\rm cy}\ll 1\), it becomes recognizable at \(\phi_{\rm cy}\simeq 0.1\) (compare the functions \(\exp(-x)\) and \(1/[1+\exp(x)]\)). Therefore, Eq. (5) should be used for \(\phi_{\rm cy}\gtrsim 0.1\).

Figure 7: Fit deviation as a function of the protein curvature \(C_{\rm p}\) for I-BAR-domain binding based on the anisotropic protein model. The solid lines represent the theoretical results for \(\kappa_{\rm p}/k_{\rm B}T=100\), 82, 72, and 60 at \(\kappa_{\rm side}=0\), from left to right. The dashed lines represent the data for \(\kappa_{\rm p}/k_{\rm B}T=86\), \(\kappa_{\rm side}/\kappa_{\rm p}=0.5\), and \(C_{\rm side}=0\). (a) Fit deviation \(\Lambda_{1}\) for \(\phi_{\rm v}=0.01\). For \(\kappa_{\rm side}=0\), the minimum value of \(\Lambda_{1}^{\rm min}=0.044\) is obtained at \(\kappa_{\rm p}/k_{\rm B}T=100\) and \(C_{\rm p}=0.043\,{\rm nm}^{-1}\) (corresponding to Fig. 5(a)). For \(\kappa_{\rm side}\neq 0\), a lower minimum \(\Lambda_{1}^{\rm min}=0.039\) is obtained at \(\kappa_{\rm p}/k_{\rm B}T=86\), \(C_{\rm p}=0.0475\,{\rm nm}^{-1}\), and \(\kappa_{\rm side}/\kappa_{\rm p}=0.5\) (corresponding to Fig. 9). (b) Fit deviation \(\Lambda_{2}\) for \(\phi_{\rm v}=0.02\). The minimum \(\Lambda_{2}^{\rm min}=0.032\) is obtained at \(\kappa_{\rm p}/k_{\rm B}T=72\), \(C_{\rm p}=0.0505\,{\rm nm}^{-1}\), and \(\kappa_{\rm side}=0\) (corresponding to Fig. 5(b)). (c) Fit deviation \(\Lambda_{3}\) for \(\phi_{\rm v}=0.05\). The minimum \(\Lambda_{3}^{\rm min}=0.045\) is obtained at \(\kappa_{\rm p}/k_{\rm B}T=60\), \(C_{\rm p}=0.054\,{\rm nm}^{-1}\), and \(\kappa_{\rm side}=0\) (corresponding to Fig. 5(c)). (d) Sum of fit deviations \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\). The minimum value \((\Lambda_{1}+\Lambda_{2}+\Lambda_{3})^{\rm min}=0.140\) is obtained at \(\kappa_{\rm p}/k_{\rm B}T=82\), \(C_{\rm p}=0.047\,{\rm nm}^{-1}\), and \(\kappa_{\rm side}=0\) (corresponding to Fig. 6).

Figure 6: Fitting of the density–curvature curves for I-BAR-domain binding based on the anisotropic protein model. The solid lines are given by the theoretical results at \(\kappa_{\rm p}/k_{\rm B}T=82\) and \(C_{\rm p}=0.047\,{\rm nm}^{-1}\) to minimize \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\). (a) Circles, triangles, and squares indicate the experimental data of \(\phi_{\rm cy}/\phi_{\rm v}\) for \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\), respectively (reproduced from Ref. [14]). (b) Force generated by the protein binding. (c) Degree \(S_{z}\) of protein order along the membrane tube (\(z\)) axis. (a, c) From top to bottom, \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\). (b) From top to bottom, \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\) at \(1/R_{\rm cy}<0.05\,{\rm nm}^{-1}\). The dashed lines in (b) and (c) represent the data for \(\kappa_{\rm p}/k_{\rm B}T=60\) and \(C_{\rm p}=0.054\,{\rm nm}^{-1}\) (the minimum value of \(\Lambda_{3}\) corresponding to Fig. 5(c)).

### Anisotropic protein model

The experimental data of the I-BAR domains are fitted using the anisotropic protein model. First, we use the protein rigidity \(\kappa_{\rm p}\) and protein curvature \(C_{\rm p}\) as the fitting parameters with \(\kappa_{\rm side}=0\), as shown in Figs. 5-8. To determine the minimum for each fit, the parameters are varied discretely with \(\Delta\kappa_{\rm p}=2k_{\rm B}T\) and \(\Delta C_{\rm p}=0.0005\,{\rm nm}^{-1}\). Very good agreement is obtained for not only the target density-curvature curve but also the other two curves (see Fig. 5). The minimum \(\Lambda\) values for the target curves are almost identical to those of the isotropic protein model (compare Fig. 7(a)-(c) and Fig. 3(a)-(c)). However, the sum, \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\), is significantly reduced; thus, the total fitness is improved (compare Fig. 7(d) and Fig. 3(d)). Although the density-curvature curves are sufficiently reproduced using \(\kappa_{\rm side}=0\), a deviation is recognized in \(1/R_{\rm cy}\gtrsim 0.07\,{\rm nm}^{-1}\) for \(\phi_{\rm v}=0.01\). In the experimental data, the protein density decreases by a larger amount as \(1/R_{\rm cy}\) increases. The other two curves do not exhibit such a deviation, possibly owing to limited data for the narrow tubes (one and no data points for \(1/R_{\rm cy}>0.07\,{\rm nm}^{-1}\) at \(\phi_{\rm v}=0.02\) and \(0.05\), respectively). This deviation can be eliminated by using a finite value of the bending rigidity \(\kappa_{\rm side}\) in the side direction. In addition, \(\kappa_{\rm side}\) is varied discretely with \(\Delta\kappa_{\rm side}=0.5\kappa_{\rm p}\) at \(C_{\rm side}=0\) for \(0\leq\kappa_{\rm side}\leq\kappa_{\rm p}\). A better fit is obtained for \(\phi_{\rm v}=0.01\), as shown in Figs. 7(a) and 9. Although \(C_{\rm side}\) does not vary together here, \(\Lambda_{1}\) has a minimum at \(C_{\rm side}=0\) for the variation in \(C_{\rm side}\) with \(\Delta C_{\rm side}=0.0025\,{\rm nm}^{-1}\) when the other parameters are fixed. Thus, the zero side curvature is reasonable. In contrast to \(\Lambda_{1}\), lower values of the others (\(\Lambda_{2}\), \(\Lambda_{3}\), and \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\)) are not obtained using \(\kappa_{\rm side}\neq 0\). Thus, the best fit for the total data (\((\Lambda_{1}+\Lambda_{2}+\Lambda_{3})^{\rm min}=0.140\)) is given at \(\kappa_{\rm side}=0\), as shown in Fig. 6. When a 10% larger value of \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\) is allowed as a statistical error, the elliptical region surrounded by dashed lines in Fig. 8 is the expected range of \(\kappa_{\rm p}\) and \(C_{\rm p}\). The minimum points of \(\Lambda_{1}\) and \(\Lambda_{2}\) are also included in this range. Hence, we concluded that the I-BAR domain has \(\kappa_{\rm p}/k_{\rm B}T=82\pm 20\) and \(C_{\rm p}({\rm nm}^{-1})=0.047-0.0003(\kappa_{\rm p}/k_{\rm B}T-82)\pm 0.001\). Although the experimental data are well-fitted without the side rigidity, the existence of the side rigidity is not excluded. The measurement of other quantities can increase the estimation accuracy for the mechanical properties of proteins. Here, we propose the force \(f_{\rm ex}\) and orientational degree \(S_{z}\) along the tube axis as candidates.

Figure 8: Two-dimensional map of the fit deviations of \(\Lambda_{1}+\Lambda_{2}+\Lambda_{3}\) for I-BAR-domain binding based on the anisotropic protein model. The circle represents the minimum point, and the dashed lines represent the contour of values exceeding the minimum by 10%. The diamond, triangle, and cross indicate the minimum points for \(\Lambda_{1}\), \(\Lambda_{2}\), and \(\Lambda_{3}\), respectively.

Figure 9: Fitting of the density–curvature curves for I-BAR-domain binding based on the anisotropic protein model. The solid lines are given by the theoretical results at \(\kappa_{\rm p}/k_{\rm B}T=86\), \(C_{\rm p}=0.0475\,{\rm nm}^{-1}\), \(\kappa_{\rm side}/\kappa_{\rm p}=0.5\), and \(C_{\rm side}=0\) to minimize \(\Lambda_{1}\). (a) Circles, triangles, and squares indicate the experimental data of \(\phi_{\rm cy}/\phi_{\rm v}\) for \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\), respectively (reproduced from Ref. [14]). (b) Force generated by the protein binding. (c) Degree \(S_{z}\) of protein order along the membrane tube (\(z\)) axis. (a, c) From top to bottom, \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\). (b) From top to bottom, \(\phi_{\rm v}=0.01\), \(0.02\), and \(0.05\) at \(1/R_{\rm cy}<0.05\,{\rm nm}^{-1}\).

The force \(f_{\rm ex}\) has been experimentally measured from the position of optically trapped beads [45; 46]. Force modification due to the binding of BAR domains has been reported [13; 14]. Although \(S_{z}\) has not been experimentally measured, it is measurable using polarizers in principle. As the tube curvature \(1/R_{\rm cy}\) increases, the two quantities vary in different manners with respect to the protein density, as shown in Figs. 6 and 9.
Each has a minimum value at a tube curvature lower than the sensing curvature. Since the densities are sufficiently low, \(S_{z}\) exhibits only a small dependence on \(\phi_{\rm v}\). Importantly, they vary with changes in the bending parameters, although the density curves do not vary significantly (compare the solid and dashed lines in Fig. 6(b) and (c), and also see Fig. 9(b) and (c)). This suggests that the bending rigidity and curvature of proteins can be more accurately estimated through additional fitting of \(f_{\rm ex}\) and \(S_{z}\) to the experimental data.

## IV Binding of N-BAR domains

### Isotropic protein model

In this section, we consider the binding of N-BAR domains reported in Refs. [13; 15]. First, we examine the isotropic protein model, as for the I-BAR domains considered in Sec. III. Figures 10 and 11 show the fitted density-curvature curves and fit deviations, respectively, using Eq. (5). The target curve is well-fitted, whereas the other is not, similar to the case of the I-BAR domain.

Figure 10: Fitting of the density–curvature curves for N-BAR-domain binding based on the isotropic protein model. Circles and triangles indicate the experimental data of \(\phi_{\rm cy}/\phi_{\rm v}\) for \(\phi_{\rm v}=0.0013\) and \(0.0043\), respectively (reproduced from Ref. [15] with permission from the Royal Society of Chemistry). The solid lines are given by Eq. (5) with the fitting parameters \(\kappa_{\rm dif}\) and \(C_{\rm s}\); from top to bottom, \(\phi_{\rm v}=0.0013\) and \(0.0043\). (a) Data at \(\phi_{\rm v}=0.0013\) are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=17.2\) and \(C_{\rm s}=0.01032\,{\rm nm}^{-1}\). (b) Data at \(\phi_{\rm v}=0.0043\) are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=27.4\) and \(C_{\rm s}=0.0735\,{\rm nm}^{-1}\). (c) Both datasets are fitted: \(\kappa_{\rm dif}/k_{\rm B}T=23.3\) and \(C_{\rm s}=0.0820\,{\rm nm}^{-1}\).

### Anisotropic protein model

The experimental data of the N-BAR domains are fitted using the anisotropic protein model. First, the fitting is performed using the fitting parameters \(\kappa_{\rm p}\) and \(C_{\rm p}\) at \(\kappa_{\rm side}=0\). The parameters are varied discretely with \(\Delta\kappa_{\rm p}=k_{\rm B}T\) and \(\Delta C_{\rm p}=0.0005\,{\rm nm}^{-1}\). Figure 12(a) shows the fitted density-curvature curves for minimizing \(\Lambda_{1}+\Lambda_{2}\). The two curves exhibit far better agreement than those obtained using the isotropic protein model (compare Figs. 12(a) and 10(c)). Nonetheless, the deviations from the curves fitted to the data for \(\phi_{\rm v}=0.0013\) or \(0.0043\) are large. Additionally, the curve obtained for \(\phi_{\rm v}=0.016\) slightly exceeds the expected range (\(\phi_{\rm cy}/\phi_{\rm v}=18\) at \(1/R_{\rm cy}=0.07\,{\rm nm}^{-1}\), as indicated by the dashed line in Fig. 12(a)). The fit deviation of the curve minimizing \(\Lambda_{1}\) is identical to that of the isotropic model, but that for \(\Lambda_{2}\) is slightly worse (see Figs. 13(a), (b) and 14). Similar to the case of the I-BAR domain with \(\phi_{\rm v}=0.01\), this is due to the smaller reduction in \(\phi_{\rm cy}/\phi_{\rm v}\) at high tube curvatures (see Fig. 13(b)). Thus, we perform the fit with a finite side rigidity for \(0<\kappa_{\rm side}\leq\kappa_{\rm p}\), as shown in Figs. 13(c) and 15. The parameters are varied discretely with \(\Delta\kappa_{\rm p}=k_{\rm B}T\), \(\Delta C_{\rm p}=0.001\,{\rm nm}^{-1}\), \(\Delta\kappa_{\rm side}=0.5\kappa_{\rm p}\), and \(\Delta C_{\rm side}=0.0025\,{\rm nm}^{-1}\). A better fit to the data at \(\phi_{\rm v}=0.0043\) is obtained (\(\Lambda_{2}^{\rm min}\) is 4% smaller than that of the isotropic model). For the other fit deviations (\(\Lambda_{1}+\Lambda_{2}\) and \(\Lambda_{1}\)), the minimum values are almost identical to those at \(\kappa_{\rm side}=0\). Interestingly, in all cases, better fits are obtained at \(\kappa_{\rm side}/\kappa_{\rm p}=0.5\) than at \(\kappa_{\rm side}/\kappa_{\rm p}=1\).

Figure 12: Fitting of the density–curvature curves for N-BAR-domain binding based on the anisotropic protein model. The lines are given by the theoretical results at \(\kappa_{\rm p}/k_{\rm B}T=39\), \(C_{\rm p}=0.072\,{\rm nm}^{-1}\), and \(\kappa_{\rm side}=0\) for minimizing \(\Lambda_{1}+\Lambda_{2}\). (a) Circles and triangles indicate the experimental data of \(\phi_{\rm cy}/\phi_{\rm v}\) for \(\phi_{\rm v}=0.0013\) and 0.0043, respectively (reproduced from Ref. [15] with permission from the Royal Society of Chemistry). (b) Force generated by the protein binding. (c) Degree \(S_{z}\) of protein order along the membrane tube (\(z\)) axis. (a, c) From top to bottom, \(\phi_{\rm v}=0.0013\), 0.0043, and 0.016. (b) From top to bottom, \(\phi_{\rm v}=0.0013\), 0.0043, and 0.016 at \(1/R_{\rm cy}<0.08\,{\rm nm}^{-1}\).

Figure 13: Fitting of the density–curvature curves for N-BAR-domain binding based on the anisotropic protein model. Circles and triangles indicate the experimental data of \(\phi_{\rm cy}/\phi_{\rm v}\) for \(\phi_{\rm v}=0.0013\) and 0.0043, respectively (reproduced from Ref. [15] with permission from the Royal Society of Chemistry). The solid lines are given by the theoretical results; from top to bottom, \(\phi_{\rm v}=0.0013\) and 0.0043. (a) Data at \(\phi_{\rm v}=0.0013\) are fitted: \(\kappa_{\rm p}/k_{\rm B}T=28\) and \(C_{\rm p}=0.0895\,{\rm nm}^{-1}\) at \(\kappa_{\rm side}=0\). (b) Data at \(\phi_{\rm v}=0.0043\) are fitted: \(\kappa_{\rm p}/k_{\rm B}T=55\) and \(C_{\rm p}=0.0585\,{\rm nm}^{-1}\) at \(\kappa_{\rm side}=0\). (c) Data at \(\phi_{\rm v}=0.0043\) are fitted: \(\kappa_{\rm p}/k_{\rm B}T=46\), \(C_{\rm p}=0.066\,{\rm nm}^{-1}\), \(\kappa_{\rm side}=0.5\kappa_{\rm p}\), and \(C_{\rm side}=-0.0025\,{\rm nm}^{-1}\).

Since the density \(\phi_{\rm v}\) has a wide distribution in the experimental data, we additionally performed the fitting at \(\phi_{\rm v}=0.001\) and \(0.0015\) to examine the effect of the choice of \(\phi_{\rm v}\) values for \(n_{\rm v}<50\,\mu{\rm m}^{-2}\). The minimum value of the sum \(\Lambda_{1}+\Lambda_{2}\) decreases by 5% and increases by 4% for \(\phi_{\rm v}=0.001\) and \(0.0015\), (\((\Lambda_{1}+\Lambda_{2})^{\rm min}=0.336\) and \(0.368\)), respectively. The corresponding values of \(\kappa_{\rm p}/k_{\rm B}T\) and \(C_{\rm p}\) (\({\rm nm}^{-1}\)) are shifted by only 1 and 0.001, respectively. For \(\Lambda_{1}\), \(\kappa_{\rm p}/k_{\rm B}T\) and \(C_{\rm p}\) (\({\rm nm}^{-1}\)) are shifted by only 2 and 0.0025, respectively, and the minimum value does not change. Therefore, a slight variation in \(\phi_{\rm v}\) does not result in a significant change. Next, we consider the effects of variations in the other fixed parameters. An aspect ratio of \(d_{\rm el}=3\) is used for the N-BAR domains. Since the protein densities are low, the excluded-volume effect is weak. We examined the cases of \(d_{\rm el}=2\) and 4 with the best-fit conditions, and only obtained a 5% decrease in \((\Lambda_{1}+\Lambda_{2})^{\rm min}\) at \(d_{\rm el}=4\). Thus, we concluded that the N-BAR domains have \(30\lesssim\kappa_{\rm p}/k_{\rm B}T\lesssim 60\) and \(0.06\lesssim C_{\rm p}({\rm nm}^{-1})\lesssim 0.09\).

The protein area \(a_{\rm p}\) can be slightly varied by area definition. In our previous studies [26; 27; 28], we used \(a_{\rm p}=60\,{\rm nm}^{2}\), since the protein was approximated as elliptical with \(\ell_{1}=15\,{\rm nm}\) and \(\ell_{2}=5\,{\rm nm}\), and the protein area partially included bare membrane regions. To fit a density-curvature curve, the area \(a_{\rm p}\) only appears as pairs with bending rigidities in the theoretical models (\(a_{\rm p}\kappa_{\rm p}\) and \(a_{\rm p}\kappa_{\rm side}\) in the anisotropic model and \(a_{\rm p}\kappa_{\rm dif}\) in the isotropic model). Thus, the area variation influences the bending rigidity in accordance with \(\kappa^{\prime}=\kappa a_{\rm p}/a_{\rm p}^{\prime}\) (e.g., a 20% decrease in \(a_{\rm p}\) results in a 25% increase in the bending rigidities). The force, \(f_{\rm ex}-f_{\rm mb}\), is also modified by the factor \(a_{\rm p}/a_{\rm p}^{\prime}\). When the protein density is measured as the number density \(n_{\rm v}\), the area fraction is slightly modified according to the area definition (\(\phi_{\rm v}=n_{\rm v}a_{\rm p}\)). However, this results in only slight changes, as discussed above.

In our previous study [28], we compared the results of the anisotropic model and a coarse-grained membrane simulation. When the proteins are distributed homogeneously in the membrane, the results agree very well. In contrast, deviations are obtained when the proteins form small clusters. Since the clusters have a larger area than the individual proteins, they are more oriented in the preferred direction. The ratio of the clusters increases with the protein density. Thus, we consider that the deviation in this study suggests non-negligible cluster formation of the N-BAR domains.

## V Summary

We have developed an estimation method for the mechanical properties of bound proteins based on the experiments of tethered vesicles and applied it to the I-BAR and N-BAR domains. When the anisotropy of the proteins is taken into account, the experimental data are reproduced far better. When the classical isotropic model is used, each density-curvature curve is well reproduced but the other curves largely deviate. When the recently developed anisotropic model is used, this deviation is significantly reduced. For the I-BAR domains, all three curves are well-fitted by a single parameter set, and the bending rigidity \(\kappa_{\rm p}/k_{\rm B}T=82\pm 20\) and spontaneous curvature \(C_{\rm p}({\rm nm}^{-1})=0.047-0.0003(\kappa_{\rm p}/k_{\rm B}T-82)\pm 0.001\) along the protein axis are determined. For the N-BAR domains, the two density-curvature curves are not completely fitted simultaneously, even when the anisotropic model is used. This deviation is likely caused by a small cluster formation, and \(30\lesssim\kappa_{\mathrm{p}}/k_{\mathrm{B}}T\lesssim 60\) and \(0.06\lesssim C_{\mathrm{p}}(\mathrm{nm}^{-1})\lesssim 0.09\) are estimated. If the definition of the protein area is modified, the bending rigidity is changed as \(\kappa_{\mathrm{p}}^{\prime}=\kappa_{\mathrm{p}}a_{\mathrm{p}}/a_{\mathrm{p}}^{\prime}\).
The experimental data were well fitted without the side bending rigidity and side curvature. When these were included, the fit improved for some of the conditions, while the others did not change significantly. Since positive and negative side curvatures can promote and suppress the tubulation, respectively [47], the estimation of the side rigidity and side curvature is important. Recent experiments [48; 49; 50; 23] revealed that the intrinsically disordered domains of curvature-inducing proteins play a significant role in membrane remodeling. The disordered domains can be modeled by excluded-volume chains. At a low protein density, the membrane-chain interaction slightly increases the bending rigidity and spontaneous curvature isotropically (i.e., in both the axial and side directions of proteins) [51; 52; 53; 54]. At a high density, the inter-chain interactions have strong effects within protein clusters [55; 56; 51] and also between the clusters [57; 54]. These effects should be further examined through comparison with tethered-vesicle experiments. In this study, we fitted only the density-curvature curves. The estimation quality can be further improved by additionally fitting other quantities. For this purpose, we have proposed two such quantities, the axial force and the degree of orientational order. They exhibit different behaviors from the density-curvature curve; thus, comparison with the experimental results can facilitate the determination of the mechanical properties.

###### Acknowledgements.

This work was supported by JSPS KAKENHI Grant Number JP21K03481.
2303.06271
Repartitioned Brillouin-Wigner Perturbation Theory with a Size-Consistent Second-Order Correlation Energy
Second-order M{\o}ller-Plesset perturbation theory (MP2) often breaks down catastrophically in small-gap systems, leaving much to be desired in its performance for myriad chemical applications such as noncovalent interactions, thermochemistry, and dative bonding in transition metal complexes. This divergence problem has reignited interest in Brillouin-Wigner perturbation theory (BWPT), which is regular at all orders but lacks size-consistency and extensivity, severely limiting its application to chemistry. In this work, we propose an alternative partitioning of the Hamiltonian that leads to a regular BWPT perturbation series that, through second order, is size-extensive, size-consistent (provided its Hartree-Fock reference is also), and orbital invariant. Our second-order size-consistent Brillouin-Wigner (BW-s2) approach is capable of describing the exact dissociation limit of H$_2$ in a minimal basis set regardless of the spin-polarization of the reference orbitals. More broadly, we find that BW-s2 offers improvements relative to MP2 for covalent bond breaking, noncovalent interaction energies, and metal/organic reaction energies, while rivaling coupled-cluster with single and double substitutions (CCSD) for thermochemical properties.
Kevin Carter-Fenk, Martin Head-Gordon
2023-03-11T01:17:32Z
http://arxiv.org/abs/2303.06271v2
# Repartitioned Brillouin-Wigner Perturbation Theory with a Size-Consistent Second-Order Correlation Energy

###### Abstract

Second-order Moller-Plesset perturbation theory (MP2) often breaks down catastrophically in small-gap systems, leaving much to be desired in its performance for myriad chemical applications such as noncovalent interactions, thermochemistry, and dative bonding in transition metal complexes. This divergence problem has reignited interest in Brillouin-Wigner perturbation theory (BWPT), which is regular at all orders but lacks size-consistency and extensivity, severely limiting its application to chemistry. In this work, we propose an alternative partitioning of the Hamiltonian that leads to a regular BWPT perturbation series that, through second order, is size-extensive, size-consistent (provided its Hartree-Fock reference is also), and orbital invariant. Our second-order size-consistent Brillouin-Wigner (BW-s2) approach is capable of describing the exact dissociation limit of H\({}_{2}\) in a minimal basis set regardless of the spin-polarization of the reference orbitals. More broadly, we find that BW-s2 offers improvements relative to MP2 for covalent bond breaking, noncovalent interaction energies, and metal/organic reaction energies, while rivaling coupled-cluster with single and double substitutions (CCSD) for thermochemical properties.

## I Introduction

The oldest and most tractable wave function approach that captures electron correlation from first principles is second-order Moller-Plesset perturbation theory (MP2). While the \(\mathcal{O}(N^{5})\) asymptotic scaling of MP2[1] does not compete with the \(\mathcal{O}(N^{3})\) scaling of density functional theory (DFT), MP2 is immune to many of the nonphysical problems that manifest in DFT such as self-interaction error, which can obfuscate the underlying physics of chemical systems by artificially delocalizing charge density.[2; 3; 4] The _ab initio_, and therefore self-interaction-free, correlation offered by MP2 has led to its incorporation into double-hybrid density functionals, which combine MP2 with DFT exchange-correlation.[5; 6; 7; 8; 9; 10] On its own, MP2 can promote fundamental insights into the physical properties of chemical systems that are untarnished by self-interaction errors, making it a valuable tool in the arsenal of quantum chemistry. The Moller-Plesset many-body perturbation series is based on Rayleigh-Schrodinger perturbation theory (RSPT),[11] which imbues MP2 with the size-consistency and extensivity that lead to its proper treatment of many-body systems. On the other hand, the Moller-Plesset series inherits a divergence problem from RSPT, such that in the limit of zero-gap systems the Moller-Plesset correlation energy becomes singular. While exact degeneracy is perhaps an extreme case that occurs relatively infrequently in nature, nonphysically large correlation energies brought on by near-degeneracies are more commonly encountered. Large, but not necessarily divergent, correlation energies are often found in systems that exhibit significant nonadditive correlation effects,[12] such as dative bonds in metal complexes[13] and dispersion-bound complexes dominated by \(\pi\)-\(\pi\) interactions.[14; 15; 16; 17] The nonadditive correlation energy can be defined as the difference between the true correlation energy and the pairwise correlations captured by MP2, \(E_{\rm c}^{\rm NA}=E_{\rm c}-E_{\rm c}^{\rm PW}\).
In cases where MP2 yields poor estimates of the correlation energy, the nonadditive component is generally large and positive, implying that the dominant nonadditive contribution comes from a screening of the pair correlations. Indeed, in large systems with extended \(\pi\) networks the MP2 correlation energy becomes catastrophically large, and without nonadditive screening interaction energies can be overestimated by more than 100%.[18; 19] Many useful strategies that account for nonadditive electron correlation have been developed over the years. A simple one is to directly scale the same-spin and/or opposite-spin correlation energies,[20; 21; 22; 23] which can improve the performance of MP2 for thermochemistry and noncovalent interactions.[24] Another strategy is to use only the short-range part of the Coulomb operator when evaluating the MP2 energy, thereby attenuating the range of the correlation interaction and improving results for a wide range of chemical problems.[25; 26; 27; 28] However, while these approaches treat the symptoms of a completely pairwise correlation energy approximation, they do not directly address the underlying cause. One approach that offers direct screening of pair correlations is regularized MP2. Broadly speaking, regularization modifies the MP2 energy expression with a function that damps any divergent or excessively large correlations, ideally while retaining the unvarnished MP2 energy for weaker correlations. Regularization has been used to avoid singular correlation energies that are encountered while optimizing molecular orbitals under a potential that contains the MP2 energy (orbital-optimized MP2),[29; 30; 31; 32; 33; 34] but even without orbital optimization regularized MP2 can outperform MP2 across myriad chemical problems.[12] Regularized MP2 corrects the divergent nature of the Rayleigh-Schrodinger perturbation series in zero-gap systems. Singularities manifest in the second-order RSPT energy, \[E_{\text{RS}}^{(2)}=\sum_{k\neq 0}\frac{\langle\Phi_{0}|\hat{V}|\Phi_{k}\rangle \langle\Phi_{k}|\hat{V}|\Phi_{0}\rangle}{E_{0}-E_{k}} \tag{1}\] in cases of degeneracy, _i.e._ when \(E_{k}=E_{0}\). More appropriate formulations of perturbation theories have been developed throughout the years in efforts to sidestep this divergence problem. These include retaining the excitation degree (RE) methods,[35; 36; 37] which define the unperturbed Hamiltonian as one that is block-diagonal in configuration space and the perturbation as the couplings between ancillary excitation blocks. 
The RE approaches offer substantial improvements over MP2, with orbital-optimized RE/MP2 approaches often attaining chemical accuracy for thermochemical properties.[38] There has also been substantial effort to improve many-body perturbation theory with Green's function based methods.[39; 40; 41] A less modern approach that has regained considerable attention in recent years was pioneered in the 1930s by Lennard-Jones, Brillouin, and Wigner as an alternative to the Rayleigh-Schrodinger power series and came to be known as Brillouin-Wigner (or Lennard-Jones-Brillouin-Wigner) perturbation theory (BWPT).[42; 43; 44; 45] The first term where BWPT differs from RSPT is the second-order energy, which takes the form, \[E_{\text{BW}}^{(2)}=\sum_{k\neq 0}\frac{\langle\Phi_{0}|\hat{V}|\Phi_{k} \rangle\langle\Phi_{k}|\hat{V}|\Phi_{0}\rangle}{E_{0}-E_{k}+E_{\text{BW}}^{(2)}} \tag{2}\] There are a few distinct advantages to BWPT: it converges more rapidly than RSPT for a given problem[43; 44] and it is regular at all orders due to \(E_{\text{BW}}^{(n)}\) in the denominator. In fact, second-order BWPT is exact for a two-level system while RSPT requires summation to infinite order to achieve the exact result.[45] On the other hand, \(E_{\text{BW}}^{(2)}\) appears on both sides of the above expression and must therefore be determined self-consistently. While this does increase the cost of the perturbation theory, it is not the fatal flaw that has limited the application of BWPT in quantum chemistry over the last half century. Instead, BWPT fell into disuse after it was found that it is not size-extensive and therefore fails as a proper many-body theory.[46] Despite its failures for single-reference systems, the mathematical form of the Brillouin-Wigner series is convenient for multireference theories and is still actively used in this context.[47; 48; 49; 50; 51; 52; 53] In particular, it is notable that the Brillouin-Wigner cluster expansion of the wave function is equivalent to the Rayleigh-Schrodinger one with the key exception that multireference Brillouin-Wigner coupled-cluster theory is immune to the intruder state problem.[54] The treatment of intruder states and the divergences encountered in single-reference perturbation theories are closely linked,[55] so it is natural to wonder whether the problems in single-reference BWPT can be amended to obtain a regular correlation energy at MP2 cost. If BWPT could be made size-consistent and size-extensive, it could supply correlation energies that naturally incorporate nonadditive screening effects at all orders. This has spurred interest in deriving size-extensivity corrections for BWPT from the Bloch equations,[56] and through renormalization of the second-order energy.[57] Recently, an alternative _ansatz_ to standard BWPT was proposed,[58] where the correlation energy per electron (\(E_{\text{BW}}^{(2)}/N_{e}\)) was inserted into the denominator of Eq. 2 in an effort to restore size-extensivity. Importantly, Ref. [58] pointed out that the derivation of Eq. 2 can be generalized to an arbitrary level-shift in place of \(E_{0}+E_{\text{BW}}^{(2)}\), thus opening the door for a wide variety of level-shift energies to be conceived and applied. In this work, we present a different approach to this problem, based on a partitioning of the Hamiltonian that incorporates a judiciously designed one-electron regularization operator into the zero-order Hamiltonian while the remainder of the correlation energy is described as a perturbation. 
Furthermore, we cast the second-order energy expression into a tensor framework, ensuring that our approach retains invariance to unitary transformations among the occupied or virtual orbitals. Our chosen form of the regularization operator satisfies size-consistency and extensivity through second order. We benchmark the performance of our proposed method across a wide variety of datasets where MP2 performs rather poorly, including covalent bond breaking, noncovalent interaction energies, reaction barrier heights, thermochemical properties, and metal/organic reaction energies.

## II Theory

Throughout this work, \(\{i,j,k\dots\}\) refer to occupied orbitals, \(\{a,b,c\dots\}\) refer to unoccupied orbitals, \(\{p,q,r\dots\}\) are arbitrary orbitals, and \(\{P,Q,R,\dots\}\) are auxiliary functions.

### Orbital-Energy Dependent Regularized MP2

The MP2 correlation energy in the canonical molecular orbital basis is, \[E_{c}=-\frac{1}{4}\sum_{ijab}\frac{|\mathbb{I}_{ijab}|^{2}}{\epsilon_{a}+\epsilon_{b}-\epsilon_{i}-\epsilon_{j}}=-\frac{1}{4}\sum_{ijab}\frac{|\mathbb{I}_{ijab}|^{2}}{\Delta_{ij}^{ab}}\;, \tag{3}\] where, \[\mathbb{I}_{ijab}=(ij||ab) \tag{4}\] are the antisymmetrized two-electron integrals and \(\epsilon_{p}\) is the orbital energy of orbital \(p\). This expression for the correlation energy is clearly divergent when the denominator approaches zero, but the energy may become much too large long before this limit is reached if nonadditive screening is particularly important. A straightforward approach to tempering this bad behavior is to add a level-shift to the denominator of the form \(\Delta_{ij}^{ab}+\delta\) where \(\delta>0\),[29; 30; 32] but this approach generally provides too weak a regularization and lacks input from the underlying physics of the system. More sophisticated regularizers that have orbital energy dependence can be derived by Laplace transform of Eq. 3, where the correlation energy can be exactly rewritten as,[59] \[E_{c}=-\frac{1}{4}\sum_{ijab}\int_{0}^{\infty}d\tau e^{-\tau\Delta_{ij}^{ab}}|\mathbb{I}_{ijab}|^{2} \tag{5}\] From here, the upper integration bound can be truncated to a finite value, \(\sigma(\Delta_{ij}^{ab})^{p-1}\), to give, \[E_{c}=-\frac{1}{4}\sum_{ijab}\frac{|\mathbb{I}_{ijab}|^{2}}{\Delta_{ij}^{ab}}\Big(1-e^{-\sigma(\Delta_{ij}^{ab})^{p}}\Big)\, \tag{6}\] where \(p=1\) gives what is known as \(\sigma\)-MP2. The case \(p=2\) gives \(\sigma^{2}\)-MP2 and can be derived through second-order perturbative analysis of the flow equations.[60; 61] In this work, we will focus on a flavor of empirical regularization known as \(\kappa\)-MP2,[33] where the integrals themselves are damped by a factor of \((1-e^{-\kappa\Delta_{ij}^{ab}})\) leading to, \[E_{c}=-\frac{1}{4}\sum_{ijab}\frac{|\mathbb{I}_{ijab}|^{2}}{\Delta_{ij}^{ab}}\Big(1-e^{-\kappa\Delta_{ij}^{ab}}\Big)^{2} \tag{7}\] All of the above orbital-energy dependent (\(\Delta\)-dependent) regularizers rely on a single empirical parameter (\(\sigma\) or \(\kappa\)) that is somewhat transferable, but expresses different optimal values for different classes of chemical problem.[12] We will limit our investigations in this work to \(\kappa\)-MP2, but given that all of the aforementioned flavors of \(\Delta\)-dependent regularization yield similar results,[12] we expect the conclusions drawn here for \(\kappa\)-MP2 to be general for this class of regularizer.
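To make the gap-dependent regularization concrete, the following minimal NumPy sketch evaluates Eqs. 3 and 7 on synthetic antisymmetrized integrals. The array shapes, orbital energies, and integral values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def delta_tensor(eps_occ, eps_vir):
    """Orbital-energy gaps Delta_ij^ab = eps_a + eps_b - eps_i - eps_j."""
    return (eps_vir[None, None, :, None] + eps_vir[None, None, None, :]
            - eps_occ[:, None, None, None] - eps_occ[None, :, None, None])

def mp2_energy(eps_occ, eps_vir, I):
    """Plain MP2 correlation energy, Eq. 3; I[i,j,a,b] holds <ij||ab>."""
    return -0.25 * np.sum(np.abs(I)**2 / delta_tensor(eps_occ, eps_vir))

def kappa_mp2_energy(eps_occ, eps_vir, I, kappa=1.45):
    """kappa-MP2, Eq. 7: each integral is damped by (1 - exp(-kappa*Delta))."""
    d = delta_tensor(eps_occ, eps_vir)
    return -0.25 * np.sum(np.abs(I)**2 * (1.0 - np.exp(-kappa * d))**2 / d)

# Synthetic test data (illustrative only): a near-degenerate HOMO/LUMO pair
# inflates the bare MP2 sum while the damped sum stays modest.
rng = np.random.default_rng(0)
eps_occ = np.array([-0.50, -0.05])   # Hartree; HOMO close to the LUMO
eps_vir = np.array([0.05, 0.60])
I = rng.normal(scale=0.1, size=(2, 2, 2, 2))
I = I - I.transpose(1, 0, 2, 3)      # antisymmetry in the occupied pair
I = I - I.transpose(0, 1, 3, 2)      # antisymmetry in the virtual pair
print("MP2      :", mp2_energy(eps_occ, eps_vir, I))
print("kappa-MP2:", kappa_mp2_energy(eps_occ, eps_vir, I))
```

With a small gap, the bare sum is dominated by the near-degenerate pair, whereas the \((1-e^{-\kappa\Delta_{ij}^{ab}})^{2}\) factor quenches exactly those contributions.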
### Brillouin-Wigner Theory With Modified Energy

It was recently proposed that the second-order Brillouin-Wigner energy can be derived as a specific case of the more general correlation expression,[58] \[E^{(2)}=\sum_{k\neq 0}\frac{\langle\Phi_{0}|\hat{V}|\Phi_{k}\rangle\langle\Phi_{k}|\hat{V}|\Phi_{0}\rangle}{E_{\text{LS}}-E_{k}}\, \tag{8}\] where \(E_{\text{LS}}\) is an arbitrary level-shift. Usually, \(E_{\text{LS}}\) is taken to be the exact ground-state energy, \(E_{\text{LS}}=E\), but the consideration of a more general \(E_{\text{LS}}\) unlocks myriad possibilities for the precise form of the correlation energy. In effect, this reframes the BWPT problem in terms of \(E_{c}[E_{\text{LS}}(\Psi_{0})]\), where the correlation energy is expressed in terms of a level-shift energy that itself depends on the wave function. Inserting various _ansätze_ into Eq. 8 leads to different correlation energies. For example, setting \(E_{\text{LS}}=E_{0}\) yields second-order Moller-Plesset perturbation theory, and \(E_{\text{LS}}=E_{0}+\delta\) gives \(\delta\)-MP2. Other choices include the pair-correlation energy (\(E_{\text{LS}}=E_{0}+e_{ij}\)), which leads to the independent electron pair approximation (IEPA) or the second-order Bethe-Goldstone equation (BGE2);[11; 62; 63] the second-order correlation energy (\(E_{\text{LS}}=E_{0}+E^{(2)}\)), which gives second-order BWPT; and the correlation energy per electron (\(E_{\text{LS}}=E_{0}+E^{(2)}/N_{e}\)), which gives the size-extensive xBW2 method.[58] Each choice results in a different correlation energy with different mathematical properties that are summarized in Tab. 1.

### Repartitioned Brillouin-Wigner Perturbation Theory

Inspired by the generality of such a modification to BWPT, we consider a slightly more formalized approach by partitioning the Hamiltonian such that the zero-order Hamiltonian contains a regularizing operator that modulates the occupied orbital energies. Specifically, we propose the following partition, \[\hat{H}=\hat{\bar{H}}_{0}+\lambda\hat{\bar{V}}\, \tag{9}\] where, \[\begin{split}\hat{\bar{H}}_{0}&=\hat{H}_{0}+\hat{R}\\ \hat{\bar{V}}&=\hat{V}-\hat{R}\end{split} \tag{10}\] where \(\hat{H}_{0}\) is the Fock operator, \(\hat{\bar{V}}\) contains all of the many-body correlations that are not contained within \(\hat{H}_{0}\) and \(\hat{R}\), and \(\hat{R}\) is a one-electron regularizer operator of the form, \[\hat{R}=\sum_{ij}r_{ij}a_{j}^{\dagger}a_{i}.
\tag{11}\]

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Method & \(E_{\text{LS}}\) & Size-consistent & Size-extensive & Invariant \\
\hline
MP2, \(\kappa\)-MP2, \(\sigma^{p}\)-MP2 & \(E_{0}\) & ✓ & ✓ & ✓ \\
\(\delta\)-MP2 & \(E_{0}+\delta\); \(\delta>0\) & ✓ & ✓ & ✓ \\
IEPA/BGE2 & \(E_{0}+e_{ij}\); \(e_{ij}=-\frac{1}{4}\sum_{ab}\frac{|(ij||ab)|^{2}}{\Delta_{ij}^{ab}+e_{ij}}\) & ✓ & ✗ & ✗ \\
BW2 & \(E_{0}+E_{c}^{\text{BW2}}\) & ✗ & ✗ & ✓ \\
xBW2 & \(E_{0}+E_{c}^{\text{BW2}}/N_{e}\) & ✗ & ✓ & ✓ \\
BW-s2 & \(\bar{E}_{0}+E^{(2)}-E_{R,k}\) & ✓ & ✓ & ✓ \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Level shifts \(E_{\text{LS}}\) and formal properties (size-consistency, size-extensivity, and orbital invariance) of the second-order correlation methods discussed in this work.

Of particular note is the fact that the infinite summation of the Brillouin-Wigner perturbation series is invariant to partitioning the Hamiltonian in this way.[64] Next, we write the perturbed Schrodinger equation as \[(E-\hat{\bar{H}}_{0})|\Psi\rangle=\lambda\hat{\bar{V}}|\Psi\rangle \tag{12}\] Defining \(\hat{Q}=1-|\Phi_{0}\rangle\langle\Phi_{0}|\) and multiplying by this quantity on the left we find, \[\hat{Q}|\Psi\rangle=\lambda\hat{Q}(E-\hat{\bar{H}}_{0})^{-1}\hat{\bar{V}}|\Psi\rangle=\lambda\hat{\Gamma}_{0}\hat{\bar{V}}|\Psi\rangle \tag{13}\] where, \[\hat{\Gamma}_{0}=\sum_{k\neq 0}\frac{|\Phi_{k}\rangle\langle\Phi_{k}|}{E-\bar{E}_{k}} \tag{14}\] is the resolvent. In the above, we have assumed that \(\Phi_{k}\) are also eigenfunctions of \(\hat{\bar{H}}_{0}\), such that, \[\bar{E}_{k}=\langle\Phi_{k}|\hat{\bar{H}}_{0}|\Phi_{k}\rangle=\sum_{i}^{\rm occ}\left(F_{ii}+R_{ii}\right) \tag{15}\] where \(\bar{E}_{k}=E_{k}+E_{R,k}\) is the energy of state \(k\) as modulated by the regularizer operator (the sum runs over the spin orbitals occupied in \(\Phi_{k}\)). Taking the usual assumption of intermediate normalization, _i.e._ \(\langle\Phi_{0}|\Psi\rangle=1\), allows us to expand the wave function and energy in a perturbation series, \[\begin{split}\Psi^{(n)}&=\sum_{m=0}^{n}(\lambda\hat{\Gamma}_{0}\hat{\bar{V}})^{m}|\Phi_{0}\rangle\\ E^{(n)}&=\lambda\langle\Phi_{0}|\hat{\bar{V}}|\Psi^{(n-1)}\rangle\end{split} \tag{16}\] Therefore, to first order in \(E\), we find, \[E^{(1)}=\langle\Phi_{0}|\hat{\bar{V}}|\Phi_{0}\rangle=\langle\Phi_{0}|\hat{H}|\Phi_{0}\rangle-\langle\Phi_{0}|\hat{\bar{H}}_{0}|\Phi_{0}\rangle=E_{\rm HF}-\bar{E}_{0} \tag{17}\] which, when combined with Eq. 15 (for \(k=0\)), gives the usual result for the first-order energy, \(E=\bar{E}_{0}+E^{(1)}=E_{\rm HF}\). Thus, there is no first-order correction to the Hartree-Fock energy, \(E_{\rm HF}\).
The second-order correction differs from BWPT and RSPT, \[\begin{split}E^{(2)}&=\sum_{k\neq 0}\frac{\langle\Phi_{0}|\hat{V}|\Phi_{k}\rangle\langle\Phi_{k}|\hat{V}|\Phi_{0}\rangle}{E-\bar{E}_{k}}\\ &=\sum_{k\neq 0}\frac{\langle\Phi_{0}|\hat{V}|\Phi_{k}\rangle\langle\Phi_{k}|\hat{V}|\Phi_{0}\rangle}{(E_{0}-E_{k})+(E_{R,0}-E_{R,k})+E^{(2)}}\end{split} \tag{18}\] where \(E=\bar{E}_{0}+E^{(2)}\) has been substituted in the denominator, as per the usual BWPT approach. The fully expanded denominator now consists of the zero-order energy gap \(E_{0}-E_{k}\), the correlation energy \(E^{(2)}\), and a new contribution from the regularizer, \(E_{R,0}-E_{R,k}\), that changes the state energies. Interestingly, if we take \(\bar{E}_{0}+E^{(2)}-E_{R,k}=E_{\rm LS}\) in accordance with the proposed approach in Ref. [58], we recover Eq. 8.

### Tensor formulation of the second-order energy

A convenient tool that ensures orbital invariance of our final correlation energy expression is the tensor formulation of many-body perturbation theory.[65; 66; 67] For MP2, the linear amplitude equation takes the form, \[\sum_{klcd}\Delta_{ijkl}^{abcd}\cdot t_{kl}^{cd}=-\mathbb{I}_{ijab}\, \tag{19}\] where \(t_{kl}^{cd}\) are the amplitudes, and \[\Delta_{ijkl}^{abcd}=(F_{ac}\delta_{bd}+\delta_{ac}F_{bd})\delta_{ik}\delta_{jl}-(F_{ik}\delta_{jl}+\delta_{ik}F_{jl})\delta_{ac}\delta_{bd} \tag{20}\] is the usual 8-rank tensor composed of Fock matrix elements, \(F_{pq}\). In the basis of canonical molecular orbitals, where the Fock matrix is diagonal and the orbitals form an orthonormal set, Eq. 20 is trivially diagonal such that solving Eq. 19 leads to the well-known form of the MP2 amplitudes, \[t_{ij}^{ab}=-\frac{\mathbb{I}_{ijab}}{\epsilon_{a}+\epsilon_{b}-\epsilon_{i}-\epsilon_{j}} \tag{21}\] which gives way to the usual MP2 energy expression in Eq. 3. Within this framework, the shifted zero-order Hamiltonian from Eq. 10 leads to, \[\sum_{klcd}\left(\Delta_{ijkl}^{abcd}+R_{ijkl}^{abcd}\right)\cdot t_{kl}^{cd}=-\mathbb{I}_{ijab} \tag{22}\] where \(\mathbf{R}\) is a regularizing tensor. In the hypothetical case of diagonal \(\mathbf{\Delta}\) and \(\mathbf{R}\) tensors, the amplitudes, \[t_{ij}^{ab}=-\frac{\mathbb{I}_{ijab}}{\Delta_{ij}^{ab}+R_{ij}^{ab}} \tag{23}\] result in the same energy expression as that of Eq. 18 with \(\mathbf{R}\) playing the role of \(E_{R,0}-E_{R,k}\). To retain size-consistency at second order, it is crucial to choose a form of \(\mathbf{R}\) that cancels \(E^{(2)}\) while still modulating the orbital energy gap to avoid divergences. An important feature of Eq. 18 is that it enables a straightforward mechanism for cancelling out the redundant correlation terms in the denominator that result in size-consistency errors in standard BWPT. Namely, if we can define \(\hat{R}\) such that \(\langle\Phi_{ij}^{ab}|(E-\hat{H}_{0}-\hat{R})|\Phi_{kl}^{cd}\rangle=\Delta_{ijkl}^{abcd}+R_{ijkl}^{abcd}\), then the contributions to the denominator of the resolvent that arise from the correlation energy of the entire system (_i.e._ \(E^{(2)}\) at second order) will vanish, thereby eliminating size-inconsistent terms. To this end, we choose a form of \(\mathbf{R}\) that ensures that the correlation between any two orbitals \(\{i,j\}\) goes to zero when the orbitals are far apart, \[R_{ijkl}^{abcd}=\frac{1}{2}(W_{ik}\delta_{jl}+\delta_{ik}W_{jl})\delta_{ac}\delta_{bd}\, \tag{24}\] where \[W_{ij}=\frac{1}{2}\sum_{kab}\left[t_{ik}^{ab}(jk||ab)+t_{jk}^{ab}(ik||ab)\right].
\tag{25}\] An important property of \(\mathbf{W}\) is that \(\text{tr}(\mathbf{W})=E^{(2)}\), which results in total cancellation of \(E^{(2)}\) in the resolvent, leading to a size-consistent energy expression at second order. Specifically, it can be shown that, \[\langle\Phi_{ij}^{ab}|\hat{R}|\Phi_{kl}^{cd}\rangle=\sum_{n}W_{nn}+\frac{1}{2}(W_{ik}\delta_{jl}+\delta_{ik}W_{jl})\delta_{ac}\delta_{bd} \tag{26}\] thus straightforwardly cancelling \(E^{(2)}\) while modifying the matrix elements connecting pairs of occupied orbitals (and the occupied-orbital energies). However, we note that size-inconsistent terms enter at third and higher orders. Matrix elements of Eq. 25 appear also in an orbital invariant CEPA(3) correction,[68] and share similarities with one of the terms in the MP2 orbital energy gradient.[33] In particular, Eq. 25 is related to the correlation contribution to the ionization energy of orbital \(i\), \[E_{c}^{\text{IP},i}=\frac{1}{2}\sum_{kab}t_{ik}^{ab}(ik||ab) \tag{27}\] where the orbitals are fixed at those of the \(n\)-electron system. One may notice that the diagonal elements of \(\mathbf{W}\) in Eq. 25 correspond to \(2E_{c}^{\text{IP},i}\). Not only does this factor of 2 naturally emerge from the necessity of cancelling \(E^{(2)}\) in the resolvent, but it can also be understood as a means of modulating the energies of both occupied orbitals involved in any double substitution with \(E_{c}^{\text{IP},i}\). We elaborate further on this point in Appendix A. One complication that arises in the solution of Eq. 22 with our proposed form of \(\mathbf{R}\) is that in the canonical orbital basis the left-hand side of Eq. 22 is not diagonal. Instead, it takes the form, \[\sum_{klcd}\Big\{[F_{ac}\delta_{bd}+\delta_{ac}F_{bd}]\delta_{ik}\delta_{jl}-\delta_{ac}\delta_{bd}[F_{ik}\delta_{jl}+\delta_{ik}F_{jl}]-\frac{\delta_{ac}\delta_{bd}}{2}\left(W_{ik}\delta_{jl}+\delta_{ik}W_{jl}\right)\Big\}t_{kl}^{cd}=-\mathbb{I}_{ijab} \tag{28}\] which, after contracting the first two terms over all orbital indexes \(\{k,l,c,d\}\) and the final four terms over virtual-orbital indexes \(\{c,d\}\), gives, \[[\varepsilon_{a}+\varepsilon_{b}]t_{ij}^{ab}-\sum_{kl}\bigg[\Big(F_{ik}+\frac{W_{ik}}{2}\Big)\delta_{jl}+\delta_{ik}\Big(F_{jl}+\frac{W_{jl}}{2}\Big)\bigg]t_{kl}^{ab}=-\mathbb{I}_{ijab} \tag{29}\] where we have not carried out the contraction over indexes \(k\) and \(l\) for the occupied-occupied block of \((\mathbf{\Delta}+\mathbf{R})\cdot\mathbf{t}\), so as to emphasize both that \(\mathbf{W}\) only changes the occupied-occupied block and that \(\mathbf{W}\) is not diagonal in the basis of canonical orbitals. One way to solve Eq. 29 is to store the amplitudes in memory and solve for them using an iterative scheme, as is often done in local correlation methods.[69; 70; 71] However, amplitude storage can be avoided if we find a suitable basis wherein the left-hand side of Eq. 29 is diagonal. To accomplish this goal, we can leverage the orbital invariance of Eq. 29 by rotating the occupied molecular orbitals into a basis where the matrix \(\mathbf{F}_{\text{oo}}+\frac{1}{2}\mathbf{W}\) is diagonal (where \(\mathbf{F}_{\text{oo}}\) is the occupied-occupied block of the Fock matrix).
To find the appropriate rotation, we solve the Hermitian eigenvalue equation, \[\bigg{(}\mathbf{F}_{\text{oo}}+\frac{1}{2}\mathbf{W}\bigg{)}\mathbf{U}=\tilde{ \varepsilon}\mathbf{U} \tag{30}\] where \(\tilde{\varepsilon}\) are a set of _dressed_ occupied orbital eigenvalues. Rotating the occupied molecular orbital coefficients, \(\mathbf{C}_{\text{occ}}\), into this new basis _via_ the unitary matrix, \(\mathbf{U}\), \[\mathbf{\tilde{C}}_{\text{occ}}=\mathbf{C}_{\text{occ}}\mathbf{U} \tag{31}\] ensures that the tensor \(\mathbf{\Delta}+\mathbf{R}\) is diagonal. In this new basis, Eq. 29 takes the form, \[(\varepsilon_{a}+\varepsilon_{b}-\tilde{\varepsilon}_{i}-\tilde{\varepsilon} _{j})\tilde{t}_{ij}^{ab}=-\mathbb{I}_{ijab} \tag{32}\] where the integrals in \(\mathbb{I}_{ijab}\) have been rotated into the new basis. Solving the transformed equation gives the amplitudes, \[\tilde{t}_{ij}^{ab}=-\frac{\mathbb{I}_{ijab}}{(\varepsilon_{a}+\varepsilon_{b} -\tilde{\varepsilon}_{i}-\tilde{\varepsilon}_{j})} \tag{33}\] and the energy \[\tilde{E}_{c}=-\frac{1}{4}\sum_{ijab}\frac{|\mathbb{I}_{ijab}|^{2}}{(\varepsilon _{a}+\varepsilon_{b}-\tilde{\varepsilon}_{i}-\tilde{\varepsilon}_{j})} \tag{34}\] Note the use of the dressed eigenvalues \(\tilde{\varepsilon}_{p}\) in the above equations, which are a consequence of the change of basis. These dressed eigenvalues are modulated by the choice of \(\mathbf{R}\), which in our case is related to the ionization potential of the orbital. Specifically, using Koopmans' theorem[72] we may rewrite the canonical orbital-energy differences as, \[\Delta_{ij}^{ab}=E_{i}^{\text{IP}}+E_{j}^{\text{IP}}-E_{a}^{\text{EA}}-E_{b}^{ \text{EA}} \tag{35}\] where \(E_{p}^{\text{IP}}\) and \(E_{p}^{\text{EA}}\) are the ionization energy and electron affinity of orbital \(p\), respectively. Considering the relationship in Eq. 27, the action of our regularizer is to replace \(E_{p}^{\text{IP}}\) with their correlated counterparts, \(\tilde{E}_{i}^{\text{IP}}=E_{i}^{\text{IP}}+E_{i}^{\text{IP,corr}}\), thus augmenting the gap by correlating the ionization energies. Notably, this concept of correcting the quasiparticle energies has strong similarities to Green's function based perturbation theories[39; 40] which are actively being explored in the context of regularized perturbation theories.[41] Our adherence to the tensorial formalism and careful consideration of exact conditions ensures that this _ansatz_ for the form of \(\hat{R}\) retains crucial properties such as size-consistency, size-extensivity, and orbital invariance in the second order energy. Therefore, we limit our studies in this work to those that probe the properties of Brillouin-Wigner perturbation theory with a size-consistent second-order correlation energy, herein denoted BW-s2. The size-consistency of BW-s2 can indeed be proven, and we have done so in Appendix B. While we avoid amplitude storage, the BW-s2 energy expression remains self-consistent because the \(\mathbf{W}\) matrix depends on the amplitudes, which themselves depend on the modulation of the energy gap supplied by the \(\mathbf{W}\) matrix. The flowchart in Fig. 1 shows the iterative protocol that we use to solve for the amplitudes. We opt for an energy convergence threshold such that once the change in energy between iterations is sufficiently small, the algorithm converges. We note that this procedure is general and can be used in conjunction with all of the orbital-invariant methods listed in Tab. 1. 
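To make the loop concrete, the procedure of Fig. 1 can be sketched in a few lines of NumPy: dress the occupied space with \(\mathbf{W}\) (Eq. 25), rediagonalize \(\mathbf{F}_{\text{oo}}+\frac{1}{2}\mathbf{W}\) (Eq. 30), rotate the integrals (Eq. 31), rebuild the amplitudes and energy (Eqs. 33 and 34), and iterate. This is a schematic sketch on synthetic spin-orbital data, not the production implementation; all names, values, and normalization details beyond the quoted equations are assumptions.

```python
import numpy as np

def bw_s2_energy(F_oo, eps_vir, I, tol=1e-10, max_iter=50):
    """Schematic BW-s2 loop (Fig. 1). I[i,j,a,b] = <ij||ab> in the
    initial occupied basis; F_oo is the occupied-occupied Fock block."""
    W = np.zeros_like(F_oo)
    E_old = 0.0
    for _ in range(max_iter):
        eps_occ_t, U = np.linalg.eigh(F_oo + 0.5 * W)    # Eq. 30: dressed eigenvalues
        I_t = np.einsum('ik,jl,ijab->klab', U, U, I)     # Eq. 31: rotate occupieds
        d = (eps_vir[None, None, :, None] + eps_vir[None, None, None, :]
             - eps_occ_t[:, None, None, None] - eps_occ_t[None, :, None, None])
        t = -I_t / d                                     # Eq. 33: dressed amplitudes
        E_new = 0.25 * np.einsum('ijab,ijab->', t, I_t)  # Eq. 34: second-order energy
        if abs(E_new - E_old) < tol:
            break
        # W of Eq. 25 in the rotated basis, transformed back for the next cycle.
        W_rot = 0.5 * (np.einsum('ikab,jkab->ij', t, I_t)
                       + np.einsum('jkab,ikab->ij', t, I_t))
        W = U @ W_rot @ U.T
        E_old = E_new
    return E_new

# Toy usage on synthetic antisymmetrized integrals (illustrative values).
rng = np.random.default_rng(1)
no, nv = 3, 4
F_oo = np.diag([-1.2, -0.8, -0.3])
eps_vir = np.array([0.2, 0.5, 0.9, 1.4])
I = rng.normal(scale=0.1, size=(no, no, nv, nv))
I = I - I.transpose(1, 0, 2, 3)
I = I - I.transpose(0, 1, 3, 2)
print("BW-s2 (toy):", bw_s2_energy(F_oo, eps_vir, I))
```

Because only the dressed occupied eigenvalues change between cycles, no amplitude storage beyond the current iteration is needed, mirroring the discussion above.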
As an example, for MP2 the \(\mathbf{R}\) tensor is simply the zero matrix, so the rotation supplied by \(\mathbf{U}\) is the identity matrix, \(\mathbf{I}\), and the algorithm converges in one step. This corresponds to setting \(E=E_{0}\) in Eq. 18 with the matrix representation of \(\hat{R}\) being the zero matrix. Similarly, for \(\delta\)-MP2, \(\mathbf{R}\) is a diagonal matrix whose nonzero entries are the value of \(\delta\), leading again to a one-step solution. In the case of the BW2 and xBW2 methods, the \(\mathbf{W}\) matrix is diagonal with elements \(E_{c}^{\text{BW2}}\delta_{ij}\) or \((E_{c}^{\text{xBW2}}/N_{e})\delta_{ij}\), again leading to \(\mathbf{U}=\mathbf{I}\), but with a self-consistent energy expression that will still require several cycles to converge. In order to greatly speed up the evaluation of the two-electron integrals we use the resolution-of-the-identity (RI) approximation,[73; 74] where, \[(ia|jb)=\sum_{PQ}(ia|P)(P|Q)^{-1}(Q|jb) \tag{36}\] The RI fit coefficients, \(C_{pq}^{P}\), for the \(|pq\rangle\) charge distribution are, \[C_{pq}^{P}=\sum_{Q}(P|Q)^{-1}(Q|pq) \tag{37}\] the 3-center, 2-particle density matrix is, \[\Gamma_{ai}^{P}=\sum_{jb}t_{ij}^{ab}C_{jb}^{P} \tag{38}\] and finally, we also define \[V_{ia}^{P}=(ia|P)\;. \tag{39}\] This allows us to rewrite Eq. 25 as, \[W_{ij}=\frac{1}{2}\sum_{aP}\left[V_{ia}^{P}\Gamma_{aj}^{P}+\Gamma_{ia}^{P}V_{aj}^{P}\right] \tag{40}\] which is bottlenecked by the \(\mathcal{O}(N^{5})\) construction of \(\mathbf{\Gamma}\), therefore adding only trivial overhead to the usual MP2 energy evaluation. Finally, we rewrite the MP2-like energy expression from Eq. 34 in the dressed-orbital basis as, \[\tilde{E}_{c}=-\frac{1}{2}\sum_{iaP}\tilde{V}_{ia}^{P}\tilde{\Gamma}_{ai}^{P}\,, \tag{41}\] where \(\mathbf{\tilde{\Gamma}}\) and \(\mathbf{\tilde{V}}\) are constructed using the transformed integrals and amplitudes according to Eqns. 31 and 32. With iterative \(\mathcal{O}(N^{5})\) cost, the RI approximation makes the BW-s2 approach comparable in cost to other common methods like CC2,[75; 76] or EOM-MBPT2.[77]

## III Computational details

All calculations were performed in a development version of Q-Chem v6.0.2.[78] All SCF convergence thresholds were set to \(10^{-8}\) root-mean-square error and the convergence threshold for the correlation energy was likewise set to \(10^{-8}\) Ha for all calculations except for those of the L7 dataset, where it was reduced to \(10^{-5}\) Ha for the sake of computational cost. This should not influence the accuracy of the calculations because an energy difference of \(10^{-5}\) Ha is only 0.006 kcal/mol. We note that even in the case of a tight Brillouin-Wigner correlation energy threshold of \(10^{-8}\) Ha, only 6 cycles were required on average (regardless of dataset) to converge the correlation energy.

Figure 1: Flowchart outlining the iterative procedure for solving for the amplitudes for any orbital-invariant second-order correlation method. \(E_{c}^{(i)}\) indicates the correlation energy on the current iteration \(i\), and \(E_{c}^{(i-1)}\) is the correlation energy from the previous iteration.
To avoid the well-known degradation of perturbation theory results in systems with appreciable spin-contamination,[79; 80; 81; 82; 83; 84] we use restricted open-shell orbitals which are separately pseudocanonicalized in the \(\alpha\) and \(\beta\) spaces before computing the correlation energy in all open-shell systems, akin to the RMP2 method.[83] We include the non-Brillouin singles (NBS) contributions via, \[E_{\text{NBS}}=-\sum_{ia}\frac{|F_{ia}|^{2}}{\epsilon_{a}-\epsilon_{i}} \tag{42}\] where \(F_{ia}\) are off-diagonal Fock matrix elements. Notably, \(E_{\text{NBS}}\) is invariant to the change of basis that is used to solve the BW-s2 equations. While Kohn-Sham orbitals have been applied to Moller-Plesset perturbation theory with great effect,[85; 86; 87] we emphasize that we use the Hartree-Fock reference determinant throughout this work, leaving the prospect of combining Kohn-Sham orbitals with BW-s2 for future exploration.

## IV Results and discussion

We first assess the fundamental properties of various second-order correlation methods with some simple numerical tests. We have proven the size-consistency, and by extension size-extensivity, of BW-s2 in Appendix B, but from a practical perspective it is important to recover these properties in numerical calculations, so these tests will serve to aid in the verification of our implementation. While we make some conclusions about the size-extensivity, size-consistency, and unitary invariance of other methods in this section, we emphasize that these tests are insufficient to prove that a given method has these properties in general. However, it is necessary that any method that is size-consistent, size-extensive, and unitary invariant must recover the expected results in the following tests, so a failure on any one of these metrics is sufficient to discount a method from having the property that the metric was designed to test. For the first test, we check for orbital invariance by using canonical and Edmiston-Ruedenberg localized orbitals[88; 89] with the cc-pVDZ[90; 91] basis set on the H\({}_{2}\) dimer, placed in a parallel configuration at 5.4 Å separation. A method that yields the same correlation energy despite arbitrary orbital rotations in the occupied (or virtual) subspace is considered to be orbital invariant; therefore, we should expect no change in the correlation energy upon the change from canonical to localized orbital representations. In the cases of the MP2, BW2, xBW2, and BW-s2 methods, the correlation energy remains exactly the same regardless of orbital representation, but the IEPA/BGE2 method is not invariant to orbital rotations, leading to an energy difference of 6.3 meV between canonical and localized representations. This is a well-known result,[63; 11; 62] and actually requires that we skip the orbital rotation step in the algorithm in Fig. 1 for the IEPA/BGE2 method, instead opting for direct solution of the correlation energy expression with off-diagonal contributions from the pair-correlation energy in the denominator. We next assess size-consistency by calculating the interaction energy between He and Xe at 40 Å separation using the Def2-SVP/Def2-ECP basis set and effective core potential.[92] A method is considered to be size-consistent if the total energy for a supersystem comprised of noninteracting subsystems \(A\) and \(B\) is the same as the sum of the energies of the individual subsystems, \(E(A+B)=E(A)+E(B)\).
In our case, the He\(\cdots\)Xe interaction energy at 40 Å separation should obviously be zero if a method is properly size-consistent, which is precisely the result obtained with the MP2, IEPA/BGE2, and BW-s2 methods. The BW2 method has a large residual correlation energy of 111 meV and xBW2 gives a smaller, but still significant, 1 meV interaction energy for this system. This implies that the size-extensive xBW2 method is not size-consistent, which could have dramatic consequences in calculations on extended systems, for which it was proposed.[58] Finally, as a metric for size-extensivity, Fig. 2 examines the correlation energy per electron in a linear chain of He atoms. A method is considered to be size-extensive if, for any chain of identical subsystems, the total correlation energy grows linearly with the number of electrons in the system. Thus, the slope of each line in Fig. 2 should be zero for a size-extensive method. This is the case for MP2, xBW2, and BW-s2, but not for BW2 or IEPA/BGE2. The Brillouin-Wigner series is infamous for its failure as a many-body theory, and the monotonic decrease in correlation energy per electron of BW2 makes this abundantly clear. IEPA/BGE2 exhibits strange behavior, approaching a slope of zero in the limit of an infinitely large He chain, but with an inverse-power dependence on the number of electrons. In this sense, for finite systems IEPA/BGE2 is not size-extensive, and because finite systems encompass nearly all practical calculations, we consider this a notable failure of IEPA/BGE2. Lastly, we note that for a single He atom the BW2, IEPA/BGE2, and BW-s2 methods yield the same correlation energy, which is expected for two-electron systems. The summary of the findings of all of these tests can be seen in Table 1.

Figure 2: Correlation energy per electron in He chains of increasing size using the cc-pVDZ basis set. The spacing between He atoms was set to 3 Å. All calculations were done in the full basis of all He atoms by including ghost functions for the atoms that are not included explicitly.

So far, we have used the raw form of Eq. 24, but we should note that this form is somewhat arbitrary and may be amenable to a scaling parameter, \(\alpha\), that modulates the extent of regularization in the form \(\alpha\mathbf{R}\). Such a parameter would maintain the size-consistent/extensive nature of the perturbation theory, as \(E=\bar{E}_{0}+\alpha E^{(2)}\) in Eq. 18 and \(\text{tr}(\mathbf{W})=\alpha E^{(2)}\) in this case. Fortunately, the agreement between BW-s2 and BW2 for two-electron systems offers an exact condition by which the parameter \(\alpha\) can be set. For a two-state system BWPT yields the exact energy at second order,[45] so we should expect BW2 and a properly parameterized BW-s2 to achieve the exact dissociation limit for the hydrogen molecule in a minimal (two-orbital) basis set. It can be shown that in such a minimal basis set the regularizer tensor in Eq. 24 reduces to the BW2 correlation energy if and only if \(\alpha=1\) (_i.e._ the unmodified tensor), allowing us to set the value of \(\alpha\) from first principles. Somewhat more laboriously, it can also be shown that at the dissociation limit in this minimal basis the optimal BW2 and BW-s2 amplitude is exactly \(t_{ii}^{aa}=1\), as expected for a two-electron, two-orbital system where the orbitals \(i\) and \(a\) are exactly degenerate.
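The two-level exactness that underpins this choice of \(\alpha\) is simple to verify in isolation. The snippet below (illustrative values, not data from the paper) iterates the self-consistent second-order Brillouin-Wigner equation for a 2×2 CI-style Hamiltonian and compares it against direct diagonalization; MP2, \(-V^{2}/\Delta\), overshoots badly as the gap closes, while the Brillouin-Wigner fixed point reproduces the exact ground-state eigenvalue.

```python
import numpy as np

def bw2_two_level(delta, V, tol=1e-12, max_iter=100000):
    """Fixed-point solution of E = V**2/(E - delta), the self-consistent
    second-order BW energy for the 2x2 Hamiltonian [[0, V], [V, delta]]."""
    E = -V**2 / delta            # MP2 starting guess
    for _ in range(max_iter):
        E_new = V**2 / (E - delta)
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

delta, V = 0.1, 0.5              # a small gap, where MP2 misbehaves
E_exact = np.linalg.eigvalsh(np.array([[0.0, V], [V, delta]]))[0]
print("MP2  :", -V**2 / delta)             # -2.5: wildly overbound
print("BW2  :", bw2_two_level(delta, V))   # matches diagonalization
print("exact:", E_exact)                   # (delta - sqrt(delta^2 + 4V^2))/2
```

The fixed point solves the quadratic \(E^{2}-\Delta E-V^{2}=0\), whose lower root is exactly the ground-state eigenvalue of the two-level Hamiltonian, which is the sense in which second-order BWPT is exact for a two-state system.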
From here, we resort to numerical testing to illustrate the behavior of BW-s2(\(\alpha=1\)) for H\({}_{2}\) dissociation in minimal and non-minimal basis sets. Dissociation curves in both STO-3G[94] and aug-cc-pV5Z basis sets for the \(\alpha=1\) case are shown in Fig. 3a. As one might expect, the STO-3G results with restricted Hartree-Fock (RHF) orbitals show a steep rise to energies that are too high, while the energies using unrestricted Hartree-Fock (UHF) orbitals quickly meet the full configuration interaction (FCI) result for the dissociation limit, leading to the appearance of a Coulson-Fischer point at 1.3 Å.

Figure 3: Bond-stretching potential energy curves for (a) dissociation of the hydrogen molecule with BW-s2(\(\alpha=1\)) in (left) a minimal basis set of two orbitals and (right) a fairly complete basis set, and (b) from left to right, C-C, C=C, and N\(\equiv\)N dissociation curves of ethane, ethene, and nitrogen, respectively. The gray dashed lines mark the asymptotic limit of BW-s2 with RHF orbitals as numerically estimated by a calculation in which the bond length was set to 100,000 Å. The potential energy curves in (b) were calculated using the aug-cc-pVQZ basis set. All equilibrium geometries were optimized at the \(\omega\)B97X-V/Def2-TZVPPD level[93] and the equilibrium bond distance was incremented by 0.1 Å steps to generate the potential surfaces.

What is most interesting is the behavior at the asymptotic limit, where the highest occupied molecular orbital and lowest unoccupied molecular orbital are exactly degenerate and the RHF MP2 energy diverges. In this limit, we find that the appropriately parameterized BW-s2(\(\alpha=1\))/STO-3G theory converges to the exact FCI result regardless of whether or not the initial orbitals were spin-polarized. However, with BW-s2(\(\alpha=1\)) and RHF orbitals in the aug-cc-pV5Z basis set, the FCI limit is no longer attained at second order and we instead find an upper bound to the exact result. Encouragingly, the RHF/UHF difference remains quite small even in the large basis set at roughly 12 mHa, so we shall retain the _ab initio_ \(\alpha=1\) parameter throughout this work. Repeating this exercise by fitting the value of \(\kappa\) such that the \(\kappa\)-MP2/STO-3G energy with RHF orbitals equates to the FCI energy at the asymptotic limit results in an optimal \(\kappa=495.2\) Ha\({}^{-1}\), amounting to what appears to be almost no regularization. However, with a gap of almost zero the \(\Delta\)-dependent regularizers naturally suppress most of the correlation energy, as the extent of regularization is proportional to \((1-\exp[-\kappa\Delta_{ij}^{ab}])\), so the optimal \(\kappa\) value must be large to retain any appreciable amount of correlation. Such a limit of near-degenerate orbitals therefore leads to a situation where the optimal value of \(\kappa\) becomes exponentially sensitive to the particular value of the (very small) energy gap, introducing acute basis set dependence. For instance, using the same value of \(\kappa\) from the STO-3G calculation leads to an RHF-like energy of \(-0.76\) Ha in the aug-cc-pV5Z basis set. Unfortunately, this implies that such _ab initio_ parameterization for \(\Delta\)-dependent regularizers is not appropriate, but these results also showcase a crucial advantage of amplitude-dependent regularization in BW-s2; namely, BW-s2 will predict nonzero correlation energies between orbitals that are exactly degenerate, perhaps improving its performance for statically correlated systems.
Additional potential energy curves that feature single-bond breaking in ethane, double-bond breaking in ethene, and triple-bond breaking in nitrogen are shown in Fig. 3b. Remarkably, BW-s2 succeeds in breaking the C-C sigma bond in ethane without error in the dissociation limit, such that the RHF and UHF solutions are asymptotically equivalent. While it might be expected that BW-s2 performs well in the case of 2-electron, 2-orbital strong correlation, one might be less optimistic about how a double-substitution theory will hold up when multiple bonds are dissociated. Indeed, as the bond order increases, the asymptotic solution of BW-s2 with RHF orbitals deviates further and further from the spin-polarized result, leading to errors of 47 mHa and 233 mHa for ethene and nitrogen, respectively. Despite this, the potential energy curves are smooth and do not yield any particularly surprising results. The performance of BW-s2 in the sigma-bond breaking of H\({}_{2}\) and ethane is highly encouraging for future single-bond breaking applications. We now turn our attention to the statistical performance of the BW-s2 method across several noncovalent interaction (NCI) datasets. The NCI datasets span a wide range of molecular sizes and interaction types. Whereas A24,[95] S22,[17] S66,[96] and the iodine-free subset of X40 (herein called X31)[97] are datasets of small to medium-sized nonbonded molecular complexes with a variety of interaction motifs, the L7 dataset contains mostly nanoscale \(\pi\)-stacking interactions, which are particularly difficult for MP2.[98] To compare with the benchmark complete basis set limit (CBS) coupled-cluster with single, double and perturbative triple substitutions (CCSD(T)) data, all perturbation theory results are extrapolated to the CBS limit using the aug-cc-pVDZ/aug-cc-pVTZ[99; 90] extrapolation scheme from Ref. [100]. We note for the L7 set that we compare to the updated domain-localized pair natural orbital CCSD(T\({}_{0}\))/CBS[101; 102; 103; 104] benchmarks of Lao and coworkers and that we use the heavy-aug-cc-pVDZ/heavy-aug-cc-pVTZ extrapolation method that was recommended therein.[105] The results for the NCI datasets in Fig. 4 compare BW-s2, MP2, and \(\kappa\)-MP2 using several \(\kappa\) parameters for each respective dataset. Regularized perturbation theories outperform MP2 for the S22, S66, and X31 datasets, and are only marginally different from MP2 for the A24 set, where MP2 already performs quite well.

Figure 4: Root-mean-squared errors (log scale) of various second-order perturbation theories against CCSD(T)/CBS reference energies for noncovalent interaction datasets. The optimal (opt) \(\kappa\) parameters for \(\kappa\)-MP2 were set to 1.2 Ha\({}^{-1}\) for S22, 1.45 Ha\({}^{-1}\) for S66, X31, and A24, and 1.1 Ha\({}^{-1}\) for L7 as per Ref. [12].

Figure 5: Root-mean-squared errors of various second-order perturbation theories against theoretical best estimate values for H-atom transfer (HTBH38) and non-H-atom transfer (NHTBH38) datasets. All data were extrapolated to the CBS limit using an aug-cc-pVTZ/aug-cc-pVQZ extrapolation scheme.

The optimal value of \(\kappa\) changes a fair amount between the NCI datasets (1.1\(\leq\kappa\leq\)1.45 Ha\({}^{-1}\)),[12]
so we report results from \(\kappa\)-MP2 with the optimal parameter along with the two previously suggested "universal" parameters \(\kappa=1.1\)[12] and \(\kappa=1.45\).[33] The results obtained with \(\kappa=1.1\) and the optimized value of \(\kappa\), \(\kappa\)(opt), offer consistently low errors across the NCI benchmarks, and \(\kappa=1.45\) performs well across all but the L7 dataset, where the error increases dramatically from 1.3 to 4.2 kcal/mol with \(\kappa\)(opt) and \(\kappa=1.45\), respectively. Across all NCI datasets, BW-s2 performs roughly the same as \(\kappa=1.45\) with slightly larger errors on average. Notably, on the A24 dataset, BW-s2 outperforms all methods, which contrasts with the fact that all \(\kappa\)-MP2 results give errors greater than or equal to MP2. This suggests that BW-s2 has some degree of flexibility in its regularization that is not present in \(\kappa\)-MP2, perhaps hinting at some additional transferability offered by the BW-s2 framework. Overall, it is encouraging to see such similar performance between BW-s2 and one of the suggested "universal" \(\kappa\) parameters, especially given that BW-s2 is parameter-free. This notion of transferability can be further tested by examining H and heavy-atom transfer barrier heights of HTBH38 and NHTBH38,[106, 107] where MP2 performs better without regularization.[12] The data in Fig. 5 show that MP2 still performs better without regularization, but BW-s2 comes very close to this unregularized limit. As the \(\kappa\) parameter in \(\kappa\)-MP2 is adjusted away from the rather large (optimal) value of \(\kappa=1.6\) to either of the two "universal" values of \(\kappa=1.45\) or \(\kappa=1.1\), the errors climb dramatically. This is a clear reminder that \(\kappa\)-MP2 does not truly have a universal parameter that works well for all chemical problems, but the good performance of BW-s2 here seems to indicate that BWPT may be more versatile. The self-consistent nature of the BWPT equations leads to a modulation of the fundamental gap that is informed by the value of \(\mathbf{W}\), which in turn is informed by the amplitudes, introducing a feedback loop that leads to improved transferability of BW-s2 over that of noniterative gap-dependent regularizers. We now consider the nonmultireference subset of the W4-11 thermochemical database,[109] which includes 124 atomization energies, 505 heavy-atom transfer energies, 83 bond-dissociation energies, 20 isomerization energies, and 13 nucleophilic substitution energies of small molecules and radicals. The data in Fig. 6 show that \(\kappa\)-MP2 does not generally improve upon the MP2 results, occasionally making matters worse for heavy-atom transfers, isomerization, and nucleophilic substitution energies. Even the value of \(\kappa=1.45\), which was parameterized on the W4-11 dataset, performs only about as well as MP2 overall. On the other hand, BW-s2 far exceeds the performance of \(\kappa\)-MP2 with tangible reductions in errors across all subsets of W4-11 except for bond-dissociation energies which remain roughly the same. For the whole W4-11 set, the BW-s2 results are markedly better than MP2 and \(\kappa\)-MP2, even rivaling the overall performance of CCSD. The largest improvements offered by BW-s2 are in the total atomization energies, improving upon the MP2 results by 3 kcal/mol. Overall, BW-s2 has a root-mean-squared error (RMSE) of 6.2 kcal/mol for W4-11, improving thermochemical properties relative to MP2 and \(\kappa\)-MP2 by roughly 1.5 kcal/mol. 
These data suggest that the BW-s2 method does not only track well with gap-dependent regularizers for NCIs, but for barrier heights and general thermochemical properties it exceeds their performance, implying that BWPT approaches may be more transferable across chemical problems. Another setting where regularized MP2 seems to thrive whereas MP2 often fails is in transition metal systems.[12] To assess our approach, we report a finite-basis set comparison of MP2, \(\kappa\)-MP2, and BW-s2 against the def2-TZVPP data of the metal-organic reactions (MOR41) dataset in Fig. 7.[13] For transition metal systems, MP2 performs poorly with an RMSE of 12.4 kcal/mol and \(\kappa\)-MP2 with a very low \(\kappa=0.8\) performs quite admirably with an RMSE of 5 kcal/mol. However, this value of \(\kappa\) represents very strong regularization, and values of \(\kappa=1.1\) or \(\kappa=1.45\) are more appropriate for general usage. The error does not increase when going to \(\kappa=1.1\), but it increases noticeably to 7.4 kcal/mol with \(\kappa=1.45\). While Figure 6: Root-mean-squared errors for various second-order perturbation theories with respect to the subsets of thermochemical data within W4-11 which includes total atomization energies (TAE), heavy-atom transfer (HAT), bond-dissociation energies (BDE), isomerization (ISO), and nucleophilic substitution (SN) energies. The CBS limit results for all of W4-11 were obtained with an aug-cc-pVTZ/aug-cc-pVQZ extrapolation scheme and are compared with CCSD data from Ref. [108]. Figure 7: Root-mean-squared error for the MOR41 dataset for MP2, \(\kappa\)-MP2, and BW-s2 using the def2-TZVPP basis along with the def2-ECP for 4d and 5d metal atoms. An optimal value of \(\kappa=0.8\) was determined in Ref. [12]. BW-s2 does not perform as well as \(\kappa\)-MP2 on MOR41, the RMSE is still reduced by 2.4 kcal/mol relative to MP2. Regarding the \(\kappa\)-MP2 results, the parameter \(\kappa=0.8\) is very small and performs poorly for NCIs, barrier heights, and thermochemical properties, suggesting that it is highly adapted to transition metal complexes. While the results in Fig. 7 highlight some limitations in the flexibility of BW-s2, whose errors are most similar to \(\kappa\)-MP2(\(\kappa=1.45\)), it also shows that empirical parameterization can be tailor made for a given class of chemical problem. Overall, BW-s2 provides a satisfactory improvement relative to MP2 for transition metal systems while remaining comparable to \(\kappa\)-MP2 within a more typical \(\kappa\) parameter range. ## V Conclusions We have suggested a novel partitioning of the Hamiltonian into a zero-order part that includes the usual sum of Fock operators along with a regularizer operator that acts to screen the pair correlations in the resultant theory. We cast the second-order Brillouin Wigner energy from this theory into a tensor framework such that orbital invariance was straightforwardly preserved, and we chose a form of the regularizer operator that resulted in a size-consistent and size-extensive second-order energy. We also suggested a general algorithm to solve the self-consistent second-order equation at \(\mathcal{O}(N^{5})\) cost and without the need to store amplitudes. 
Over a small set of single, double, and triple bond dissociations, second-order size-consistent Brillouin-Wigner perturbation theory with a shifted zero-order Hamiltonian (BW-s2) performs encouragingly by dissociating the C-C bond in ethane to an asymptotic limit that is invariant to the spin-polarization of the reference orbitals, while supplying smooth potential energy curves for multiple-bond dissociation in ethene and nitrogen. Our approach is exact for two-electron, two-orbital systems and dissociates minimum basis H\({}_{2}\) to the full configuration interaction limit regardless of the choice of reference orbitals. The BW-s2 approach also performs about as well as the \(\kappa\)-MP2 approach across noncovalent interactions of small molecules, and while performing only slightly worse than \(\kappa\)-MP2(\(\kappa=1.45\)) for metal/organic reaction barriers and noncovalent interaction energies of nanoscale \(\pi\)-stacked systems, BW-s2 still improves significantly upon the MP2 results. Importantly, for broad thermochemical properties BW-s2 outperforms \(\kappa\)-MP2 by a wide margin, even nearing the performance of CCSD. The L7 and MOR41 datasets require exceptionally strong regularization for \(\kappa\)-MP2 to be successful (\(\kappa=1.1\) and \(\kappa=0.8\), respectively). In these cases, BW-s2 does not match the accuracy of \(\kappa\)-MP2 as it generally supplies softer regularization that tends to be more comparable to a more conservative \(\kappa\)-MP2(\(\kappa=1.45\)). So, while BW-s2 is less flexible than empirically parameterized regularizers, it still gives results that are consistent with typical values of \(\kappa\) when the requisite \(\kappa\)-regularizer becomes extreme. All of this was accomplished with an _ab initio_ partitioning of the Hamiltonian, which itself could be parameterized to augment the strength of the regularization. ###### Acknowledgements. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. K. C.-F. acknowledges support from the National Institute Of General Medical Sciences of the National Institutes of Health under Award Number F32GM149165. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. ## Author Declarations ### Conflict of Interest Martin Head-Gordon is a part-owner of Q-Chem, which is the software platform used to perform the developments and calculations described in this work. ### Author Contributions **Kevin Carter-Fenk**: Conceptualization (equal); investigation (equal); writing - original draft (lead); formal analysis (lead); writing - review and editing (equal); Software (lead); funding acquisition (equal). **Martin Head-Gordon**: Conceptualization (equal); investigation (equal); writing - review and editing (equal); funding acquisition (equal); Supervision (lead). ## Data Availability Cartesian coordinates for each point along the bond-dissociation potential energy curves are available in the supplementary material. Any other data that support this study are available from either corresponding author upon reasonable request. ## Appendix ### Additional notes on the form of W Let us consider the simple case of a system with a single doubly-occupied orbital and \(n_{v}\) virtual orbitals. In this case, Eq. 
22 can be iteratively solved directly in the canonical molecular orbital basis, as **W** is trivially diagonal when only one occupied orbital is present (_i.e._, the orbitals are fixed in the canonical representation). The amplitudes take the form \(t_{ii}^{ab}\), and the matrix elements \(W_{ii}\) work out to be, \[W_{ii}=\sum_{ab}t_{ii}^{ab}(ii||ab)=2E_{c}^{\mathrm{IP},i}\;. \tag{43}\] When we consider constructing the full denominator with \(\Delta_{ii}^{ab}\) and \(R_{ii}^{ab}\) (Eq. 24), it becomes apparent that this factor of two is necessary, \[R_{ii}^{ab}=\frac{1}{2}(W_{ii}+W_{ii})=2E_{c}^{\mathrm{IP},i}\,, \tag{44}\] leading to the full denominator, \[\begin{split}\Delta_{ii}^{ab}+R_{ii}^{ab}&=\epsilon_{a}+\epsilon_{b}-2\epsilon_{i}+2E_{c}^{\mathrm{IP},i}\\ &=2E^{\mathrm{IP},i}+2E_{c}^{\mathrm{IP},i}-E^{\mathrm{EA},a}-E^{\mathrm{EA},b}\end{split} \tag{45}\] Therefore the factor of \(2\times\) the ionization energy is required to augment both occupied orbital energies (ionization potentials) by the correlation contribution. We note that analysis of the MP2 correlation energy in terms of Koopmans' theorem has been employed to understand why MP2 energy denominators are typically overestimated, even in manifestly nondegenerate cases; namely, this can be understood in terms of missing particle-hole interactions which would otherwise stabilize the zeroth-order double excitations.[110] The contribution of \(E_{c}^{\mathrm{IP},i}\) is consistently positive, though for any given orbital pair, the corresponding elements \(W_{ij}\) are not necessarily negative. The overall effect is to increase the energy gaps, but \(\mathbf{W}\) still incorporates off-diagonal contributions that destabilize the final orbital energies. ### Proof of BW-s2 Size-Consistency We consider two closed-shell subsystems, \(A\) and \(B\), that are infinitely far apart. As the subsystems are isolated from one another and the BW-s2 energy is orbital invariant by nature of the tensor formulation, we can cleanly ascribe occupied and virtual orbitals to each subsystem. We first examine the form of the \(\mathbf{W}\) matrix, Eq. 25, in this localized orbital basis, \[\mathbf{W}=\begin{bmatrix}\mathbf{W}_{AA}&\mathbf{W}_{AB}\\ \mathbf{W}_{BA}&\mathbf{W}_{BB}\end{bmatrix} \tag{46}\] where \(A\) and \(B\) denote the subsystem. In this form, we can readily rule out the cross terms by examining, \[W_{i_{A},j_{B}}=\frac{1}{2}\sum_{PQR}\sum_{k_{P}a_{Q}b_{R}}t_{i_{A}k_{P}}^{a_{Q}b_{R}}(j_{B}k_{P}||a_{Q}b_{R})+t_{j_{B}k_{P}}^{a_{Q}b_{R}}(i_{A}k_{P}||a_{Q}b_{R}) \tag{47}\] where \(P\), \(Q\), and \(R\) run over \(A\) and \(B\) subsystem indexes. In the case \(P=A\), the integrals \((j_{B}k_{A}||ab)=0\), as do the amplitudes \(t_{j_{B}k_{A}}^{ab}\), and in the case \(P=B\), all integrals \((i_{A}k_{B}||ab)=0\), resulting in \(W_{i_{A},j_{B}}=0\;\forall\;k_{P}\). The only terms that survive are those where \(i\), \(j\), and \(k\) belong to the same subsystem. Of those, the integrals \((i_{A}k_{A}||a_{B}b_{A})=(i_{A}k_{A}||a_{A}b_{B})=(i_{A}k_{A}||a_{B}b_{B})=0\), as these are excitations from occupied orbitals in one subsystem to virtual orbitals in another. Thus, the only integrals that are nonzero are those that satisfy \(\{i,j,k,a,b\}\in A\) or \(\{i,j,k,a,b\}\in B\). The matrix \(\mathbf{W}\) therefore takes the block-diagonal form, \[\mathbf{W}=\begin{bmatrix}\mathbf{W}_{AA}&\mathbf{0}\\ \mathbf{0}&\mathbf{W}_{BB}\end{bmatrix} \tag{48}\] since every element coupling orbitals on different subsystems vanishes. 
This result verifies that \(\mathbf{W}\) itself does not couple non-interacting subsystems, just like the Fock matrix \(\mathbf{F}\) or the two-electron integral tensor. In addition, just like \(\mathbf{F}_{AA}\), contributions to \(\mathbf{W}_{AA}\) are completely independent of the presence of \(B\), and vice-versa. Next, we turn to the correlation energy, which can be written in the dressed orbital basis as, \[E_{c}^{(2)}=-\frac{1}{4}\sum_{ijab}\frac{|(ij||ab)|^{2}}{\Delta_{ij}^{ab}+\frac{1}{2}(W_{ii}+W_{jj})} \tag{49}\] Since we ruled out any cross terms from \(\mathbf{W}\) above, \(W_{ii}\) and \(W_{jj}\) simply shift the orbital energies \(i\) and \(j\) within their respective subsystems, independent of the presence of other subsystems. This establishes that the BW-s2 energy of each subsystem is independent of the other, since we can tag each molecular orbital with a subsystem index and repeat the exercise that was carried out above for the elements of \(\mathbf{W}\), \[E_{c}^{(2)}=-\frac{1}{4}\sum_{PQRS}\sum_{i_{P}j_{Q}a_{R}b_{S}}\frac{|(i_{P}j_{Q}||a_{R}b_{S})|^{2}}{\Delta_{i_{P}j_{Q}}^{a_{R}b_{S}}+\frac{1}{2}(W_{i_{P}i_{P}}+W_{j_{Q}j_{Q}})} \tag{50}\] Hence, we find that the only terms that survive are those with \(\{i,j,a,b\}\in A\) or \(\{i,j,a,b\}\in B\). Thus, BW-s2 is size-consistent and, by trivial extension, size-extensive.
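To make the self-consistent structure above concrete, here is a minimal numerical sketch of the diagonal-shift iteration implied by Eqs. 47 and 49, run on synthetic antisymmetrized integrals rather than a real molecule. The restriction to the diagonal elements \(W_{ii}\), the amplitude sign convention, and all tensor values are assumptions of this sketch, not the paper's working implementation.

```python
import numpy as np

def bw_s2_correlation_energy(eps_o, eps_v, eri, max_iter=50, tol=1e-10):
    """Toy BW-s2 loop: eri[i, j, a, b] holds synthetic <ij||ab> integrals."""
    no = len(eps_o)
    # Bare denominators Delta_ij^ab = e_a + e_b - e_i - e_j
    delta = (eps_v[None, None, :, None] + eps_v[None, None, None, :]
             - eps_o[:, None, None, None] - eps_o[None, :, None, None])
    W = np.zeros(no)          # diagonal level shifts W_ii only (an assumption)
    e_old = np.inf
    for _ in range(max_iter):
        denom = delta + 0.5 * (W[:, None, None, None] + W[None, :, None, None])
        t = eri / denom       # sign chosen so W_ii = 2E_c^{IP,i} comes out positive
        W = np.einsum('ijab,ijab->i', t, eri)   # diagonal of Eq. 47
        e_c = -0.25 * np.sum(eri**2 / denom)    # Eq. 49
        if abs(e_c - e_old) < tol:
            break
        e_old = e_c
    return e_c

rng = np.random.default_rng(0)
no, nv = 3, 6
eps_o = -1.0 - rng.random(no)                   # occupied orbital energies
eps_v = 0.5 + rng.random(nv)                    # virtual orbital energies
eri = 0.1 * rng.standard_normal((no, no, nv, nv))
eri = eri - eri.transpose(1, 0, 2, 3)           # antisymmetry in i <-> j
eri = eri - eri.transpose(0, 1, 3, 2)           # antisymmetry in a <-> b
print(bw_s2_correlation_energy(eps_o, eps_v, eri))
```

Because the positive shift enlarges the denominators, the correlation energy is damped relative to the bare MP2 value, mirroring the regularizing behavior discussed in the main text.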
2301.08425
Graphene as Infrared Light Sensor Material
The infrared (IR) photoresponse of graphene synthesized by an atmospheric chemical vapor deposition (CVD) system using a mixture of hydrogen and methane gases was studied. The IR sensor devices were fabricated using graphene films transferred onto a SiO2 substrate by a lift-off process. The quality of graphene was investigated with Raman spectroscopy and optical microscopy. The photoresponse was recorded under the illumination of IR light of wavelength 850 nm and intensity of around 0.216 mW/cm^2. The effects of temperature and hydrogenation on photoconductivity were also studied. It was found that the transient response and recovery times decreased with increasing temperature. Hydrogenation also caused a significant decrease in the photoresponse of the device. Although the net change in the photoresponse for IR light was lower at low illumination intensity levels, the transient responses were observed to be around 100 times faster than those of recently reported CNT-based IR sensors.
Ahalapitiya H. Jayatissa, Madhav Gautam
2023-01-20T05:25:39Z
http://arxiv.org/abs/2301.08425v1
# Graphene as Infrared Light Sensor Material ###### Abstract The infrared (IR) photoresponse of graphene synthesized by an atmospheric chemical vapor deposition (CVD) system using a mixture of hydrogen and methane gases was studied. The IR sensor devices were fabricated using graphene films transferred onto a SiO\({}_{2}\) substrate by a lift-off process. The quality of the graphene was investigated with Raman spectroscopy and optical microscopy. The photoresponse was recorded under the illumination of IR light of wavelength 850 nm and intensity of around 2.16 \(\mu\)W/mm\({}^{2}\). The effects of temperature and hydrogenation on photoconductivity were also studied. It was found that the transient response and recovery times decreased with increasing temperature. Hydrogenation also caused a significant decrease in the photoresponse of the device. Although the net change in the photoresponse for IR light was lower at low illumination intensity levels, the transient responses were observed to be around 100 times faster than those of recently reported CNT-based IR sensors. Keywords: CVD graphene, single layer, infrared light, photoconductivity, 2D sensor materials ## 1 Introduction Optoelectronic devices working in the near-infrared (NIR) range (800 - 2000 nm) are in constant demand for different applications [1-4]. Significant work has been reported on the fabrication of optoelectronic devices using NIR materials [5-12]. In recent years, single-walled carbon nanotubes (SWCNTs) have been investigated extensively as a semiconducting material for IR sensors because of their strong absorption behavior in the NIR region [7-12]. One of the key challenges in developing NIR detectors is achieving an ultrafast optical response in the sensor material [5-8]. Recently, strong absorption behavior in the NIR region has been reported for thermally reduced graphene oxides [1,2]. This provides a pathway to use graphene as an optoelectronic material for IR detection. Although the optical properties of graphene in the visible region have been reported by many researchers [13-15], we have not found any research work related to the photoresponse of graphene in the IR region of the spectrum. In this paper, the photoresponse of a macro-scale graphene film is reported under different conditions. Graphene is a monolayer carbon film with a thickness of around 0.32 nm [13-15], where carbon atoms are arranged in a two-dimensional hexagonal lattice structure. It can be thought of as a single layer peeled off from the graphite stack. It has evolved as an interesting material due to its unique physical and electrical properties [16]. This material is different from most conventional semiconductors because of its zero-bandgap semiconducting behavior [17]. For example, graphene-based transistor devices may operate much faster than traditional silicon devices due to the high intrinsic carrier mobility (\(\sim 2\times 10^{5}\) cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\)) [1, 2, 18]. With its high mechanical strength and low density (2.2 g/cm\({}^{3}\)), it may also find applications in nano-robotics [19, 20]. We have investigated the photoconductivity of graphene layers synthesized by atmospheric chemical vapor deposition (CVD) of CH\({}_{4}\) on a copper substrate. The devices were fabricated by transferring the CVD graphene onto a SiO\({}_{2}\)/Si substrate. 
The investigations were carried out to understand the temperature dependence and hydrogenation effect on the photoconductivity of graphene in the NIR region. Although the net change in the photoresponse for IR light was lower at low illumination intensity levels (2.16 \(\mu\)W/mm\({}^{2}\)), the transient responses were observed to be around 100 times faster than the photoconductivity of CNTs for NIR light. ## 2 Experimental Procedures The growth of the graphene films was carried out on a copper (Cu) substrate (25 \(\mathrm{\SIUnitSymbolMicro m}\) thick) in an alumina tube furnace system under the flow of methane (CH\({}_{4}\)) and hydrogen (H\({}_{2}\)) gases. The copper substrate (99.999% pure, Alfa Aesar) was heated in the tube furnace under a 150 standard cubic centimeters per minute (sccm) flow of a hydrogen-argon mixture (10% H\({}_{2}\), 90% Ar) and annealed at 1100 \({}^{\mathrm{o}}\)C for one hour. After annealing, graphene deposition was carried out by passing a mixture of methane and argon (5% CH\({}_{4}\), 95% Ar), followed by immediate cooling. The graphene deposited on copper by the CVD method was transferred to a SiO\({}_{2}\)/Si substrate by wet etching of the Cu [15, 21-23]. The thickness of the thermally grown SiO\({}_{2}\) was 118 nm, as confirmed by UV spectrometry [24]. The Raman spectra of these films were recorded with an excitation wavelength of 530 nm. In order to fabricate the IR sensors, a thin layer of gold (about 100 nm) was coated onto the transferred graphene film by a vacuum evaporation method. The gold electrodes were patterned by lithography followed by etching of the gold with aqueous KI/I\({}_{2}\) solution. The spacing and the length of these electrodes were 6 mm and 4 mm, respectively. Fig. 1 shows the schematic diagram of the fabricated IR sensor and the photoresponse measurement circuit. The device was biased with a constant voltage (1.0 V) during collection of the data. To understand the reflection of light from graphene, the reflectances of the bilayer substrate (SiO\({}_{2}\)/Si) and tri-layer substrate (graphene/SiO\({}_{2}\)/Si) were measured with a double-beam UV/visible spectrometer (Shimadzu). The reflectance spectra were investigated in the spectral range 300-1100 nm. ## 3 Results and Discussions ### Surface Characterization Raman spectroscopy was used to characterize the quality of the graphene. The Raman spectrum of graphene exhibits characteristic bands corresponding to its vibrational modes. Fig. 2 shows the as-measured Raman spectra of the graphene films produced on the SiO\({}_{2}\) surface. The spectrum was normalized with respect to the intensity of the 2D band. The peaks at around 1580 cm\({}^{-1}\) and 2660 cm\({}^{-1}\) indicate the G band and the 2D band, respectively, which are characteristic Raman peaks of graphene. It has been reported that defect-free monolayer graphene can be identified from the characteristic features of the Raman band intensities [25]. The intensity of the 2D band is \(\sim\)2 times larger than that of the G band, suggesting the presence of low-defect graphene on the SiO\({}_{2}\) surface. This is also supported by the weak intensity of the D band (1350 cm\({}^{-1}\)). ### Photoconductivity #### 3.2.1 Dynamic response Fig. 3 shows the dynamic response of the photoconductivity of the graphene film to NIR light at room temperature. Fig. 3(a) shows the response and recovery of the device when the IR light was turned on and off, respectively, whereas Fig. 3(b) shows the same characteristic for one cycle only. 
The intensity of the IR light source used was 2.16 \(\mu\)W/mm\({}^{2}\) at the device surface. Although the intensity level was very low, a clear photoresponse of the device was measured. The photogeneration of carriers can be primarily attributed to the creation of bands at the defects of the graphene sheets. When graphene is deposited on a copper plate, defects develop at the grain boundaries of the polycrystalline copper film. We believe that these defects are responsible for the creation of localized photoactive regions, which contribute to the photogeneration of carriers [26, 27]. The photoresponse to the step change in illumination could be characterized by an exponential function. In both the rise and decay of the photocurrent, the experimental data were well fitted by the exponential form [10], \[I=I_{o}+A_{o}\exp\!\left(\frac{-t}{\tau}\right). \tag{1}\] Here, \(I\) is the current, \(t\) is time, and \(I_{o}\), \(A_{o}\), and \(\tau\) are constants. Fig. 4(a) and 4(b) show the fits of the response in the form explained above. The data analysis indicated that the time constants were 10 ms and 31 ms for the rise and fall of the photocurrent, respectively.

Figure 2: Raman spectra of graphene transferred to a silicon wafer (SiO\({}_{2}\) + Si), scaled with respect to the maximum peak.

Figure 3: The photoresponse of the device due to IR light for (a) different cycles and (b) one cycle.

Figure 4: The photoresponse of the device due to IR light for (a) response and (b) recovery.

Figure 5: The photoresponse of the device due to IR light at (a) 50 \({}^{\mathrm{o}}\)C and (b) 100 \({}^{\mathrm{o}}\)C.

#### 3.2.2 The effect of temperature on photoconductivity Fig. 5 shows the effect of temperature on the photoconductivity of graphene. The photoconductivity was tested at 50 \({}^{\mathrm{o}}\)C and 100 \({}^{\mathrm{o}}\)C, respectively. During the experiment, the device was heated to the desired temperature for 30 minutes to ensure thermal equilibrium. The transient response times of the device were 10.26 ms and 6.57 ms, and the transient recovery times were 12.55 ms and 5.91 ms, at 50 \({}^{\mathrm{o}}\)C and 100 \({}^{\mathrm{o}}\)C, respectively. No significant difference in the transient response of the device was found when the device temperature was increased from room temperature to 50 \({}^{\mathrm{o}}\)C, while the transient response time decreased by 40% when the temperature was changed from 50 \({}^{\mathrm{o}}\)C to 100 \({}^{\mathrm{o}}\)C. Similarly, the transient recovery time decreased by 60% when the temperature was changed from room temperature to 50 \({}^{\mathrm{o}}\)C, and it decreased by 50% when the temperature was changed from 50 \({}^{\mathrm{o}}\)C to 100 \({}^{\mathrm{o}}\)C. On the other hand, the amplitude of the photocurrent did not show any significant difference when the temperature was changed from room temperature to 50 \({}^{\mathrm{o}}\)C, whereas it decreased by 50% when the temperature was changed from 50 \({}^{\mathrm{o}}\)C to 100 \({}^{\mathrm{o}}\)C. The slight change in the photocurrent at the high-temperature measurement (100 \({}^{\mathrm{o}}\)C) relative to the low temperature (50 \({}^{\mathrm{o}}\)C) can be attributed to carrier generation being influenced by the thermal effect associated with defects, together with the small bandgap of graphene. Furthermore, the increase in current due to the thermal effect of the IR light is less pronounced at elevated temperatures because the change in temperature caused by IR heating is negligible. Therefore, the total photocurrent generation can be attributed to the photogeneration of carriers in the graphene. 
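As an aside, the exponential fit of Eq. (1) is straightforward to reproduce. The sketch below fits a synthetic transient with SciPy; all current values and the 10 ms time constant are hypothetical placeholders, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, i0, a0, tau):
    # Eq. (1): I = I_o + A_o * exp(-t / tau)
    return i0 + a0 * np.exp(-t / tau)

# Synthetic transient: decay toward i0 = 1.0 with tau = 10 ms, plus noise.
t_ms = np.linspace(0.0, 60.0, 200)
rng = np.random.default_rng(1)
i_meas = step_response(t_ms, 1.0, 0.3, 10.0) + 0.005 * rng.standard_normal(t_ms.size)

popt, _ = curve_fit(step_response, t_ms, i_meas, p0=(1.0, 0.2, 5.0))
print(f"fitted time constant: {popt[2]:.2f} ms")
```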
#### 3.2.3 The effect of hydrogenation on photoconductivity The effect of hydrogenation on the photoresponse of the device was tested at 100 \({}^{\mathrm{o}}\)C for different hydrogen flow rates. The device was heated at 100 \({}^{\mathrm{o}}\)C for 30 min to ensure thermal equilibrium, followed by a constant hydrogen flow for more than one hour until the graphene surface was saturated with adsorbed hydrogen. The saturation was confirmed by monitoring resistance changes against time using a two-point probe method. Fig. 7 shows the photoresponse of the device at different flow rates of hydrogen. The transient response times of the device were 6.05 ms and 7.27 ms at 50 sccm and 100 sccm flow rates of hydrogen gas, respectively, and the corresponding values during the recovery process were 7.1 ms and 7.81 ms. The transient response of the device was found to differ by 17% in going from 50 to 100 sccm hydrogen flow rates. Table 1 lists the transient response and recovery times at different temperatures to compare the effect of hydrogenation.

Table 1: Transient response and recovery times at different temperatures.

| Temperature (\({}^{\mathrm{o}}\)C) | \(\tau_{1}\) in vacuum (ms) | \(\tau_{1}\) in hydrogen, 100 sccm (ms) | \(\tau_{2}\) in vacuum (ms) | \(\tau_{2}\) in hydrogen, 100 sccm (ms) |
| --- | --- | --- | --- | --- |
| Room temp. | 10.04 | 13.90 | 31.26 | 44.29 |
| 100 | 6.57 | 7.24 | 5.91 | 7.81 |

The photoresponse of the device in hydrogen was also calculated and compared with that of the device in vacuum at different temperatures. The response of the device was calculated using the formula given by [25], \[S=\left(\frac{I_{1}-I_{2}}{I_{2}}\right)\times 100\%. \tag{2}\] where \(I_{1}\) and \(I_{2}\) are the currents with and without IR light, respectively. Generally, the response is expressed as a percentage. Fig. 8 shows the comparison of the responses due to the hydrogenation effect at 100 \({}^{\mathrm{o}}\)C. The response was found to decrease by around 57% when the device was hydrogenated at a 50 sccm flow rate of hydrogen gas, while it decreased by around 68% when the flow rate was increased to 100 sccm. The effect of hydrogenation was even more substantial at room temperature than at 100 \({}^{\mathrm{o}}\)C; the flow of hydrogen was continued during cooling. The decrease in the response of the device due to the hydrogenation effect was observed as expected. The semiconducting behaviour of graphene is attributed to the formation of bands at the defect sites [26]. When hydrogenation occurs, the conductivity can be reduced to a certain extent due to the passivation of the defect sites with hydrogen.

Figure 6: The photoresponse of the device in IR light due to hydrogenation at 100 sccm of hydrogen flow for (a) different cycles and (b) one cycle.

Figure 7: The photoresponse of the device in IR light due to hydrogenation at (a) 50 sccm and (b) 100 sccm flow rates of hydrogen gas at 100 \({}^{\mathrm{o}}\)C.
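To make the response metric of Eq. (2) concrete, the following minimal sketch computes \(S\) for two illustrative current pairs; the numbers are hypothetical placeholders, not the measured values.

```python
import numpy as np

def response_percent(i_with_ir, i_without_ir):
    # Eq. (2): S = (I1 - I2) / I2 * 100%
    return (i_with_ir - i_without_ir) / i_without_ir * 100.0

i_dark = np.array([1.00, 1.00])   # currents without IR light (a.u.)
i_ir = np.array([1.12, 1.05])     # with IR light: pristine vs. hydrogenated (toy)
print(response_percent(i_ir, i_dark))  # hydrogenation lowers the response
```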
## 4 Conclusion In this paper, a graphene-based IR sensor was investigated under different conditions in terms of the photoresponse in the presence of light. The device was fabricated with graphene bridging gold electrodes, and the presence of a monolayer of graphene was confirmed by Raman spectroscopy. The effect of temperature on the photoconductivity was recorded at different temperatures. The photoconductivity of the graphene films was interpreted as arising from the creation of localized bands at defect sites at the grain boundaries of the CVD graphene. The device exhibited a temperature-dependent photoresponse behavior. The transient response and recovery times were reduced in the high-temperature region, indicating that the thermal effect due to heating was more pronounced than the heating effect caused by the IR light. It also revealed that the net photocurrent change due to IR light decreases because the charge carriers responsible for conduction are already excited to the conduction band by thermal heating before the IR light is applied. The hydrogenation effect on the photoconductivity was also studied. Hydrogenation caused a significant decrease in the photoresponse of the device at high temperature, as expected, because the hydrogen is believed to be adsorbed at the grain boundaries, passivating the defects that are responsible for the photoconductivity. As the device was illuminated with a low intensity (\(\sim\) 2.16 \(\mu\)W/mm\({}^{2}\)) of IR light, the net change in the photocurrent was not significant. However, the transient responses were observed to be around 100 times faster than those of recently reported CNT-based IR sensors, which may lead to the application of graphene in ultrafast optical response devices. This research was supported by a grant (Grant #: ECCS 0925783) from the National Science Foundation (NSF) of the USA.
2310.08364
Map2Schedule: An End-to-End Link Scheduling Method for Urban V2V Communications
Urban vehicle-to-vehicle (V2V) link scheduling with shared spectrum is a challenging problem. Its main goal is to find the scheduling policy that can maximize system performance (usually the sum capacity of each link or their energy efficiency). Given that each link can experience interference from all other active links, the scheduling becomes a combinatorial integer programming problem and generally does not scale well with the number of V2V pairs. Moreover, link scheduling requires accurate channel state information (CSI), which is very difficult to estimate with good accuracy under high vehicle mobility. In this paper, we propose an end-to-end urban V2V link scheduling method called Map2Schedule, which can directly generate V2V scheduling policy from the city map and vehicle locations. Map2Schedule delivers comparable performance to the physical-model-based methods in urban settings while maintaining low computation complexity. This enhanced performance is achieved by machine learning (ML) technologies. Specifically, we first deploy the convolutional neural network (CNN) model to estimate the CSI from street layout and vehicle locations and then apply the graph embedding model for optimal scheduling policy. The results show that the proposed method can achieve high accuracy with much lower overhead and latency.
Lihao Zhang, Haijian Sun, Jin Sun, Ramviyas Parasuraman, Yinghui Ye, Rose Qingyang Hu
2023-10-12T14:35:38Z
http://arxiv.org/abs/2310.08364v1
# Map2Schedule: An End-to-End Link Scheduling Method for Urban V2V Communications ###### Abstract Urban vehicle-to-vehicle (V2V) link scheduling with shared spectrum is a challenging problem. Its main goal is to find the scheduling policy that can maximize system performance (usually the sum capacity of each link or their energy efficiency). Given that each link can experience interference from all other active links, the scheduling becomes a combinatorial integer programming problem and generally does not scale well with the number of V2V pairs. Moreover, link scheduling requires accurate channel state information (CSI), which is very difficult to estimate with good accuracy under high vehicle mobility. In this paper, we propose an end-to-end urban V2V link scheduling method called Map2Schedule, which can directly generate the V2V scheduling policy from the city map and vehicle locations. Map2Schedule delivers comparable performance to the physical-model-based methods in urban settings while maintaining low computation complexity. This enhanced performance is achieved by machine learning (ML) technologies. Specifically, we first deploy the convolutional neural network (CNN) model to estimate the CSI from the street layout and vehicle locations and then apply the graph embedding model for the optimal scheduling policy. The results show that the proposed method can achieve high accuracy with much lower overhead and latency. V2V, spectrum sharing, link scheduling, convolutional neural network, graph neural network. ## I Introduction As vehicles become increasingly intelligent, thanks to powerful onboard chips and sensors, it is becoming both inevitable and progressively feasible to enable them to establish connections with other vehicles (V2V), infrastructure (V2I), and pedestrians (V2P) for advanced transportation applications. This is the core concept of vehicle-to-everything (V2X) technologies [1], which aim to make the transportation system more comfortable, environmentally friendly, efficient, and safer than ever. Among the various types of V2X communications, V2V plays a crucial role due to its potential to facilitate autonomous driving and provide timely incident alerts. Nevertheless, a significant portion of V2V communications will occur in dense urban environments, potentially resulting in substantial mutual interference and calling for appropriate link scheduling mechanisms. To better utilize the wireless resources, a common practice is to divide the wireless channels into small resource blocks in either the time or frequency domain. While this can help mitigate interference, spectrum utilization may not be optimal and can potentially be further enhanced. Dynamic spectrum sharing, on the other hand, allows multiple users to access the same spectrum resources. The active users are allowed to share the spectrum resources when their mutual interference is deemed low. As a result, many optimal scheduling algorithms have been developed using diverse mathematical frameworks. Nonetheless, these algorithms tend to suffer from the drawback of high computational complexity. For example, [2] proposed a global-optimal scheduling algorithm called _S-MAPEL_, which has high complexity with respect to pairing the links. To mitigate the concern of high computational complexity, certain sub-optimal algorithms have been developed. One of the early contributions in this area is the greedy heuristic search algorithm _FlashLinQ_[3]. 
The authors in [4] proposed a general framework in which link scheduling is optimal for the whole generalized-degrees-of-freedom (GDoF) region and within a constant gap of the capacity region when it is subject to the treating-interference-as-noise (TIN) condition. Based on TIN, _ITLinQ_[5] and _ITLinQ+_[6] were then developed to find the sub-optimal scheduling policy by sequential link selection. However, the performance of _FlashLinQ_, _ITLinQ_, and _ITLinQ+_ heavily depends on empirical parameter design. In [7], the authors proposed another iterative algorithm, _FPLinQ_, derived from fractional programming. Although this algorithm outperforms the above algorithms with no need for parameter fine-tuning, its iterative process still has high computational complexity that can hardly meet real-time V2V communication requirements under high mobility. Furthermore, it is worth noting that the above methods are based on the assumption that full CSI (including direct and interference channels) is available. The direct CSI for a transmitter-receiver pair can be acquired during their communication process. However, the link scheduling problem needs not only the direct CSI but also the information about all the interfering channels, a task that proves to be quite challenging and time-consuming in high-mobility scenarios. With the surge in machine learning (ML) technologies, some end-to-end ML-based scheduling methods have been developed. In [8], the authors proposed "spatial deep learning", an end-to-end ML approach that generates link scheduling based on spatial distances. In [9], the authors proposed a fast link scheduling method based on the graph embedding model by treating the V2V links as graphs. However, the above schemes only considered the transmitter-receiver pair distance during scheduling, which may work well in rural, wild, or flat environments, as distance there has a strong correlation with CSI. Distance-based scheduling can lead to significant scheduling policy deviations in urban V2V scenarios where buildings and foliage are dense. To achieve real-time and optimal V2V link scheduling, it is essential to have both accurate full CSI estimation and low-complexity scheduling algorithms in place. Furthermore, it is crucial that the scheme scales well with an increasing number of links. In this work, we explore a new approach driven by ML and present the following contributions. * We develop an end-to-end link scheduling method named _Map2Schedule_, which can directly generate a scheduling policy requiring only city maps and vehicle locations, thereby significantly reducing signaling overhead and computation complexity. * _Map2Schedule_ is trained on physical-model-based simulation data and performs \(80\%\) higher on sum-rate metrics compared to existing methods such as [8] and [9]. Meanwhile, _Map2Schedule_ exhibits low latency and can schedule 50 V2V links within 0.2 seconds, a reduction of two orders of magnitude compared to our performance baseline. This runtime was measured without any optimization applied to the computation process. Besides, the proposed algorithm can handle an arbitrary number of V2V links, which can greatly facilitate transfer learning and practical implementation. * We have also explored transfer learning for better adaptation to unforeseen scenarios. The results show that our model has good transferability and can quickly adapt to new tasks with a few-shot method. The rest of the paper is organized as follows. 
In Section II, the end-to-end wireless link scheduling problem is presented and divided into two sub-problems. Section III introduces the dataset and the system model. In Section IV, the experiment settings and the results are presented. Finally, the paper is concluded in Section V. ## II Background and Problem Formulation As shown in Fig. 1, we consider a typical urban V2V scenario which consists of a city map and \(N\) overlaid wireless links denoted as \(d_{i}\in\mathcal{D}=\{d_{1},d_{2},d_{3},\cdots,d_{N}\}\). The gray areas are roads and the green ones are buildings. Each \(d_{i}\) refers to a pair of a transmitter \(t_{i}\) and its corresponding receiver \(r_{i}\). Consequently, the link set \(\mathcal{D}\) corresponds to \(N\) transmitters \(t_{i}\in\mathcal{T}=\{t_{1},t_{2},t_{3},\cdots,t_{N}\}\) and \(N\) receivers \(r_{i}\in\mathcal{R}=\{r_{1},r_{2},r_{3},\cdots,r_{N}\}\). To boost spectrum efficiency, all the links share the whole frequency resource. We denote \(h_{ii}\) as the CSI from \(t_{i}\) to \(r_{i}\), which is also the direct channel gain of link \(d_{i}\), and \(h_{ji}\) as the CSI from \(t_{j}\) to \(r_{i}\), which is the interference gain from link \(d_{j}\) to link \(d_{i}\). In the considered urban V2X scenario, we aim to find the optimal link scheduling policy (the on or off state of each \(d_{i}\)) that maximizes the system sum-rate. In this paper, we only schedule the links to either be activated with power \(p_{i}\) or be turned off, which is represented by a binary indicator \(x_{i}\in\{0,1\}\) for link \(d_{i}\). Without loss of generality, we assume all links have the same power \(p_{i}\). \(\mathbf{x}=[x_{1},x_{2},\cdots,x_{N}]\) denotes the link scheduling policy of \(\mathcal{D}\). The signal-to-interference plus noise ratio (SINR) of \(d_{i}\) can be written as Eq. (1), where \(\sigma_{N}^{2}\) denotes the additive white Gaussian noise (AWGN) power, and the communication rate of \(d_{i}\) can be written as Eq. (2), where \(B\) is the system bandwidth. \[\textit{SINR}_{i}(\mathbf{x})=\frac{x_{i}p_{i}|h_{ii}|^{2}}{\sigma_{N}^{2}+\sum_{j\neq i}x_{j}p_{j}|h_{ji}|^{2}}, \tag{1}\] \[R_{i}(\mathbf{x})=B\log_{2}(1+\textit{SINR}_{i}(\mathbf{x})). \tag{2}\] From Eq. (2), we can see that the communication rate of \(d_{i}\) is impacted by interference. The optimization problem can be formulated as (3), which means we should find a proper scheduling policy \(\mathbf{x}\) that only activates the "good" links with high channel gain and low interference to others. \[\begin{split}&\max_{\mathbf{x}}\sum_{i=1}^{N}R_{i}(\mathbf{x})\\ & s.t.\quad x_{i}\in\{0,1\},\forall x_{i}\in\mathbf{x}.\end{split} \tag{3}\] This problem naturally splits into two sub-problems. The first sub-problem is to obtain \(h_{ij},\forall i,j\in\{1,2,\ldots,N\}\), the \(N\times N\) CSI values between all possible transmitter-receiver pairs. The second sub-problem is to generate the link scheduling policy based on the CSI values. ### _Channel Estimation_ In practice, CSI is obtained by periodically sending pilot signals. However, this incurs high overhead and latency, not to mention the difficulty of implementing a centralized algorithm for the indirect CSI pairs. In the literature, one of the most accurate approaches is physical-model-based simulation, like the dominant path model (DPM) [10] and ray tracing [11]. Although such simulations can provide precise CSI prediction, it can take minutes to generate the full CSI. In the past few years, some ML-based CSI prediction methods have emerged. 
However, most are developed for single transmitter-receiver pairs, like [12, 13, 14]. Such single-pair methods are not applicable to large-scale V2V cases, where the number of links can reach hundreds. Fortunately, with recent advancements in ML research, new deep learning methods were developed that can emulate the physical model with lower computation complexity.

Fig. 1: A typical urban V2V scenario.

### _Link Scheduling_ The second sub-problem is to find an optimal or sub-optimal link scheduling policy based on the available CSI. In fact, given all \(h_{ij}\), Eq. (3) is a classical optimization problem. Traditional scheduling algorithms use various optimization techniques to find the optimal \(\mathbf{x}\). As discussed in the introduction, _S-MAPEL_[2], _FlashLinQ_, _ITLinQ_, _ITLinQ+_, and _FPLinQ_ represent the past decade's efforts in this field. Nevertheless, they all suffer from high computation complexity and poor scalability, and hence are not suitable for the considered V2V scenarios. Recently, some ML-based scheduling algorithms with low computation complexity have been proposed. The closest works are [8] and [9]. Both works are end-to-end scheduling predictions and only take link distance as the system input. Their performance builds on the implicit correlation between distance and CSI and therefore will not work well where distance and channel have large deviations, such as in urban environments. Besides, they can also suffer from poor generalizability, which hinders practical applications. ## III Scheduling Via Map2Schedule We present the details of the proposed _Map2Schedule_ approach, which is designed to generate the scheduling policy \(\mathbf{x}\) directly from the information provided in Fig. 1. The proposed system is illustrated in Fig. 2. The first component is a modified CNN model from _RadioUNet_[15], which predicts the CSI from the 2-dimensional (2-D) city map and vehicle locations. The second component is the graph embedding model, which uses the predicted CSI to generate a near-optimal scheduling policy in real time. ### _Dataset_ In this paper, our dataset consists of _RadioMapSeer_[15] and the near-optimal scheduling policy \(\mathbf{x}\) generated by _FPLinQ_. _RadioMapSeer_ is a dataset of 56,000 radiomaps. As shown in Fig. 3, each radiomap is an image of the signal strength distribution, and a brighter pixel corresponds to a stronger signal at that location. These radiomaps were simulated on 700 2-D city maps. For each map, there are 80 radiomaps simulated by WinProp with the DPM algorithm, corresponding to 80 transmitter locations. The map size is 256 m \(\times\) 256 m. For our scheduling task, we built V2V scenarios from the _RadioMapSeer_ dataset. These scenarios were configured based on two key parameters: the number of links and the range of link distances. The number of links is configured as 10, 20, 30, 40, and 50, and the range of link distance is configured as 2 meters to 32 meters for short-distance groups and 2 meters to 65 meters for long-distance ones. We extracted the CSI matrix from the radiomap and then utilized the _FPLinQ_ algorithm to obtain the link scheduling policy as the baseline (ground truth). ### _Radiomap Prediction_ To predict the CSI, we deployed _RadioUNet_, a CNN model based on the _UNet_ architecture. The _UNet_ architecture is symmetric, with a shape similar to the letter "U", as shown in Fig. 2. This architecture consists of multiple levels, with two network blocks on each level; a rough sketch of one such level is given below. 
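The following minimal PyTorch sketch illustrates one encoder/decoder level with its skip connection. Channel sizes, kernel choices, and layer names here are illustrative assumptions, not the actual _RadioUNet_ configuration.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One left-side block: convolution, activation, then pooling."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)  # down-sampling

    def forward(self, x):
        feat = self.conv(x)          # kept for the skip connection
        return feat, self.pool(feat)

class DecoderBlock(nn.Module):
    """One right-side block: transposed convolution fused with the skip path."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_in, c_out, 2, stride=2)  # up-sampling
        self.conv = nn.Sequential(nn.Conv2d(2 * c_out, c_out, 3, padding=1), nn.ReLU())

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([x, skip], dim=1))  # encoder-decoder path

enc, dec = EncoderBlock(2, 16), DecoderBlock(16, 16)
x = torch.randn(1, 2, 256, 256)   # city-map channel + transmitter-location channel
skip, down = enc(x)
out = dec(down, skip)             # back to (1, 16, 256, 256)
```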
The left-side blocks perform convolution (down-sampling), activation, and pooling, and are generally used for feature extraction or encoding. On the right side, the corresponding blocks perform transposed convolution (up-sampling), activation, and pooling, acting as feature expansion or decoding. Unlike other CNN models, the prominent feature of _UNet_ is the encoder-decoder path on each level, shown as the long blue arrows in Fig. 2. These paths send the output of the left-side blocks into the input of the right-side blocks. Furthermore, _RadioUNet_ has two connected _UNet_ architectures. Both of the _UNet_ architectures have ten levels, but the first _UNet_ has more channels than the second one. The inputs of the first _UNet_ consist of one 2-D city map and one transmitter-location image. Its output is a coarse radiomap prediction. After that, the inputs of the second _UNet_ are nearly identical to those of the first, except that the output of the first _UNet_ is added. The original _RadioUNet_ is trained for just one transmitter. We have modified its input structure to support multiple direct pairs. Since all links use the same frequency, the \(N\)-pair CSI can be obtained by stacking up the individual radiomaps and fine-tuning the output layer. Besides, the original _RadioUNet_ exhibits two issues. The first issue is black edges, as illustrated on the left side of Fig. 3, where noticeable black edges surround the "bright faces" of buildings. These black edges correspond to mispredicted zero channel gain at those locations. One conjecture is that the model has learned the pattern of black edges associated with the "dark faces" of buildings and mistakenly applied this knowledge to "bright faces" during prediction. To rectify this issue, a straightforward solution is to detect abrupt changes in non-building areas and replace the abnormal pixels with their largest neighbors. The second issue is "transmitter fading", as shown on the right side of Fig. 3: transmitters in edge areas "fade" significantly. This issue can be attributed to biased data in the training set, where few transmitters are located at the map edges. Methods to mitigate data bias include adding compensatory data and data augmentation, such as segmenting the original radiomaps to generate additional samples with transmitters at the edges. In our implementation, we addressed this issue by dropping transmitters at selected locations.

Fig. 2: Map2Schedule structure.

Fig. 3: Black edges and transmitter fading.

### _Graph Embedding Link Scheduling_ Essentially, link scheduling is about determining the relationship between the direct CSI (\(h_{ii}\)) and the interference to all other receiver nodes (\(h_{ij},j\neq i\)). Such a model naturally connects with a graph structure. In this paper, we applied and modified a popular graph embedding model, _structure2vec_[16], which is a powerful method for data with graph structure. Its main idea is constructing embedded hidden variables via a nonlinear learnable function for each graph data point (each node in the graph). These hidden variables represent the whole graph from the perspective of each node, which means these variables encapsulate information about the node itself and its relationship to the whole graph. Depending on the problem objectives, these embedded hidden variables can be processed by classifiers, regression models, or other appropriate models. 
After determining the whole structure, the embedding learnable function and the subsequent task model are optimized simultaneously through each backward propagation pass. To implement the graph embedding model on link scheduling tasks, the first step is to represent each V2V scenario as a graph. While it might seem intuitive to treat each transmitter and receiver as a node, doing so would leave each node without a node feature. This is unsuitable for graph embedding models, which depend heavily on node features. Considering that our task is to find "good" links, treating each link as a node is a more natural idea and makes it easier to design the subsequent classifier. Denote \(G(V,E,\alpha)\) as the directed graph of all the links in one V2V scenario; \(v_{i}\in V\) denotes the node \(v_{i}\) in the node set \(V\), which represents the link \(d_{i}\). \(e_{ij}\in E\) is the edge from node \(v_{i}\) to node \(v_{j}\). \(\alpha_{ij}\) denotes the edge feature of \(e_{ij}\), which is equal to \(h_{ij}\), and \(\alpha_{ii}\) denotes the node feature of \(v_{i}\), which is equal to \(h_{ii}\). From graph embedding theory, the embedded hidden variable \(\mu_{i}\) for node \(v_{i}\) is updated iteratively as in Eq. (4). \(N(v_{i})\) is the set of neighbors of \(v_{i}\). \[\mu_{i}^{(t+1)}=\Gamma(\alpha_{ii},\{\alpha_{ji}\}_{j|v_{j}\in N(v_{i})},\{\mu_{j}^{(t)}\}_{v_{j}\in N(v_{i})}). \tag{4}\] Here, \(\Gamma\) represents the nonlinear learnable function for inferring the hidden variables, and there are several approximate inference algorithms. In this paper, we choose the mean-field inference algorithm, as shown in Eq. (5). \[\mu_{i}^{(t+1)}=\sigma(W_{1}\alpha_{ii}+W_{2}\sum_{j|v_{j}\in N(v_{i})}\alpha_{ji}+W_{3}\sum_{j|v_{j}\in N(v_{i})}\mu_{j}^{(t)}). \tag{5}\] To facilitate model computation, the CSI is quantized to a \(p\)-dimensional one-hot code. This quantization compresses the continuous CSI feature space into \(p\) discrete categories. Since we aim to perform binary classification on each link, the \(p\) categories of CSI should both provide sufficient accuracy and enable faster link scheduling learning. In this paper, the node and edge features are embedded into 8-dimensional one-hot codes, the hidden variables are iterated twice, and the embedded hidden variables are represented as 32-dimensional vectors. Consequently, the shapes of \(W_{1}\) and \(W_{2}\) in Eq. (5) are both \(32\times 8\), while \(W_{3}\) is \(32\times 32\). These hyperparameter settings were found to be optimal through experiments in [9]. After the graph embedding model is constructed, the classifier for each hidden variable can be built. The classifier contains just one hidden layer, so the size of the first weight is \(32\times 64\), and the second is \(64\times 2\). It may appear counter-intuitive that the classifier only processes one hidden variable at a time, lacking a global view. However, from Eq. (5), we can see that each embedded hidden variable is calculated from the whole graph, and the CSI values are formulated similarly to _FPLinQ_, which means those hidden variables should be able to mimic how the _FPLinQ_ algorithm does link scheduling. Meanwhile, the single classifier, as well as the preceding embedding model, can accommodate any number of inputs. Hence, the proposed system architecture can accommodate an arbitrary number of V2V links without requiring any modifications; a minimal sketch of this embedding-and-classification step follows.
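The sketch below implements the mean-field update of Eq. (5) and the per-link classifier with the dimensions stated in the text (\(p=8\), 32-dimensional hidden variables, two iterations). The ReLU choice for \(\sigma\), the random one-hot features, and the untrained weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

N, p, d = 50, 8, 32  # links, one-hot size, hidden dimension
W1 = nn.Linear(p, d, bias=False)   # 32 x 8
W2 = nn.Linear(p, d, bias=False)   # 32 x 8
W3 = nn.Linear(d, d, bias=False)   # 32 x 32

alpha_node = torch.eye(p)[torch.randint(0, p, (N,))]     # quantized h_ii
alpha_edge = torch.eye(p)[torch.randint(0, p, (N, N))]   # quantized h_ji
mu = torch.zeros(N, d)
idx = torch.arange(N)

for _ in range(2):  # the hidden variables are iterated twice
    in_edges = alpha_edge.sum(dim=0) - alpha_edge[idx, idx]  # sum_{j != i} alpha_ji
    nbr_mu = mu.sum(dim=0, keepdim=True) - mu                # sum_{j != i} mu_j
    mu = torch.relu(W1(alpha_node) + W2(in_edges) + W3(nbr_mu))  # Eq. (5)

classifier = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
x = classifier(mu).argmax(dim=-1)  # binary on/off policy, one decision per link
```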
This capability offers significant advantages for practical implementation, transfer learning, and more. ### _Training Process_ The proposed system is trained as two interconnected components in a supervised fashion. For the first component, the CNN model is trained using the _RadioMapSeer_ dataset. As mentioned before, the 56,000 radiomaps were simulated on 700 city maps, with 80 different transmitter locations on each map. We divide the radiomaps based on the different city maps, allocating them as follows: 400 for the training set, 100 for the validation set, and 200 for the test set. The mean square error (MSE) loss function is employed, and the Adam optimization algorithm is used with a learning rate of \(10^{-4}\). In the second component, the training input consists of V2V CSI extracted from DPM-simulated radiomaps, and the target link scheduling policy is generated by the _FPLinQ_ algorithm, serving as the ground truth. The dataset split follows 800/1000/4000 for train/validation/test, respectively. The reason for such a small training set is as follows. The work in [9] shows that training on a small dataset yields similar performance compared with large datasets. Moreover, since our problem involves a smaller map size, the links cluster and overlap more in these scenarios, which leads to a low link activation ratio and makes our problem more challenging. Due to the low activation ratio, the original graph embedding model cannot learn the pattern quickly and precisely, as shown in the results section. For example, the predicted sum-rate can only achieve about \(60\%\) of the ground-truth _FPLinQ_ performance. This shortfall comes from the biased data, with many more "bad" links in each V2V scenario. With the normal binary cross entropy (BCE) loss function, the model treats all input samples equivalently and is therefore trained mostly on the "bad" link samples. Consequently, the trained model has low accuracy on "good" links, which are more important to the sum-rate performance in our scenarios. To alleviate the bias, we deployed a weighted cross entropy loss function. By adjusting the class weights, the accuracy on good links weighs more in the loss function, driving the model parameters along the direction that improves the accuracy on good links and the sum-rate metric. In addition, given the variations in the CSI distribution caused by different link numbers and ranges of link distance, we conducted a grid search for the optimal training hyperparameters for each scenario setting. ## IV Experimental Results Here, we present the results of our experiments. The training process and test experiments were implemented in PyTorch. ### _Sum-rate Performance_ As mentioned before, we choose the _DPM_ simulation for accurate CSI and then use the _FPLinQ_ algorithm for near-optimal scheduling as the \(100\%\) performance baseline (DPM-FP). In Fig. 4, "M2S 2m-32m" and "M2S 2m-65m" represent the sum-rate performance of _Map2Schedule_ under short (\(d_{i}\) from 2 m to 32 m) and long (\(d_{i}\) from 2 m to 65 m) link distances, respectively. "DF-FP 2m-32m" and "DF-FP 2m-65m" represent the performance of the distance-based fading model (with only vehicle locations) combined with _FPLinQ_ (DF-FP), which is the 100% performance baseline of papers [7, 9]. The distance-based fading model applied the short-range outdoor model ITU-1411 with a distance-dependent path loss. _Map2Schedule_ can achieve over \(90\%\) of the baseline performance on the sum-rate metric. 
However, the DF-FP method can only achieve approximately \(50\%\). This verifies that the street layout together with vehicle locations, as used in our model, can predict an almost optimal scheduling policy, while distance alone cannot provide satisfactory performance. ### _Impacts of Weighted BCE Loss Function_ We compared the scheduling performance of the original graph embedding model and the second part of _Map2Schedule_ on the same CSI matrix input. As mentioned above, we deployed the weighted BCE loss function to address the scenario bias. We compare two metrics: average accuracy, which is the number of correctly predicted \(x_{i}\) over \(N\), and average sum-rate, which is the ratio of the objective function in Eq. (3) given the predicted \(x_{i}\) to that of DPM-FP. The results in Table I reveal a slight reduction in average accuracy when using the weighted BCE loss function. This reduction can be attributed to the modification making the model focus more on "good" link (high data rate) samples. Therefore, the accuracy on "bad" link (low data rate) samples decreased, and these samples contribute more to the total accuracy. However, the sum-rate metric depends significantly on the accuracy of the "good" links when they constitute a small proportion. As a result, _Map2Schedule_ with the weighted BCE loss function improved by an average of 35% on the sum-rate metric, and by 76% in scenarios with 50 pairs of longer links.

TABLE I: Impacts of weighted BCE loss function

| Metric \ Method | Map2Schedule | Original graph model |
| --- | --- | --- |
| Average accuracy | 79.08% | **82.35%** |
| Average sum-rate | **91.27%** | 67.29% |
| Sum-rate in 50-long scenarios | **94.71%** | 53.83% |

Fig. 4: Sum-rate performance.

### _Transferability between Different Scenarios_ Transfer learning is a widely used ML technology that aims to improve model performance on a target domain by transferring knowledge from the original domain. In V2V scenarios, the vehicle density can change significantly within a single day, leading to very different CSI patterns. In this section, we tested our long-distance group model, which carries the original domain knowledge, on the short-distance group data, which is the target domain. "Zero-shot" refers to applying the model without any view of the target domain, and "few-shot" refers to viewing only a few samples of the target domain. "Non-transfer" refers to the performance of models completely trained on the target domains. From the performance in Fig. 5, we can see that our model exhibited promising transferability across different scenarios. However, the performance of the transferred knowledge is lower than that of the original domain knowledge. Few-shot adaptation slightly improved the performance, but further research on transfer learning is still needed for practical implementation. ### _Run Time and Computation Complexity_ All the experiments were conducted on our desktop with an AMD Ryzen 5955WX processor and an NVIDIA RTX A6000 graphics card. The average run time is shown in Fig. 6. Specifically, the run time of DPM-FP is on the order of \(10^{1}\) seconds, while the run time of _Map2Schedule_ is on the order of \(10^{-1}\) second. Besides, further improvements can be achieved by carefully handling quantization in the data flow of the embedding model. This result indicates that _Map2Schedule_ has the desired low computational complexity and real-time capability. 
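As a concrete illustration of the class weighting described above, the following sketch up-weights the "activate link" class in the cross-entropy loss. The specific weight values are placeholders; the paper selects its hyperparameters per scenario via grid search.

```python
import torch
import torch.nn as nn

logits = torch.randn(50, 2, requires_grad=True)   # classifier outputs, one row per link
labels = torch.randint(0, 2, (50,))               # FPLinQ policy as ground truth
# Up-weighting class 1 makes mistakes on "good" (active) links cost more,
# trading a little total accuracy for a better sum-rate.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 4.0]))
loss = loss_fn(logits, labels)
loss.backward()
```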
## V Conclusion In this paper, we proposed an ML-based end-to-end wireless link scheduling approach called _Map2Schedule_, which is specifically designed for urban V2V scenarios. _Map2Schedule_ can generate a near-optimal link scheduling policy from vehicle locations and the city map. In urban environments, our approach remarkably outperforms distance-based scheduling methods and competes with the state-of-the-art physical-model-based ones. Meanwhile, _Map2Schedule_ requires few computational resources, allowing it to meet the real-time requirements of V2V communications.
2301.06408
Numerical analysis of pit-to-crack transition under corrosion fatigue using a stochastic pit generation algorithm
Corrosion fatigue is a major threat to the integrity of marine structures. However, the combined damaging effect of corrosion and fatigue is not well understood due to the uncertainties associated with the stochastic nature of pitting corrosion. A pitting corrosion defect could have a quite complex morphology that varies randomly from one case to another. Nevertheless, in numerical corrosion fatigue studies, the complex pit morphology is often idealized using an overly simplified geometry with a smooth surface because of the difficulties involved in modelling defects with the irregular and random shapes of pitting corrosion. The present study investigates the influence of such geometrical simplifications on the results of numerical corrosion fatigue analyses. For this purpose, an isolated complex-shaped pit, generated using a hierarchical stochastic algorithm scripted in Python and linked with Abaqus/CAE, is developed in a Q235 steel dog-bone specimen. The numerical results obtained from this model are compared with those from another model containing an idealized counterpart of the irregular pit. A discussion on the effect of pit morphology on the stress/strain history and distribution and fatigue crack initiation is presented.
Mojtaba Mokhtari, Xintong Wang, Jorgen Amdahl
2023-01-16T12:50:15Z
http://arxiv.org/abs/2301.06408v1
# Numerical analysis of pit-to-crack transition under corrosion fatigue using a stochastic pit generation algorithm ###### Abstract Corrosion fatigue is a major threat to the integrity of marine structures. However, the combined damaging effect of corrosion and fatigue is not well understood due to the uncertainties associated with the stochastic nature of pitting corrosion. A pitting corrosion defect could have a quite complex morphology that varies randomly from one case to another. Nevertheless, in numerical corrosion fatigue studies, the complex pit morphology is often idealized using an overly simplified geometry with a smooth surface because of the difficulties involved in modelling defects with the irregular and random shapes of pitting corrosion. The present study investigates the influence of such geometrical simplifications on the results of numerical corrosion fatigue analyses. For this purpose, an isolated complex-shaped pit, generated using a hierarchical stochastic algorithm scripted in Python and linked with Abaqus/CAE, is developed in a Q235 steel dog-bone specimen. The numerical results obtained from this model are compared with those from another model containing an idealized counterpart of the irregular pit. A discussion on the effect of pit morphology on the stress/strain history and distribution and fatigue crack initiation is presented. ## 1 Introduction Exposed to aggressive marine environments, marine structures, and particularly their steel components, are prone to corrosion. Although there are different corrosion protection methods, corrosion still occurs, accumulates, and results in accidents (Maureen et al., 2013). Both the economic and environmental losses due to corrosion are enormous (Gudze and Melchers, 2008; Odusote et al., 2021; Pipeline Significant Incident 20 Year Trends | PHMSA; Zhang et al., 2018). Typically, pitting corrosion, a localized dissolution of metals usually initiated after the breakdown of a protective coating or paint (Bhandari et al., 2015; Szklarska-Smialowska, 1986), is the most common and concerning type of corrosion in the marine industry. Many research studies have investigated the effect of different parameters on pitting corrosion in marine environments (Gudze and Melchers, 2008; Jeffrey and Melchers, 2009; Melchers, 2003, 2009, 2018; Soares et al., 2009). Within corrosion pits, there are often significant stress concentrations and large strain gradients. Therefore, under cyclic loading, a fatigue crack can be initiated from the pit site. The transition from pitting corrosion to fatigue crack has been studied for several decades by many researchers, yet it is not well understood. This is mostly due to the stochastic nature of pitting corrosion and its morphology. In early research on this topic, Kondo (1989) suggested a competition model between pitting corrosion and crack propagation. He simplified the problem by modelling the pit as an elliptical crack, while pitting corrosion can have a quite complex 3-dimensional (3D) geometry. It was assumed that the pit-to-crack transition occurs when the fatigue crack growth rate exceeds the pitting corrosion growth rate. Several studies followed this competition model (Dolley et al., 2000; Rokhlin et al., 1999; Zhou and Turnbull, 1999). 
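To illustrate the competition idea in code, the sketch below compares a power-law pit growth rate against a Paris-law crack growth rate and locates the depth where the crack outpaces the pit. All constants are illustrative placeholders, not calibrated material data.

```python
import numpy as np

def pit_growth_rate(a, c_p=1e-6, n=3.0, f=1.0):
    # Pit depth grows as a(t) = c_p * t**(1/n); eliminating t gives
    # da/dt = (c_p / n) * (a / c_p)**(1 - n), divided by frequency f for da/dN.
    return (c_p / n) * (a / c_p) ** (1.0 - n) / f

def crack_growth_rate(a, dsigma=100.0, C=1e-11, m=3.0, Y=1.12):
    # Paris law da/dN = C * dK**m with dK = Y * dsigma * sqrt(pi * a).
    dK = Y * dsigma * np.sqrt(np.pi * a)
    return C * dK ** m

a = np.logspace(-5, -3, 200)  # candidate pit depths, 10 um to 1 mm (metres)
crack_wins = crack_growth_rate(a) > pit_growth_rate(a)
if crack_wins.any():
    transition = a[np.argmax(crack_wins)]
    print(f"pit-to-crack transition depth ~ {transition * 1e6:.0f} um (toy numbers)")
else:
    print("pit growth dominates over the whole depth range (toy numbers)")
```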
Further research considered the effect of pits, their sizes, and occasionally their shapes on corrosion fatigue to develop models for the pit-to-crack transition, yet considerable simplifications were again made in modelling the pit and/or crack shape (Medved et al., 2004; Sadananda and Vasudevan, 2020; Zhao et al., 2018). The stress concentration factor caused by pitting corrosion has been analysed in several other studies, and again simplistic geometries are employed to model the shape of the individual pits of interest (Cerit, 2019; Huang et al., 2014; Shojai et al., 2022; Xu and Wang, 2015). Despite numerous investigations on pitting corrosion fatigue, the combined damaging effect of corrosion and fatigue is still not well understood. As noted earlier, this is mostly due to the uncertainties associated with the stochastic nature of pitting corrosion. The morphology of the pitting corrosion defect could be quite complex such that the pit topography could
2304.05907
Diffusion models with location-scale noise
Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it. We wanted to determine which noise distribution (Gaussian or non-Gaussian) led to better generated data in DMs. Since DMs do not work by design with non-Gaussian noise, we built a framework that allows reversing a diffusion process with non-Gaussian location-scale noise. We use that framework to show that the Gaussian distribution performs the best over a wide range of other distributions (Laplace, Uniform, t, Generalized-Gaussian).
Alexia Jolicoeur-Martineau, Kilian Fatras, Ke Li, Tal Kachman
2023-04-12T15:24:33Z
http://arxiv.org/abs/2304.05907v1
# Diffusion models with location-scale noise ###### Abstract Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it. We wanted to determine which noise distribution (Gaussian or non-Gaussian) led to better generated data in DMs. Since DMs do not work by design with non-Gaussian noise, we built a framework that allows reversing a diffusion process with non-Gaussian location-scale noise. We use that framework to show that the Gaussian distribution performs the best over a wide range of other distributions (Laplace, Uniform, t, Generalized-Gaussian). ## 1 Introduction Diffusion models are powerful generative models that generate high-quality and diverse data. These methods inject Gaussian noise into the data through a Forward Diffusion Process (FDP), and they learn to reverse the process to go from noise to data. There are many ways to define diffusion models: Score-Based Models (SBMs) learn to predict the score (gradient log density), while (non-score-based) Diffusion Models (DMs) learn to predict the added Gaussian noise in order to remove it from the noisy data. SBMs and DMs generally rely on Gaussian noise. A priori, there is no apparent reason why Gaussian noise would be needed as opposed to other types of noise. Very recent works have started exploring non-Gaussian noise. Bansal et al. (2022) and Anonymous (2023) devise their own diffusion-like frameworks to sample from arbitrary distributions by going from dataset 1 to dataset 2; in both papers, they find that using non-Gaussian distributions as the second dataset (instead of Gaussian noise) significantly worsens the quality of the generated data. More related to our work, Deasy et al. (2021) shows that SBMs, where we learn the score of a Generalized Normal (GN) distribution, lead to significantly worse results when moving away from the Gaussian distribution (which corresponds to the GN distribution with \(\beta=2\)). In this paper, we aim to answer the question of whether there exist non-Gaussian distributions that perform better than the Gaussian distribution in (non-score-based) DMs. Our work generalizes the DMs with learnable mean and variance by Bao et al. (2022a,b) to location-scale family noise distributions, and we test this framework on a variety of noise distributions. ## 2 Denoising Diffusion Probabilistic Models (DDPM) Let \(x_{0}\) be real data from the data distribution and \(z\) be a random sample from \(\mathcal{N}(0,1)\). Assume \(t\in\{0,1,\ldots,T\}\). ### Forward process \(q(x_{t+1}|x_{t})\) In Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020), one defines transition steps of the following type: \[x_{t+1}=\tilde{f}(t)x_{t}+\tilde{g}(t)z,\] where \(\tilde{f}(t)\) is a scaling term for the data and \(\tilde{g}(t)\) is a scaling term for the noise. The noise process is such that \(\tilde{f}(t)=\sqrt{\alpha_{t}}\) and \(\tilde{g}(t)=\sqrt{1-\alpha_{t}}\) for some \(\alpha_{t}\in[0,1]\). Let \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\); then by the properties of the Gaussian distribution, this means that \(x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}z\), and we approximately have that \(x_{T}\sim\mathcal{N}(0,1)\). At \(x_{T}\), we end up with a prior distribution that does not depend on the real data. Since our goal is data generation, we want to reverse the process from noise \(x_{T}\) to data \(x_{0}\).
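As a concrete illustration of the closed-form forward sampling just described, the sketch below draws \(x_t\) directly from \(x_0\). The linear \(\beta\) schedule is a common illustrative choice, not necessarily the paper's.

```python
import numpy as np

def forward_sample(x0, t, alphas, rng):
    """One-shot DDPM forward sample:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * z,
    where abar_t is the cumulative product of the per-step alphas."""
    abar = np.cumprod(alphas)[t]
    z = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * z, z

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # illustrative linear beta schedule
alphas = 1.0 - betas
x0 = rng.standard_normal((3, 32, 32))   # stand-in for an image
xt, z = forward_sample(x0, t=T - 1, alphas=alphas, rng=rng)
print(xt.std())  # close to 1: x_T is approximately standard normal
```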
### Estimation We can estimate the joint distribution \(q(x_{0},x_{1},\ldots,x_{T})\) with the following parametrization: \[p(x_{0},x_{1},\ldots,x_{T})=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}).\] It can be shown that optimizing the variational lower bound is equivalent to minimizing \(D_{KL}(q(x_{t}|x_{t+1},x_{0})\,\|\,p_{\theta}(x_{t}|x_{t+1}))\) for all \(t\in[1,\ldots,T-1]\). From the Markov property, we know that \(q(x_{t}|x_{t-1})=q(x_{t}|x_{t-1},x_{0})\). We can use Bayes' rule to obtain a closed form for \[q(x_{t-1}|x_{t},x_{0})=\frac{q(x_{t}|x_{t-1})q(x_{t-1}|x_{0})}{q(x_{t}|x_{0})}\] given that all three terms of the equation are known and have a closed form. It can be shown that the variational lower bound optimization can be reduced to minimizing \(D_{KL}(q(x_{t-1}|x_{t},x_{0})\,\|\,p_{\theta}(x_{t-1}|x_{t}))\) for all \(t\in[2,\ldots,T]\), where \(q(x_{t-1}|x_{t},x_{0})\) is a closed-form Gaussian distribution depending on \(x_{t}\) and \(x_{0}\). The Gaussian distribution has a known variance term that does not need to be estimated. Given that \(q(x_{t-1}|x_{t},x_{0})\) is Gaussian distributed with known variance, its mean is the only parameter left to be estimated. As shown by Ho et al. (2020) and Nichol and Dhariwal (2021), directly minimizing the KL divergence works poorly. It can be shown that \(q(x_{t-1}|x_{t},x_{0})\) only depends on \(x_{0}\) and the noise \(z\); thus we can instead estimate \(\mathbb{E}_{q(x_{t}|x_{0})}[z|x_{t}]\) and use the closed-form solution of \(q(x_{t-1}|x_{t},x_{0})\) to estimate its mean. Thus estimating the expectation of \(z\) given \(x_{t}\) is all you need to reverse the diffusion using DDPM. ## 3 Generalized denoising diffusion ### Forward process \(q(x_{t}|x_{0})\) Contrary to DDPM and other diffusion models, our generalized framework directly samples from \(q(x_{t}|x_{0})\) rather than sampling from \(q(x_{t+1}|x_{t})\) one step at a time. We thus directly assume that \[x_{t}=f(t)x_{0}+g(t)z\sim F(f(t)x_{0},g(t)),\] where \(F\) is any distribution of the location-scale family (Gaussian, Laplace, Uniform, ...), and thus \(z\) has a distribution \(F(0,1)\). The noise \(z\) corresponds to the added noise/corruption. Similar to most diffusion models, we assume diagonal scaling components, thus i.i.d. noise corruptions. In our setting, we make no assumptions about the in-between steps \(q(x_{t}|x_{t-1})\). In the Gaussian case, the transition steps are just Gaussian. However, when \(z\) is non-Gaussian, the distribution of that transition step can be extremely complicated and intractable. Nevertheless, this \(q(x_{t}|x_{t-1})\) is unknown and unimportant to us in this framework, as will be seen next. ### Reverse process Our goal is to sample from \(q(x_{t-1}|x_{t})\) so that we can reverse the diffusion process from noise to data. However, as mentioned, we do not know \(q(x_{t}|x_{t-1})\), so we cannot try to match this term; it also means that we cannot get the closed-form solution for \(q(x_{t-1}|x_{t},x_{0})\) using Bayes' rule, as it depends on the unknown transition probability \(q(x_{t}|x_{t-1})\). Thus, we cannot use the original DDPM approach discussed in Section 2. We show below how estimating the distribution of the noise \(z\) given \(x_{t}\) allows us to directly sample from \(q(x_{t-1}|x_{t})\) by plugging the sample from \(q(z|x_{t})\) into a deterministic equation.
From the forward equation, we know that \[x_{0}=\frac{1}{f(t)}x_{t}-\frac{g(t)}{f(t)}z.\] Thus, if we could sample from that \(z\) conditional on \(x_{t}\), we could effectively sample from \(q(x_{0}|x_{t})\). Furthermore, taking a forward step \(q(x_{t-1}|x_{0})\) with the same \(z\), we get that: \[x_{t-1} =f(t-1)x_{0}+g(t-1)z \tag{1}\] \[=\frac{f(t-1)}{f(t)}x_{t}+\left(g(t-1)-\frac{f(t-1)g(t)}{f(t)} \right)z\] (2) \[=\bar{f}(t,t-1)x_{t}+\bar{g}(t,t-1)z, \tag{3}\] where \(\bar{f}(t,s)=\frac{f(s)}{f(t)}\) and \(\bar{g}(t,s)=g(s)-\frac{f(s)g(t)}{f(t)}\). Thus, _by sampling from \(q(z|x_{t})\), we can deterministically recover a sample from \(q(x_{t-1}|x_{t})\)_. ### Estimation We can use variational methods to estimate \(z\) as \(z(x_{t})\). Since we know that \(z\) is a sample from the distribution \(F(0,1)\) in the forward process, we propose to estimate it as \(z(x_{t})\sim F(\mu_{\theta}(x_{t}),\sigma_{\theta}(x_{t}))\) in the reverse process; this is a generalization of the variational approximation done in Extended-DDPM (Bao et al., 2022a,b) to the non-Gaussian case. Since this is a location-scale family, the reverse steps are approximated as: \[x_{t-1} =\bar{f}(t,t-1)x_{t}+\bar{g}(t,t-1)F(\mu_{\theta}(x_{t}),\sigma_{ \theta}(x_{t})) \tag{4}\] \[=\bar{f}(t,t-1)x_{t}+\bar{g}(t,t-1)\mu_{\theta}(x_{t})+\bar{g}(t,t-1)\sigma_{\theta}(x_{t})F(0,1) \tag{5}\] To estimate \(\mu_{\theta}\) and \(\sigma_{\theta}\), the location and scale of the noise distribution, one can use KL divergence minimization or, equivalently, Maximum Likelihood Estimation (MLE). However, similar to (Ho et al., 2020; Nichol and Dhariwal, 2021; Bao et al., 2022a,b), we found this objective generally less numerically stable, and impossible to use for some distributions (such as the uniform distribution, due to the bounds on the support). In the non-Gaussian case, KL divergence minimization (or MLE) cannot be solved analytically or leads to complicated equations, making the optimization more challenging and unstable. To solve this issue, we use the Method of Moments (MoM). The MoM seeks to estimate a distribution by matching all the moments \(\mathbb{E}[z^{k}]\), for \(k=0,1,\ldots,\infty\). Thankfully, in the case of location-scale family distributions, we only need two moments, \(\mathbb{E}[z]\) and \(Var[z]\), to estimate the location and scale parameters of the distribution. Thus, all we need is to estimate \(\mathbb{E}[z|x_{t}]\) and \(Var[z|x_{t}]\) and then extract the location and scale terms of the noise distribution. MLE and MoM are equivalent in the Gaussian case, but using the MoM is much more stable and simpler when handling non-Gaussian distributions, so we use it. Since we can only sample from \(q(x_{t}|x_{0})\), we cannot estimate the expectation from multiple \(x_{0}\) given one \(x_{t}\) directly. We thus make use of Monte Carlo by estimating the moments as \[\mathbb{E}[z|x_{t}]\approx\tilde{\mu}_{\theta_{1}}(x_{t})=\operatorname*{arg\, min}_{\theta_{1}}\mathbb{E}_{q(x_{t}\mid x_{0},z)q(z)}[(z-\tilde{\mu}_{\theta_{1} }(x_{t}))^{2}]\] and \[Var[z|x_{t}]\approx\tilde{\sigma}^{2}_{\theta_{2}}(x_{t})=\operatorname*{arg\, min}_{\theta_{2}}\mathbb{E}_{q(x_{t}\mid x_{0},z)q(z)}[((z-\tilde{\mu}_{\theta_{1} }(x_{t}))^{2}-\tilde{\sigma}^{2}_{\theta_{2}}(x_{t}))^{2}].\] From the MoM, for most distributions, we can easily extract the location \(\mu_{\theta_{1}}(x_{t})\) and scale \(\sigma_{\theta_{2}}(x_{t})\) parameters from these approximations of the two moments \(\mathbb{E}[z]\) and \(Var[z]\).
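A minimal sketch of one generalized reverse step follows: the estimated moments of \(q(z|x_t)\) are converted to location and scale for the chosen family via its known variance factor, a noise sample is drawn, and Eq. (3) is applied. The placeholder schedules and stand-in moment "networks" are assumptions for illustration only, not the paper's trained models.

```python
import numpy as np

# Variance factors for a few location-scale families, assuming a standard
# member F(0,1): Gaussian Var = sigma^2; Laplace(mu, b) has Var = 2 b^2;
# a uniform on [mu - sqrt(3) sigma, mu + sqrt(3) sigma] has Var = sigma^2.
VAR_FACTOR = {"gaussian": 1.0, "laplace": 2.0, "uniform": 1.0}

def reverse_step(xt, t, f, g, mu_tilde, var_tilde, family, rng):
    """One generalized reverse step: convert the estimated moments of z | x_t
    into location/scale by the method of moments, sample z ~ F(mu, sigma),
    then apply x_{t-1} = fbar * x_t + gbar * z (Eq. (3) in the text)."""
    mu = mu_tilde(xt)
    sigma = np.sqrt(var_tilde(xt) / VAR_FACTOR[family])
    if family == "gaussian":
        z = rng.normal(mu, sigma)
    elif family == "laplace":
        z = rng.laplace(mu, sigma)
    else:  # uniform
        z = rng.uniform(mu - np.sqrt(3) * sigma, mu + np.sqrt(3) * sigma)
    fbar = f(t - 1) / f(t)
    gbar = g(t - 1) - f(t - 1) * g(t) / f(t)
    return fbar * xt + gbar * z

rng = np.random.default_rng(1)
f = lambda t: np.sqrt(1.0 - t / 1000.0)   # illustrative f/g schedules
g = lambda t: np.sqrt(t / 1000.0)
xt = rng.standard_normal((3, 32, 32))
mu_tilde = lambda x: np.zeros_like(x)     # placeholders for the moment networks
var_tilde = lambda x: np.ones_like(x)
x_prev = reverse_step(xt, t=500, f=f, g=g, mu_tilde=mu_tilde,
                      var_tilde=var_tilde, family="laplace", rng=rng)
```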
This allows us to easily generalize to most distributions and use the same loss functions in all cases with minimal effort. As an example, the Laplace distribution has \(\mathbb{E}[z]=\mu\) and \(Var[z]=2\sigma^{2}\). Thus \(\mu_{\theta_{1}}(x_{t})=\tilde{\mu}_{\theta_{1}}(x_{t})\) and \(\sigma_{\theta_{2}}^{2}(x_{t})=\frac{1}{2}\tilde{\sigma}_{\theta_{2}}^{2}(x_{t})\). ### Similarities and differences to existing DMs If we do not estimate \(Var[z|x_{t}]\), our training process for the Gaussian model is equivalent to the one in DDPM (Ho et al., 2020), and the sampling process to DDIM, in which case the distribution \(q(z|x_{t})\) is estimated using the single value \(\mathbb{E}[z|x_{t}]\). When we do estimate a mean and variance, our training process for the Gaussian model is equivalent to the one in Extended-DDPM (Bao et al., 2022a,b), and our sampling process can be seen as a variational generalization of DDIM since it approximately samples from \(q(z|x_{t})\). Although both our sampling method and the one in Extended-DDPM can be seen as generalizations of DDIM in the variational case, the generalization of DDIM in Extended-DDPM differs from ours. In Extended-DDPM, they do not sample from \(q(z|x_{t})\) and still use \(\mathbb{E}[z|x_{t}]\) while incorporating \(Var[z|x_{t}]\) separately with additional new noise. Contrary to other works, our theoretical framework explicitly defines the one-shot forward process \(q(x_{t}|x_{0})\), but not \(q(x_{t}|x_{t-1})\). We also use the MoM instead of minimizing a KL divergence. Finally, our method generalizes to non-Gaussian distributions using the method of moments. ## 4 Results We test this framework (GDDIM) on a wide range of location-scale family noise distributions: Gaussian, Student-t, Laplace, Generalized Gaussian (\(\beta=1.5\), \(\beta=2.5\)), and Uniform distributions. ## 5 Conclusion GDDIM performs similarly to, albeit slightly worse than, DDIM, but allows non-Gaussian noise distributions. The Gaussian distribution performs better than non-Gaussian distributions, although the Laplace distribution is a close second. Distributions with lighter tails lead to significantly worse performance than those with heavier tails. Theoretical work is needed to explain the clear advantage of the Gaussian distribution over all other choices of distributions. ## 6 Acknowledgment We would like to acknowledge Yang Song for his contributions to the paper and for helping shape the ideas and concepts. T.K. would like to acknowledge funding from Lineage Logistics and being hosted by the Kells institute. K.F. was partially supported by the NSERC Discovery grant (RGPIN-2019-06512) and a Samsung grant. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{Heavy Tails} & Medium Tails & \multicolumn{2}{c}{Light Tails} \\ Schedule & Sampling & t (df = 3) & Laplace (\(\beta=1\)) & GN (\(\beta=1.5\)) & Gaussian (\(\beta=2\)) & GN (\(\beta=2.5\)) & Uniform (\(\beta=\infty\)) \\ \hline Linear & DDIM & & & & 3.53 & & \\ Cosine & DDIM & & & & 5.02 & & \\ \hline Linear & GDDIM & 407.28 & 11.25 & 9.13 & 4.62 & 29.22 & 354.72 \\ Cosine & GDDIM & 340.85 & 10.58 & 14.86 & 4.40 & 26.87 & 274.06 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on CIFAR-10 with 100 reverse steps.
2310.07057
Exploring Community-Driven Descriptions for Making Livestreams Accessible
People watch livestreams to connect with others and learn about their hobbies. Livestreams feature multiple visual streams including the main video, webcams, on-screen overlays, and chat, all of which are inaccessible to livestream viewers with visual impairments. While prior work explores creating audio descriptions for recorded videos, live videos present new challenges: authoring descriptions in real-time, describing domain-specific content, and prioritizing which complex visual information to describe. We explore inviting livestream community members who are domain experts to provide live descriptions. We first conducted a study with 18 sighted livestream community members authoring descriptions for livestreams using three different description methods: live descriptions using text, live descriptions using speech, and asynchronous descriptions using text. We then conducted a study with 9 livestream community members with visual impairments, who shared their current strategies and challenges for watching livestreams and provided feedback on the community-written descriptions. We conclude with implications for improving the accessibility of livestreams.
Daniel Killough, Amy Pavel
2023-10-10T22:44:03Z
http://arxiv.org/abs/2310.07057v1
# Exploring Community-Driven Descriptions for Making Livestreams Accessible ###### Abstract. People watch livestreams to connect with others and learn about their hobbies. Livestreams feature multiple visual streams including the main video, webcams, on-screen overlays, and chat, all of which are inaccessible to livestream viewers with visual impairments. While prior work explores creating audio descriptions for recorded videos, live videos present new challenges: authoring descriptions in real-time, describing domain-specific content, and prioritizing which complex visual information to describe. We explore inviting livestream community members who are domain experts to provide live descriptions. We first conducted a study with 18 sighted livestream community members authoring descriptions for livestreams using three different description methods: live descriptions using text, live descriptions using speech, and asynchronous descriptions using text. We then conducted a study with 9 livestream community members with visual impairments, who shared their current strategies and challenges for watching livestreams and provided feedback on the community-written descriptions. We conclude with implications for improving the accessibility of livestreams. Live Video Streaming, Livestreaming, Accessibility, Visual Impairments, Blind and Low Vision, Audio Descriptions We then conducted a study with livestream community members with visual impairments to learn about their current livestream viewing practices and challenges and provide feedback on the descriptions written by community members. Overall, sighted community members generated descriptions that increased the accessibility of livestreams using all description methods. While sighted community members found it more challenging to provide live rather than asynchronous descriptions, they adapted several strategies to successfully create live descriptions including: describing during the streamer's narration, using domain-specific terms to quickly author descriptions (_e.g._, "Up-B" to describe a character's special attack in a game), and primarily describing updates due to individual actions (i.e. play-by-plays) rather than the scene as a whole. For providing live descriptions, community members differed in their preference for text vs. voice for description input. However, community members provided significantly more descriptions and description words per video minute using voice input than using text input for live descriptions. Community members with visual impairments reported accessibility issues with consuming livestreams due to the platform's interface and the livestream content itself. Though most community members with visual impairments interviewed use YouTube instead of Twitch to avoid platform accessibility issues, livestream content remained inaccessible. Community members reported that the streamers' speech often diverged from describing their actions (_e.g._, telling a story while creating an art piece) and used frequent visual references to other parts of the video that were difficult to understand (_e.g._, reacting to an unknown chat message or referring to an on-camera event). Viewers found community-written descriptions to be valuable in understanding the video as they filled in gaps left by the speaker. Viewers also suggested improvements for future descriptions, such as providing adjustable preferences on the expertise level, level of detail, and amount of overlap with the audio channel.
We conclude with directions for future systems aiming to make livestreams accessible. In summary, we contribute: * An exploratory study with livestream community members providing descriptions of live video * Interviews with livestream viewers with visual impairments sharing current strategies and challenges for watching livestreams * Description preferences from livestream viewers with visual impairments derived from a co-watching exercise and feedback on community-written descriptions ## 2. Background Our work builds upon prior work in video and livestreaming accessibility and in crowdsourcing for accessibility. Figure 1. The _Livestream Player_ (left) features the livestream (A-D) and audience live chat (E). The livestream includes webcams for the streamer (A) and a dog (B), an overlay with status indicators (C), and the main video displaying a screenshare of a creative application (D). The _Describer Extension_ (right) enables describers to input text descriptions while the livestream plays. Pressing the backslash key while a Livestream Player window is open inserts the current video timecode into the textbox (F). Clicking a timecode (G) seeks the Livestream Player video to the corresponding playback time. Source: Twitch livestream _How to IMPROVE your SKILLS QUICKLY? Character Design Bootcamp #2 Day 06/30 !bootcamp !youtube !resources_ by Kaycem [(23)]. ### Video Accessibility To make videos accessible to people with visual impairments, professionals traditionally create _audio descriptions_, or narrations of the important visual content in a scene that cannot be understood from the audio alone (Wolf et al., 2017). While audio descriptions increasingly exist for films and TV, they rarely exist for user-generated content. Prior work developed tools to make authoring audio descriptions easier by generating them automatically (Steintein et al., 2017), or aiding novices in authoring audio descriptions (Boges et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). For example, prior work helped novices edit their descriptions to fit into times without narration (Krizhevsky et al., 2017), identify parts of the video likely to be inaccessible (Krizhevsky et al., 2017), host their descriptions (Krizhevsky et al., 2017), gain feedback on their descriptions (Krizhevsky et al., 2017), and locate silences (Boges et al., 2017; Krizhevsky et al., 2017). These systems all process recorded videos rather than live videos, such that they are not suitable for livestreams. Such video accessibility work also explored generally understandable visual content rather than the domain-specific visual content present in livestreams. We explore how community members familiar with the domain may be able to provide descriptions for live rather than recorded videos. While audio descriptions typically occur within gaps in video narration (Boges et al., 2017; Krizhevsky et al., 2017), adequate gaps do not always occur (_e.g._, for short videos (Krizhevsky et al., 2017), or videos with frequent speech (Krizhevsky et al., 2017; Krizhevsky et al., 2017)). To address this time constraint, prior work used rich audio to convey video themes (Krizhevsky et al., 2017), and provided users control over how often or when to pause a video to receive additional descriptions (Krizhevsky et al., 2017; Krizhevsky et al., 2017).
Live video presents new time constraints for describers aiming to describe content as it happens, as well as for listeners aiming to keep up with the video pace. We investigate the feasibility of producing and consuming descriptions under such time constraints. ### Livestreams and Accessibility Livestreaming, broadcasting live video over the internet, has grown over recent decades with increased internet speeds and a broad selection of platforms (_e.g._, justin.tv, now Twitch, Facebook Live, YouTube Live, TikTok LIVE). We discuss livestream features common on platforms such as Twitch and YouTube Live to reflect on implications for accessibility for viewers with visual impairments: **Long, real-time broadcasts**: As livestreams are broadcast in real time, streams are often unedited and occur over long durations (_e.g._, up to 5 hours or more (Krizhevsky et al., 2017)). Compared to edited, recorded videos, livestreams activate communities around watching the content in real time (Krizhevsky et al., 2017), engage viewers with one another for more time (Krizhevsky et al., 2017), and enable viewers to gain depth in the streamed activity (_e.g._, watching a game rather than highlights; seeing an artist work instead of explaining the high-level steps). While viewers may watch the livestream for a long time (_e.g._, 5 hours (Krizhevsky et al., 2017)), they may join in the middle of the stream and need to "catch up" (Krizhevsky et al., 2017) with what occurred earlier in the broadcast. As audio describing videos typically occurs during post-processing and requires additional editing, existing methods proposed by prior work for novice use are difficult to apply in real time (Boges et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017). Recent work explored sonifying live tennis matches (Krizhevsky et al., 2017), but domain-specific sonification strategies do not exist for the wide variety of streamed content. A long history of radio sports broadcasts, in which experienced announcers verbally describe a game, demonstrates that real-time descriptions can be understandable and engaging. Livestreams of video game tournaments often feature announcers who verbally describe in-game action. Building on prior success of describing live events, we investigate the potential for audio description novices who are experts in their domain of interest to produce live descriptions. **Synchronous interactions**: Livestreams have remote and synchronous interactions, as opposed to recorded videos that are remote and asynchronous (Krizhevsky et al., 2017). Similar to prior work on watching TV with others (i.e. "social TV" (Boges et al., 2017)), community members are able to interact synchronously with each other to build interpersonal relationships (Krizhevsky et al., 2017). Streamers may also interact with their audience by reading chat messages or automated on-screen notifications (_e.g._, listing a new subscriber) and responding verbally or adapting their actions in response (_e.g._, "Thanks for the suggestion, I will try to make the background a farm."). To encourage interactions, streamers often complement the main streamed content with webcam videos of themselves or their environment, as well as additional on-screen overlays such as subscriber, question, or chat notifications burned into the video feed using OBS Studio (Krizhevsky et al., 2017), Streamlabs (Krizhevsky et al., 2017), or StreamYard (Krizhevsky et al., 2017).
For streamers with visual impairments, it can be challenging to set up such a streaming environment (Krizhevsky et al., 2017). For viewers, it can be difficult to access these elements as they are not screen reader accessible or not directly described by the streamer. **Conversation on and off the streaming platform:** Hamilton et al. described that the use of pseudonyms and text chat can promote self-disclosure that can help people build relationships (Krizhevsky et al., 2017). People may also carry the same pseudonyms onto shared community spaces outside of streams (_e.g._, on Discord1) to continue to talk to others. We focus our study on content on the streaming platform as the precursor to other types of interactions. Footnote 1: [https://discord.com](https://discord.com) ### Crowdsourcing Accessibility Professional audio describers create highly polished audio descriptions for movies through a process that involves scripting, voiceover, and editing to create the finished product. Given a limited number of expert describers and the high cost of this process, professional description is not practical for user-generated videos. YouDescribe (Krizhevsky et al., 2017) offers an approach for people to request descriptions and for volunteer describers to provide descriptions. Prior work has also explored crowdsourcing for answering visual questions (Krizhevsky et al., 2017), providing captions and transcriptions (_e.g._, transcription services like Rev.com), and providing on-demand visual support (Boges et al., 2017). However, professionals and crowd workers without domain expertise alike may have difficulty describing content that is unfamiliar to them. For example, Pavel et al.'s formative work with audio describers revealed that describing a new domain can require extensive research into the domain and terminology before providing accurate descriptions (Krizhevsky et al., 2017). Instead of crowdsourcing, prior work in community sourcing (Krizhevsky et al., 2017) and learner sourcing (Krizhevsky et al., 2017) explores drawing from a pool of workers that might have expertise or vested interest in the relevant domain. This approach has had prior success in creating captions. For example, YouTube Community Captions provided community members the chance to add captions to recorded YouTube videos. Prior research also invited domain experts (student learners) who were not experts in providing captions to provide captions that accurately reflected the domain in real-time (Zhou et al., 2017). We explore community-sourcing for providing descriptions for livestreams -- a task that requires domain expertise to complete. ## 3. Describer Study Prior work has explored current challenges and approaches to authoring descriptions for visual media including slide presentations (Sang et al., 2016; Wang et al., 2017) and recorded videos (Sang et al., 2016; Wang et al., 2017; Wang et al., 2018). Livestreams necessitate live description (i.e. written synchronously), rather than asynchronous description of recorded videos. Livestreams often also feature a wide variety of content that requires domain expertise to describe (_e.g._, complex multiplayer gameplay), a breadth and depth of content that expert describers may not be familiar with. To explore the opportunities and challenges of live, community-driven descriptions, we invited 18 livestream viewers with domain expertise to describe livestreams in their domain of interest.
### Methods We conducted a remote within-subjects study with 18 participants describing videos in their domain of expertise across 7 categories. To determine the optimal method of recording live descriptions, we used three description approaches: two synchronous description input methods (one via text and one via speech) and one asynchronous description method (via text). Each participant took part in an individual, 1-hour-long remote study via Zoom (n=4) or Discord (n=14) voice call, and we compensated participants $20. #### 3.1.1. Participants We recruited 18 sighted participants (P1-P18) from Discord servers and Reddit. All participants were between the ages of 19 and 30 (median=21). Participants ranged from watching 30 minutes to 30 hours of livestreams per week. The participants with the two highest watchtimes per week, 28 and 30 hours, were streamers themselves or frequently watched streams while they performed other tasks. Participants reported their genders as: 11 male, 5 female, and 2 N/A or Non-Conforming. All participants were self-described experts in the video category they described and had not previously authored audio or text descriptions for videos. #### 3.1.2. Videos To explore a variety of content, we first selected 7 popular livestream categories from Twitch, a popular livestreaming platform for viewers with visual impairments (Krishnan et al., 2015) and participatory communities (Krishnan et al., 2015). The videos selected spanned video games (League of Legends, Smash Bros, Valorant, The Legend of Zelda: Breath of the Wild (BOTW)), board games (Chess), and creative work (Digital Art, Makeup). As video games represent the most common type of livestream, we selected a variety of video games: a multiplayer online battle arena game (League of Legends), a first-person shooter game (Valorant), a third-person fighting game (Smash Bros), and a single-player adventure game (The Legend of Zelda: Breath of the Wild). For each video category, we selected three livestreams from three different streamers for a total of 21 videos to represent a variety of livestream styles (Table 4). We selected a 5 minute clip from each video for the study. For five of the livestream categories, we recruited three participants with expertise in the category (Chess, Digital Art, League of Legends, Super Smash Bros., Valorant); for one of the livestream categories we recruited two participants (Makeup); and for one of the livestream categories we recruited one participant (Breath of the Wild). We downloaded the videos from Twitch for analysis. #### 3.1.3. Description Approaches During the study we asked participants to use three description approaches: synchronous text description, asynchronous text description, and synchronous audio description. Our description interfaces built on prior systems for creating audio descriptions that enabled description authors to script and edit descriptions using text (Krishnan et al., 2015; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), and record spoken descriptions using audio (Wang et al., 2017). Our live text interface (i.e. _synchronous text description_) let describers write text descriptions. It did not enable describers to record their text descriptions using audio or edit descriptions they had already written, as such actions are not possible in real time.
To investigate the impact of providing extra time on describer preference and rate, we allowed 2x video time (10 minutes) and enabled text editing along with video navigation in our _asynchronous text description_ condition. Finally, we accounted for slower typing speeds by adding _synchronous audio description_ to let participants dictate rather than type their descriptions. #### 3.1.4. Describer Extension We implemented our description approaches as a Google Chrome Extension that can be used alongside Twitch to enable real-time description authoring (Figure 1, right). The extension enables describers to watch the video while writing descriptions. Describers designate a new description by pressing 'Enter' to start a new line and optionally pressing the backslash key (\) to insert a time code for the segment they are about to describe. Describers can then write their description. To review their descriptions, describers click on the time code to jump to the corresponding point in the livestream, then read back their text descriptions while rewatching the video. #### 3.1.5. Procedure We first asked participants to answer a series of demographic and background questions about their experience watching livestreams and audio describing videos. To help participants craft useful descriptions, we shared existing audio description guidelines from YouDescribe (Yi et al., 2017) and the Audio Description Project (Yi et al., 2017), and showed participants example expert descriptions of Disney's _The Incredibles (2004)_ (Disey et al., 2017). The guidelines gave participants instruction on what to describe (e.g. speakers, lighting, facial expressions, on-screen text) and how to describe it (e.g. use present tense, be objective, avoid technical terminology when possible). Participants then installed our Google Chrome extension and completed practice descriptions for one livestream using all three description methods. After the practice session, participants completed the study task, describing three 5-minute video clips within their area of expertise, each using a different description approach, provided in a random order. We counterbalanced conditions so that each video was described using each condition only once across all participants. Post-task, we conducted a semi-structured interview to collect participant feedback on their strategies for describing the video, challenges they experienced in describing the video, and preferences among description approaches. #### 3.1.6. Analysis We recorded the studies using Zoom Cloud Recording for Zoom interviews and OBS Studio (Zoo et al., 2017) for Discord interviews, then automatically transcribed the videos using Descript (Disey et al., 2017). We downloaded text descriptions from our server and segmented them into individual descriptions by new lines. For audio descriptions, we transcribed the description recordings and segmented them into individual descriptions by pauses in speech. We marked the beginning of each description with the time code at which the description would appear. We analyzed the interviews using affinity diagramming to group quotes into higher level themes: description strategies (e.g., priorities, challenges, commentating), modality (text, audio), timing (sync, async), and future use (e.g., motivation, scenarios, alternate uses).
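A minimal sketch of the segmentation step just described: splitting a describer's raw text log into (time code, description) records, with new lines delimiting descriptions. The "[mm:ss]" time-code format here is an assumption for illustration; the extension stores whatever the backslash shortcut inserted.

```python
import re

def parse_description_log(raw, fallback_time=None):
    """Split a describer's log into (seconds, text) records. Assumes each
    line optionally starts with an inserted [mm:ss] time code; lines without
    one keep a caller-supplied fallback time (format is assumed)."""
    records = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.match(r"\[(\d+):(\d{2})\]\s*(.*)", line)
        if m:
            seconds = int(m.group(1)) * 60 + int(m.group(2))
            text = m.group(3)
        else:
            seconds, text = fallback_time, line
        if text:
            records.append((seconds, text))
    return records

log = "[00:12] Streamer sketches the head outline\n[00:47] Dog cam: dog sits up\n"
print(parse_description_log(log))
# [(12, 'Streamer sketches the head outline'), (47, 'Dog cam: dog sits up')]
```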
We analyzed the descriptions by randomly selecting a subset of 300 descriptions from the whole set of 1183 total descriptions produced by participants, then performing open coding to derive 4 higher level themes and 24 subthemes (Table 2). ### Results Overall, participants wrote 1183 descriptions over 54 total video description sessions with an average of 21.9 descriptions per video (\(\sigma=11.5\) descriptions) and 210.3 words per video (\(\sigma=143.7\) words). **Livestream description strategies.** Over a random subset of 300 descriptions, participants primarily described the main content of the stream (266 descriptions), and occasionally described additional visual content including cameras (34 descriptions) and game-specific actions performed by characters (70 descriptions). To describe the main content of livestreams, participants shared information about the high-level context of the stream (i.e. _state_ descriptions) and low-level updates as the stream continued (i.e. _play-by-play_ descriptions). State descriptions provided context for understanding play-by-play descriptions, and participants would add a new state description whenever a notable update to the entire stream state occurred. For example, P9 provided a state update for a new League of Legends game starting: _"Doublelift is in champ select. His team bans Yuumi, Poppy, Jax, Taliyah, and Pyke. The enemy team bans Master Yi, Katarina, Akali, Lulu, and Fiddlesticks. Doublelift is support and his ADC is hovering Zeri."_ (V11). Participants provided more play-by-play descriptions (215 descriptions) than state descriptions (56 descriptions). To fit play-by-play descriptions within limited time, participants often used domain-specific terminology to provide real-time updates (_e.g., "Sage plants spike"_-P16). All participants used domain-specific lingo for at least one description. For example, P15 mentioned they used several shorthand terms that refer to controller inputs including _"dair"_ for _"down air"_ (a type of attack performed by holding down on the controller's left joystick and pressing the A button while the player's character is not grounded), and _"Up B"_ (a type of attack performed by inputting a joystick angle and button combination on the player's controller). While such descriptions helped participants fit additional information about the game, participants expressed concern about the use of technical terminology. For example, P6 questioned if viewers would understand the word "_chibi_" they used to describe a Japanese art style where characters are drawn with exaggerated features. While participants had domain specific terminology for some in-game actions, participants also mentioned that they occasionally did not know how to describe actions they were seeing (_e.g.,_ complex action sequences that used glitches or exploits (P1), character poses (P7), or streamer's facial expressions (P12)), or may not be able to understand complicated action sequences that they were seeing (P4). On the other hand, participants noted it was easiest to describe objects and actions that were not domain-specific. For example, human body parts in a drawing (P7), common actions like running, swimming and shooting a bow in a game (P1), reading on-screen text verbatim (P4), or describing simple visuals (_e.g.,_ a single person on screen). Participants identified that low level, play-by-play descriptions were not always the best strategy to describe fast-paced streams or to capture important visual information. 
Participants responded by changing the level of granularity. For example, the pace of the chess stream on puzzles (V6) was too fast to type or speak each piece movement, so P2 described the stream by mentioning the number of puzzles completed and the number of mistakes the streamer had made. When describing art content, P6 noted that they changed their description strategy from low-level stroke-by-stroke descriptions to higher-level descriptions of what was being drawn: _"Just saying it's being drawn isn't really that helpful. Towards the end, I was trying to say like, the wings are open as if imposing, so that they can sort of imagine it's this big, otherworldly-type figure."_ Trying to add context for low-level moves in a Valorant game, P18 added commentary that could describe streamer intentions for using certain abilities or aiming at certain locations. P14 mentioned that providing descriptions felt similar to esports commentating. While commentators may provide inspiration for the style and content of the descriptions, P15 highlighted that commentating and describing serve different purposes: _"Commentating is just supplementing what people can see on the screen."_ While participants all prioritized describing the main content, they included information about other visual streams when possible, when relevant, or in reaction to unidentified sounds. P5 described: _"I'd focus mainly on [...] what they were drawing, then second priority their face cam, and third priority anything else."_ 15 of 54 sessions started with descriptions of the stream's environment in addition to the main content, but most participants only described parts of the livestream other than the main content when relevant. For example, P12 mentioned that when describing a makeup video, they did not describe the background of the streamer until the streamer directly referenced background objects or walked off-screen. Other participants highlighted that they described on-screen overlays and chat only when mentioned by the streamer or when overlays prompted an unidentified noise. However, when reflecting on their performance, P5 noted that it may have been easier to follow their description if they had described the status of the stream as a whole before starting: _"I would've said, in the top left there's the face cam, below that is the dog face cam, and to the right side of the screen is just the drawing."_ P5 and P11 noted that balancing the streams was difficult due to not knowing what to prioritize (P11) or needing to pay attention to multiple screens (P5). **Comparing livestream description methods.** Overall, when ranking the description methods from 1 (most preferred) to 3 (least preferred), participants ranked asynchronous text descriptions as the most preferred input method (\(\mu=1.5,\sigma=0.62\)), followed by synchronous audio (\(\mu=2,\sigma=0.91\)) and synchronous text (\(\mu=2.33,\sigma=0.69\)). A Friedman test indicated a significant difference in preference between description methods (\(\chi^{2}(2)=6.12\), \(p<0.05\)), with a post hoc Wilcoxon test with Bonferroni correction indicating a significant difference only between asynchronous and synchronous text descriptions (\(p<0.01\)). Participants also produced more descriptions per video minute with synchronous audio (\(\mu=6.28\), \(\sigma=2.92\)) and asynchronous text (\(\mu=4.22\), \(\sigma=2.36\)) than they could with synchronous text (\(\mu=3.20\), \(\sigma=1.29\)).
Similarly, participants produced more description words per video minute with synchronous audio (\(\mu=60.97\), \(\sigma=35.52\)) and asynchronous text (\(\mu=43.70\), \(\sigma=24.37\)) than they could with synchronous text (\(\mu=26.90\), \(\sigma=10.18\)). Friedman tests indicated significant differences in description counts (\(\chi^{2}(2)=18.77\), \(p<0.001\)) and description words (\(\chi^{2}(2)=17.44\), \(p<0.001\)) between description methods. Post hoc Wilcoxon tests with Bonferroni correction indicated significant differences (\(p<0.05\)) between all pairs of methods for both description counts and description words per video minute. _Text vs. audio descriptions._ 11 participants preferred synchronous audio to synchronous text. 6 participants expressed that speed was the key limitation for text-based methods, and P14 mentioned that their typing was error-prone. To keep up with synchronous text streams, 8 participants reported that they used hotkeys and shorthand. As P5 described, "_If you know their subscriber effects, you can write it once, and then you can just copy-paste it._" 5 participants expressed that attempting to avoid talking at the same time as the streamer was the key challenge of dictating audio descriptions. As P12 described: "_The audio was just so difficult. [...] I felt like I was butting into a conversation._" On the other hand, when P12 was using text without looking for gaps, "_It felt like I was much more descriptive and tackling more of the things that I'm supposed to be describing rather than just like, this is what's happening._" Participants also expressed the challenge of the unpredictable length of gaps between speech: _"There were moments where I would have a rather long thought about how I would describe [the stream], but I would have to stop because the streamer would start talking"_ (P6). P13 noted that they would describe while the streamer focused on the game, but they didn't know when the streamer's focus would break and they would start talking again (V17). _Synchronous vs. asynchronous text descriptions._ 12 participants preferred asynchronous text over synchronous text, and 1 participant rated them equally. Participants preferred asynchronous text as it let them focus on important parts of the stream (P10), pause the video (P5, P6, P7), and not have to describe the video perfectly on the first attempt (P6). 3 participants did not pause more than 3 times during their asynchronous text video, including P2, who preferred _synchronous_ text as it felt "more accurate" to what they wanted to say. As participants had to budget their own time for asynchronous text, 1 participant ran out of time and only described 3.5 minutes of the 5 minute clip. 8 participants reported that synchronous text descriptions added time pressure to write something down in the moment before there was something else to describe. P16 reported that "_I was gonna type some stuff, but then 40 other things also happened and like we already moved on and I was like, no, I'm just not gonna talk about this anymore._" As P15 described: "_I almost feel bad. I feel like there were details that would be nice to know that I just wasn't able to say._" **Future description.** Participants reported that composing descriptions was challenging and that they would be willing to describe videos again in the future depending on how interested they are in the video.
While most participants preferred to describe videos they would watch anyway, P18 reported that they would prefer to describe videos they are _not_ as interested in so that they can focus on enjoying their streams of interest. 7 participants reported that they would provide descriptions if compensated (_e.g._ by the streamer), while 11 participants would volunteer to write descriptions. P3 compared writing descriptions to chat moderation, which is often a volunteer task. As describing is challenging, several participants mentioned that they would want to describe in smaller blocks of time, from around 15 minutes (P1) up to an hour at once (P5, P7, P16) for synchronous text. 5 participants suggested alternate use cases for using written descriptions as sighted people, including watching a stream in the background or on another monitor (3 participants), walking outside without their phone out (1 participant), driving (1 participant), or getting ready for an event (1 participant). ## 4. Audience Study We conducted a study with livestream viewers with visual impairments to learn about current livestream viewing practices and challenges and surface description preferences. ### Methods We conducted a 1 hour remote study via Zoom with 9 participants with visual impairments who used screen readers to access their device. Participants were recruited through Reddit discussion boards (Krishna et al., 2017) and email lists, and all participants had used Zoom in the past. We compensated participants $25 for their time. #### 4.1.1. Participants Participants U1 through U9 ranged from ages 27 to 57 (6 male and 3 female) (Table 3). All participants reported that YouTube was their primary video streaming platform, with one participant watching Twitch an equal amount. Participants spent between 0.25 and 10 hours per week watching live video. #### 4.1.2. Procedure We first asked participants demographic questions and background questions about their current livestream watching practices, platform and content accessibility challenges, and strategies for gaining more information. To demonstrate current practice, participants then searched for, selected, and watched 5 minutes of any one livestream on their preferred livestream viewing platform. We invited participants to ask questions about the visual content in the video and to rate their perceived accessibility of the video from 1 (very inaccessible) to 7 (very accessible), similar to Liu et al. (Liu et al., 2018). To provide feedback on sample descriptions, participants selected one topic from the 7 livestream categories in Section 3.1.2 and watched 3 different five-minute clips on that topic. We paired each of these 3 streams with a description from a different description approach (synchronous text, synchronous audio, asynchronous text) produced during the Describer Study. All participants selecting the same category were served the same video-description approach pairs in a random order. Participants accessed descriptions via links to a webpage displaying a recording of the video and a description box (Figure 2). Before each video, the researcher provided an overview of the video context, including a brief description of the streamer and the main content of the clip. Once participants began watching each clip, descriptions were read back automatically by the participant's screen reader as the corresponding timestamp in the video overlapped with a stored description's time code.
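A minimal sketch of this playback timing follows, assuming sorted (seconds, text) records and a polling loop; the screen-reader delivery (e.g., an ARIA live region the screen reader announces) is stubbed out as a print, since the interface's exact mechanism is not detailed here.

```python
def due_descriptions(descriptions, last_time, current_time):
    """Return descriptions whose time codes fall in (last_time, current_time].
    `descriptions` is a list of (seconds, text) pairs sorted by time;
    polling granularity is the caller's choice."""
    return [text for t, text in descriptions if last_time < t <= current_time]

descriptions = [(12, "Streamer sketches the head outline"),
                (47, "Dog cam: dog sits up"),
                (63, "Chat overlay: new subscriber alert")]

last = 0.0
for now in (5.0, 15.0, 50.0, 70.0):   # simulated playback-time polls
    for text in due_descriptions(descriptions, last, now):
        # Stand-in for pushing the text to a live region so the
        # participant's screen reader reads it back automatically.
        print(f"[{now:>5.1f}s] {text}")
    last = now
```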
To control for audio quality and noise, all descriptions created with the synchronous audio description method were transcribed and played back as if they were written via text. After each stream, we invited participants to ask questions about the visual content in the scene, rate their perceived accessibility of the video with and without descriptions from 1 (very inaccessible) to 7 (very accessible), and provide feedback on what they liked, disliked, and wished to improve about the descriptions. Finally, we asked participants closing questions about their overall livestream description comparisons and preferences. #### 4.1.3. Analysis We asked participants to screen share with sound using Zoom, recorded the studies using Zoom Cloud Recording, then automatically transcribed the videos using Microsoft Office Word 365 (McCowell et al., 2016) and Adobe Premiere Pro CC (Adobe et al., 2016). We grouped participant responses according to our questions (e.g., current practice, strategies, challenges, and description preferences), then iteratively identified concepts using open coding. ### Results Overall, participants rated the accessibility of their preferred streaming platform as 4.7 (\(\sigma=1.2\)) out of 7 and similarly rated the live content on these platforms as 4.2 (\(\sigma=1.5\)) out of 7. Participants rated the videos they hand-selected during the co-watching study from streamers they were familiar with as 5.78 (\(\sigma=1.39\)). For our example descriptions to probe for feedback, 5 participants chose to watch Breath of the Wild (BOTW), 2 participants chose digital art, 1 participant chose chess, and 1 participant chose Super Smash Bros. Participants rated the videos they watched as 2.3 (\(\sigma=1.2\)) without descriptions and 5.2 (\(\sigma=1.4\)) with descriptions. #### 4.2.1. Current livestream viewing practices Participants primarily watched livestreams to gain information (U1, U2, U3, U4, U5, U6, U8, U9) or for entertainment (U1, U2, U3, U4, U6, U7, U8). Live videos for gaining information included live news (U6, U9), travel (U1, U3), online conferencing (U5), learning guitar (U3), cooking (U8), household repair (U8), learning game strategy (U9), and other personal interests and hobbies (U2, U3, U4, U5). Domains for live videos in entertainment included gaming (U1, U4, U6, U7, U9), music (U1, U2, U3, U8), podcasts (U2, U6), Q&A's (U1, U3), and general commentary (U3, U7). Participants reported that they watched livestreams in particular as they appreciated the ability to interact in real-time with the presenter and other viewers, including the ability to ask questions and receive information at the same time as it is broadcast, alongside everyone else (U5, U6, U7, U8, U9). Participants reported that livestreams included "more honest reactions" (U1) from streamers compared to typical edited content (U4). Participants also appreciated learning more about other people's experiences (U1, U2), including culture (U1), travel (U1, U2), or catching up with their friends (U2). To find livestreams to watch, only U3 and U4 reported using recommendation feeds to find live videos of interest, unlike prior work for recorded videos in which most people used their recommendations (Krishnan et al., 2017). Instead, participants watched streams shared by their friends (U5, U7), users on other social media (U2, U7), or news sites (U9). Many participants also monitored notifications from channels they follow, tuning in when they go live (U1, U2, U4, U6, U9).
Otherwise, participants would search for their hobbies or specific topics they're interested in and pick one of the top results (U2, U5, U6, U8). During the co-watching portion, 5 participants used YouTube search to look for specific topics or streamers they regularly watch; U4 selected a recommendation from a subscribed channel on the front page of YouTube; U5 used a recommendation from an email mailing list; U7 checked their Twitch following list, but no one was online, so they used Twitch search for specific streamers they are familiar with; and U8 used Google search and appended "YouTube" to their query. #### 4.2.2. Current accessibility of livestream content & platforms Participants reported that livestreams were most accessible when they had good audio quality, clear voices, a lack of background music (U5, U6), extensive narration from the streamer (U3, U8), and a lack of fast-paced action (U3, U5, U8). U6 and U9 both picked accessible audio game streams with no visuals, presented by a visually impaired streamer; these streams were completely accessible to them. Some reasons that livestreams were inaccessible were similar to prior work exploring the accessibility of recorded video (Krishnan et al., 2017), including: on-screen text burned into the video but not described, unclear visual references (_e.g._, "this", "there"), unidentified sounds, and lack of description of the main visual content. However, livestreams posed additional challenges: First, unexplained sounds were frequent due to sound-producing overlays added to the video (U5, U6) (_e.g._, a subscriber notification). Additionally, as livestreams are long and unedited compared to recorded videos, streamers often left long silences as they took a break from talking (U3, U4, U5, U6), or they would break from talking about the game to talk about miscellaneous topics, such as telling a story or responding to chat, which could make it difficult to follow the main content (U2, U3, U7). While watching a stream, U2 commented: _"I'm not sure if he's showing anything or if he's just talking or I have no idea here."_ Most participants noted that chat messages were particularly hard to read due to factors like the speed of the chat and custom emotes (i.e. emoji-like images specific to the stream), such that when streamers responded to chat without describing it (_e.g._, _"Yeah, I agree with that, let's try it."_), participants were unable to understand the context for the response. Participants reported that they also wanted more information about the streamer, including facial expressions and body language (U3), as well as what they look like (U8). U7 mentioned that when streamers were playing games, they wanted more background information about the game status (_e.g._, the place on the map, the damage updates) that was typically not included in streamer narrations: _"I can hear that they're taking damage, but without a low health indicator noise you don't know how low they are."_ U9 noted that when it was a game that they were not familiar with, it was difficult to learn what was going on. Finally, the livestream platforms themselves were not fully accessible. 4 participants who used YouTube to watch livestreams mentioned wanting to watch Twitch streams outside of the study, but found the platform difficult to use due to poor labeling of interface elements and difficulty navigating using a screen reader. #### 4.2.3. Strategies for gaining information about inaccessible streams
Strategies for gaining information about inaccessible streams Participants mentioned several strategies for handling inaccessible streams: moving on to find another stream, asking the streamer or audience for additional information (on Discord or chat), prompting the streamer to change their narration style, and asking friends or family. When participants were not invested in a particular stream, they indicated they would move on to find other streams (U2, U5, U6, U8): _"It's simple. I don't watch it. I mean, if it's gonna frustrate me, so some people might get mad and rant about it. I'm like, OK, I can't watch it"_ (U2). Participants U2, U4, U5, U6, and U9 reported reaching out to the streamer directly in chat or via email to ask them to provide more complete narrations for their actions. U4 mentioned that _"I have been known to reach out to the video provider to the video upload and say 'hey, I'm a blind individual consuming your content. Tell me what you're doing. Tell me what you're seeing.'"_ U4 reported that most of the streamers they follow on Twitch are good about trying to cue their viewers into what they're doing, and that U4 would remind them when the streamer forgets. Participants also suggested looking in chat or asking other viewers questions via chat (U1, U3, U5, U6, U7), though the chat itself was difficult to navigate. U7 described gaining additional information via live chat: _"One time I was on LilyPichu's stream and I asked why there's a tomato emoji after the name of the stream that day. And I was like, I'm blind and I'm just curious. And a few people said it's because she dyed her hair red and oh, okay. But it was hard to find that in the massive stream of faces with tears of joy."_ U7 also mentioned they look at the chat when joining a stream as people often comment on what is going on in the stream, but neither U7 nor U6 sends a message themselves unless they are particularly curious about something, as they see it as bothersome. Participants also asked sighted friends and family members to answer visual questions (U3, U6). Finally, participants used external online sources such as web search (U5, U7, U9), Twitter (U5), and wikis (U7) to learn more about the context for the stream.
2301.11174
Semi-Supervised Image Captioning by Adversarially Propagating Labeled Data
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models. Constructing a large-scale labeled image captioning dataset is an expensive task in terms of labor, time, and cost. In contrast to manually annotating all the training samples, separately collecting uni-modal datasets is immensely easier, e.g., a large-scale image dataset and a sentence dataset. We leverage such massive unpaired image and caption data upon standard paired data by learning to associate them. To this end, our proposed semi-supervised learning method assigns pseudo-labels to unpaired samples in an adversarial learning fashion, where the joint distribution of image and caption is learned. Our method trains a captioner to learn from paired data and to progressively associate unpaired data. This approach shows noticeable performance improvement even in challenging scenarios including out-of-task data (i.e., relational captioning, where the target task is different from the unpaired data) and web-crawled data. We also show that our proposed method is theoretically well-motivated and has a favorable global optimal property. Our extensive and comprehensive empirical results both on (1) image-based and (2) dense region-based captioning datasets followed by comprehensive analysis on the scarcely-paired COCO dataset demonstrate the consistent effectiveness of our semi-supervised learning method with unpaired data compared to competing methods.
Dong-Jin Kim, Tae-Hyun Oh, Jinsoo Choi, In So Kweon
2023-01-26T15:25:43Z
http://arxiv.org/abs/2301.11174v1
# Semi-Supervised Image Captioning by Adversarially Propagating Labeled Data ###### Abstract We present a novel data-efficient _semi-supervised_ framework to improve the generalization of image captioning models. Constructing a large-scale labeled image captioning dataset is an expensive task in terms of labor, time, and cost. In contrast to manually annotating all the training samples, separately collecting uni-modal datasets is immensely easier, _e.g._, a large-scale image dataset and a sentence dataset. We leverage such massive _unpaired_ image and caption data upon standard paired data by learning to associate them. To this end, our proposed semi-supervised learning method assigns pseudo-labels to unpaired samples in an adversarial learning fashion, where the joint distribution of image and caption is learned. Our method trains a captioner to learn from paired data and to progressively associate unpaired data. This approach shows noticeable performance improvement even in challenging scenarios including out-of-task data (_i.e._, relational captioning, where the target task is different from the unpaired data) and web-crawled data. We also show that our proposed method is theoretically well-motivated and has a favorable global optimal property. Our extensive and comprehensive empirical results both on (1) image-based and (2) dense region-based captioning datasets followed by comprehensive analysis on the scarcely-paired COCO dataset demonstrate the consistent effectiveness of our semi-supervised learning method with unpaired data compared to competing methods. Image captioning, unpaired captioning, semi-supervised learning, generative adversarial networks. ## I Introduction Image captioning is a task of automatically generating a natural language description of a given image. It is highly useful for image understanding, in that 1) it extracts the essence of an image into a self-descriptive form, and 2) the output format is an interpretable natural language, which is free-form and easy to manipulate so that it can benefit user-interactive applications such as language-based image retrieval [27], video summarization [13], navigation [68], and vehicle control [33]. Image captioning is also general, in that it is not confined to a small number of pre-defined classes. This enables descriptive analysis of an image. Recent research on image captioning has made impressive progress [67, 2, 15]. Despite this progress, the majority of works are trained only via supervised learning, where it would be hard to transfer a model to a target domain with significant domain shift [9]. One way to improve the image captioning model's generalizability would be to add more supervised data, which is hard in practice. Specifically, the MS COCO caption dataset was constructed from 120,000 images, for each of which annotators were asked to provide five plausible sentences, which is an expensive task in terms of labor, time, and cost. Moreover, if the target task is a higher-level task involving multiple captions and bounding boxes per image, it becomes even more challenging to annotate the dataset. For example, for the relational captioning task [29], a dense, combinatorially associated caption and a pair of bounding boxes are used as a label, and the data for this task has much higher complexity than that of the standard image captioning task.
Constructing such human-labeled datasets is an immensely laborious and time-consuming task, so building new datasets according to different needs of target themes or application scenarios would be impractical. Therefore, our goal is to effectively improve image captioning in a more data-efficient way. In this work, we present a novel way of leveraging _unpaired_ image and caption data from the web upon traditional elaborately labeled paired data to effectively improve image captioning neural networks. We are motivated by the fact that images can be easily obtained from the web, and captions can be easily augmented and synthesized by replacing or adding different words for given sentences according to parts of speech as done in [75]. Moreover, given a sufficient amount of descriptive captions, it is easy to crawl _corresponding but noisy_ images through Google or Flickr image databases [63] to build a large image corpus. In this way, we can easily construct a large-scale _unpaired_ dataset of images and captions, which requires no or minimal human effort. Fig. 1: The proposed data setup utilizes "unpaired" image-caption data upon "paired" data. We denote paired data as \(\mathcal{D}_{p}\) and the unpaired image and caption datasets as \(\mathcal{D}_{u}^{x}\) and \(\mathcal{D}_{u}^{y}\), respectively. Due to the unpaired nature of images (input) and captions (output supervision) in our problem, the conventional supervised learning approaches can no longer be directly used. We propose to algorithmically assign supervision labels, termed _pseudo-labels_, to make unpaired data paired. The pseudo-label is used as a _learned_ supervision label. To develop the mechanism of pseudo-labeling, we are motivated by and leverage the generative capability of generative adversarial networks (GANs) [19] to search for pseudo-labels from unpaired data. That is, in order to find the appropriate pseudo-labels for unpaired samples, we utilize an adversarial training method for training a discriminator model. Thereby, the discriminator learns to distinguish between real and fake image-caption _pairs_, to retrieve pseudo-labels, and to enrich the captioner training. This work is an extension of Kim _et al._ [30]. In this work, we further improve our method with a simple yet significantly effective concept transfer technique and analyze our framework by extensively evaluating our method in diverse and challenging scenarios: more challenging image captioning baselines [25, 15], additional caption domains of MS COCO and Flickr [74], and the new relational captioning task [29] along with sentence-based image retrieval. Other than empirical results, we also show the theoretical justification of our design of the proposed learning method with respect to a global optimum. In short, our main contributions are summarized as follows. (1) We propose a novel framework for training an image captioner with unpaired image-caption data upon traditional paired data. (2) In order to facilitate training with unpaired data, we devise a new semi-supervised learning approach through the novel usage of the GAN discriminator. In particular, for scenarios where the number of paired data is scarce, we additionally propose a simple yet effective teacher-student based concept transfer method to leverage external high-level knowledge to help bridge unpaired image and caption data in domains different from the paired data.
(3) Beyond the naive image-level captioning task, we extend our method to the relational captioning task in order to demonstrate that our framework can be easily applied to region-based captioning datasets as well with a simple modification. (4) We link our practical realization of the proposed learning method to its theoretical algorithmic behavior. (5) We show the effectiveness of our method through extensive experiments in various challenging setups compared to strong competing methods. We demonstrate that, with 60% of paired data, our method performs comparably with the model supervised by full data, and our method outperforms the competing methods with full paired data. More surprisingly, our model trained by our learning method with 1% of paired data performs plausibly well in a qualitative sense. ## II Related Work The main goal of our work is to address unpaired image-caption data to improve the generalizability of image captioning. Therefore, we mainly focus on image captioning and unpaired data handling literature. Generalizability in Image Captioning. Since the introduction of the MS COCO dataset [44], image captioning has been extensively studied in the computer vision and language community [2, 15, 24, 25, 39, 51, 53, 57, 67, 71] by virtue of the advancement of deep neural networks [37]. As neural network architectures become more advanced, _e.g._, Transformer [15, 53], they require a much larger dataset scale for generalization [58], as image captioning models tend to show limited generalizability. Despite the extensive study on network architectures, data issues such as noisy data, partially missing data, and unpaired data have been relatively less studied in image captioning. Traditionally, utilizing unpaired image-caption data required additional information to associate images and captions. Gu _et al._ [20] introduce additional modal information, Chinese captions, and use them as a strong pivot language for language pivoting [64]. Feng _et al._ [17] propose an unpaired captioning framework which trains a model without either image or sentence labels via learning a visual concept detector with external data, the OpenImage dataset [35]. Laina _et al._ [40] and Guo _et al._ [22] propose improved training methods given the same visual concept detector as Feng _et al._, trained with the OpenImage dataset. We later show that our method can be easily extended with a similar visual concept learning to enhance the performance. Gu _et al._ [21] utilize scene graphs to bridge between unpaired image and caption data. Chen _et al._ [9] approach image captioning as domain adaptation by utilizing the large-scale paired MS COCO caption dataset as the source domain and adapting on separate unpaired image or caption datasets as the target domain. Kim _et al._ [31] propose a multi-task learning method that uses an action recognition dataset without caption labels to improve video captioning performance. Liu _et al._ [46] use self-retrieval rewards for captioning to facilitate training a model with partially labeled data, where the self-retrieval module retrieves corresponding images with the captions generated from the model. As a separate line of work, there are novel object captioning methods [3, 66, 48] that additionally exploit unpaired image and caption data to mine descriptions of novel words. Most of the aforementioned works, including [3, 5, 17, 20, 21, 22, 31, 40], exploit large auxiliary _supervised_ datasets such as class labels or scene graphs.
To the best of our knowledge, we are the first to study how to handle unpaired image and caption data for image captioning without any auxiliary information, leveraging only semi-supervised image-caption data. Although Chen _et al._ [9] do not use auxiliary information either, their approach requires large amounts of paired source data, whose data regime is different from ours. Liu _et al._ [46] is a similar case: they use the full paired MS COCO caption dataset with an additional large unlabeled image set. Our method can deal with those regimes as well as a very scarce paired source data regime, whose scale is as little as \(1\%\) of the COCO dataset. Multi-modality in Unpaired Data Handling. By virtue of the advances in generative modeling, _e.g._, GAN [19], multi-modal translation recently emerged as a popular field. Among many possible modalities, image-to-image translation between two different and unpaired domains has been actively explored. To tackle this problem, the cycle-consistency constraint between unpaired data is exploited in CycleGAN [78] and DiscoGAN [34], and it is further improved in UNIT [45]. In this work, we regard image captioning as a multi-modal translation. Our work has a similar motivation to unpaired image-to-image translation [78], unsupervised machine translation [4], and machine translation with monolingual data [76]. However, we show that the cycle-consistency does not work in our problem setup due to a significant modality gap. Instead, our results suggest that the traditional label propagation based semi-supervised framework [77] is more effective for our task. **Semi-supervised Learning.** In general, the goal of semi-supervised learning (SSL) is to improve the model performance by training with unlabeled data under a transductive assumption [8]. Recent deep learning based SSL methods can be divided into four main categories: (1) pseudo-label generation [41], (2) consistency regularization [50, 62], (3) combination of pseudo-labeling with consistency regularization [7, 61, 38, 6], and (4) generative model based methods [14, 18]. Our method is motivated by generative model based semi-supervised learning [14]. While the prior work is mostly limited to simple image classification, our work extends the regime to image and caption modalities. ## III Proposed Method In this section, we first briefly review standard image captioning and describe how we can leverage the unpaired dataset. Then, we introduce an adversarial learning method for obtaining a GAN model that encourages matching the distributions of latent features of images and captions. The GAN model is used for assigning pseudo-labels, which enables semi-supervised learning with both labeled and unlabeled data. Moreover, we analyze the theoretical properties of our proposed framework. Lastly, we extend our method to the relational captioning scenario. ### _Adversarial Semi-supervised Training_ Let us denote a dataset with \(N_{p}\) image-caption pairs as \(\mathcal{D}_{p}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N_{p}}\). A typical image captioning task is defined as follows: given an image \(\mathbf{x}_{i}\), the model generates a caption \(y_{i}\) that best describes the image.
Traditionally, a captioning model is trained on a large paired dataset \((\mathbf{x},y)\in\mathcal{D}_{p}\), _e.g._, the MS COCO dataset, by minimizing the negative log likelihood against the ground truth caption as follows: \[\sum_{(\mathbf{x},y)\in\mathcal{D}_{p}}L_{\text{CE}}(y,\hat{y}(\mathbf{x})), \tag{1}\] where \(L_{\text{CE}}\) denotes the cross entropy loss, and \(\hat{y}(\mathbf{x})\) denotes the output of the model. Motivated by early neural machine translation literature [12], captioning frameworks have typically been implemented as an encoder-decoder architecture [67], _i.e._, CNN-RNN. The CNN encoder \(F(\mathbf{x})\) outputs a latent feature vector \(\mathbf{z}^{x}\) from a given input image \(\mathbf{x}\), followed by the RNN decoder \(H(\mathbf{z}^{x})\) to generate a caption \(y\) from \(\mathbf{z}^{x}\) in a natural language form, _i.e._, \(\hat{y}(\mathbf{x})\)=\(p(y|\mathbf{x};F,H)\)=\(H\circ F(\mathbf{x})\), as depicted in Fig. 2. **Learning with Unpaired Data.** Our problem deals with unpaired data, where the image and caption sets \(\mathcal{D}_{u}^{x}\)=\(\{\mathbf{x}_{i}\}_{i=0}^{N_{u}^{x}}\) and \(\mathcal{D}_{u}^{y}\)=\(\{y_{i}\}_{i=0}^{N_{u}^{y}}\) are not paired. Given the unpaired datasets, due to missing annotations, the loss in Eq. (1) cannot be directly computed. Motivated by the semi-supervised framework [59], we artificially generate _pseudo-labels_ for the respective unpaired datasets, so that the supervision loss in Eq. (1) can be leveraged with unpaired data. Specifically, we retrieve the best matched caption \(\tilde{y}_{i}\) in \(\mathcal{D}_{u}^{y}\) given a query image \(\mathbf{x}_{i}\), assign it as a pseudo-label, and vice versa (\(\tilde{\mathbf{x}}_{i}\) for \(y_{i}\)). We express the pseudo-labeling as a function for simplicity, _i.e._, \(\tilde{y}_{i}=\tilde{y}(\mathbf{x}_{i})\). To retrieve a semantically meaningful match, we need a measure to assess proper matches. We use a discriminator network to determine real or fake pairs in a similar way to GANs, which will be described in later sections. With the retrieved pseudo-labels, we can now compute Eq. (1) with unpaired data as: \[\min_{F,H}\lambda_{x}\sum_{\mathbf{x}\in\mathcal{D}_{u}^{x}}L_{\text{CE}}(\tilde{y}(\mathbf{x}),\hat{y}(\mathbf{x}))+\lambda_{y}\sum_{y\in\mathcal{D}_{u}^{y}}L_{\text{CE}}(y,\hat{y}(\tilde{\mathbf{x}}(y))), \tag{2}\] where \(\lambda_{\{\cdot\}}\) denote the balance parameters. **Discriminator Learning by Unpaired Feature Matching.** We train with a criterion that finds semantically meaningful matches, so that pseudo-labels for each modality are effectively retrieved. To this end, we pre-train a discriminator with the paired supervised dataset, and then jointly train it with the other network parts on both paired and unpaired datasets. We introduce a caption encoder, \(G(y)\), which embeds the caption \(y\) into a feature \(\mathbf{z}^{y}\). This is implemented with a single layer LSTM, and we take the output of the last time step as the caption representation \(\mathbf{z}^{y}\). Likewise, given an image \(\mathbf{x}\), we obtain \(\mathbf{z}^{x}\) by the image encoder \(F(\mathbf{x})\). Now, we have a comparable feature space of \(\mathbf{z}^{x}\) and \(\mathbf{z}^{y}\), which are set to have the same number of dimensions. We train the discriminator to distinguish whether the pair \((\mathbf{z}^{x},\mathbf{z}^{y})\) comes from true paired data \((\mathbf{x},y)\in\mathcal{D}_{p}\), _i.e._, whether the pair belongs to the real distribution \(p(\mathbf{x},y)\) or not.
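To make the comparable feature spaces concrete, the following is a minimal PyTorch sketch of the two encoders \(F\) and \(G\); the layer sizes, the use of pre-pooled CNN features as input, and the class names are our assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """F(x): projects pooled CNN backbone features into the shared latent space z^x."""
    def __init__(self, backbone_dim=2048, feat_dim=512):
        super().__init__()
        self.proj = nn.Linear(backbone_dim, feat_dim)

    def forward(self, cnn_features):          # (B, backbone_dim), pre-extracted
        return self.proj(cnn_features)        # z^x: (B, feat_dim)

class CaptionEncoder(nn.Module):
    """G(y): single-layer LSTM; the last time step is the caption feature z^y."""
    def __init__(self, vocab_size, embed_dim=300, feat_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, feat_dim, batch_first=True)

    def forward(self, tokens):                # (B, T) token ids; padding handling omitted
        out, _ = self.lstm(self.embed(tokens))
        return out[:, -1]                     # z^y: (B, feat_dim)
```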
To train the discriminator, we could use random data of \(\mathbf{x}\) and \(y\) independently sampled from the respective unpaired datasets, but we found that this is detrimental to performance due to uninformative pairs in training. Instead, we conditionally synthesize \(\mathbf{z}^{x}\) or \(\mathbf{z}^{y}\) to form a synthesized pair that appears to be as realistic as possible. We use the feature transformer networks \(\tilde{\mathbf{z}}^{y}=T_{v\to c}(\mathbf{z}^{x})\) and \(\tilde{\mathbf{z}}^{x}=T_{c\to v}(\mathbf{z}^{y})\), where \(v\to c\) denotes the mapping from visual data to caption data and vice versa, and \(\tilde{\mathbf{z}}^{(\cdot)}\) denotes the conditionally synthesized feature. \(\{T\}\) are implemented as multi-layer perceptrons with four fully-connected (FC) layers with the ReLU nonlinearity. The discriminator \(D(\cdot,\cdot)\) learns to distinguish whether features are real or not. At the same time, the other associated networks \(F\), \(G\), \(T_{\{\cdot\}}\) are learned to fool the discriminator by matching the distribution of paired and unpaired data. We formulate this adversarial training as follows: \[\min_{F,G,\{T\}}\max_{D}\widetilde{U}(F,G,\{T\},D)=\min_{F,G,\{T\}}\max_{D}U(F,G,\{T\},D)+\operatorname*{\mathbb{E}}_{(\mathbf{z}^{x},\mathbf{z}^{y})\sim(F,G)\circ\mathcal{D}_{p}}[L_{reg}(\mathbf{z}^{x},\mathbf{z}^{y},\{T\})], \tag{3}\] where \[U(F,G,\{T\},D)=\operatorname*{\mathbb{E}}_{(\mathbf{z}^{x},\mathbf{z}^{y})\sim(F,G)\circ\mathcal{D}_{p}}[\log(D(\mathbf{z}^{x},\mathbf{z}^{y}))]+\frac{1}{2}\operatorname*{\mathbb{E}}_{\mathbf{x}\sim p(\mathbf{x})}[\log(1-D(F(\mathbf{x}),T_{v\to c}(F(\mathbf{x}))))]+\frac{1}{2}\operatorname*{\mathbb{E}}_{y\sim p(y)}[\log(1-D(T_{c\to v}(G(y)),G(y)))], \tag{4}\] and \(L_{reg}(\mathbf{z}^{x},\mathbf{z}^{y},\{T\})=\lambda_{reg}(\|T_{v\to c}(\mathbf{z}^{x})-\mathbf{z}^{y}\|_{F}^{2}+\|\mathbf{z}^{x}-T_{c\to v}(\mathbf{z}^{y})\|_{F}^{2})\). Note that the first log term in Eq. (4) is not used for updating any learnable parameters related to \(F,G,\{T\}\), but only for updating \(D\). The overall architecture related to this formulation is illustrated in Fig. 2. Through alternating training of the discriminator (\(D\)) and generators (\(F,G,\{T\}\)), the latent feature distribution of paired and unpaired data should be close to each other, _i.e._, \(p(\mathbf{z}^{x},\mathbf{z}^{y})\approx p_{v\to c}(\mathbf{z}^{x},\mathbf{z}^{y})\approx p_{c\to v}(\mathbf{z}^{x},\mathbf{z}^{y})\), where \(p_{v\to c}(\mathbf{z}^{x},\mathbf{z}^{y})=p(\mathbf{z}^{x})p_{v\to c}(\mathbf{z}^{y}|\mathbf{z}^{x})\), \(p_{c\to v}(\mathbf{z}^{x},\mathbf{z}^{y})=p(\mathbf{z}^{y})p_{c\to v}(\mathbf{z}^{x}|\mathbf{z}^{y})\), and \(p_{v\to c}(\mathbf{z}^{y}|\mathbf{z}^{x})\) and \(p_{c\to v}(\mathbf{z}^{x}|\mathbf{z}^{y})\) are modeled with \(T_{v\to c}\) and \(T_{c\to v}\), respectively. This implies that, as the generator is trained, the decision boundary of the discriminator tightens; hence, we can use the plausibly learned \(D\) to retrieve a proper pseudo-label, provided the unpaired datasets are sufficiently large such that semantically meaningful matches exist between the different modality datasets.
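A hedged sketch of this adversarial objective follows; the discriminator architecture, the `T_vc`/`T_cv` module names, and the loss split are assumptions, while the \(\frac{1}{2}\) weights on the two synthesized-pair terms and the regularizer on paired features follow Eqs. (3)-(4), with the real-pair term excluded from the generator update as stated above.

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """D(z^x, z^y): scores whether a feature pair comes from a real pair (sigmoid output)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Sigmoid())

    def forward(self, zx, zy):
        return self.net(torch.cat([zx, zy], dim=-1)).squeeze(-1)

def adversarial_losses(D, T_vc, T_cv, zx_p, zy_p, zx_u, zy_u, lam_reg=0.1):
    """zx_p/zy_p: features of paired data; zx_u/zy_u: features of unpaired data."""
    eps = 1e-8
    real = torch.log(D(zx_p, zy_p) + eps).mean()                  # first term of Eq. (4)
    fake_v = torch.log(1 - D(zx_u, T_vc(zx_u)) + eps).mean()      # synthesized caption feature
    fake_c = torch.log(1 - D(T_cv(zy_u), zy_u) + eps).mean()      # synthesized image feature
    d_loss = -(real + 0.5 * fake_v + 0.5 * fake_c)                # D maximizes U
    # L_reg on paired features, Eq. (3)
    reg = lam_reg * ((T_vc(zx_p) - zy_p).pow(2).sum(-1).mean()
                     + (zx_p - T_cv(zy_p)).pow(2).sum(-1).mean())
    g_loss = 0.5 * fake_v + 0.5 * fake_c + reg                    # generators minimize U + L_reg
    return d_loss, g_loss
```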
**Pseudo-label Assignment.** Given an image \(\mathbf{x}\in\mathcal{D}_{u}^{x}\), we retrieve the caption in the unpaired dataset, _i.e._, \(\tilde{y}\in\mathcal{D}_{u}^{y}\), that has the highest score obtained by the discriminator, _i.e._, the caption most likely to be paired with the given image: \[\tilde{y}_{i}=\tilde{y}(\mathbf{x}_{i})=\operatorname*{argmax}_{y\in\mathcal{D}_{u}^{y}}\;D\left(F(\mathbf{x}_{i}),G(y)\right), \tag{5}\] and vice versa for unpaired captions: \[\tilde{\mathbf{x}}_{i}=\tilde{\mathbf{x}}(y_{i})=\operatorname*{argmax}_{\mathbf{x}\in\mathcal{D}_{u}^{x}}\;D\left(F(\mathbf{x}),G(y_{i})\right). \tag{6}\] By this retrieval process over all the unpaired datasets, we now have fully paired data; _i.e._, image-caption pairs \(\{(\mathbf{x}_{i},y_{i})\}\) from the paired data and the pairs with pseudo-labels \(\{(\mathbf{x}_{j},\tilde{y}_{j})\}\) and \(\{(\tilde{\mathbf{x}}_{k},y_{k})\}\) from the unpaired data. However, these pseudo-labels are likely to be noisy or biased, thus treating them equally with the paired ones would not be desirable [52]. Motivated by learning with noisy labels [42, 69], we re-weight the data pairs by defining a confidence score for each of the assigned pseudo-labels. We propose to use the output score from the discriminator as the confidence score, _i.e._, \(\alpha_{i}^{x}=\hat{D}(\mathbf{x}_{i},\tilde{y}_{i})\) and \(\alpha_{i}^{y}=\hat{D}(\tilde{\mathbf{x}}_{i},y_{i})\), where we denote \(\hat{D}(\mathbf{x},y)=D(F(\mathbf{x}),G(y))\), and \(\alpha\in[0,1]\) due to the sigmoid function at the final layer. We utilize the confidence scores to assign weights to the unpaired samples. Fig. 2: Illustration of the proposed method. Dotted arrows denote the path of the gradients via back-propagation. Given any image and caption pair, CNN and RNN (LSTM) encoders encode the input image and caption into the respective feature spaces. A discriminator (\(D\)) is trained to discriminate whether the given feature pairs are real or fake, while the encoders are trained to fool the discriminator. The learned discriminator is also used to assign the most likely pseudo-labels to unpaired samples through the pseudo-label search module. We additionally introduce an auxiliary multi-layer perceptron to learn external knowledge via concept transfer. The final weighted loss \(\min_{F,H}\mathcal{L}_{cap}(F,H)\) is defined as follows: \[\min_{F,H}\sum_{(\mathbf{x},y)\in\mathcal{D}_{p}}L_{\text{CE}}(y,\hat{y}(\mathbf{x}))+\lambda_{x}\sum_{\mathbf{x}\in\mathcal{D}_{u}^{x}}\alpha^{x}_{(\mathbf{x},\tilde{y}(\mathbf{x}))}L_{\text{CE}}(\tilde{y}(\mathbf{x}),\hat{y}(\mathbf{x}))+\lambda_{y}\sum_{y\in\mathcal{D}_{u}^{y}}\alpha^{y}_{(\tilde{\mathbf{x}}(y),y)}L_{\text{CE}}(y,\hat{y}(\tilde{\mathbf{x}}(y))). \tag{7}\] To ease training further, we add an additional triplet loss to Eq. (7): \[\mathcal{L}_{triplet}(F,H)=\sum_{\begin{subarray}{c}(\mathbf{x}_{p},y_{p})\in\mathcal{D}_{p},\\ \mathbf{x}_{u}\in\mathcal{D}_{u}^{x},y_{u}\in\mathcal{D}_{u}^{y}\end{subarray}}-\log\frac{p(y_{p}|\mathbf{x}_{p};F,H)}{p(y_{p}|\mathbf{x}_{u};F,H)}-\log\frac{p(y_{p}|\mathbf{x}_{p};F,H)}{p(y_{u}|\mathbf{x}_{p};F,H)}, \tag{8}\] by regarding random unpaired samples as negatives. This slightly improves the performance. We jointly train the model on both paired and unpaired data.
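A sketch of the retrieval in Eqs. (5)-(6) follows: it scores all unpaired image-caption combinations with the discriminator and keeps the row/column maxima as pseudo-labels together with their confidences \(\alpha\). The chunked scoring and the function names are our own; the actual search pool and batching strategy are not specified at this granularity in the text.

```python
import torch

@torch.no_grad()
def assign_pseudo_labels(D, zx_unpaired, zy_unpaired, chunk=1024):
    """Return, for each unpaired image, the index of its best-matching unpaired
    caption and the discriminator score as confidence alpha, and vice versa."""
    n_img, n_cap = zx_unpaired.size(0), zy_unpaired.size(0)
    scores = torch.empty(n_img, n_cap)
    for i in range(0, n_img, chunk):                 # score all pairs block-by-block
        zx = zx_unpaired[i:i + chunk]
        for j in range(0, n_cap, chunk):
            zy = zy_unpaired[j:j + chunk]
            # broadcast each image feature against each caption feature in the block
            zx_rep = zx.unsqueeze(1).expand(-1, zy.size(0), -1).reshape(-1, zx.size(-1))
            zy_rep = zy.unsqueeze(0).expand(zx.size(0), -1, -1).reshape(-1, zy.size(-1))
            scores[i:i + chunk, j:j + chunk] = (
                D(zx_rep, zy_rep).view(zx.size(0), zy.size(0)).cpu())
    alpha_x, cap_idx = scores.max(dim=1)   # pseudo-caption per image, Eq. (5)
    alpha_y, img_idx = scores.max(dim=0)   # pseudo-image per caption, Eq. (6)
    return cap_idx, alpha_x, img_idx, alpha_y
```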
**Leveraging External Knowledge via Concept Transfer.** Although our semi-supervised learning method works properly to associate unpaired image and caption data despite scarce paired data, the smaller the paired data size is, the more difficult it becomes to associate unpaired samples from _unseen_ domains. This is because small paired data lacks the information to capture any snippet of image or text, _i.e._, the concept underlying each sample. Therefore, as an extension, we propose to borrow knowledge from a large-scale pre-trained model to effectively associate unpaired samples by capturing concepts regardless of domain, which is crucial for semi-supervised learning especially when paired data is scarce. As an external source of knowledge, we propose to use concept embeddings obtained from an off-the-shelf pre-trained scene understanding model that provides high-level scene understanding. We extract a set of dense vectors1 from an image by using the pre-trained model. By averaging the vectors of the image, we obtain a single vector \(\mathbf{v}=\text{Concept}(\mathbf{x})\) that represents the image, which we call a "concept vector." Footnote 1: The dense vectors can be any dense representation, _e.g._, a pixel-wise feature map, feature vectors corresponding to region proposals, _etc_. To borrow knowledge from an external pre-trained model _regardless of its network architecture_, we utilize this concept vector in the manner of knowledge distillation [23], where our image encoder \(F(\cdot)\) learns the knowledge encoded in the vector. To make the encoder deal with this auxiliary task, we add an auxiliary concept regression branch \(R(\cdot)\) to the penultimate layer. The auxiliary branch is implemented by a multi-layer perceptron to create a vector \(\hat{\mathbf{v}}=R\circ F(\mathbf{x})\) that mimics the concept vector provided by the high-level scene understanding model. Then, the image captioning model is trained by adding the additional concept regression loss \(\mathcal{L}_{external}\) as follows: \[\mathcal{L}_{external}(F)=\underset{\mathbf{x}\sim p(\mathbf{x})}{\mathbb{E}}\|R\circ F(\mathbf{x})-\text{Concept}(\mathbf{x})\|_{F}^{2}, \tag{9}\] as described in Fig. 2. Thereby, the knowledge from the external model can be effectively transferred to the image captioning model. This simple approach significantly improves the performance of an image captioning model when paired data is scarce, as shown in Sections IV-C and IV-E. To produce the concept vector, we use the pre-trained relational captioning model [32]. We generate abundant relational caption proposals from an image using the model, and each caption is mapped to an embedding using GloVe word vectors [55]. Then, in order to represent the global _image-level_ concept, we average all the vectors obtained from the image to form a concept vector \(\mathbf{v}\) of the image. The concept vector encodes the semantic concept of the scene. The total loss function for training our model is as follows: \[\min_{F,G,H,\{T\}}\max_{D}\mathcal{L}_{cap}(F,H)+\lambda_{1}\tilde{U}(F,G,\{T\},D)+\lambda_{2}\mathcal{L}_{triplet}(F,H)+\lambda_{3}\mathcal{L}_{external}(F), \tag{10}\] where \(\mathcal{L}_{cap}\) denotes the captioning loss defined in Eq. (7), \(\tilde{U}\) the loss for adversarial training defined in Eq. (3), \(\mathcal{L}_{triplet}\) the triplet loss defined in Eq. (8), \(\mathcal{L}_{external}\) the concept regression loss defined in Eq. (9), and \(\lambda_{1}=\lambda_{2}=\lambda_{3}=0.1\).
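The concept transfer step admits a compact sketch: average external caption embeddings into a concept vector \(\mathbf{v}\), then regress it with an auxiliary MLP head as in Eq. (9). The head architecture and dimensions below are assumptions; only the averaging and the squared-error regression follow the text.

```python
import torch
import torch.nn as nn

class ConceptHead(nn.Module):
    """Auxiliary branch R(.): regresses the external concept vector, Eq. (9)."""
    def __init__(self, feat_dim=512, concept_dim=300):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                 nn.Linear(512, concept_dim))

    def forward(self, zx):
        return self.mlp(zx)                    # v_hat = R(F(x))

def concept_vector(caption_embeddings):
    """Average per-caption embeddings (e.g., GloVe vectors of relational caption
    proposals) into a single image-level concept vector v."""
    return caption_embeddings.mean(dim=0)

def concept_loss(R, zx, v):
    """Squared-error concept regression loss of Eq. (9), averaged over the batch."""
    return (R(zx) - v).pow(2).sum(-1).mean()
```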
### _Theoretical Analysis_ In this section, we analyze our minimax-style learning framework and its favorable guarantees, including that a global equilibrium exists in our learning framework and that it is achievable. These analyses show that our design of the system and loss functions is well-grounded for pursuing our objective of the multi-modal distribution match. To reach this conclusion, we first show the following Lemma 1. **Lemma 1**: _For any fixed generators \(F\), \(G\), and \(\{T\}\), the optimal discriminator \(D\) of the minimax game defined by the objective function \(U(F,G,\{T\},D)\) in Eq. (4) is_ \[D^{*}(\mathbf{z}^{x},\mathbf{z}^{y})=\frac{p(\mathbf{z}^{x},\mathbf{z}^{y})}{p(\mathbf{z}^{x},\mathbf{z}^{y})+p_{1/2}(\mathbf{z}^{x},\mathbf{z}^{y})}, \tag{11}\] _where \(p_{1/2}(\mathbf{z}^{x},\mathbf{z}^{y})=\frac{p_{v\to c}(\mathbf{z}^{x},\mathbf{z}^{y})+p_{c\to v}(\mathbf{z}^{x},\mathbf{z}^{y})}{2}\) is a mixture distribution._ This shows that the optimal discriminator \(D^{*}\) is at the balance between the true data distribution and the mixture distribution defined by \(F\), \(G\), and \(\{T\}\). Given the fixed \(D^{*}(\mathbf{z}^{x},\mathbf{z}^{y})\), we can reformulate the minimax game with the function \(U(F,G,\{T\},D)\) as minimizing the sub-problem \(V(F,G,\{T\})=\max_{D}U\) over \(F,G\) and \(\{T\}\). Then, we have the following lemma. **Lemma 2**: _Given \(D=D^{*}(\mathbf{z}^{x},\mathbf{z}^{y})\), the global minimum of \(V(F,G,\{T\})\) is achieved if and only if \(p(\mathbf{z}^{x},\mathbf{z}^{y})=p_{1/2}(\mathbf{z}^{x},\mathbf{z}^{y})\), and the optimum value is \(-\log 4\)._ Furthermore, the marginal distributions \(p(\mathbf{z}^{x})\) and \(p(\mathbf{z}^{y})\) can be captured by the learned marginal distributions, _i.e._, \(p(\mathbf{z}^{y})=p_{c\to v}(\mathbf{z}^{y})=p_{v\to c}(\mathbf{z}^{y})\) and \(p(\mathbf{z}^{x})=p_{c\to v}(\mathbf{z}^{x})=p_{v\to c}(\mathbf{z}^{x})\). The standard adversarial training in GAN [19] uses a similar argument to Lemma 2 and shows that a generator perfectly replicates the data generating process if the optimal discriminator can be found. However, Lemma 2 only shows that our model can at least replicate the data marginal distributions and that a mixture of \(\{T\}\) can replicate the joint data distribution. In the next step, we show that we can actually find a global equilibrium point \(p(\mathbf{z}^{x},\mathbf{z}^{y})=p_{v\to c}(\mathbf{z}^{x},\mathbf{z}^{y})=p_{c\to v}(\mathbf{z}^{x},\mathbf{z}^{y})\) that mimics the data generating (transformation) process in both directions, as follows. **Theorem 1**: _Given an augmented objective function defined as:_ \[U(F,G,\{T\},D)+\operatorname{KL}\left[p(\mathbf{z}^{x}|\mathbf{z}^{y})\,||\,p_{c\to v}(\mathbf{z}^{x}|\mathbf{z}^{y})\right]+\operatorname{KL}\left[p(\mathbf{z}^{y}|\mathbf{z}^{x})\,||\,p_{v\to c}(\mathbf{z}^{y}|\mathbf{z}^{x})\right], \tag{12}\] the equilibrium of Eq. (12) is achieved if and only if \(p(\mathbf{z}^{x},\mathbf{z}^{y})=p_{v\to c}(\mathbf{z}^{x},\mathbf{z}^{y})=p_{c\to v}(\mathbf{z}^{x},\mathbf{z}^{y})\). Lemma 2 and Theorem 1 show that, without the additional regularization, the learned distribution is only matched up to marginal distributions, and the true data distribution may be achieved with the non-unique mix of two distributions, \(p_{1/2}(\mathbf{z}^{x},\mathbf{z}^{y})\).
With the additional regularization, Theorem 1 shows that the true distribution can be matched with the favorable unique global equilibrium guarantee. Finally, by Theorem 1, we can ensure that \(F\), \(G\), and \(\{T\}\) will converge to the true distribution if \(F\), \(G\), and \(\{T\}\) have enough capacity and each model has been trained to achieve the optimum. Unfortunately, directly minimizing the KL divergence terms in Eq. (12) is infeasible in practice. In Eq. (3), we use the simple alternative of \(L_{reg}\) as a practical solution, which can be regarded as a Monte Carlo approximation of the distribution matching and is proportional to that matching objective. Note that, despite departing from the theoretical guarantees, the noticeable performance improvement in our empirical study suggests that our method is indeed a reasonable realization of the theory. ### _Extension to Region-based Captioning_ Our semi-supervised learning method can be extended to other advanced visual captioning tasks. In this work, we extend our approach to region-based image captioning tasks, which require localizing object instances in the scenes [26, 29]. We especially focus on the relational captioning task [29], where the task is to caption the interactions of object instances in the visual scene, which can be regarded as a generalization of instance-wise captioning [26]. The pipeline of the relational captioning [29] work is as follows. Given an input image, \(B\) object proposals from the region proposal network (RPN) [56] are obtained to localize each object instance. To take interactions between objects into account, the combination layer [29] produces the subject-object region pairs of the object proposals by assigning each instance into either a subject or object role, _i.e._, \(B\times(B-1)\) subject-object region pairs; a sketch of this pairing step is given below. Given a region pair, we obtain a triplet of features consisting of the subject (\(\mathbf{z}^{x}_{s}\)), object (\(\mathbf{z}^{x}_{o}\)), and the union of their regions (\(\mathbf{z}^{x}_{u}\)), as illustrated in Fig. 3. In this task, our semi-supervised method (illustrated in Fig. 2) is applied to captions in the dataset (denoted as \(y\)) and the union region features (denoted as \(\mathbf{z}^{x}_{u}\)), in addition to the supervised loss with the target task data. Thereby, the learned model predicts a large number of relational captions describing each pair of objects in the input image. As the region-based caption labels in the existing datasets [26, 29] are mostly in the form of subject-predicate-object triplets, and most descriptive phrases in general can be thought of as following a similar form, we postulate that it would be helpful to leverage more natural human-labeled language (caption) datasets as unpaired caption information \(\mathcal{D}^{y}_{u}\). Also, in these region-based tasks, we can leverage the instances having no caption label (_i.e._, negative samples) as an unpaired image dataset \(\mathcal{D}_{u}^{x}\) as well for further regularization. Fig. 3: Illustration of the proposed semi-supervised _region-based_ image captioning structure. In addition to the paired region-based image captioning data \(\mathcal{D}_{p}\), we leverage an external image captioning dataset as an unpaired caption dataset \(\mathcal{D}^{y}_{u}\), and the instances having no caption label as an unpaired image dataset \(\mathcal{D}^{x}_{u}\).
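Below is the sketch of the pairing step referenced above, forming the \(B\times(B-1)\) ordered subject-object pairs from \(B\) proposal features. The union-region feature \(\mathbf{z}^{x}_{u}\) would additionally be pooled from the union of the two boxes, which we omit here; the function name and tensor layout are our assumptions.

```python
import torch

def subject_object_pairs(region_feats):
    """Given B region features (B, feat_dim), return subject and object feature
    tensors for all B*(B-1) ordered pairs, plus the index pairs themselves."""
    B = region_feats.size(0)
    idx = torch.cartesian_prod(torch.arange(B), torch.arange(B))  # (B*B, 2)
    idx = idx[idx[:, 0] != idx[:, 1]]          # drop self-pairs -> B*(B-1) rows
    z_s = region_feats[idx[:, 0]]              # subject-role features
    z_o = region_feats[idx[:, 1]]              # object-role features
    return z_s, z_o, idx
```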
## IV Experiments In this section, we describe the experimental setups and competing methods, and demonstrate the performance of our semi-supervised captioning with both quantitative and qualitative results. ### _Experimental Setups_ We utilize the MS COCO caption dataset [44] (referred to as MS COCO for simplicity) as our target dataset, which contains \(123\)k images with 5 caption labels per image. To validate our model, we follow the _Karpathy_ splits [27], which have been broadly used in the image captioning literature. The Karpathy splits contain 113k training, 5k validation, and 5k test images in total. In our experiment, to simulate the scenario where both paired and unpaired data exist, we use four different setups: 1) partially labeled COCO [46], 2) web-crawled data [17], 3) relational captioning data [29], and 4) the scarcely-paired COCO setup we propose. The data source of each experiment setup is described in Table I. For evaluation, we use the following metrics conventionally used in image captioning: BLEU [54], ROUGE-L [43], SPICE [1], METEOR [16], and CIDEr [65]. All the evaluation is done on the MS COCO caption test set. ### _Evaluation on Partially Labeled COCO_ For the _partially labeled_ COCO experiment, we follow Liu _et al._ [46] and use the whole MS COCO caption data (paired) and add the _"Unlabeled-COCO"_ split. The Unlabeled-COCO split includes unpaired images from the official MS COCO dataset [44], which involves 123k images without any caption label (no additional unpaired caption is used). Note that the MS COCO caption dataset and the Unlabeled-COCO split do not overlap. In this setup, a separate _unpaired_ caption dataset \(\mathcal{D}_{u}^{y}\) does not exist. To compute the cross entropy loss, we apply the pseudo-label assignment to the Unlabeled-COCO images. We use captions from the paired COCO data \(\mathcal{D}_{p}\) as pseudo-label candidates. We compare with the recent semi-supervised image captioning method, called _Self-Retrieval_ [46], on the partially labeled COCO setup in Table II. For a fair comparison with it, we replace the cross-entropy loss in our objective with the policy gradient method [57] to directly optimize our model with the CIDEr score, as in _Self-Retrieval_ [46]. As our baseline model (denoted as _Baseline_), we train a model only with the policy gradient method, without the proposed GAN model. The results show that, when only using the 100% paired MS COCO caption dataset (denoted as _w/o unlabeled_), our model already shows improved performance over Self-Retrieval. Moreover, when adding the Unlabeled-COCO images (denoted as _with unlabeled_), our model outperforms Self-Retrieval in all the metrics even without the concept transfer method. The results suggest that our method is advantageous in the semi-supervised setup. To further validate our method in the semi-supervised setup, we compare on different advanced backbone architectures equipped with attention mechanisms [2, 57, 67], a self-attention approach [25], and the recent Transformer based architecture [15], which were originally developed as fully supervised methods. We use the same data setup as above, but we replace the CNN (\(F\)) and LSTM (\(H\)) in our framework with the image encoder and the caption decoder of their image captioning models. Then, these models are trained with our learning method as-is, without the concept transfer method, which consists of alternating between the discriminator update and pseudo-labeling.
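The alternating schedule just described can be summarized by reusing the `adversarial_losses` and `assign_pseudo_labels` sketches above. The loop below is illustrative only: the update frequencies, optimizer handling, and the reuse of paired features for the synthesized-pair terms are simplifications, and the captioning cross-entropy and triplet losses of Eqs. (7)-(8) are omitted for brevity.

```python
def train_semi_supervised(F, G, T_vc, T_cv, D, paired_loader,
                          unpaired_feats, opt_gen, opt_disc, n_epochs=10):
    """Sketch: alternate (i) discriminator and (ii) generator updates on paired
    batches, then (iii) refresh pseudo-labels once per epoch with the current D."""
    for epoch in range(n_epochs):
        for images, captions in paired_loader:
            zx, zy = F(images), G(captions)
            # (i) discriminator step on detached features
            d_loss, _ = adversarial_losses(D, T_vc, T_cv,
                                           zx.detach(), zy.detach(),
                                           zx.detach(), zy.detach())
            opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
            # (ii) generator step (captioning losses would be added here)
            _, g_loss = adversarial_losses(D, T_vc, T_cv, zx, zy, zx, zy)
            opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()
        # (iii) reassign pseudo-labels and confidences for the unpaired data
        zx_u, zy_u = unpaired_feats()
        cap_idx, alpha_x, img_idx, alpha_y = assign_pseudo_labels(D, zx_u, zy_u)
```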
Table III shows that training with the additional Unlabeled-COCO data via _our_ training scheme consistently improves all the baselines in all the metrics. ### _Evaluation on Web-Crawled Data Setup_ To simulate a more realistic scenario involving crawled data from the web, we use the setup suggested by Feng _et al._ [17]. They collect a sentence corpus by crawling the image descriptions from Shutterstock2 as unpaired caption data \(\mathcal{D}_{u}^{y}\), whereby 2.2M sentences are collected. For unpaired image data \(\mathcal{D}_{u}^{x}\), they use only the images from the MS COCO data, while the captions are not used for training. For training our method, we leverage from 0.5% to 1% of the paired MS COCO caption data as our paired dataset \(\mathcal{D}_{p}\), _i.e._, very scarce data with a few hundred to a thousand pairs. This is an extremely challenging scenario, as the paired and unpaired datasets are disjoint with different domains. In other words, there is no guarantee that all unpaired samples have their exact matches in the counterpart dataset. The results are shown in Table IV, including the comparison with Feng _et al._, Guo _et al._ [22], and Zhu _et al._ [79]. Note that all of Feng _et al._, Guo _et al._, and Zhu _et al._ exploit external large-scale data, _i.e._, 36M images of the OpenImages dataset. Up to 0.7% of paired-only data (793 pairs), the baseline shows lower scores in terms of BLEU4 and METEOR than Feng _et al._, while Ours shows comparable or favorable performance in BLEU4, ROUGE-L and METEOR against Feng _et al._, Guo _et al._, and Zhu _et al._ Ours starts to have significantly higher scores in all the metrics from 1% of paired data (1,133 pairs), even without external knowledge. Footnote 2: [https://www.shutterstock.com](https://www.shutterstock.com) Moreover, additionally applying the concept transfer with the additional loss in Eq. (9) by exploiting relational captions [29] (denoted as Ours \(+\) Concept) shows significant performance improvement, especially when the number of paired samples is scarce. Note that although applying the concept transfer to the Paired only baseline also shows noticeable performance improvement, combining both Ours and the concept transfer consistently shows the best performance in all settings. With 0.5% paired data, compared to our model without the concept transfer (Ours), our final model (Ours \(+\) Concept) shows nearly 2 times performance improvement on average; in particular, almost 3 times in terms of the CIDEr metric. ### _Evaluation on Relational Captioning Task_ We apply our semi-supervised learning method to a dense relational object region based image captioning task, _i.e._, relational captioning [29]. For evaluation, we use the Relational Captioning dataset [29], which consists of 85,200 images with 75,456 / 4,871 / 4,873 splits for the train / validation / test sets, respectively. We regard the whole paired Relational Captioning dataset as our paired data \(\mathcal{D}_{p}\), and we utilize the captions from the MS COCO caption dataset as the unpaired caption dataset \(\mathcal{D}_{u}^{y}\). In particular, we define the visual features in the training batch (\(\mathbf{z}^{x}\)) as the region features from individual object regions. As relational captioning is a region based task, we utilize the negative regions with no caption labels as the unpaired image dataset \(\mathcal{D}_{u}^{x}\).
We apply our method to the extended version of MTTSNet (MTTSNet + Relational embedding module; annotated with \(\dagger\)) by Kim _et al._ [32] and compare with the other strong baselines. We follow the evaluation protocols suggested by Kim _et al._ [29]. The relational dense captioning performance on the Relational Captioning dataset is shown in Tables V and VI. In addition, the relational dense captioning performance on the VRD dataset [47] is shown in Table VII. The extended MTTSNet trained with our proposed method shows improvement by a noticeable margin over the MTTSNet counterpart in all the metrics and all the tables. We also show the caption-based image region-pair retrieval results in Fig. 4 as an application. As the Relational Captioning dataset might have limited generalizability, MTTSNet without the proposed framework (denoted as w/o Unpaired) shows several incorrect retrieval results, whereas the extended MTTSNet trained with our framework (denoted as w/ Unpaired) correctly retrieves image region-pairs. Note that, even if the MTTSNet without our framework retrieves correct images, the semantic reasoning in the region-pairs is incorrect when we do not leverage external knowledge. We also show the quantitative results of the retrieval in Table VIII. Similar to the other experiments, the extended MTTSNet with our framework shows favorable image retrieval performance in all the metrics, which demonstrates that our method is beneficial at the application level as well. ### _Analysis on Scarcely-paired COCO_ In order to understand the algorithmic characteristics of our method, we also provide an extensive and comprehensive analysis on our scarcely-paired COCO dataset. For the _scarcely-paired_ COCO setup, we remove the pairing information of the MS COCO caption dataset, while leaving a small fraction of pairs unaltered. We randomly select only \(1\%\) of the total data as the paired training data \(\mathcal{D}_{p}\), and remove the pairing information of the rest to obtain unpaired data \(\mathcal{D}_{u}\). This dataset allows us to evaluate the proposed framework by assessing whether small paired data can lead to plausible pseudo-label assignment, and what performance can be achieved compared to the fully supervised case. We follow the same setting as Vinyals _et al._ [67] unless otherwise mentioned. The performance evaluated on the MS COCO caption test set is reported. Fig. 4: Qualitative results of the caption-based image retrieval on the Relational Captioning dataset [29]. The results are obtained by the relational captioning methods, which improve the caption-based image retrieval in multiple aspects. MTTSNet without the proposed framework (w/o Unpaired) shows a few incorrect retrieval results, whereas the extended MTTSNet trained with our framework (w/ Unpaired) correctly retrieves image region-pairs. In Table IX, we compare our method with several baselines: _Paired Only_, where we train our model only on the small fraction (1%) of the paired data; and _CycleGAN_, where we train our model with the cycle-consistency loss [78]. Additionally, we train variants of our model denoted as _Ours_ (ver1, ver2, and final). _Ours ver1_ is the base model trained with our GAN model (Eq. (3)) that distinguishes real or fake image-caption pairs. Even without pseudo-labeling, GAN training on unpaired image and caption data already helps train the encoder networks better in an unsupervised way, which improves the image captioning performance.
As one could expect, semi-supervising with unpaired samples from the MS COCO data is more helpful for improving the performance than with the unpaired web-crawled samples in Table IV. _Ours ver2_ adds training with pseudo-labeled unpaired data using Eq. (7) to _Ours ver1_, while setting the confidence scores \(\alpha^{x}\)=\(\alpha^{y}\)=\(1\) for all training samples. _Ours (final)_ adds the noise handling technique to _Ours ver2_, which is done by re-weighting each sample in the loss (Eq. (7)) with the confidence scores \(\alpha^{x}\) and \(\alpha^{y}\). We present the accuracy of the fully supervised (_Fully paired_) model using 100% of the MS COCO caption training data for reference. As shown in Table IX, in a scarce data regime, utilizing the unpaired data improves the captioning performance in terms of all metrics by noticeable margins. Also, our models show favorable performance compared to the CycleGAN model in all the metrics. Our final model with the pseudo-labels and the noise handling achieves the best performance in all metrics among the baselines. In addition, applying our concept transfer by utilizing relational captions [29] as external knowledge (Ours+Concept) further improves the image captioning performance by noticeable margins. Note that the CIDEr score of our final model with the concept transfer is almost 2 times that of the Paired only baseline. Also, note that applying our concept transfer on the Paired only baseline shows lower improvement than that of Ours, indicating that the concept transfer is most helpful when combined with our semi-supervised learning framework and that our integration is non-trivial. We also compare with the recent unpaired image captioning methods [10, 17, 20, 21, 40, 79] in Table IX. In Gu _et al._ [20], the AIC-ICC image-to-Chinese dataset [70] is used as unpaired images \(\mathcal{D}_{u}^{x}\) and the captions from the MS COCO caption dataset are used as unpaired captions \(\mathcal{D}_{u}^{y}\). Note that our dataset setup is unfavorable to our method, in that Gu _et al._ [20] use a far larger amount of additional labeled data (10M Chinese-English parallel sentences of the AIC-MT dataset [70]), Feng _et al._ and Laina _et al._ [40] use 36M samples of the additional OpenImages dataset, and Gu _et al._ [21] use scene graphs from the Visual Genome dataset [36] (108k). In contrast, our model only uses a small amount of paired samples (1k) and 122k unpaired samples. Despite far lower reliance on paired data, our final model shows favorable performance against the recent unpaired image captioners in all the metrics. Next, we study the effects of other ratios of paired data used for training (in Figs. 5 and 6). We study our final model against our _Paired Only_ baseline according to varying amounts of paired training data in Fig. 5, so that we can see how much information can be gained from the unpaired data, similar to the active learning works [11, 28, 60]. From 100% to 10%, as the amount of paired samples decreases, the fluency and the accuracy of the descriptions get worse. In particular, we observe that most of the captions generated by the _Paired Only_ baseline trained with 10% of paired data (11,329 pairs) show erroneous grammatical structures. In contrast, by leveraging unpaired data, our method generates more fluent and accurate captions, compared to _Paired Only_ trained on the same amount of paired data.
Note that our model trained with 60% of paired data (67,972 pairs) already achieves similar performance to the _Paired Only_ baseline trained with fully paired data (113,287 pairs). This signifies that our method can save _nearly half_ of the human labeling effort used to construct a dataset. In Fig. 6, the _Paired Only_ baseline trained with 1% paired data produces erroneous captions, and the baseline with 10% paired data starts to produce plausible captions; the latter is based on \(10\times\) more paired samples, compared to our model that uses only 1% of them. Fig. 5: Performance w.r.t. the amount of paired data for training. "Baseline" denotes our _Paired Only_ baseline, "Ours" is our final model, and "Reference" is _Paired Only_ trained with the full paired data. We highlight that, in the two examples on the top row of Fig. 6, our model generates more accurate captions than the _Paired Only_ baseline trained on the 100% paired data ("baseball" to "Frisbee" on the top-left, and "man" to "woman" on the top-right). This suggests that unpaired data with our method effectively boosts the performance, especially when paired data is scarce. In order to demonstrate the effectiveness of our pseudo-label assignment, we show the pseudo-labels (captions) assigned to unlabeled images from the Unlabeled-COCO images in Fig. 7. As shown in the figure, despite not knowing the real pairs of these images, reasonable pseudo-labels are assigned by our model. Note that even though there is no ground truth caption for unlabeled images in the searching pool, the model can find the most likely (semantically correlated) image-caption pair for the given images. Fig. 8 highlights interesting results of our caption generation, where the results contain words that do not exist in the paired data of the scarcely-paired COCO dataset. It shows that the pseudo-label assignment during training enables the model to properly learn the semantic meaning of words, such as "growing" and "herded," which exist in the unpaired caption dataset but not in the paired dataset. These examples suggest that our method is capable of inferring the semantic meaning of unpaired words to some extent, which could not have been learned with only a little paired data. This evidences that our method is capable of aligning abstract semantic spaces between the two modalities, _i.e._, visual data and text. **Scarcely-Paired Setup with Different Domains.** Training with the unpaired data from the MS COCO dataset would be different from training with unpaired data retrieved from the web. Thus, we also test with data from a similar but different domain than the MS COCO dataset. We use the Flickr30k dataset [74] as another unpaired image-caption dataset to see whether a small amount of paired information allows the model to learn matches among unpaired data in a different domain as well. We add images from the Flickr30k dataset, denoted as Flickr, and also add the complement set of the 1% paired data of the scarcely-paired COCO, denoted as Unpaired. Table X still shows consistent improvement even if the MS COCO captions and the independent corpus are mixed. Given all the results, we note that a totally randomly sampled 1% of paired data (very scarce) makes it very challenging for any model to learn to match general image-caption pairs out of the entire set (Unpaired-COCO + Flickr images). In this sense, our improvements shown in this experiment are non-trivial.
**Scarcely-Paired Setup with Unlabeled-COCO Images.** Both images and captions in the scarcely-paired data come from the original MS COCO caption dataset. While this is still suitable for analyzing and validating the evaluation, it has a gap from unpaired data in the wild, because we can find the right caption within the list of unpaired captions in this case. In practice, the right caption for an image may not even be available in the list of unpaired captions. To simulate such practical scenarios with real unpaired data, we run a test with a similar setup to Table IX, but replace the unpaired images \(\mathcal{D}_{u}^{x}\) of the scarcely-paired COCO dataset with Unlabeled-COCO images, so that there are no inherent matches between the unpaired images \(\mathcal{D}_{u}^{x}\) and captions \(\mathcal{D}_{u}^{y}\). The results are presented in Table XI (refer to Table IX for the methods). The table shows the same trend as Table IX, which shows that our method is generalizable to practical scenarios. Fig. 6: Sampled qualitative results of our model. We compare with the baseline models trained only on \(N\%\) of paired samples out of the full MS COCO caption dataset. Despite the use of only 1% paired data, our model generates reasonable captions comparable to those of the baseline models trained with more data (10% and above). Fig. 7: Examples of the pseudo-labels (captions) assigned to the unpaired images. Our model is able to sufficiently and plausibly assign image-caption pairs through the proposed adversarial training. Fig. 8: Generated caption samples containing words that do not exist in the paired dataset \(\mathcal{D}_{p}\). The novel words that are not in \(\mathcal{D}_{p}\) but in \(\mathcal{D}_{u}^{y}\) are highlighted in bold. ## V Conclusion We introduce a method to train an image captioning model with large-scale unpaired image and caption data upon typical paired data. Our framework achieves favorable performance compared to various methods and setups. Unpaired captions and images can be easily collected from the web. Our approach can also facilitate application-specific captioning models, where labeled data is scarce. Before concluding our work, we discuss potential directions to further improve our method. While the theoretical analysis does not directly indicate how to improve the method, an interesting implication of the analysis is that, although our problem setting and modeling are notably different from GANs, the result boils down to a form similar to that of GANs. This suggests that one may exploit this analogy between our method and GANs to further improve our method, which could be a crucial ingredient to stimulate creative follow-up research.
2310.18177
Reinterpreting Fundamental Plane Correlations with Machine Learning
This work explores the relationships between galaxy sizes and related observable galaxy properties in a large volume cosmological hydrodynamical simulation. The objectives of this work are both to develop a better understanding of the correlations between galaxy properties and the influence of environment on galaxy physics, and to build an improved model for galaxy sizes, building off of the {\it fundamental plane}. With an accurate intrinsic galaxy size predictor, the residuals in the observed galaxy sizes can potentially be used for multiple cosmological applications, including making measurements of galaxy velocities in spectroscopic samples, estimating the rate of cosmic expansion, and constraining the uncertainties in the photometric redshifts of galaxies. Using projection pursuit regression, the model accurately predicts intrinsic galaxy sizes and has residuals with limited correlation with galaxy properties. The model decreases the spatial correlation of galaxy size residuals by a factor of $\sim$ 5 at small scales compared to the baseline correlation when the mean size is used as a predictor.
Chad Schafer, Sukhdeep Singh, Yesukhei Jagvaral
2023-10-27T14:44:06Z
http://arxiv.org/abs/2310.18177v1
# Reinterpreting Fundamental Plane Correlations with Machine Learning ###### Abstract This work explores the relationships between galaxy sizes and related observable galaxy properties in a large volume cosmological hydrodynamical simulation. The objectives of this work are both to develop a better understanding of the correlations between galaxy properties and the influence of environment on galaxy physics, and to build an improved model for galaxy sizes, building off of the _fundamental plane_. With an accurate intrinsic galaxy size predictor, the residuals in the observed galaxy sizes can potentially be used for multiple cosmological applications, including making measurements of galaxy velocities in spectroscopic samples, estimating the rate of cosmic expansion, and constraining the uncertainties in the photometric redshifts of galaxies. Using projection pursuit regression, the model accurately predicts intrinsic galaxy sizes and has residuals with limited correlation with galaxy properties. The model decreases the spatial correlation of galaxy size residuals by a factor of \(\sim 5\) at small scales compared to the baseline correlation when the mean size is used as a predictor. keywords: cosmology: observations -- large-scale structure of Universe -- gravitational lensing: weak -- methods: statistical ## 1 Introduction The difference between the intrinsic and observed (or inferred) size of a galaxy is influenced by several physical processes, including gravitational lensing (Bertin & Lombardi, 2006), peculiar galaxy velocities (Strauss & Willick, 1995), Doppler magnification (Bonvin et al., 2017) and cosmic expansion (Blakeslee et al., 2002). With a sufficiently accurate predictor of intrinsic galaxy sizes, it is possible to construct estimators to study these effects using the size residuals, i.e., the difference between observed and predicted intrinsic size. For example, the anisotropies (the dipole) in the galaxy size cross correlations are sensitive to the galaxy velocities; the cross correlations of galaxy size with foreground galaxies are sensitive to weak gravitational lensing caused by the foreground galaxies (galaxy-galaxy lensing cross correlations); and the relation between galaxy sizes and their redshifts can be used to test the redshift-distance relation and hence models of cosmological expansion. Note that these estimators use the size information differently, and hence different measurements can be carried out independently with only weak correlations/contamination between different effects. Such measurements also hold promise to constrain the uncertainties in the photometric redshifts of galaxies by exploiting the dependence of inferred galaxy size on the estimated distance to the galaxy. The ratio of galaxy-galaxy lensing cross correlations using the galaxy size residuals and galaxy shear is sensitive to the uncertainties in the galaxy redshift estimates, i.e. \[\frac{P_{g\lambda}}{P_{g\gamma}}-1\ \propto\ \delta\log D(z_{\mbox{\tiny source}}) \tag{1}\] where \(P\) is the cross power spectra (or correlation function), \(g\) refers to the foreground lens galaxy, \(\lambda\) is the estimated size residual, \(\gamma\) is the galaxy shear, and \(\delta\log D(z_{\mbox{\tiny source}})\) is the error in the estimated distance to the source galaxy (the galaxy for which we measure \(\lambda\) and \(\gamma\)) due to uncertainties in the photometric redshifts.
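To make the ratio test of Eq. (1) concrete, the following is a minimal sketch (Python, with illustrative names; the proportionality constant relating the statistic to \(\delta\log D\) is not fixed by Eq. (1)) of the band-averaged ratio statistic computed from measured cross power spectra:

```python
import numpy as np

def distance_error_proxy(P_g_lambda, P_g_gamma, weights=None):
    """Band-averaged ratio statistic of Eq. (1); deviations from zero trace the
    error in the assumed source distance, up to a proportionality constant."""
    ratio = np.asarray(P_g_lambda) / np.asarray(P_g_gamma) - 1.0
    return np.average(ratio, weights=weights)
```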
This estimation is similar to the consistency tests that have been done between galaxy shear (spin-2) and CMB convergence maps (spin-0), e.g. Singh et al. (2017), and estimators developed for such studies can be directly applied to comparing lensing measurements using galaxy shear and galaxy size (spin-0). Further, unlike the case of CMB lensing, since the size and shear are measured on the same set of galaxies, the ratio is independent of the galaxy-matter power spectrum, which is the primary observable in the galaxy-lensing cross correlations; i.e., constraints on redshift will be almost independent of the cosmological information. Independence from the galaxy-matter power spectrum implies that the measurement is also independent of the cosmic variance and will only depend on the measurement noise and intrinsic scatter in the size and shear measurements. Since photometric redshift uncertainties are one of the limiting systematics when analyzing data from photometric galaxy surveys, including galaxy size estimates can potentially lead to significant improvements in cosmological inferences, beyond a simple improvement in statistical errors. An accurate and precise predictor of intrinsic galaxy size minimizes the scatter in the size residuals, which is the primary source of noise in cosmological measurements. One such size predictor is the fundamental plane (FP) of galaxies (Dressler et al., 1987; Djorgovski and Davis, 1987). The FP is the relation between the size \(R_{0}\), surface brightness \(I_{0}\), and velocity dispersion \(\sigma_{0}\) of elliptical galaxies, given by \[\log R_{0}=a\log\sigma_{0}+b\log I_{0}+c+\sum_{i=1}^{N_{z}}d_{i}z_{i} \tag{2}\] where the redshift-dependent terms \(d_{i}z_{i}\) were introduced in Joachimi et al. (2015) to account for the redshift evolution of the plane (see also discussion in Singh et al., 2021). While studied extensively in the literature in the context of galaxy physics, a careful study of the FP in the context of cosmological measurements has only recently gained traction (e.g. Joachimi et al., 2015; Saulder et al., 2019; Singh et al., 2021), and the efficacy of the FP for cosmological analysis is not well established. Singh et al. (2021) performed a detailed study of the FP residuals and the galaxy properties involved in the FP definition. The FP residuals were found to be strongly correlated with the galaxy properties; e.g., the mean of the FP residuals increases with galaxy luminosity. These correlations suggest that the scatter over the FP is not strictly random. Furthermore, the FP residuals are correlated with the galaxy density field, an effect similar to the intrinsic alignments of galaxy shapes. This effect can also be explained by the dependence of the galaxy properties on their environment. Brighter and larger galaxies tend to reside in over-dense regions, though Singh et al. (2021) observed that these galaxies have lower surface brightness. Correlations of the FP with these properties explain the correlations of FP residuals with the galaxy density field. For cosmological applications, it is important to understand these correlations of galaxy properties in order to improve the galaxy size predictors and avoid biases in the cosmological inferences. The physical origins of these correlations are still not well understood, and a better understanding of these effects is important for improving models of galaxy physics.
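As a point of reference for the machine-learning models developed below, the classic FP of Eq. (2) can be fit by ordinary least squares. The following is a minimal sketch (Python; the array names are illustrative, and zbin holds integer redshift-bin labels standing in for the \(z_{i}\) terms):

```python
import numpy as np

def fit_fundamental_plane(logR, logSigma, logI, zbin):
    """Least-squares fit of Eq. (2) and the resulting FP residuals."""
    n = len(logR)
    Z = np.zeros((n, zbin.max() + 1))
    Z[np.arange(n), zbin] = 1.0          # redshift-bin indicator columns
    # Drop one indicator column to avoid collinearity with the intercept c.
    X = np.column_stack([logSigma, logI, np.ones(n), Z[:, 1:]])
    coef, *_ = np.linalg.lstsq(X, logR, rcond=None)
    residuals = logR - X @ coef          # the FP residuals studied in the text
    return coef, residuals
```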
This work explores such correlations using a state-of-the-art, large cosmological volume hydrodynamical simulation and performs a more detailed study to understand the correlations between galaxy sizes and several other galaxy properties. The use of a simulation model (Illustris TNG, described below in Section 2.1) for this purpose enables a more thorough exploration of correlations with a wider range of galaxy properties, measured with minimal error. Of course, the ultimate objective is to use these models with observed data, and a focus of this work is to develop novel methods of analysis which will enable this by using the high-resolution information available from the simulation model to guide the fitted model. Meeting the above objectives motivates the development of novel analysis methodology for incorporating the rich structural information obtained from large simulation models, and this is also a focus of this work. A fundamental question is the following: Suppose that some feature of the galaxy or its environment (e.g., a measure of 3D density) is known to be useful in predicting intrinsic galaxy size, but that such information is only available in a simulation model. Is there a way to exploit the relationship between 3D density and other **observable** galaxy properties to better predict galaxy size? One may believe that sophisticated supervised learning methods should be capable of discovering the optimal model for the relationship between observable properties and galaxy size, but the complexity of this model may make it difficult to ascertain, and difficult to interpret. We place emphasis here on an approach that balances interpretability and predictive power. The simulation model provides a useful framework around which models can be built that are not of excessive complexity, but achieve strong prediction performance. The remainder of this paper is organized as follows: Section 2 describes the simulation model utilized, and the galaxy and environment features derived from it. Section 3 presents the statistical tools behind the model and its assessment. Section 4 describes the primary model fit in this work. Section 5 discusses the results and their implications for future exploration. ## 2 Data ### The cosmological simulation Illustris TNG (Nelson et al., 2018; Pillepich et al., 2018; Springel et al., 2018; Naiman et al., 2018; Marinacci et al., 2018; Nelson et al., 2019) comprises cosmological hydrodynamical simulations that were run with the moving-mesh code Arepo (Springel, 2010). The TNG100 simulation at \(z=0\) was chosen for this study since the simulation exhibits color bimodality that agrees with SDSS data for intermediate mass galaxies (Nelson et al., 2017), as well as consistent correlations with other galaxy properties. Additionally, TNG100 provides a good balance between high resolution and a large cosmological volume. The box of 75 Mpc/h (\(\sim 100\) Mpc) has \(2\times 1820^{3}\) resolution elements with a gravitational softening length of 0.7 kpc/h for dark matter and star particles. The masses of dark matter and star particles are \(7.46\times 10^{6}M_{\odot}\) and \(1.39\times 10^{6}M_{\odot}\), respectively. Additionally, the simulation incorporates various physical processes for galactic evolution: radiative gas cooling and heating; star formation in the ISM; stellar evolution with metal enrichment from supernovae; stellar, AGN and black hole feedback; and formation and accretion of supermassive black holes (Pillepich et al., 2018; Weinberger et al., 2017).
The dark matter halos were identified using the friends-of-friends (FoF) algorithm (Davis et al., 1985), and then the subhalos were identified using the SUBFIND algorithm (Springel et al., 2001). We employ a minimum stellar mass cut of \(\log_{10}(M_{\star}/M_{\sun})=9\), roughly corresponding to \(10^{3}\) star particles (Tenneti et al., 2016; Du et al., 2020). ### Galaxy and Environment Properties This section characterizes the source of the galaxy properties used in the predictive models. Some standard quantities utilized, such as _size (half-mass radius)_ and _star formation rate_, come directly from the simulation catalog; for more information on these, we refer the interested reader to the simulation model website1. The _velocity dispersion_ of each individual galaxy was calculated using the velocities of all star particles in the galaxy. Footnote 1: [https://www.tng-project.org/](https://www.tng-project.org/) _Density Measures._ In the models below, both 2D and 3D galaxy density information is utilized. To calculate the 2D density, galaxy counts are tabulated on a 1000 by 1000 grid, and then smoothed using a Gaussian kernel with a scale of 0.5 Mpc/h. This density, evaluated at the galaxy positions, is stored as delta_smooth_R. Similarly, for the 3D density, galaxy counts are tabulated on a \(750\times 750\times 750\) grid, and then smoothed using a Gaussian kernel with scales of 0.5, 1.0, 2.0, and 5.0 Mpc/h. This generates measures of density, at varying scales, for the environment local to each galaxy. _Galaxy Morphological Classification._ Galaxy morphology is characterized using the probabilistic dynamical model of Jagvaral et al. (2021). The model makes two physically motivated assumptions. First, it is assumed that the angular momentum of disc stars is approximately aligned with the total angular momentum of the galaxy, while the angular momentum of bulge stars is randomly oriented. Second, it is assumed that the orbits of disc stars are approximately circular, while the orbits of bulge stars are elongated or circular. In order to quantitatively model the aforementioned assumptions, define the following: * \(j_{\rm r}\) is the specific angular momentum of the star particle normalized by that of a circular orbit at the same radius, computed from the total mass (stars, gas, dark matter) contained within that radius. * \(\cos\alpha\) is the cosine of the angle between the angular momentum vector of the star particle and the total angular momentum of the galaxy. Next consider the following model for the distribution of star particles: \[p_{\rm star}(j_{\rm r},\cos\alpha)\equiv\] \[\quad(1-f^{\rm disc})\,p_{\rm bulge}(j_{\rm r},\cos\alpha)+f^{\rm disc}\,p_{\rm disc}(j_{\rm r},\cos\alpha). \tag{3}\] Here, \(p_{\rm bulge}\) and \(p_{\rm disc}\) are the densities (both normalized to integrate to 1) reflecting the probability that a star at a given point in this 2D space belongs to the bulge or to the disc. More details and further investigations of the model can be found in Jagvaral et al. (2021). Finally, mc_disk, or the galaxy disk fraction, is calculated by adding up the mass of all of the star particles that were classified as disc stars and dividing by the total mass. ## 3 Methods As stated above, this work is focused not only on developing improved models for predicting galaxy size from measurable quantities, but also on providing better understanding of the relationships between these properties. Hence, a methodological focus of this work is to utilize approaches that balance modelling accuracy with scientific interpretability.
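Before turning to the regression methodology, a minimal sketch of the smoothed density grids of Section 2.2 may be useful (Python; grid sizes follow the text, function names are illustrative, and a coarser grid is advisable for testing given the memory cost of a \(750^{3}\) array):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_density_3d(positions, box=75.0, ngrid=750, scale=0.5):
    """Gaussian-smoothed 3D galaxy counts, evaluated back at galaxy positions.
    positions: (N, 3) comoving coordinates in Mpc/h; scale in Mpc/h."""
    edges = np.linspace(0.0, box, ngrid + 1)
    counts, _ = np.histogramdd(positions, bins=(edges, edges, edges))
    cell = box / ngrid                               # 0.1 Mpc/h for ngrid=750
    smooth = gaussian_filter(counts, sigma=scale / cell, mode="wrap")  # periodic box
    idx = np.clip((positions / cell).astype(int), 0, ngrid - 1)
    return smooth[idx[:, 0], idx[:, 1], idx[:, 2]]
```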
This section will discuss the use of projection pursuit regression as an alternative to neural networks and other machine learning approaches. Ultimately, the residuals from these fits must be analyzed to determine if there are remaining correlations with intrinsic galaxy properties, hence this section will also discuss methods for such analyses. The _projection pursuit regression (PPR) model_ (Friedman et al., 1981) is characterized as follows. The response variable \(Y\) is modelled as an additive combination of \(m\) different nonlinearly-transformed projections of the predictor vector \(\mathbf{x}\): \[Y_{i}=\sum_{j=1}^{m}\beta_{j}f_{j}\!\left(\boldsymbol{\alpha}_{j}^{T}\mathbf{x}_{i}\right)+\epsilon_{i}. \tag{4}\] The \(\epsilon_{i}\) are assumed to be mean zero, uncorrelated _irreducible errors_, i.e. scatter around the model fit. Here, the \(\beta_{j}\), the \(\boldsymbol{\alpha}_{j}\), and the \(f_{j}\) are _estimated_ from the available training sample. The \(\boldsymbol{\alpha}_{j}\) represent the \(m\) different projections of the original predictors \(\mathbf{x}_{i}\) that are utilized by the model. This approach avoids the _curse of dimensionality_ by only considering an additive combination of what could be viewed as _designed features_ \(f_{j}\!\left(\boldsymbol{\alpha}_{j}^{T}\mathbf{x}_{i}\right)\) for \(j=1,2,\ldots,m\). The model has the flexibility to learn the linear combinations of the predictors \(\boldsymbol{\alpha}_{j}\), in tandem with the nonlinear transformations \(f_{j}\), which are the most useful for predicting the response. The \(f_{j}\) will typically be estimated via standard non-parametric regression approaches such as with a smoothing spline (Reinsch, 1967). Such approaches are well-suited to one-dimensional regression problems such as this since they can flexibly fit a wide range of relationships (here, between \(\boldsymbol{\alpha}_{j}^{T}\mathbf{x}_{i}\) and the response). Such fits are smooth, but allow the data to dictate the shape of the fit, i.e., no parametric form is assumed. It is instructive to contrast the projection pursuit model with a _fully-connected single layer neural network_ model \[Y_{i}=\sum_{j=1}^{m}\beta_{j}\phi\!\left(\mathbf{w}_{j}^{T}\mathbf{x}_{i}\right)+\epsilon_{i}, \tag{5}\] wherein the user fixes an _activation function_ \(\phi\), a simple nonlinear transformation which is applied to each (of typically many) linear combinations of the predictor vector. The parameters learned from the data are solely the values of the weights \(\mathbf{w}_{j}\) applied in these linear combinations. Projection pursuit is able to use a smaller \(m\) by exploiting the flexibility in the tailored, nonlinear transformation \(f_{j}\) that is applied to each. This leads to improvements in interpretability. The model is fit using a two-level iterative approach. An outer loop consists of running over \(k=1,2,\ldots,m\). For fixed \(k\), the residuals from the fit on the other \(m-1\) components are calculated: \[r_{i}=Y_{i}-\left[\sum_{j\neq k}\widehat{\beta}_{j}\widehat{f}_{j}\!\!\left(\widehat{\boldsymbol{\alpha}}_{j}^{T}\mathbf{x}_{i}\right)\right] \tag{6}\] and then \(\beta_{k}\), \(f_{k}\), and \(\boldsymbol{\alpha}_{k}\) are found such that \[r_{i}\approx\beta_{k}f_{k}\!\!\left(\boldsymbol{\alpha}_{k}^{T}\mathbf{x}_{i}\right). \tag{7}\] Fitting this relationship between the \(r_{i}\) and \((\beta_{k},f_{k},\boldsymbol{\alpha}_{k})\) uses an inner loop which alternates between estimating \(\beta_{k}f_{k}\) and \(\boldsymbol{\alpha}_{k}\).
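A minimal sketch of this alternating scheme, for the single-ridge case \(m=1\) (Python; this illustrates the idea with a smoothing spline for \(f\) and a Gauss-Newton update of \(\boldsymbol{\alpha}\), and is not the implementation used for the fits reported here):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def single_index_fit(X, y, n_iter=20, seed=0):
    """Alternate between (i) a smoothing-spline fit of f given t = X @ alpha
    and (ii) a Gauss-Newton least-squares update of alpha given f."""
    rng = np.random.default_rng(seed)
    alpha = rng.normal(size=X.shape[1])
    alpha /= np.linalg.norm(alpha)
    for _ in range(n_iter):
        t = X @ alpha
        ts, keep = np.unique(t, return_index=True)   # spline needs distinct, sorted knots
        f = UnivariateSpline(ts, y[keep], k=3)
        # Linearize y ~ f(t) + f'(t) * X @ (alpha_new - alpha); solve for the step.
        J = f.derivative()(t)[:, None] * X
        step, *_ = np.linalg.lstsq(J, y - f(t), rcond=None)
        alpha = alpha + step
        alpha /= np.linalg.norm(alpha)               # keep the direction unit-norm
    return alpha, f
```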
Heuristically, the goal at this inner-loop step is to determine how best to fit the portion of the response that is unexplained by the other \(m-1\) terms in the model. With each update to a \((\beta_{k},f_{k},\boldsymbol{\alpha}_{k})\), the other components are eventually reconsidered as the outer loop is repeated until convergence is reached. This procedure is referred to as _backfitting_ (Breiman and Friedman, 1985). **Comment: Implementation.** In this work, models are fit using the ProjectionPursuitRegressor estimator, which follows the Scikit-learn interface (Pedregosa et al., 2011). _Smoothing splines_ (Wahba, 1990) are used in each one-dimensional nonparametric fit \(f_{j}\), of degree either two or three. (Initial models are fit using cubic splines, but in the final model the degree will be chosen as part of the procedure described below.) The seemingly-redundant parameters \(\beta_{j}\) are included in the model reflecting the custom of ProjectionPursuitRegressor and other software. This was not a part of the original formulation of Friedman et al. (1981) but allows for extra generality in simultaneous fitting of multiple response vectors using the same collection of \(m\) ridge functions. ### Preliminary Models As an initial demonstration, a model is fit with log radius as the response, and log velocity dispersion and i-band magnitude as predictors, to mimic the classic fundamental plane model. Here, \(m=1\), a special case of projection pursuit called _single index regression_. Figure 1 illustrates the results. The left panel shows the weight placed on each of the two predictors to form the first projection, i.e., \(\widehat{\alpha}_{11}=0.69\) and \(\widehat{\alpha}_{12}=0.72\). The horizontal axis in the right panel shows the value of this projection for all observations in the training set. The estimated form for \(\beta_{1}f_{1}\) is shown as the solid curve on the panel. This is fit using a cubic spline. The quality of this simple fit is clearly poor (with an RMSE on a test set of \(0.411\)), with deficiencies partly due to the range of different galaxy types being fit. A primary motivation of this work is to build models for intrinsic galaxy size that can be used in cases where only photometric observations are available. Hence, for comparison, a model is next fit that includes the \(griz\) magnitudes as features, along with mc_disk and delta_smooth_R, described above in Section 2.2. Each feature is individually shifted and scaled to have mean zero and standard deviation one prior to the fit. When \(m=1\), the RMSE on a test set is \(0.290\), but this improves to \(0.257\) when \(m=4\). The results are shown in Figure 2. It is notable that the model appears to place little weight on mc_disk and delta_smooth_R, but, in fact, excluding these two predictors increases the test set RMSE to \(0.272\). **Comment: Splitting the Data.** Throughout this work, when data are divided into training, test, or other sets for the purposes of model fitting and validation, splits are done _by pixels_ formed in a \(5\times 5\times 5\) grid that covers the full simulation box of \(75\) Mpc/h. This is done to mitigate issues that could result from galaxies in close proximity sharing important physical information, and hence inappropriately influencing the quality of the model fit. ### Residual Analysis The RMSE values reported for each model only partially reveal important information regarding the quality of the fit, because minimizing prediction errors is not the primary objective of this work.
To serve the cosmological motivations, the ideal model would leave no remaining relationship between the residuals from the fit and any properties of the galaxy and its environment. In other words, the model would predict intrinsic size, and the difference between the measured size and the fit size would encode useful information regarding gravitational lensing, peculiar galaxy velocities, and so forth. To this end, study of the properties of the model residuals is crucial. One step in this direction is to plot residuals versus various galaxy properties, and look for patterns and/or trends. Figure 3 shows the result of comparing galaxy mass with the residuals from the model fit above to photometry-based properties. The right panel shows the clear evolution in residuals with galaxy mass, where the blue curve shows mean residuals in each of \(20\) bins. Error bars shown are calculated on each of these means using a jackknife procedure, described below. Similar comparisons can be made with other features, including those which are included in the model. This is an important step in revealing deficiencies in the model fit. Figure 4 assesses the degree of spatial correlation in the residuals. Such spatial correlations have been analyzed in previous cosmological studies using the fundamental plane of galaxies (Joachimi et al., 2015; Singh et al., 2021). These correlations exist because galaxy formation and evolution involve complex physical processes that depend not only on the galaxy itself but also its environment. They contaminate the estimators used in the cosmological measurements of interest based on size residuals, and it is desirable to null them before cosmological analysis. Again, in an ideal model there would be no remaining spatial correlation in the residuals from the model fit. However, in Figure 4 we see strong correlations between the size residuals and the surrounding galaxy density field. Such a signal is not totally unexpected, as the size residuals are a non-linear combination of galaxy properties which are correlated with the environment (see Singh et al., 2021, for a detailed explanation and analysis). This correlation of galaxy sizes with the local density field is very similar to the intrinsic alignments effect for galaxy shear. Unfortunately, this implies that the current size estimators cannot be used to perform cosmological measurements using auto-correlations, but they are suitable for cross correlations, similar to galaxy-shear cross correlations. **Comment: Jackknife Errors.** The classic _jackknife_ (Efron & Stein, 1981) approach to calculating errors on estimators \(\widehat{\theta}\) consists of repeatedly re-calculating the estimate, each time leaving out one observation from the sample. If the estimate when observation \(i\) is removed is denoted as \(\widehat{\theta}_{(-i)}\), then \[\widehat{\mathrm{Var}}_{\mathrm{jack}}(\widehat{\theta})=\left(\frac{n-1}{n}\right)\sum_{i=1}^{n}\left(\widehat{\theta}_{(-i)}-\widehat{\theta}\right)^{2}\] can be shown to be a reliable estimator of the true variance of \(\widehat{\theta}\), in the case where the sample is drawn independently and identically distributed from some population. In this work, this assumption is clearly invalid due to dependencies present among nearby galaxies. Hence, the jackknife procedure is adapted to one in which the 3D simulation box is divided into a grid of \(7^{3}\) pixels, with each pixel left out in one iteration of the procedure.
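A minimal sketch of this block-jackknife (Python; per-galaxy values and pixel labels are assumed as inputs, and names are illustrative):

```python
import numpy as np

def block_jackknife(values, pixel_id, estimator=np.mean):
    """Delete-one-block jackknife: each spatial pixel is removed in turn and the
    estimator recomputed, mirroring the variance formula above with observations
    replaced by blocks."""
    pixels = np.unique(pixel_id)
    n = len(pixels)
    theta_hat = estimator(values)
    theta_del = np.array([estimator(values[pixel_id != p]) for p in pixels])
    var = (n - 1) / n * np.sum((theta_del - theta_hat) ** 2)
    return theta_hat, np.sqrt(var)
```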
Leaving out whole spatial blocks reduces the bias that would result from taking the standard jackknife approach of leaving out one observation (galaxy) at a time. ### Kernel PCA-Based Enhancement of PPR As described above, the PPR model is built upon linear combinations of the supplied collection of features, chosen to be optimal for predicting the targeted response variable. Hence, this model can be viewed as a _supervised_ companion to principal components analysis (PCA), wherein a new representation of data vectors is constructed in an _unsupervised_ manner, with a goal of finding linear combinations with maximal variance. This is motivated by the heuristic that directions in the original space along which there is the greatest variability are the projections that encode the most useful information. Thus, standard PCA used in combination with projection pursuit regression would be redundant, as nothing could be gained by considering a simple rotation of the features in Euclidean space. There exists, however, a nonlinear extension of PCA, called _Kernel PCA_ (Scholkopf et al., 1998), which provides a potentially useful enhancement to the space of projections under consideration by the PPR model. It is instructive to first consider the math behind standard (linear) PCA. For additional detail, see Hastie et al. (2009). Let \(\mathbf{X}\) denote the \(n\) by \(p\) matrix whose rows are the individual feature vectors. Assume that the variables have been mean-centered so that each column of \(\mathbf{X}\) has sample mean zero, hence \(\mathbf{X^{\prime}X}/n\) is the sample covariance matrix for these data. Then, the principal components are found as the eigenvectors of \(\mathbf{X^{\prime}X}/n\), or, equivalently, of \(\mathbf{X^{\prime}X}\). Denote these eigenvectors as \(\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{p}\) and the corresponding eigenvalues with \(\lambda_{i}\). PCA can be interpreted as creating a new coordinate system, or basis, within which a data vector can be represented, so that \(\mathbf{s}_{i}=\mathbf{X}\mathbf{v}_{i}\) provides the positions of all \(n\) observations along the \(i^{th}\) axis in the new coordinate system. It follows that \[\mathbf{X^{\prime}X}\mathbf{v}_{i}=\lambda_{i}\mathbf{v}_{i}\,,\;\;\mathbf{XX^{\prime}X}\mathbf{v}_{i}=\lambda_{i}\mathbf{X}\mathbf{v}_{i}\;\;\text{and}\;\;\mathbf{XX^{\prime}s}_{i}=\lambda_{i}\mathbf{s}_{i}. \tag{8}\] Since \(\|\mathbf{s}_{i}\|^{2}=\lambda_{i}\), the standardized versions \(\tilde{\mathbf{s}}_{i}=\mathbf{s}_{i}/\sqrt{\lambda_{i}}\) will be orthonormal and hence are the eigenvectors of the _Gram matrix_ \(\mathbf{XX^{\prime}}\). The conclusion is that the positions in the new coordinate system can be found directly from the eigenvectors of the Gram matrix. In Kernel PCA, this form of the Gram matrix is generalized such that the \((i,j)\) entry is \(K(\mathbf{x}_{i},\mathbf{x}_{j})\), where \(K\) is a user-chosen _kernel function_ which measures similarity between vectors. Common choices for the kernel function include the _radial basis function kernel_ \[K(\mathbf{x},\mathbf{y})=\exp\bigl{(}-\gamma|\mathbf{x}-\mathbf{y}|^{2}\bigr{)} \tag{9}\] and the _sigmoid kernel_ \[K(\mathbf{x},\mathbf{y})=\tanh\bigl{(}\gamma\mathbf{x^{\prime}y}+c\bigr{)}\,. \tag{10}\] Both of these examples illustrate the important role of tuning parameters in the choice of a kernel, e.g., through the specification of \(\gamma\). Kernel PCA also maps the observations into a new space, with the hope that a useful lower-dimensional representation will result.
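A minimal sketch of Kernel PCA via the Gram matrix, using the sigmoid kernel of Eq. (10) (Python; centering of the kernel in feature space is omitted for brevity, and negative eigenvalues of the non-positive-definite sigmoid kernel are clipped):

```python
import numpy as np

def sigmoid_kernel(X, Y, gamma=1e-3, c=1.0):
    """Sigmoid kernel of Eq. (10), evaluated between all rows of X and Y."""
    return np.tanh(gamma * X @ Y.T + c)

def kernel_pca_scores(X, n_components=10, gamma=1e-3):
    """Eigendecompose the Gram matrix; its eigenvectors are the standardized
    scores, so multiplying by sqrt(lambda_i) recovers the positions s_i."""
    K = sigmoid_kernel(X, X, gamma)
    lam, vecs = np.linalg.eigh(K)                          # ascending order
    lam = np.clip(lam[::-1][:n_components], 0.0, None)
    vecs = vecs[:, ::-1][:, :n_components]
    return vecs * np.sqrt(lam), vecs, lam                  # scores, eigvecs, eigvals
```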
Let \(\boldsymbol{\phi}(\mathbf{x})\) denote the position of \(\mathbf{x}\) in the new space defined by Kernel PCA. The \(i^{th}\) coordinate is found via the _Nystrom extension_ (Nystrom, 1930), \[\phi_{i}(\mathbf{x})=\frac{1}{\sqrt{\lambda_{i}}}\sum_{j}\mathbf{s}_{ij}K(\mathbf{x},\mathbf{x}_{j}). \tag{11}\] For \(\mathbf{x}_{k}\) included in the training set, \(\phi_{i}(\mathbf{x}_{k})=\sqrt{\lambda_{i}}\;\mathbf{s}_{ik}\). Figure 1: Illustration of the results from the first, simple fit. Here, \(m=1\), and there are only two predictors, log velocity dispersion and i-band magnitude. The left figure shows the weight placed on each of these two predictors, while the right shows the non-linear function applied to this linear combination. Figure 2: Illustration of the results from the second fit. Here, there are six predictors (all based on photometry), and \(m=4\). The vertical axes of the right plots are labelled “Residuals” because the figure shows the fit to what remains after the other \(m-1=3\) components are subtracted off. The connection with PPR is as follows: A linear combination of the predictors, now using the Kernel PCA representation, is \[\boldsymbol{\alpha^{\prime}\phi(x)}=\sum_{j}\bigl{(}\boldsymbol{\alpha^{\prime}s_{\cdot j}}\bigr{)}\,K(\mathbf{x},\mathbf{x}_{j}), \tag{12}\] where \(\mathbf{s}_{\cdot j}\) holds \(\mathbf{s}_{ij}\) for \(i=1,2,\ldots,n\). (In this expression, the \(\lambda_{i}\) are absorbed into the individual \(\alpha_{i}\) without loss of generality.) The heuristic behind this is that varying the tuning parameter \(\gamma\) that characterizes the kernel function leads to a wide range of nonlinear transformations of the predictor vector. The model can achieve a better fit if it has a larger class of _intelligently-chosen_ directions to search over. The vector \(\boldsymbol{\phi}(\mathbf{x})\) can be of dimension up to \(n\), while the original predictor was limited to \(p\) dimensions. The choice of \(\gamma\) allows for great flexibility in the formation of this new representation. An analogous idea is often employed with a standard approach to classification, _support vector machines (SVM)_ (Cortes & Vapnik, 1995). The basic SVM approach searches for a linear separator in the feature space to distinguish the two classes under consideration. Of course, a linear separator in the original feature space is rarely an adequate classifier. But by projecting the features into a much higher-dimensional space, the potential for finding a useful linear separator is greatly enhanced. This is often referred to as the _kernel trick_. Figure 5 illustrates the potential. In this fit, Kernel PCA was used to create a nonlinear transformation of galaxy properties into a ten-dimensional space. Galaxy properties utilized were photometry-based, as in the previous fit, but now in the PPR model these features are supplemented with those derived from Kernel PCA. The sigmoid kernel was used. The figure shows how the PPR model is able to exploit this new representation to find a direction in the new space along which the response evolves. The dashed lines show contours which the model is fitting to have constant log size. It is important to keep in mind that this shows one such projection; in fact, \(m=4\) in this model, so there
are four such projections through the ten-dimensional space created by Kernel PCA. The RMSE on a test set is reduced to 0.240. Figure 3: Evolution of the residuals with galaxy mass. The mean residual in bins is shown in the blue curve. Error bars are constructed using a jackknife procedure. Figure 4: Measurements of correlations between the galaxy size residuals and the galaxy density field. This effect is similar to the intrinsic alignments effect for galaxy shear. It arises because size residuals are a non-linear combination of galaxy properties which are correlated with the galaxy environment. #### 3.3.1 Incorporation of Auxiliary Information In the application of interest, \(\mathbf{x}\) will be decomposed into the pair \(\mathbf{x}=[\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{aux}}]\). Here, \(\mathbf{x}_{\mathrm{obs}}\) consists of the observable properties of the galaxy, i.e., quantities that can be measured or adequately estimated using solely photometry. The variables in \(\mathbf{x}_{\mathrm{aux}}\) will be additional properties of a galaxy that are not observable, but are believed to encode information useful for predicting its size. These include properties such as three-dimensional density information and the galaxy's location in the central or satellite region of its cluster. This _auxiliary information_ is unobservable in photometric surveys, but will be available in a high-resolution simulation model such as the Illustris model used in this study. The objective here is to exploit this additional information to improve predictions of intrinsic galaxy size. The approach developed here will build off the Kernel PCA-enhanced PPR model described above. First, note that in Equation 12, the \(\mathbf{s}_{\cdot j}\) are not dependent on the particular \(\mathbf{x}\) for which the prediction is sought. These \(\mathbf{s}_{\cdot j}\) represent the _directions_ found in the new, Kernel PCA-derived representation of the features. Since they are not dependent on \(\mathbf{x}\), these can be learned from a training set that has full access to both \(\mathbf{x}_{\mathrm{obs}}\) and \(\mathbf{x}_{\mathrm{aux}}\), e.g., the information generated from a simulation model. The dependence on \(\mathbf{x}\) arises only in the kernel function \(K(\mathbf{x},\mathbf{x}_{j})\). The additional complication of this approach comes from the need to approximate \(K(\mathbf{x},\mathbf{x}_{j})\) using only \(\mathbf{x}_{\mathrm{obs}}\). Hence, in the first step the _auxiliary training set_ is constructed from the simulation model, i.e., for these \(n_{a}\) galaxies both \(\mathbf{x}_{\mathrm{obs}}\) and \(\mathbf{x}_{\mathrm{aux}}\) are available. From this set, a low-dimensional representation is learned using the Kernel PCA approach described above. The tuning parameters of the chosen kernel function will become tuning parameters for the final prediction model. The result of this first step is a set of vectors (directions) \(\mathbf{s}_{\cdot j}\) for \(j=1,2,\ldots,n_{a}\). Again, these directions can exploit the rich information available in the features in both \(\mathbf{x}_{\mathrm{obs}}\) and \(\mathbf{x}_{\mathrm{aux}}\). For the next step, recall from above that the position of any \(\mathbf{x}\) in this new space can be found as \[\phi_{i}(\mathbf{x})=\frac{1}{\sqrt{\lambda_{i}}}\sum_{j}\mathbf{s}_{ij}K(\mathbf{x},\mathbf{x}_{j}). \tag{13}\] The challenge at this point is that on the actual, observed data, only \(\mathbf{x}_{\mathrm{obs}}\) is available. To account for this, the kernel function \(K\) will be approximated via \[\widehat{K}(\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{j})\approx K([\mathbf{x}_{\mathrm{obs}},\mathbf{x}_{\mathrm{aux}}],\mathbf{x}_{j}). \tag{14}\]
Here, this approximation will be achieved using a neural network learned from the auxiliary training set derived from the simulation model. The natural question at this point is the following: What is gained by the incorporation of the auxiliary information? I.e., is it not possible to simply model the galaxy size as a function of \(\mathbf{x}_{\mathrm{obs}}\) directly? This is definitely possible, but the present approach exploits the additional structure available in the auxiliary information. The auxiliary variables are a demonstrably powerful source of information for making these predictions. This information is passed on through the vectors \(\mathbf{s}_{\cdot j}\), which are learned from the auxiliary training set. A second evident question is as follows: Would it be better to fit one or more models that learn the relationship between \(\mathbf{x}_{\mathrm{obs}}\) and \(\mathbf{x}_{\mathrm{aux}}\), use these to impute the unavailable \(\mathbf{x}_{\mathrm{aux}}\) vectors, and then use these in a model trained on the auxiliary training set? The approach advocated for here avoids the fitting of several models, or one model with a vector-valued response, and instead focuses directly on approximating a single, real-valued quantity which encodes the important information, namely the kernel function evaluated at relevant pairs. In Section 4 below, results are presented from fitting using this procedure. ## 4 Models for Galaxy Size This section will present the results from the fitting of a more sophisticated model for intrinsic galaxy size. The approach will follow what is outlined in Section 3, with a mix of features from the simulation model and photometric sources, all used in an effort to build an improved model for the size. The features based on photometry are as above: the \(griz\) magnitudes, mc_disk, and delta_smooth_R. (These latter two quantities are described in Section 2.2.) The auxiliary features extracted from the simulation model are as follows: galaxy mass, velocity dispersion, star formation rate, 3D density measures, and central versus satellite classification of the galaxy's location within its cluster. ### Model Pipeline Architecture For the purposes of this modelling pipeline, the data are divided into three sets. (As mentioned above, groups are formed by pixel.) First, **Set 0** consists of those galaxies used to create the Kernel PCA representation. Here, this is done using the sigmoid kernel (Equation 10) with \(c=1\) and with \(\gamma\) the first of the tuning parameters to be optimized. (The approach to setting the values of the tuning parameters is described below.) Figure 6 depicts the first two dimensions in this representation, showing the important relationship with galaxy size. Ultimately, the number of dimensions which are used in the model is another tuning parameter. In the next step, the observations in Set 0 are further divided into a training and test set for the purposes of predicting the kernel function when evaluated at a \((\mathbf{x}_{\mathrm{obs}},\mathbf{x})\) pair. This model is fit using a fully-connected, four-layer neural network, with 1000 nodes per layer. Learning is allowed to run for 200 epochs, with the learning rate fixed at 0.001. The dropout rate (applied after each layer) and the mini-batch size in the utilized ADAM optimizer are additional tuning parameters. Figure 7 shows the test set performance of this model in the final chosen configuration.
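A minimal sketch of such a network is given below (PyTorch is assumed for illustration, since the paper does not specify a framework; the convention of concatenating \(\mathbf{x}_{\mathrm{obs}}\) with the full feature vector of a Set 0 reference galaxy is likewise an assumption):

```python
import torch
import torch.nn as nn

class KernelNet(nn.Module):
    """Four fully-connected layers of 1000 units, each followed by dropout,
    mapping an (x_obs, reference-galaxy) pair to an estimated kernel value."""
    def __init__(self, d_in, width=1000, p_drop=0.22):
        super().__init__()
        layers, d = [], d_in
        for _ in range(4):
            layers += [nn.Linear(d, width), nn.ReLU(), nn.Dropout(p_drop)]
            d = width
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x_obs, x_ref):
        # Assumed inputs: observable features of the target galaxy, concatenated
        # with the (observable + auxiliary) features of a Set 0 galaxy.
        return self.net(torch.cat([x_obs, x_ref], dim=-1)).squeeze(-1)

# Training would use the Adam optimizer with learning rate 0.001 for 200 epochs,
# with targets K([x_obs, x_aux], x_j) computed exactly on the auxiliary set.
```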
The role of the aforementioned model is to allow for the prediction of the value of \(K(\mathbf{x}_{\mathrm{obs}},\mathbf{x})\) for pairs where \(\mathbf{x}_{\mathrm{obs}}\) is **not** in Set 0, but \(\mathbf{x}\) is from a galaxy which is in Set 0. To understand this step, it is useful to consider an updated version of Equation 13 above, as follows: \[\widehat{\phi}_{i}(\mathbf{x}_{\rm obs})=\frac{1}{\sqrt{\lambda_{i}}}\sum_{j}\mathbf{s}_{ij}\widehat{K}(\mathbf{x}_{\rm obs},\mathbf{x}_{j}). \tag{15}\] Here, \(\widehat{\phi}_{i}(\mathbf{x}_{\rm obs})\) is the position in the \(i^{th}\) dimension of the Kernel PCA of the galaxy with observed properties \(\mathbf{x}_{\rm obs}\), when the approximated kernel is utilized. One can imagine that the Set 0 galaxies comprise a collection of simulation model-derived "reference points" to which the galaxies outside Set 0 are compared, albeit using an approximation to the kernel function. The values of \(\widehat{\phi}_{i}(\mathbf{x}_{\rm obs})\) are calculated relative to these reference points. At this stage, the information is available for fitting the projection pursuit regression model that relates galaxy size (on the log scale) to the mix of observable and Kernel PCA-generated features. **Set 1** is the training set used for this model, while **Set 2** is held out as a test set. In this model, the number of Kernel PCA features, the degree of the spline functions, and the number of ridge functions are tuning parameters. Figure 5: This figure illustrates how the inclusion of Kernel PCA coordinates enhances the fit. The position of each galaxy along the first and third Kernel PCA coordinates is shown, colored by the galaxy size. The dashed contours show lines of constant galaxy size, as fit by the PPR model. Figure 6: The first two dimensions created by the Kernel PCA transformation. Each dot represents one galaxy, and color reflects the log size. The evident relationship between position in this space and galaxy size suggests that Kernel PCA is picking up important physical information. ### Selection of Tuning Parameters A challenging aspect of fitting a model of this complexity is the number of tuning parameters that result. In this pipeline, some model components are fixed to values that are deemed to be reasonable, e.g., the use of 1000 units in each layer of the neural network, and the choice of the sigmoid kernel. Other tuning parameters are set via randomization at the outset of the pipeline, as sketched in the code after this list: * The value of \(\gamma\) in the Kernel PCA procedure, with \(\log_{10}(\gamma)\) chosen uniformly on the interval \((-5,-2)\). * The dropout rate used in the neural network model, chosen uniformly between 0.1 and 0.5. (The same dropout rate is used for all four layers.) * The batch size used in the neural network fitting algorithm, set to 16, 32, 64, or 128. * The number of Kernel PCA dimensions used in the PPR fit, set to 10, 15, or 20. * The degree of the spline functions in the PPR fit, set to either 2 or 3. The final tuning parameter is the number of ridge functions used in the PPR model. With each of the above five parameters fixed, this is varied from 2 to 16, with a cross-validation approach used to choose its value. Ultimately, the figure of merit used in choosing the global set of tuning parameters is the minimal MSE within this cross-validation procedure.
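A minimal sketch of one randomized draw of these tuning parameters (Python; the number of ridge functions is chosen separately by cross-validation over 2 to 16):

```python
import numpy as np

def draw_tuning_parameters(rng):
    """One random draw over the ranges listed above."""
    return {
        "gamma": 10.0 ** rng.uniform(-5, -2),      # Kernel PCA gamma
        "dropout": rng.uniform(0.1, 0.5),          # shared by all four layers
        "batch_size": int(rng.choice([16, 32, 64, 128])),
        "n_kpca_dims": int(rng.choice([10, 15, 20])),
        "spline_degree": int(rng.choice([2, 3])),
    }

rng = np.random.default_rng(0)
print(draw_tuning_parameters(rng))
```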
The values chosen by this procedure are as follows: \(\gamma\) equals 0.00535, the dropout rate equals 0.22, the batch size equals 64, there are ten retained KPCA dimensions, and quadratic splines are used in the PPR model. **Comments:** As an additional hedge against overfitting, this cross-validation procedure uses the _one-SE rule_ (James et al., 2013), wherein the value of the associated tuning parameter is set to the smallest such value which yields a figure of merit within one standard error of the best performing choice. The motivation behind this approach is that one should only choose a model if there is convincing evidence that the additional complexity is warranted. Also, note that this procedure avoids using the test set (Set 2) in the selection of the tuning parameters, which helps to preserve the role of the test set as an ultimate tool for assessing the performance of the model. Figure 7: Comparison of the actual and predicted kernel values when using a neural network model. ### Model Performance Figures 8 and 9 show the eight ridge functions fit in this model. The results show that the contrasts in the magnitudes (i.e., the colors) are clearly the most important in predicting the response values. Each of the kernel PCA-derived predictors receives small weight, but they still play a crucial role in improving the predictions, as evidenced by the reduction of the test set RMSE to 0.231. Figure 10 compares the model predictions with the true galaxy size, for observations in the test set, i.e., Set 2. Figure 10 depicts a fair amount of scatter around the fitted line, but a central question is the following: To what extent does this remaining scatter correlate with physical properties of the galaxies, i.e., to what extent can the remaining scatter be attributed to intrinsic properties of the galaxies? Ideally, by incorporating these physical properties into the model we have reduced any remaining such correlation, and hence the residuals largely originate from physical effects which occur between the galaxy and its observation. Figure 11 explores this by comparing the residuals with galaxy features. It is observed that correlation in the residuals with each of the physical properties is largely eliminated. Figure 12 shows how the correlation of residuals from the fit varies across scales. While the reduction in the amount of spatial correlation is encouraging, there remains a clear, negative correlation on the smallest scales. The pattern of correlation in this fit is consistent with that seen in Figure 4, indicating that the additional complexity introduced in this final model did not help to further reduce this correlation. This correlation at small scales is expected, since the galaxy physics at this scale is very difficult to capture. While our relatively simple model managed to reduce the correlation by a factor \(\sim 5\), more sophisticated ML architectures may be needed in order to probe these small-scale galactic physics, as demonstrated in Jagvaral et al. (2022), where adding graph convolutional layers to a neural network removed remaining small-scale correlations. Such approaches suffer, however, from reduced interpretability due to the convolutional abstraction of the inputs. ## 5 Discussion This work demonstrates the potential of supervised learning approaches that are designed to emphasize interpretability for yielding accurate predictions of intrinsic galaxy sizes.
The residuals from such a fit, estimates of the difference between the intrinsic and observed size, hold a wealth of useful cosmological information regarding topics such as gravitational lensing and the peculiar velocities of galaxies. These techniques could also potentially lead to improvements in uncertainty quantification on photometrically-estimated redshifts. The present work focuses on the use of data generated from the simulation model Illustris-TNG, in an effort to explore the limits of the potential for such models, while also demonstrating a novel prediction approach that incorporates the learned structure in the high-resolution information only available in the simulations. The final model in this work serves as an illustration of the potential of the developed methods, but also of the directions for further improvements. The results demonstrate how photometry-only samples, in conjunction with high-resolution simulation models, could be combined as part of a framework to improve intrinsic galaxy size predictions. The model fits yield a relatively interpretable picture of the way in which photometrically-derived properties relate to galaxy size. The results make it clear that magnitudes are useful predictors of galaxy size, provided a sufficiently complex model form is allowed. The residuals in the fits for intrinsic size show minimal correlation with some key physical galaxy properties, indicating that the models are successfully capturing the key relationships. It is clear, however, that there remains correlation with the local environment, and that photometric data are not sufficient for capturing this correlation. This could possibly be improved by using more complex supervised learning methods. The hope is that the gain in interpretability from the proposed approach outweighs the drawback of this remaining correlation. A next step would be to explore the use of such approaches with real, photometric survey data. In such an analysis, the steps taken in this work would be repeated, with Set 0 still built from the simulation model. Set 1 should consist of a sample of real galaxies with available spectroscopic data, in order for reliable measures of galaxy size and other observable galaxy properties to be available in the training of the PPR model. This model could then be applied to a photometry-only sample to produce predictions of galaxy sizes. This approach would achieve the simultaneous goals of producing interpretable results while also exploiting the information available in the high-resolution simulation model. ## Data Availability Statement The data and software associated with this work are available on the World Wide Web. The simulation catalog data is available at [https://github.com/McWilliamsCenter/gal_decomp_paper](https://github.com/McWilliamsCenter/gal_decomp_paper). The software used for this work is available at [https://github.com/sukhdeep2/corr_pc](https://github.com/sukhdeep2/corr_pc). ## Acknowledgments CS is supported by NSF Award Number 2020295. SS is supported by a McWilliams postdoctoral fellowship at CMU. YJ was supported in part by Department of Energy grant DE-SC0010118 and in part by a grant from the Simons Foundation (Simons Investigator in Astrophysics, Award ID 620789).
2310.17244
Inter-band optical transitions of helical Majorana edge modes in topological superconductors
The search for evidence of Majorana states on the edges of topological superconductors (TSCs) is challenging due to the difficulty of detecting such charge-neutral electronic quasiparticles. Local microwave spectroscopy has been shown to be a possible method to detect propagating Majorana modes, where a spatially focused light beam must be used. Here, we show that helical Majorana modes in TSCs allow inter-band transitions and thus contribute to optical conductivity under a spatially uniform light. The existence of such a signal requires the system to break certain symmetries so that the projection of the charge current operator onto helical Majorana edge states leads to inter-band hybridization terms. The general form of this contribution under a tunable time-reversal breaking field is derived, which is valid in the sub-gap low-frequency regime where the edge energy spectrum is linear, and numerical results are obtained in three TSC models, showing remarkable consistency with the analytical prediction. In comparison, the current operator for normal helical edge states, such as in quantum spin Hall insulators, does not cause inter-band transitions and the related optical conductivity vanishes unless the time-reversal symmetry is broken. Our results may help guide feasible experiments to provide evidence of Majorana edge modes in TSCs.
Han Bi, James Jun He
2023-10-26T08:43:03Z
http://arxiv.org/abs/2310.17244v1
# Inter-band optical transitions of helical Majorana edge modes in topological superconductors ###### Abstract The search for evidence of Majorana states on the edges of topological superconductors (TSCs) is challenging due to the difficulty of detecting such charge-neutral electronic quasiparticles. Local microwave spectroscopy has been shown to be a possible method to detect propagating Majorana modes, where a spatially focused light beam must be used. Here, we show that helical Majorana modes in TSCs allow inter-band transitions and thus contribute to optical conductivity under a spatially uniform light. The existence of such a signal requires the system to break certain symmetries so that the projection of the charge current operator onto helical Majorana edge states leads to inter-band hybridization terms. The general form of this contribution under a tunable time-reversal breaking field is derived, which is valid in the sub-gap low-frequency regime where the edge energy spectrum is linear, and numerical results are obtained in three TSC models, showing remarkable consistency with the analytical prediction. In comparison, the current operator for normal helical edge states, such as in quantum spin Hall insulators, does not cause inter-band transitions and the related optical conductivity vanishes unless the time-reversal symmetry is broken. Our results may help guide feasible experiments to provide evidence of Majorana edge modes in TSCs. ## I Introduction The search for Majorana modes in condensed matter physics [1; 2; 3; 4; 5; 6; 7; 8; 9] has been a critical and challenging problem. Such quasiparticles are believed to exist in topological superconductors (TSCs), where they may show up as one-dimensional (1D) propagating modes as well as zero-dimensional bound states. Various systems have been predicted to host propagating helical [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] or chiral [23; 24; 25; 26; 27; 28; 29; 30; 31] Majorana modes with or without time-reversal symmetry, respectively. There have been experimental results [32; 33; 34] consistent with propagating-Majorana scenarios, but conclusive evidence of such exotic states in TSCs is yet to be achieved. The difficulties exist not only in experimental techniques, but also in the theoretical principles used to interpret or predict the Majorana signals. Despite possessing nonzero velocity, propagating Majorana modes do not carry any charge because their particles and anti-particles are identical [1]. This neutrality makes the detection of Majorana edge modes in TSCs much harder than that of normal edge states. Strictly speaking, however, the neutrality is true only on average, since Majorana modes alone do not preserve the U(1) gauge symmetry. This makes it possible for them to couple with an external electromagnetic field, leading to particular optical responses that may serve as evidence of chiral Majorana modes [35; 36]. In that case, translation symmetry needs to be strongly broken to see the predicted optical signal due to the absence of vertical optical transitions. As a result, a highly focused light beam is required. Here, we show that, without breaking the translational symmetry along the edge, the microwave absorption of helical Majorana modes (HMMs) in time-reversal invariant TSCs is nonzero and may be used as an effective detection method. We begin with a generic discussion based on an effective theory containing only the edge states.
The current operator, assumed to be determined by the current operator of the bulk TSC system projected onto the edge, is directly written down by physical arguments at this stage. A general form of the optical absorption is obtained. Then we investigate this phenomenon in several models of time-reversal invariant TSCs including \(p\pm ip\)-wave superconductors, topological insulator (TI) thin films in proximity to superconductivity, and doped quantum spin Hall (QSH) insulators. We show that they all share the same features in the sub-gap low-energy region where the optical transitions happen among the Majorana edge states. ## II Effective edge theory A minimal theory of the helical Majorana edge states of a time-reversal invariant TSC may be described by the following Hamiltonian, \[\mathcal{H}=\sum_{-k_{0}<k<k_{0}}\Gamma_{k}^{\dagger}(vk\sigma_{3}+M\sigma_{2})\Gamma_{k}, \tag{1}\] where \(\Gamma_{k}=[\gamma_{1k},\gamma_{2k}]\) denotes the edge HMMs, \(\sigma_{1,2,3}\) are the Pauli matrices, \(k_{0}\) is the momentum cutoff, and \(M\) is a time-reversal breaking term that opens a gap in the 1D spectrum. When \(M=0\), one readily finds that the time-reversal symmetry \(\mathcal{T}=i\sigma_{2}\mathcal{K}\) and the particle-hole symmetry \(\mathcal{P}=\mathcal{K}\) are both preserved. When \(M\neq 0\) the energy eigenvalues are \(\pm\xi_{k}\) with \(\xi_{k}=\sqrt{(vk)^{2}+M^{2}}\), which has a gap of \(M\), as shown in Fig. 1(a). The corresponding current density operator \(j(x)\) is more conveniently written down in real space. Considering its sign flip under both \(\mathcal{T}\) and \(\mathcal{P}\), applying the Majorana algebra \(\{\gamma_{i}(x),\gamma_{j}(x^{{}^{\prime}})\}=\delta_{ij}\delta(x-x^{{}^{\prime}})\), and keeping terms up to the first-order spatial derivative, one obtains the only possible form \[j(x)=-ia(\gamma_{1}\partial_{x}\gamma_{1}+\gamma_{2}\partial_{x}\gamma_{2})-ib\gamma_{1}\gamma_{2}, \tag{2}\] where \(a\) and \(b\) are real. The current operator in momentum space is given by the Fourier transformation \(j_{q}=\int j(x)e^{-iqx}dx\), and the optical conductivity is given by the Kubo formula, \[\Re[\sigma(\omega,q)]=\frac{1}{\omega L}\Im\int_{0}^{1/k_{B}T}d\tau e^{i\omega_{n}\tau}\langle T_{\tau}j_{-q}(\tau)j_{q}(0)\rangle, \tag{3}\] where \(\Re\) (\(\Im\)) denote the real (imaginary) part. For a uniform detecting light we only need to consider the \(q=0\) component. Eq. (3) is calculated in the eigenstate basis, yielding \[\Re[\sigma(\omega,0)]=\frac{b^{2}}{4v\omega^{2}}\sqrt{\omega^{2}-(2M)^{2}}\tanh\frac{\omega}{2k_{B}T}. \tag{4}\] When the temperature \(T\to 0\) and \(M=0\), the \(\Re[\sigma(\omega,0)]\) curve decreases as \(1/\omega\). For the time-reversal breaking case, the optical response is zero inside the edge gap, i.e. when \(\omega<2M\), which is the lowest energy needed to break a Cooper pair into two Majorana states with the same energy and opposite momenta. \(\Re[\sigma(\omega,0)]\) reaches its maximum value at \(\omega=2\sqrt{2}M\) and decreases with further increasing \(\omega\), as shown in Fig. 1(b). Generally, the momentum cutoff \(k_{0}\) in Eq. (1) originates from an energy cutoff \(\Delta=vk_{0}\) which corresponds to the topological gap of a TSC. This gap is usually smaller than the SC gap and is often of the order of 0.1 meV, corresponding to microwave frequencies. Note that the right-hand side of Eq. (4) vanishes if \(b=0\), and thus the last term in Eq. (2) is crucial and responsible for the inter-band transitions. Up to this point, this term has simply been assumed.
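As a quick numerical check of Eq. (4), the following sketch (Python; parameter values are illustrative, in the natural units of the model) evaluates \(\Re[\sigma(\omega,0)]\) and recovers the stated peak position \(\omega=2\sqrt{2}M\):

```python
import numpy as np

def re_sigma(omega, M, v=1.0, b=1.0, kT=1e-4):
    """Real-part uniform optical conductivity of Eq. (4); frequencies below the
    pair-breaking threshold 2M give zero response."""
    omega = np.asarray(omega, dtype=float)
    out = np.zeros_like(omega)
    ok = omega > 2.0 * abs(M)
    out[ok] = (b**2 / (4.0 * v * omega[ok]**2)
               * np.sqrt(omega[ok]**2 - (2.0 * M)**2)
               * np.tanh(omega[ok] / (2.0 * kT)))
    return out

w = np.linspace(1e-3, 1.0, 20000)
print(w[np.argmax(re_sigma(w, M=0.1))])   # ~0.283 = 2*sqrt(2)*0.1
```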
In the following, we study concrete TSC models where the inter-band term of \(j(x)\) appears upon breaking the inversion symmetry. ## III \(p\pm ip\) superconductors As the simplest case, let us first consider a \(p\pm ip\)-wave TSC described by the following Hamiltonian \[H_{p\pm ip} =(\frac{\hbar^{2}k^{2}}{2m}-\mu)\tau_{3}\sigma_{0}+A_{p}(k_{x} \tau_{0}\sigma_{1}+k_{y}\tau_{3}\sigma_{2}) \tag{5}\] \[+\Delta_{p}(k_{x}\tau_{1}\sigma_{0}-k_{y}\tau_{2}\sigma_{3}) \tag{6}\] with the basis \(\Psi^{\dagger}(\mathbf{k})=[\psi^{\dagger}_{\mathbf{k}\uparrow},\psi^{\dagger}_{\mathbf{k }\downarrow},\psi_{-\mathbf{k}\uparrow},\psi_{-\mathbf{k}\downarrow}]\). Here \(\tau_{i}\) and \(\sigma_{i}\) are Pauli matrices (\(\sigma_{0}\) being the identity matrix) acting on the particle-hole and spin degrees of freedom, respectively. A pair of gapless helical Majorana states appear at the boundary of the 2D system and are protected by the time-reversal symmetry \(\mathcal{T}\), which can be broken by a small Zeeman term \(H_{z}=B_{p}\tau_{3}\sigma_{1}\). The spin-orbit coupling (SOC) term proportional to \(A_{p}\) is necessary in order to induce an optical response to a uniform light, because the Hamiltonian (6) with \(A_{p}=0\) commutes with the spin operator \(\sigma_{3}\) and there is no mixing between \(\uparrow\) and \(\downarrow\), indicating \(b=0\) in Eq. (2) and thus vanishing vertical transitions. The current operator along the \(\hat{\mathbf{x}}\) direction is given by \[j_{x}=\begin{pmatrix}j_{n}(\mathbf{k})&\\ &-j_{n}^{T}(-\mathbf{k})\end{pmatrix} \tag{7}\] where \(j_{n}(\mathbf{k})=(e\hbar k_{x}/m)\sigma_{0}+eA_{p}\sigma_{1}\) is the \(k\)-derivative of the normal state Hamiltonian \(H_{n}=(\hbar^{2}k^{2}/2m-\mu)\sigma_{0}+B_{p}\sigma_{1}+A_{p}\mathbf{k}\cdot\mathbf{\sigma}\) multiplied by the electron charge \(e\). The Kubo formula calculated in the energy eigenstate basis leads to \[\sigma(\omega)=\frac{i\hbar}{\Omega}\sum_{k_{x},m,n} \frac{|\langle nk_{x}|j_{x}|mk_{x}\rangle|^{2}}{\xi_{mk_{x}}-\xi_{ nk_{x}}}\times\frac{f(\xi_{nk_{x}})-f(\xi_{mk_{x}})}{\hbar\omega+\xi_{nk_ {x}}-\xi_{mk_{x}}+i\eta}, \tag{8}\] where \(\Omega\) stands for the area of the shining light and \(f(\epsilon)\) represents the Fermi distribution function. Note that, in the absence of the SOC effect, \(j_{x}\) is proportional to the identity matrix and no inter-band transition occurs, due to the orthogonality of the eigenstates \(|nk\rangle\) and \(|mk\rangle\). Adding SOC terms breaks the inversion symmetry and produces spin-mixing terms in the current operator. Then, we expect nontrivial inter-band transitions described by the effective 1D theory in the previous section. Figure 2(a) shows the real-part conductivity \(\Re[\sigma(\omega)]\) obtained numerically. When the Zeeman field \(B_{p}=0\), the optical absorption induced by the in-gap HMMs diverges as \(\omega\to 0\). This is in agreement with the prediction of the effective theory, in which \(\Re[\sigma(\omega)]\sim 1/\omega\).
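The sum in Eq. (8) lends itself to a direct numerical implementation once the Hamiltonian and current matrices are available at each momentum. The sketch below (not from the original text) is a generic routine under the assumptions \(\hbar=1\) and unit shining area; as a toy usage it is applied to the effective edge theory of Eq. (1) with a current of the form of Eq. (2), while for the \(p\pm ip\) model one would instead pass the ribbon-discretized Hamiltonian (6) and the current (7).

```python
import numpy as np

def fermi(e, kBT):
    # numerically stable Fermi-Dirac distribution
    return 0.5 * (1.0 - np.tanh(e / (2.0 * kBT)))

def re_sigma_kubo(h_of_k, jx_of_k, kgrid, omegas, kBT=1e-4, eta=1e-3):
    # evaluate Eq. (8) on a 1D momentum grid (hbar = 1, unit shining area)
    sigma = np.zeros(len(omegas), dtype=complex)
    for k in kgrid:
        xi, U = np.linalg.eigh(h_of_k(k))     # BdG energies and eigenvectors at this k
        jmn = U.conj().T @ jx_of_k(k) @ U     # current operator in the eigenbasis
        f = fermi(xi, kBT)
        for n in range(len(xi)):
            for m in range(len(xi)):
                de = xi[m] - xi[n]
                if abs(de) < 1e-12:
                    continue                  # degenerate pairs carry no inter-band weight
                sigma += 1j * abs(jmn[n, m])**2 / de * (f[n] - f[m]) \
                         / (omegas - de + 1j * eta)
    return sigma.real

# toy usage: the gapped edge theory of Eq. (1) with current a*k*s0 + b*s2
s0 = np.eye(2)
s2 = np.array([[0.0, -1j], [1j, 0.0]])
s3 = np.diag([1.0, -1.0]).astype(complex)
v, M, a, b = 1.0, 0.1, 0.5, 1.0
ks = np.linspace(-1.0, 1.0, 2001)
ws = np.linspace(0.05, 1.0, 96)
print(re_sigma_kubo(lambda k: v * k * s3 + M * s2,
                    lambda k: a * k * s0 + b * s2, ks, ws)[:4])
```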
For nonzero but small \(B_{p}\), the edge states acquire a gap, denoted by \(E_{g}\). Thus, there is no absorption for \(\omega<2E_{g}\) unless thermal excitation due to finite temperature is considered. \(\Re[\sigma]\) rapidly increases just above \(\omega=2E_{g}\) and reaches a maximum at \(\omega_{nu(th)}\). It decreases as \(\omega\) further goes up, until \(\omega\) reaches the bulk gap, where the contribution of the bulk states dominates. Figure 2(b) shows the real-part optical conductivity contributed by the edge states, with or without time-reversal symmetry, together with the corresponding analytical results of the effective 1D theory. Fig. 2(c) shows the positions of the maximum for various values of the energy gap. They both show great agreement between the numerical and the analytical results. The optical conductivity of the \(p\pm ip\)-wave superconductor in response to a locally distributed detecting light is also studied, by transforming Eq. (6) into a tight-binding model. The current density along the \(\hat{\mathbf{x}}\) direction is \(j_{x}(\mathbf{r})=e[\frac{i\hbar}{2m}\psi_{\mathbf{r}}^{\dagger}\psi_{\mathbf{r}+\mathbf{ \hat{x}}}+A_{p}\psi_{\mathbf{r}}^{\dagger}\psi_{\mathbf{r}}+h.c.]\). The size of the detecting area is determined by the limits \(1\leq x\leq l_{x}\) and \(1\leq y\leq l_{y}\), where \(l_{y}\) is chosen to cover the spread of the edge state wave functions. The current operator used to calculate the optical response in the detecting area is given by \[J_{x}=\frac{1}{l_{x}}\sum_{m=1}^{l_{x}}\sum_{n=1}^{l_{y}}j_{x}(\mathbf{r}+m\mathbf{ \hat{x}}+n\mathbf{\hat{y}}). \tag{9}\] Figure 3 shows the results for different shining lengths \(l_{x}\), obtained using the recursive Green's function method. For small \(l_{x}\) we get a shape of the \(\Re[\sigma_{l}(\omega)]\) curve similar to the chiral case [35]. A major difference is the non-zero value of \(\Re[\sigma_{l}(\omega=0)]\), which is a consequence of the inter-band term (\(\sim b\)) in the current operator of Eq. (2). As \(l_{x}\) increases, the peak of \(\Re[\sigma_{l}(\omega)]\) shifts towards lower frequency and the peak height increases. For very large \(l_{x}\), the result becomes similar to the uniform case, as expected, since the limit \(l_{x}\rightarrow\infty\) recovers uniformity. ## IV TI thin film The surface states of TIs can be used to design a time-reversal invariant TSC by proximity to conventional superconductors. If the SC order parameters induced on the two surfaces of a TI thin film differ by a phase \(\pi\), a pair of HMMs appear on the edges [12; 18]. With only the surface states considered, the normal state Hamiltonian can be written as \[H_{0}^{TF}(\mathbf{k})=2A_{t}\tau_{z}\mathbf{d}\cdot\mathbf{\sigma}+m_{\mathbf{k}}\tau_{x} \sigma_{0} \tag{10}\] under the basis \(\Psi_{\mathbf{k}}^{\dagger}=[c_{\mathbf{k},+,\uparrow}^{\dagger},c_{\mathbf{k},+,\downarrow }^{\dagger},c_{\mathbf{k},-,\uparrow}^{\dagger},c_{\mathbf{k},-,\downarrow}^{\dagger}]\), where \(\pm\) denotes the two surfaces. The vector \(\mathbf{d}=[k_{x},k_{y},0]\) and the function \(m_{\mathbf{k}}=m_{0}-t_{f}(k_{x}^{2}+k_{y}^{2})\). The first term of Eq. (10) describes the two Dirac cones located at the two surfaces, and the second term represents the inter-surface coupling.

Figure 2: (a) The frequency dependence of the optical conductivity for various values of the edge gap \(E_{g}\). The common parameters: pairing amplitude \(\Delta=0.2\), SOC strength \(A_{p}=0.05\), mass term \(m=1\), chemical potential \(\mu=0.5\) and \(k_{B}T=10^{-4}\). \(k_{B}\) is the Boltzmann constant. The inset shows the corresponding energy spectra. (b) The numerical (dots) and the analytical (lines, obtained with Eq. (4)) results of the real-part conductivity for both the time-reversal invariant (red) and the time-reversal broken (blue) cases. (c) The corresponding peak position \(\omega_{nu(th)}\), at which the real-part conductivity reaches its maximum, for various values of \(E_{g}\).

Figure 3: The frequency dependence of the real-part local conductivity \(\Re[\sigma_{l}(\omega)]\) of the \(p\pm ip\)-wave superconductor system. The results for different values of the shining length \(l_{x}\) are plotted in different colors. Other related parameters are set to be the same as those in Fig. 2, with \(E_{g}=0\).
An inversion-symmetry-breaking term (originating from the substrate, for example) is needed to induce vertical optical transitions [37; 38], which may be, for example, a Rashba SOC on one surface, \[H_{R}^{TF}(\mathbf{k})=\alpha(\tau_{0}+\tau_{z})\mathbf{d}\times\mathbf{\sigma}. \tag{11}\] Equations (10)-(11), together with an s-wave pairing, \(\Delta(\mathbf{k})=i\Delta\tau_{z}\sigma_{y}\), form the total Hamiltonian. The sign difference of the order parameter between the upper and lower surfaces guarantees the time-reversal symmetry, which can be slightly broken by an external Zeeman field \(H_{z}=B\tau_{0}\sigma_{z}\) or by a deviation from the exact \(\pi\)-phase difference. Adding time-reversal breaking terms will open small gaps at these points. As shown in Fig. 4, the frequency dependence of the real-part conductivity near the \(k_{x}=0\) point (i.e., near \(\omega=0\)) has a similar functional form to that of the \(p\pm ip\)-wave TSCs and agrees with Eq. (4). ## V Doped QSH insulator Quantum spin Hall insulators have been proposed to become a TSC through correlation effects [15]. With this model system, one can directly compare the optical response of the HMMs to that of helical normal fermions, which can be realized in different parameter regimes. Consider the following Hamiltonian describing a QSH insulator [39] \[\mathcal{H}_{0}=M(\mathbf{k})\sigma_{0}\tau_{3}+A(k_{x}\sigma_{3}\tau_{1}-k_{y} \sigma_{0}\tau_{2}), \tag{12}\] where the Pauli matrices \(\sigma_{i}\) and \(\tau_{i}\) (\(i=1,2,3\)) act on the spin and orbital spaces, respectively. \(M(\mathbf{k})=m_{0}-t(k_{x}^{2}+k_{y}^{2})\), and \(m_{0}t>0\) is required to guarantee the non-trivial topology of the normal state. It has been predicted that a TSC phase with \(\Delta_{\mu\nu}^{12}(\mathbf{k})=\Delta c_{1,\mathbf{k}\mu}c_{2,-\mathbf{k}\nu}\delta_{\mu\nu}\) is favored in a certain doping region with an inversion-breaking Rashba SOC [15] \[\mathcal{H}_{R}=A_{1}(k_{x}\sigma_{2}-k_{y}\sigma_{1})\otimes(\tau_{3}+\tau_{ 0}). \tag{13}\] Here the subscript \(\mu(\nu)\) labels the electron spins and the superscript \(1(2)\) labels the different orbitals. The TSC phase has a pair of helical Majorana states propagating along the boundary, which are replaced by normal-fermion edge states when the pairing term vanishes and the system transforms into a QSH phase. In the presence of a time-reversal breaking term \(\mathcal{H}_{Z}=Z\sigma_{3}\tau_{0}\), the edge states develop new features, including a small gap, as shown in Fig. 5(b) and (c). The real-part conductivity of the TSC phase and the QSH phase under uniform detecting light is shown in Fig. 5(a), where the TSC results have similar features to the former TSC models, consistent with Eq. (4). The results for the QSH phase are rather different, with the optical conductivity inside the topological gap vanishing if \(Z=0\).
When \(Z\neq 0\), it has a sharp peak at \(\omega=Z\) (diverging if \(T=0\)), above which it decreases as \(\omega\) goes up.

Figure 4: The frequency dependence of the real-part conductivity of the HMMs in the TI thin film model given by Eqs. (10)-(11), with the parameters \(A_{t}=1\), \(t_{f}=1\), \(\alpha=1\), \(m_{0}=3\), and \(\Delta=2\). The inset is the energy spectrum, where the edge states are highlighted by corresponding colors.

Figure 5: (a) The real-part conductivity of the doped QSH system in the QSH phase and in the TSC phase. For the TSC state the parameters are: \(A=2\), \(\Delta=0.5\), \(t=2\), \(m_{0}=0.1\), \(A_{1}=2\), \(Z=0\) (for the red line) and \(Z=0.2\) (for the blue line). For the QSH state: \(A=2\), \(\Delta=0\), \(t=2\), \(m_{0}=0.7\), \(A_{1}=2\) and \(Z=0\) (for the black line). (b) and (c) are the energy spectra of the TSC phase and the QSH phase, respectively, where the time-reversal invariant (broken) edge states are in red (blue).

The vanishing optical absorption by the QSH edge states forms a major difference from the HMMs. It originates from the different mechanisms through which the edge states couple with electromagnetic waves. While the light-coupling of the HMMs relies on the bulk system, and the corresponding current operator must be obtained by projecting the bulk version onto the edges, the current operator of the QSH edge states may be directly derived within the effective edge theory, which preserves the U(1) gauge symmetry. By introducing a gauge field to the edge theory, the current operator, \(j_{n}\sim v(\psi_{1}^{\dagger}\psi_{1}-\psi_{2}^{\dagger}\psi_{2})\), can be readily obtained without referring to the bulk. It is simply the \(k\)-derivative of the edge Hamiltonian \(h_{edge}(k)=vk(\psi_{1}^{\dagger}\psi_{1}-\psi_{2}^{\dagger}\psi_{2})\). Following the same procedures as in the previous effective 1D theory of Majorana edge states, we get the optical absorption for the QSH edge states at zero temperature, \(\Re[\sigma_{\text{QSH}}(\omega)]\sim\omega^{-2}(\omega^{2}-4M^{2})^{-1/2}\), where \(M\) is the edge gap opened by time-reversal symmetry breaking. Note that, besides the above major difference, the optical responses of the QSH edge states and the HMMs may occur in different ranges of wavelength, since a QSH insulator may have a much larger topological gap. ## Conclusion and discussion We have demonstrated that helical Majorana modes induce microwave absorption. It originates from inter-band optical transition processes that are made possible by the broken U(1) gauge and spatial-inversion symmetries. The analytical form of the resulting optical conductivity is obtained with an effective edge theory, and it is qualitatively confirmed by numerical calculations in several models of topological superconductors. The zero-temperature real-part optical conductivity \(\Re[\sigma(\omega)]\) induced by the helical Majorana modes under uniform light is proportional to \(\omega^{-1}\). When the time-reversal symmetry is broken and an energy gap of \(M\) is opened on the edge, \(\Re[\sigma(\omega)]\) has a maximum value at \(\omega=2\sqrt{2}M\). In comparison, vertical optical transitions in helical normal edge states in quantum spin Hall insulators are forbidden unless the time-reversal symmetry is broken, after which the functional form of \(\Re[\sigma(\omega)]\) becomes similar to that of Majorana modes. This difference originates from the different mechanisms of coupling with the U(1) gauge field.
Our results show that optical measurements may provide evidence of Majorana edge states in time-reversal invariant topological superconductors. Different from Ref. [35], the detecting light here is uniform, and experiments will not encounter the difficulty of focusing a light beam into a tiny spatial region. A possible difficulty may come from the background optical absorption signal induced by the bulk Cooper pairs, which may be much larger than the edge-state contribution and make the Majorana signal hard to distinguish. One way to overcome this problem is to tune the external magnetic field, which changes the functional form of the Majorana contribution qualitatively while its effect on the background signal is only quantitative. In this way, it is possible to extract the Majorana contribution. ###### Acknowledgements. We thank Qian Niu and Zhenyu Zhang for helpful discussions. J.J.H. is supported by the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302800) and the National Natural Science Foundation of China (Grant No. 12204451). ## Appendix: Derivation of the optical conductivity with the effective edge theory To illustrate the optical response of the HMMs we start from the real space current operator \(j(x)=j_{a}(x)+j_{b}(x)\), including both the intra-band and the inter-band current. They can be regarded as a projection of the bulk current onto the edge states. Notice that, owing to the self-conjugation of Majorana fermions, the inter-band current contains no other terms at lowest order. After the Fourier transformation we have \[j_{a}(q) = a\sum_{k}k(\gamma_{1,q-k}\gamma_{1,k}+\gamma_{2,q-k}\gamma_{2,k }), \tag{A1}\] \[j_{b}(q) = -ib\sum_{k}\gamma_{1,q-k}\gamma_{2,k}. \tag{A2}\] Defining the current-current correlation function \(\Pi(q,\tau)=-\langle T_{\tau}j^{\dagger}(q,\tau)j(q)\rangle\), we get the corresponding intra-band correlation function \[\Pi_{a} = -a^{2}\sum_{i,k,k^{\prime}}kk^{{}^{\prime}}\langle T_{\tau}[ \gamma_{i,-k-q}\gamma_{i,k}]_{\tau}[\gamma_{i,q-k^{\prime}}\gamma_{i,k^{ \prime}}]\rangle \tag{A3}\] \[= -a^{2}\sum_{i,k}(2k^{2}-kq)\mathcal{G}_{i}(-k,\tau)\mathcal{G}_{ i}(k-q,\tau), \tag{A4}\] where \(i=1,2\) and \(\mathcal{G}_{i}\) stands for the Green's function of \(\gamma_{1}\) and \(\gamma_{2}\). In frequency space, in the \(T\to 0\) limit, we have \[\Pi_{a}(i\omega_{n})= \int_{0}^{1/k_{B}T}\mathrm{d}\tau\,\Pi_{a}(q,\tau)e^{i\omega_{n}\tau} \tag{A5}\] \[= -a^{2}\sum_{k}(2k^{2}-kq)[\theta(k)-\theta(k-q)]\times(\frac{1}{i\omega_{n}+vq}+\frac{1}{i\omega_{n}-vq}). \tag{A6}\] The \(q=0\) current operator is given by \[j(k)=ak\sigma_{0}+b\sigma_{2} \tag{A7}\] under the basis \(\Gamma_{k}^{\dagger}=[\gamma_{1,k}^{\dagger},\gamma_{2,k}^{\dagger}]\). Under the Bogoliubov transformation we can diagonalize the 1D Hamiltonian, \(H\rightarrow\tilde{H}=UHU^{\dagger},\quad\Gamma^{\dagger}\rightarrow\tilde{ \Gamma}^{\dagger}=[\tilde{\gamma}_{1}^{\dagger},\tilde{\gamma}_{2}^{\dagger}]\). The current operator then becomes \[\tilde{j}(k) = ak\sigma_{0}+b\begin{pmatrix}u_{k}&v_{k}\\ v_{k}^{*}&-u_{k}^{*}\end{pmatrix}\sigma_{2}\begin{pmatrix}u_{k}^{*}&v_{k}\\ v_{k}^{*}&-u_{k}\end{pmatrix} \tag{A8}\] \[= ak\sigma_{0}+b\begin{pmatrix}i\Im(v_{k}u_{k}^{*})&i(u_{k}^{2}+v _{k}^{2})\\ -i(u_{k}^{*2}+v_{k}^{*2})&-i\Im(u_{k}^{*}v_{k})\end{pmatrix} \tag{A9}\] in the new basis.
To maintain the Majorana algebra of the new quasi-particle states, \(\{\tilde{\gamma}_{ik},\tilde{\gamma}_{jk^{\prime}}\}=\delta_{ij}\delta_{k,- k^{\prime}}\), the parameters \(u_{k}=|u_{k}|e^{i\phi_{u}}\), \(v_{k}=|v_{k}|e^{i\phi_{v}}\) should satisfy \(|u_{k}|^{2}=\frac{1}{2}+\frac{1}{2}\frac{\epsilon_{k}}{\xi_{k}}\), \(|v_{k}|^{2}=\frac{1}{2}-\frac{1}{2}\frac{\epsilon_{k}}{\xi_{k}}\), and \(\phi_{u}-\phi_{v}=\frac{\pi}{2}\). The \(ak\sigma_{0}\) term of Eq. (A9) does not contribute to the correlation function because it only involves terms like \(\langle T_{\tau}\tilde{\gamma}_{i}^{\dagger}(\tau)\tilde{\gamma}_{i}(\tau) \tilde{\gamma}_{i}^{\dagger}\tilde{\gamma}_{i}\rangle\) (\(i=1,2\)). Such terms vanish under Wick's theorem, since the involved Green's functions belong to the same Majorana operator. However, the second term of Eq. (A9) gives rise to connected diagrams, \[\Pi_{21}(\tau)=-4b^{2}\sum_{k>0}|u_{k}|^{2}|v_{k}|^{2}\langle T_{ \tau}\tilde{\gamma}_{2k}^{\dagger}(\tau)\tilde{\gamma}_{1k}(\tau)\tilde{\gamma }_{1k}^{\dagger}\tilde{\gamma}_{2k}\rangle, \tag{A10}\] corresponding to the inter-band transition. In frequency space we have \[\Pi_{21}(i\omega_{n})=-b^{2}\sum_{k>0}\frac{\epsilon_{k}^{2}}{ \xi_{k}^{2}}\frac{f(\xi_{k})-f(-\xi_{k})}{i\omega_{n}-2\xi_{k}}. \tag{A11}\] The real-part optical conductivity is then given by \[\Re[\sigma(\omega)] = -\frac{1}{\omega L}\Im[\Pi_{21}(i\omega_{n})] \tag{A12}\] \[= \frac{b^{2}}{2v\omega^{2}}\sqrt{\omega^{2}/4-M^{2}}\tanh\frac{ \omega}{2k_{B}T}, \tag{A13}\] in agreement with Eq. (4).
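As a quick symbolic cross-check (an addition for the reader, not part of the original derivation), one can verify that the appendix result (A13) and Eq. (4) of the main text are the same function of \(\omega\); the sketch below uses sympy and omits the common \(\tanh\) factor.

```python
import sympy as sp

w, M, b, v = sp.symbols('omega M b v', positive=True)
appendix = b**2 / (2 * v * w**2) * sp.sqrt(w**2 / 4 - M**2)   # Eq. (A13), tanh factor dropped
maintext = b**2 / (4 * v * w**2) * sp.sqrt(w**2 - 4 * M**2)   # Eq. (4),  tanh factor dropped
print(sp.simplify((appendix / maintext)**2))                  # prints 1: the two forms agree
```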
2307.04185
Parton shower algorithm with saturation effect
We extend the previously developed small $x$ parton shower algorithm to include the kinematic constraint effect and $k_t$ resummation effect. This work enables the Monte Carlo generator to simultaneously resum large $k_t$ and small $x$ logarithms in the saturation regime for the first time. It is an important step towards simulating processes involving multiple well separated hard scales, such as di-jet production in eA collisions at EIC.
Yu Shi, Shu-Yi Wei, Jian Zhou
2023-07-09T14:24:36Z
http://arxiv.org/abs/2307.04185v1
# Parton shower algorithm with saturation effect ###### Abstract We extend the previously developed small \(x\) parton shower algorithm to include the kinematic constraint effect and \(k_{t}\) resummation effect. This work enables the Monte Carlo generator to simultaneously resum large \(k_{t}\) and small \(x\) logarithms in the saturation regime for the first time. It is an important step towards simulating processes involving multiple well separated hard scales, such as di-jet production in eA collisions at EIC. ## I Introduction The study of dense gluonic matter at small \(x\) inside a large nucleus and nucleon has been and continues to be an important frontier of high-energy nuclear physics. It is also one of the main objectives of the physics program of the future Electron-Ion Collider (EIC) [1; 2]. Tremendous theoretical efforts have been made to search for smoking gun evidence of saturation. To this end, hard scattering processes in eA collisions at EIC are expected to deliver crucial messages about how saturation emerges from strongly interacting gluonic matter. A Monte Carlo event generator that incorporates saturation effects could play an essential role in fully harnessing the potential of future experimental data taken from EIC. As the core of general purpose Monte Carlo event generators, parton showers describe successive radiations from highly-energetic partons that participate in the hard scattering process. While most parton branching algorithms [3; 4; 5; 6] are based on the soft and collinear approximation, which effectively resums the Dokshitzer-Gribov-Levin-Altarelli-Parisi (DGLAP) [7] like logarithms to all orders, only a few parton shower generators [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] have been developed to describe small \(x\) processes by simulating semi-hard emissions, which give rise to logarithms of the type \(\ln(1/x)\) [19; 20]. Among these generators, Cascade [10; 11], which is built on the Catani-Ciafaloni-Fiorani-Marchesini (CCFM) evolution equation [21; 22; 23; 24], is the most widely used in phenomenological studies (see Refs. [25; 26] for recent examples). However, none of the aforementioned parton showers takes into account the gluon recombination process that occurs in the dense target. The first attempt to include the saturation effect in the parton shower is presented in Ref. [27], where both the forward and the backward evolution schemes have been presented. The underlying parton branching equation employed in our formulation is the folded Gribov-Levin-Ryskin (GLR) equation [28]. Although the GLR equation is somewhat outdated compared to modern treatments of small \(x\) evolution [29; 30; 31; 32; 33; 34], it is sufficient for simulating events in eA collisions at EIC energy. This is because the gluon density probed at EIC is not high enough for the triple pomeron vertex to dominate the gluon fusion process. In the previous work [27], we performed a consistency check by comparing the transverse momentum distribution of exchanged gluons reconstructed from the parton shower generator with numerical solutions of the GLR equation. A full agreement between these two results was reached. The running coupling effect was also implemented in our Monte Carlo simulation. In the present work, we improve this parton branching algorithm by imposing the kinematic constraint arising from the requirement that the off-shellness of the \(t\)-channel gluon should be dominated by its transverse momentum squared [35; 36; 37].
Though it is formally a sub-leading logarithmic contribution, the kinematic constraint effect is known to significantly slow down the evolution speed. It is thus a necessary component of the Monte Carlo generator for any practical phenomenological study. Actually, the angular ordering of soft emissions is automatically imposed once the kinematic constraint is applied, since the angular ordering constraint is weaker than the kinematic constraint [35] in the small \(x\) limit. The coherent branching effect is thus effectively included in the parton shower. On the other hand, for the case of hard scattering processes involving multiple well-separated hard scales, like di-jet production in eA collisions, the transverse momentum dependent (TMD) type large logarithm \(\alpha_{s}\ln^{2}\left(Q^{2}/k_{\perp}^{2}\right)\) and the small \(x\) logarithm \(\alpha_{s}\ln\left(1/x\right)\) need to be simultaneously resummed. Such a joint resummation formalism has been established in a series of publications [38; 39; 40]. Another main objective of this work is to implement the joint resummation in the Monte Carlo simulation. The rest of the paper is organized as follows. In Sec. II, we discuss how to integrate the kinematic constraint effect into the parton shower algorithm. The formulations of both forward and backward evolution are presented. In Sec. III, the implementation of the joint resummation in the algorithm is discussed. Our starting point is the Sudakov factor derived from a folded version of the Collins-Soper (CS) equation and the renormalization group equation. It is shown that the \(k_{\perp}\) distribution reconstructed from the parton shower is identical to the numerical and analytical results obtained from the CS equation and the renormalization group equation. The paper is summarized in Sec. IV. ## II The kinematic constraint In our previous work [27], we developed a Monte Carlo method to simulate the parton shower at small \(x\) based on the GLR evolution equation [28]. That formulation only takes into account the summation of the leading logarithmic \(\ln{(1/x)}\) contribution, which is known to result in too rapid a growth of the gluon number density towards the small \(x\) region. From a phenomenological point of view, it is crucial to go beyond the leading logarithmic accuracy and include the various sub-leading logarithmic contributions [35; 36; 37; 41; 42; 43; 44; 45; 46; 47; 48], among which the kinematic constraint effect [35; 36; 37; 41] is a particularly interesting one. The kinematic constraint is required for the validity of the BFKL/GLR equation at small \(x\): it ensures that the virtuality of the gluons along the chain is controlled by their transverse momenta. The implementation of the kinematic constraint can significantly slow down the small \(x\) evolution and thus lead to a better description of the relevant phenomenology. Note that the angular ordering of the gluon emissions is automatically satisfied once the kinematic constraint is imposed in the small \(x\) limit. The coherent branching effect is thus effectively achieved following the steps outlined below. The starting point of the Monte Carlo implementation for such an effect is the folded GLR equation with the kinematic constraint. Following the arguments made in Refs. [35; 37], the transverse momentum squared of the radiated gluon, \(l_{\perp}^{2}\), must be smaller than \(\frac{1-z}{z}k_{\perp}^{2}\), where \(k_{\perp}\) and \(z\) are the transverse momentum and the longitudinal momentum fraction carried by the daughter gluon, respectively.
The inclusion of the kinematic constraint leads to a modified GLR equation, \[\frac{\partial N(\eta,k_{\perp})}{\partial\eta} = \frac{\bar{\alpha}_{s}}{\pi}\int\frac{\mathrm{d}^{2}l_{\perp}}{l _{\perp}^{2}}N\left(\eta+\ln\left[\frac{k_{\perp}^{2}}{k_{\perp}^{2}+l_{\perp} ^{2}}\right],l_{\perp}+k_{\perp}\right)-\frac{\bar{\alpha}_{s}}{\pi}\int_{0}^{ k_{\perp}}\frac{\mathrm{d}^{2}l_{\perp}}{l_{\perp}^{2}}N(\eta,k_{\perp})-\bar{ \alpha}_{s}N^{2}(\eta,k_{\perp}), \tag{1}\] with \(\bar{\alpha}_{s}=\alpha_{s}N_{c}/\pi\), \(\eta=\ln(x_{0}/x)\) and \(x_{0}=0.01\). The function \(N(\eta,k_{\perp})\) is related to the normal TMD gluon distribution \(G(\eta,k_{\perp})\) through \(N(\eta,k_{\perp})=\frac{2\alpha_{s}\pi^{2}}{S_{\perp}N_{c}}G(\eta,k_{\perp})\), with \(S_{\perp}\) being the transverse area of the nucleon/nucleus. Converting the above equation to the folded form of the GLR equation, it reads \[\frac{\partial}{\partial\eta}\frac{N(\eta,k_{\perp})}{\Delta_{ns}( \eta,k_{\perp})}=\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{\mathrm{cut}}} \frac{\mathrm{d}^{2}l_{\perp}}{l_{\perp}^{2}}\frac{N\left(\eta+\ln\left[\frac{ k_{\perp}^{2}}{k_{\perp}^{2}+l_{\perp}^{2}}\right],l_{\perp}+k_{\perp}\right)}{ \Delta_{ns}(\eta,k_{\perp})}, \tag{2}\] where \(\Delta_{ns}(\eta,k_{\perp})\) represents the probability of evolving from \(\eta_{0}\) to \(\eta\) without resolvable branching. It is given by \[\Delta_{ns}(\eta,k_{\perp})=\exp\left\{-\bar{\alpha}_{s}\int_{ \eta_{0}}^{\eta}d\eta^{\prime}\left[\ln\frac{k_{\perp}^{2}}{\Lambda_{\mathrm{ cut}}^{2}}+N(\eta^{\prime},k_{\perp})\right]\right\}, \tag{3}\] where the infrared cutoff \(\Lambda_{\mathrm{cut}}\) reflects a choice of what we classify as a resolvable emission. Emitted gluons with transverse momentum \(l_{\perp}<\Lambda_{\mathrm{cut}}\) are considered unresolvable, and their contribution has been combined with the virtual correction to cancel the infrared divergence. The resolvable branchings are defined as emissions above this cutoff. All-order contributions from the virtual correction and the unresolvable real emission are resummed into \(\Delta_{ns}(\eta,k_{\perp})\), which reduces to the non-Sudakov form factor [35] in the dilute limit where the saturation term is neglected. Eq. 2 can be converted into an integral form, \[N(\eta,k_{\perp})=N(\eta_{0},k_{\perp})\Delta_{ns}(\eta,k_{\perp})+\frac{ \bar{\alpha}_{s}}{\pi}\int_{\eta_{0}}^{\eta}\mathrm{d}\eta^{\prime}\frac{ \Delta_{ns}(\eta,k_{\perp})}{\Delta_{ns}(\eta^{\prime},k_{\perp})}\int_{ \Lambda_{\mathrm{cut}}}\frac{\mathrm{d}^{2}l_{\perp}}{l_{\perp}^{2}}N(\eta^{ \prime}+\ln\left[\frac{k_{\perp}^{2}}{k_{\perp}^{2}+l_{\perp}^{2}}\right],l _{\perp}+k_{\perp}). \tag{4}\] It is evident that the kinematic-constrained small \(x\) equation is no longer a local equation. Namely, the increase of the gluon number density at rapidity \(\eta\) is driven by the gluon distribution at rapidity \(\eta+\ln\left[\frac{k_{\perp}^{2}}{k_{\perp}^{2}+l_{\perp}^{2}}\right]\) rather than that at the same rapidity \(\eta\). The corresponding weighting factor needs to be modified dramatically for the non-local case, as shown below. ### Forward evolution With these derived folded evolution equations, we are now ready to introduce the Monte Carlo algorithm, starting with the forward evolution case. For a given initial condition \(N(\eta_{i},k_{\perp,i})\), the first quantity to be generated by the algorithm is the value of \(\eta_{i+1}\).
As was done in [27], this task can be achieved by solving the equation \[\mathcal{R}=\exp\left[-\bar{\alpha}_{s}\int_{\eta_{i}}^{\eta_{i+1}}\mathrm{d} \eta^{\prime}\left(\ln\frac{k_{\perp,i}^{2}}{\Lambda_{\mathrm{cut}}^{2}}+N( \eta^{\prime},k_{\perp,i})\right)\right], \tag{5}\] where \(\mathcal{R}\) is a random number distributed uniformly in the interval [0,1]. Throughout this paper, we always use \(\mathcal{R}\) to denote such a random number. \(N(\eta^{\prime},k_{\perp,i})\) is pre-generated by numerically solving the GLR equation with the kinematic constraint. In contrast to the DGLAP evolution, unitarity is not preserved during the course of the small \(x\) evolution. The number of gluons increases after each step of parton branching. The generated cascade thus needs to be re-weighted. For instance, if one neglects the saturation effect and the kinematic constraint effect, the number of gluons which vanish due to the virtual correction and the unresolved branching is proportional to \(\bar{\alpha}_{s}\int_{\Lambda_{\mathrm{cut}}}^{k_{\perp,i}}\frac{\mathrm{d}l_ {\perp}^{2}}{l_{\perp}^{2}}\), while the number of gluons produced via the real correction is proportional to \(\bar{\alpha}_{s}\int_{\Lambda_{\mathrm{cut}}}^{P_{\perp}}\frac{\mathrm{d}^{2 }l_{\perp}}{l_{\perp}^{2}}\), where \(P_{\perp}\) is the UV cutoff, in the same rapidity interval. The weighting function is given by the ratio of these two contributions, \(\mathcal{W}(k_{\perp,i})=\ln(\frac{P_{\perp}^{2}}{\Lambda_{\mathrm{cut}}^{2}} )/\ln(\frac{k_{\perp,i}^{2}}{\Lambda_{\mathrm{cut}}^{2}})\). It is quite non-trivial to work out the correct weighting factor when the kinematic constraint is implemented in the parton branching algorithm. Let us first discuss the derivation of the weighting factor for the case of the fixed boundary prescription. To work out the correct weighting coefficient, we first write down the expression for the fraction of gluons at \([\eta_{i+1},\eta_{i+1}+\delta\eta]\) that come from the branching between \(\eta_{i+1}\) and \(\eta_{i}\), \[\delta\eta\frac{\partial}{\partial\eta_{i+1}}\left[\frac{\bar{ \alpha}_{s}}{\pi}\int_{\eta_{i}}^{\eta_{i+1}}\mathrm{d}\eta^{\prime}\int_{ \Lambda_{\mathrm{cut}}}\frac{\mathrm{d}^{2}l_{\perp}}{l_{\perp}^{2}}e^{-\bar{ \alpha}_{s}\int_{\eta_{i}}^{\eta^{\prime}}\mathrm{d}\eta\left[\ln\frac{k_{ \perp,i}^{2}}{\Lambda_{\mathrm{cut}}^{2}}+N(\eta,k_{\perp,i})\right]}\theta \left(\frac{1-z^{\prime}}{z^{\prime}}(k_{\perp,i}-l_{\perp})^{2}-l_{\perp}^{2} \right)\right]\] \[=\delta\eta\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{\mathrm{cut }}}^{\min\left[P_{\perp},\sqrt{\frac{1-z}{z}(k_{\perp,i}-l_{\perp})^{2}} \right]}\frac{\mathrm{d}^{2}l_{\perp}}{l_{\perp}^{2}}e^{-\bar{\alpha}_{s}\int_ {\eta_{i}}^{\eta_{i+1}+\ln\frac{(k_{\perp,i}-l_{\perp})^{2}}{(k_{\perp,i}-l_{\perp})^{2}+l_{\perp}^{2}}}\mathrm{d}\eta\left[\ln\frac{k_{\perp,i}^{2}} {\Lambda_{\mathrm{cut}}^{2}}+N(\eta,k_{\perp,i})\right]}, \tag{6}\] with \(z^{\prime}=x_{i+1}/x^{\prime}=\exp[\eta^{\prime}-\eta_{i+1}]\). The kinematic constraint is imposed by the \(\theta\)-function. Note that the term originating from the derivative acting on the integral boundary is equal to 0. The entire contribution comes from the derivative acting on the \(\theta\)-function.
Meanwhile, the fraction of gluons that leave the rapidity interval \([\eta_{i+1},\eta_{i+1}+\delta\eta]\) due to the virtual correction is \[\delta\eta\frac{\partial e^{-\bar{\alpha}_{s}\int_{\eta_{i}}^{\eta_{i+1}} \mathrm{d}\eta\left[\ln\frac{k_{\perp,i}^{2}}{\Lambda_{\mathrm{cut}}^{2}}+N( \eta,k_{\perp,i})\right]}}{\partial\eta_{i+1}}=-\delta\eta\bar{\alpha}_{ s}\left[\ln\frac{k_{\perp,i}^{2}}{\Lambda_{\mathrm{cut}}^{2}}+N(\eta_{i+1},k_{ \perp,i})\right]e^{-\bar{\alpha}_{s}\int_{\eta_{i}}^{\eta_{i+1}}\mathrm{d} \eta\left[\ln\frac{k_{\perp,i}^{2}}{\Lambda_{\mathrm{cut}}^{2}}+N(\eta,k_{ \perp,i})\right]}. \tag{7}\] For the non-local small \(x\) evolution, one also needs the input gluon distribution beyond the small \(x\) boundary \(x_{0}=0.01\). There are two common choices for the boundary conditions: i) the fixed boundary prescription, \(N(\eta<0,k_{\perp})=0\); ii) the frozen boundary prescription, \(N(\eta<0,k_{\perp})=N(\eta=0,k_{\perp})\). The weighting functions are thus different for the different rapidity boundary prescriptions. For the fixed boundary prescription, the re-weighting function is given by \[\mathcal{W}_{kc,1}(\eta_{i},\eta_{i+1};k_{\perp,i})=\frac{(\eta_{i+1}-\eta_{i}) \int_{\Lambda_{\rm cut}}^{\min\left[P_{\perp},\sqrt{\frac{1-z}{z}(k_{\perp,i} -l_{\perp})^{2}}\right]}\frac{{\rm d}^{2}l_{\perp}}{l_{\perp}^{2}}e^{-\bar{ \alpha}_{s}\int_{\eta_{i+1}}^{\eta_{i+1}+\ln\frac{(k_{\perp,i}-l_{\perp})^{2}}{(k_{\perp,i}-l_{\perp})^{2}+l_{\perp}^{2}}}\,{\rm d}\eta\left[\ln\frac{k_{\perp,i}^{2}}{\Lambda_{\rm cut}^{2}}+ N(\eta,k_{\perp,i})\right]}}{(\eta_{i+1}-\eta_{i})\ln\frac{k_{\perp,i}^{2}}{\Lambda_{ \rm cut}^{2}}+\int_{\eta_{i}}^{\eta_{i+1}}d\eta N(\eta,k_{\perp,i})}. \tag{8}\] Here, the values of \(|l_{\perp}|\) and \(\phi_{l}\) can be generated by solving the following equations, \[\mathcal{R} =\frac{1}{\mathcal{C}}\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{ \rm cut}}^{l_{\perp}}\frac{{\rm d}^{2}l_{\perp}^{\prime}}{l_{\perp}^{\prime 2}} \exp\left\{-\bar{\alpha}_{s}\int_{\eta_{i}}^{\eta_{i+1}+\ln\frac{(k_{\perp,i}-l _{\perp}^{\prime})^{2}}{(k_{\perp,i}-l_{\perp}^{\prime})^{2}+l_{\perp}^{\prime 2}}}\,{\rm d}\eta \left[\ln\frac{k_{\perp,i}^{2}}{\Lambda_{\rm cut}^{2}}+N(\eta,k_{\perp,i}) \right]\right\}, \tag{9}\] \[\mathcal{C} =\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{\rm cut}}^{\min[P_{ \perp},\sqrt{(k_{\perp,i}-l_{\perp}^{\prime})^{2}\frac{1-z}{z}}]}\frac{{\rm d }^{2}l_{\perp}^{\prime}}{l_{\perp}^{\prime 2}}\exp\left\{-\bar{\alpha}_{s}\int_{ \eta_{i}}^{\eta_{i+1}+\ln\frac{(k_{\perp,i}-l_{\perp}^{\prime})^{2}}{(k_{\perp,i}-l_{\perp}^{\prime})^{2}+l_{\perp}^{\prime 2}}}\,{\rm d}\eta\left[\ln\frac{k_{\perp,i}^{2}}{ \Lambda_{\rm cut}^{2}}+N(\eta,k_{\perp,i})\right]\right\}, \tag{10}\] where \(\mathcal{R}\) again is a random number and \(\mathcal{C}\) is the normalization factor ensuring that the r.h.s. of Eq. 9 resides in the region \([0,1]\). In the practical Monte Carlo implementation, a veto algorithm is used for efficiency. Once \(|l_{\perp}|\) and \(\phi_{l}\) are generated, \(l_{\perp}\) and \(k_{\perp,i+1}\) can be reconstructed subsequently. We repeat the procedure outlined above until \(\eta_{i+1}\) reaches a minimal cut-off value \(\eta_{\rm min}\).
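To make the forward step concrete, the following is a minimal Python sketch of one iteration: it draws \(\eta_{i+1}\) by inverting Eq. (5), and then draws the emitted gluon momentum with a veto step in the spirit of Eqs. (9)-(10), proposing from \(\mathrm{d}^{2}l_{\perp}/l_{\perp}^{2}\) and accepting with the exponential weight subject to the kinematic constraint. The toy input `N_toy`, the fixed coupling, the momentum-fraction convention \(z=\exp(\eta_{i}-\eta_{i+1})\), and all parameter values are illustrative assumptions; the actual generator uses the tabulated solution of the kinematic-constrained GLR equation and applies the weight of Eq. (8) to each accepted branching.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
ABAR, LCUT, PT = 0.2, 0.2, 10.0      # fixed coupling, IR and UV cutoffs in GeV (assumed)

def N_toy(eta, kt):
    # stand-in for the tabulated kinematic-constrained GLR solution
    return np.exp(-kt**2 / (1.0 + 0.3 * eta))

def kernel(eta_lo, eta_hi, kt, n=101):
    # bar(alpha)_s * int deta [ln(kt^2/Lcut^2) + N(eta,kt)], the exponent of Eq. (3)
    e = np.linspace(eta_lo, eta_hi, n)
    y = np.log(kt**2 / LCUT**2) + N_toy(e, kt)
    return ABAR * float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(e)))

def next_eta(eta_i, kt, eta_max=8.0):
    # invert Eq. (5): R = exp(-kernel(eta_i, eta_{i+1}))
    target = -np.log(rng.uniform())
    if kernel(eta_i, eta_max, kt) < target:
        return None                  # no resolvable branching before the rapidity cutoff
    return brentq(lambda e: kernel(eta_i, e, kt) - target, eta_i, eta_max)

def emit_lperp(eta_i, eta_ip1, k_vec, tries=100000):
    # veto step: propose flat in ln(l^2), accept with the (<= 1) exponential weight
    kt = float(np.linalg.norm(k_vec))
    z = np.exp(eta_i - eta_ip1)      # assumed momentum-fraction convention
    for _ in range(tries):
        lt = LCUT * (PT / LCUT) ** rng.uniform()
        phi = rng.uniform(0.0, 2.0 * np.pi)
        l_vec = lt * np.array([np.cos(phi), np.sin(phi)])
        kml2 = float(np.dot(k_vec - l_vec, k_vec - l_vec))
        if lt**2 > (1.0 - z) / z * kml2:
            continue                 # rejected by the kinematic constraint
        eta_up = eta_ip1 + np.log(kml2 / (kml2 + lt**2))
        if eta_up <= eta_i or rng.uniform() > np.exp(-kernel(eta_i, eta_up, kt)):
            continue                 # vetoed
        return l_vec
    raise RuntimeError("veto sampling did not converge")

print(next_eta(0.0, 1.5))                            # draw eta_{i+1}
print(emit_lperp(0.0, 1.2, np.array([1.5, 0.0])))    # draw l_perp for a step 0 -> 1.2
```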
Once the whole cascade is generated, we are able to reconstruct the gluon \(k_{\perp}\) distribution at arbitrary rapidity. For the frozen boundary case, the weighting factor has to be modified to \[\mathcal{W}_{kc,2}(\eta_{i},\eta_{i+1};k_{\perp,i},k_{\perp,i+1})=\frac{(\eta_ {i+1}-\eta_{i})\ln\frac{P_{\perp}^{2}}{\Lambda_{\rm cut}^{2}}}{(\eta_{i+1}-\eta_{i}) \ln\frac{k_{\perp,i}^{2}}{\Lambda_{\rm cut}^{2}}+\int_{\eta_{i}}^{\eta_{i+1} }d\eta N(\eta,k_{\perp,i})}\frac{N(\eta_{i}+\ln\left[\frac{k_{\perp,i+1}^{2}} {k_{\perp,i+1}^{2}+l_{\perp}^{\prime 2}}\right],k_{\perp,i})}{N(\eta_{i},k_{\perp,i})}, \tag{11}\] and the radiated gluon transverse momentum \(l_{\perp}\) is sampled by solving the following equation, \[\mathcal{R}=\frac{1}{\mathcal{C}}\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{ \rm cut}}^{l_{\perp}}\frac{{\rm d}^{2}l_{\perp}^{\prime}}{l_{\perp}^{\prime 2}}, \tag{12}\] where the normalization factor for this case is given by \(\mathcal{C}=\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{\rm cut}}^{P_{\perp}} \frac{{\rm d}^{2}l_{\perp}^{\prime}}{l_{\perp}^{\prime 2}}\). The \(k_{\perp}\) distribution of the exchanged gluons that directly attach to the hard part can be reconstructed from the forward evolution algorithm described above. Using the recipes described above, we are now ready to generate the parton cascade. Following the conventional choice, we use the MV model [49; 50] result as the initial condition at rapidity \(\eta_{0}=0\). Since we are interested in simulating events such as di-jet production in eA collisions, it is suitable to utilize the Weizsäcker-Williams (WW) gluon distribution as the initial condition [51]. It is given by \[N(\eta_{0},k_{\perp})=\int\frac{d^{2}r_{\perp}}{2\pi}e^{-ik_{\perp}\cdot r_{ \perp}}\frac{1}{r_{\perp}^{2}}\left(1-\exp\bigl{[}-\frac{1}{4}Q_{s0}^{2}r_{ \perp}^{2}\ln(e+\frac{1}{\Lambda r_{\perp}})\bigr{]}\right), \tag{13}\] with \(Q_{s0}^{2}=1\) GeV\({}^{2}\) and \(\Lambda=0.24\) GeV. We explored the behavior of the parton cascade with both the fixed boundary prescription and the frozen boundary prescription. From Fig. 1, one can see that the \(k_{\perp}\) distribution obtained from the forward approach is in perfect agreement with the numerical solutions of the kinematic-constrained GLR equation for both boundary conditions. ### Backward evolution We now turn to discuss how to implement the kinematic constraint in the backward evolution, which is far more efficient in generating the initial state parton shower than the forward approach. The rapidity \(\eta_{i+1}\) of the gluon participating in the hard scattering is fixed by external kinematics. \(k_{\perp,i+1}\) at the rapidity \(\eta_{i+1}\) can be sampled with the distribution \(N(\eta_{i+1},k_{\perp,i+1})\), which has to be determined beforehand by numerically solving the evolution equation. The next step is to generate \(\eta_{i}\) using a modified non-Sudakov form factor. The modified non-Sudakov form factor, \(\Pi_{ns}\), can be related to the forward non-Sudakov form factor \(\Delta_{ns}\) and the gluon distribution \(N\) as \[\Pi_{ns}(\eta_{i+1},\eta_{i};k_{\perp,i+1})=\frac{\Delta_{ns}(\eta_{i+1},k_{ \perp,i+1})N(\eta_{i},k_{\perp,i+1})}{\Delta_{ns}(\eta_{i},k_{\perp,i+1})N(\eta _{i+1},k_{\perp,i+1})}, \tag{14}\] which looks similar to that derived in our previous work [27]. However, one has to keep in mind that the gluon distributions appearing in the above formula are obtained by solving the GLR equation with the kinematic constraint.
On the other hand, the non-Sudakov factor can also be expressed as [27] \[\Pi_{ns}(\eta_{i+1},\eta_{i};k_{\perp,i+1})=\exp\left[-\frac{\bar{\alpha}_{s}}{\pi} \int_{\eta_{i}}^{\eta_{i+1}}\,\mathrm{d}\eta\int_{\Lambda_{\mathrm{cut}}}^{P_ {\perp}}\frac{\mathrm{d}^{2}l_{\perp}}{l_{\perp}^{2}}\frac{N\left(\eta+\ln \left[\frac{k_{\perp,i+1}^{2}}{k_{\perp,i+1}^{2}+l_{\perp}^{2}}\right],k_{\perp, i+1}+l_{\perp}\right)}{N(\eta,k_{\perp,i+1})}\right]. \tag{15}\] Both non-Sudakov form factors can be equally well used to generate \(\eta_{i}\) for a given \(\eta_{i+1}\), by solving the following equation, \[\mathcal{R}=\Pi_{ns}(\eta_{i+1},\eta_{i};k_{\perp,i+1}). \tag{16}\] The transverse momentum of the radiated gluon, \(l_{\perp}\), can be generated according to \[\mathcal{R} =\frac{1}{\mathcal{C}}\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{ \mathrm{cut}}}^{l_{\perp}}\frac{\mathrm{d}^{2}l^{\prime}_{\perp}}{l^{\prime 2}_{ \perp}}N\left(\eta_{i+1}+\ln\left[\frac{k_{\perp,i+1}^{2}}{k_{\perp,i+1}^{2}+l ^{\prime 2}_{\perp}}\right],k_{\perp,i+1}+l^{\prime}_{\perp}\right), \tag{17}\] \[\mathcal{C} =\frac{\bar{\alpha}_{s}}{\pi}\int_{\Lambda_{\mathrm{cut}}}^{P_{ \perp}}\frac{\mathrm{d}^{2}l^{\prime}_{\perp}}{l^{\prime 2}_{\perp}}N\left(\eta_{i+1}+ \ln\left[\frac{k_{\perp,i+1}^{2}}{k_{\perp,i+1}^{2}+l^{\prime 2}_{\perp}}\right],k_{\perp,i+1}+l^{ \prime}_{\perp}\right). \tag{18}\] Once again, \(\mathcal{R}\) is a random number, \(\mathcal{C}\) is the normalization factor, and a veto algorithm is employed in our practical implementation to make this sampling procedure more efficient. Similar to the forward evolution case, the generated event has to be re-weighted after each branching in the backward evolution method as well. When deriving the weighting factor, it is important to notice that the GLR equation with the kinematic constraint is a non-local evolution equation. The weighting factor associated with backward evolution is the ratio of the fraction of gluons that appear from branching at the rapidity \(\eta_{i}+\ln\frac{k_{\perp,i+1}^{2}}{k_{\perp,i+1}^{2}+l_{\perp}^{2}}\) to the fraction of gluons that vanish at the rapidity \(\eta_{i}\) due to the virtual correction and the fusion process. It reads \[\mathcal{W}_{kc,\mathrm{back}}(\eta_{i+1},\eta_{i};k_{\perp,i+1})=\frac{(\eta _{i+1}-\eta_{i})\ln\frac{k_{\perp,i+1}^{2}}{\Lambda_{\mathrm{cut}}^{2}}+\int _{\eta_{i}}^{\eta_{i+1}}\,d\eta N(\eta,k_{\perp,i})}{(\eta_{i+1}-\eta_{i})\ln \frac{P_{\perp}^{2}}{\Lambda_{\mathrm{cut}}^{2}}}\frac{N(\eta_{i},k_{\perp,i })}{N(\eta_{i}+\ln\left[\frac{k_{\perp,i+1}^{2}}{k_{\perp,i+1}^{2}+l_{\perp }^{2}}\right],k_{\perp,i})}. \tag{19}\] The procedure outlined above is repeated until \(\eta_{i}\) is smaller than \(\eta_{0}\). The last step of the simulation is to construct the four momenta of the radiated gluons. Note that the minus component of a \(t\)-channel gluon's four momentum can only be reconstructed after the full cascade has been generated. By going from the last \(t\)-channel gluon (closest to the nucleus), which has a vanishing minus component, forward in the cascade to the hard scattering process, the true minus components of the \(t\)-channel gluons are constructed. In Fig. 2, we compare the gluon \(k_{\perp}\) distributions at different rapidities generated from backward evolution to the numerical solutions of the GLR equation with the kinematic constraint. A perfect match between the gluon \(k_{\perp}\) distributions obtained from the backward approach and those obtained by numerically solving the kinematic-constrained GLR equation is found.

Figure 2: Comparison of the gluon \(k_{\perp}\) distributions obtained from the backward approach with the numerical solutions of the GLR equation at different rapidities (Color online).
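A minimal sketch of the backward rapidity step is given below: \(\Delta_{ns}\) is tabulated on a rapidity grid, \(\Pi_{ns}\) is formed as the ratio of Eq. (14), and Eq. (16) is inverted by a table lookup. The grid, the toy input `N_toy`, and the fixed coupling are illustrative assumptions standing in for the pre-generated numerical solution of the kinematic-constrained GLR equation.

```python
import numpy as np

rng = np.random.default_rng(11)
ABAR, LCUT = 0.2, 0.2
ETAS = np.linspace(0.0, 8.0, 801)     # rapidity grid of the pre-generated tables

def N_toy(eta, kt):
    # stand-in for the pre-generated kinematic-constrained GLR solution
    return np.exp(-kt**2 / (1.0 + 0.3 * eta))

def delta_ns_grid(kt):
    # Delta_ns(eta, kt) of Eq. (3), accumulated on the rapidity grid
    y = np.log(kt**2 / LCUT**2) + N_toy(ETAS, kt)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(ETAS))])
    return np.exp(-ABAR * cum)

def sample_eta_backward(eta_ip1, kt):
    # invert R = Pi_ns(eta_{i+1}, eta_i; kt), Eq. (16), with Pi_ns from Eq. (14)
    dns = delta_ns_grid(kt)
    i_hi = int(np.searchsorted(ETAS, eta_ip1))
    pi_ns = dns[i_hi] * N_toy(ETAS[:i_hi + 1], kt) \
            / (dns[:i_hi + 1] * N_toy(eta_ip1, kt))
    r = rng.uniform()
    if r <= pi_ns[0]:
        return None                   # branching point lies below eta_0: evolution ends
    j = int(np.searchsorted(pi_ns, r))  # pi_ns grows monotonically with eta_i
    return float(ETAS[min(j, i_hi)])

print(sample_eta_backward(6.0, 1.5))
```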
## III \(k_{t}\) resummation in the small \(x\) limit Our ultimate goal is to build a parton shower generator for simulating events in eA collisions at EIC. The hard scattering processes occurring in eA collisions often involve multiple scales. For instance, loosely speaking, there are three well separated scales in back-to-back di-jet production: the center-of-mass energy \(\sqrt{s}\), the invariant mass of the di-jet \(Q\), and the total transverse momentum of the di-jet system \(k_{\perp}\). To improve the convergence of the perturbative series, the two types of large logarithms, \(\alpha_{s}\ln\left(s/Q^{2}\right)\) and \(\alpha_{s}\ln^{2}\left(Q^{2}/k_{\perp}^{2}\right)\), that arise in higher order calculations of the di-jet production cross section have to be resummed to all orders. The summation of the logarithmic contribution \(\alpha_{s}\ln\left(s/Q^{2}\right)\) is achieved by solving the small \(x\) evolution equation, while the logarithmic contribution \(\alpha_{s}\ln^{2}\left(Q^{2}/k_{\perp}^{2}\right)\) can be resummed by means of the CS equation. A unified framework that allows us to resum both large logarithms simultaneously in a consistent way has been developed in a sequence of papers [38; 39; 40]. The evolved small \(x\) gluon TMD can be expressed as the convolution of the Sudakov form factor and the renormalized dipole amplitudes. It has been stressed in Refs. [39; 40] that at small \(x\), gluon TMDs can only be matched onto dipole scattering amplitudes, rather than the normal gluon PDFs of collinear factorization. We notice that such a joint resummation formalism has been studied in various different contexts [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73]. To simulate hard scattering processes involving multiple scales in a parton shower generator, it is necessary to develop a Monte Carlo branching algorithm that effectively resums both types of logarithms through an iteration procedure. The essential observation that enables the computer implementation of the joint resummation is the following. In the backward approach, the evolution starts from the final \(t\)-channel gluon with the most negative virtual mass-squared, which participates in the hard process. As the parton cascade develops towards the backward direction, the virtual mass of the \(t\)-channel gluon decreases by radiating soft gluons with longitudinal momentum fraction \(1-z\to 0\). This first stage of the evolution is described by the CS equation and the renormalization group equation, which resum the double leading \(k_{t}\) logarithm and the single leading \(k_{t}\) logarithm, respectively. When the virtual mass of the \(t\)-channel gluon goes down to a scale of the order of the saturation scale, we should perform the small \(x\) evolution. The precise value of this scale should be fixed by fitting the output of the cascade to the experimental data. During the course of the small \(x\) evolution, the virtual mass of the \(t\)-channel gluon stops monotonically decreasing, whereas its longitudinal momentum fraction increases rapidly until the small \(x\) evolution initial boundary is reached. In this second stage of the evolution, the development of the parton cascade is mainly driven by the radiated gluons that carry a large longitudinal momentum fraction, \(1-z\to 1\). Therefore, the Monte Carlo algorithm based on the GLR equation should be applied to generate the parton branching at this stage.
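Schematically, the backward generator described above therefore consists of two nested loops. The skeleton below is only meant to display this control flow; the two samplers are trivial toy stand-ins here, while realistic versions are constructed in the remainder of this section and in Sec. II.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_cs_step(Q2, Q2_sat):
    # toy stand-in for one backward CS/RG step: the virtuality decreases,
    # a soft gluon with 1 - z -> 0 is emitted (see the backward evolution below)
    Q2_next = Q2 * rng.uniform(0.3, 0.9)
    return (Q2_next, rng.exponential(1.0)) if Q2_next > Q2_sat else None

def sample_glr_step(eta):
    # toy stand-in for one backward GLR step: the rapidity decreases towards
    # eta_0, emissions carry large momentum fractions 1 - z -> 1 (cf. Sec. II)
    return eta - rng.exponential(0.5), rng.exponential(1.0)

def backward_cascade(Q2_hard=169.0, Q2_sat=2.0, eta_hard=6.0, eta0=0.0):
    # two-stage backward evolution: k_t (CS/RG) steps in virtuality first,
    # then small-x (GLR) steps in rapidity, following the text above
    chain, Q2, eta = [], Q2_hard, eta_hard
    while True:                          # stage 1: TMD evolution
        step = sample_cs_step(Q2, Q2_sat)
        if step is None:
            break
        Q2, lt = step
        chain.append(("CS", Q2, lt))
    while eta > eta0:                    # stage 2: small-x evolution
        eta, lt = sample_glr_step(eta)
        chain.append(("GLR", max(eta, eta0), lt))
    return chain

print(backward_cascade())
```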
To simulate the first stage of the evolution, our primary task is to derive a folded version of the CS equation and the renormalization group equation. To this end, we write down the CS equation in momentum space, \[\frac{\partial N(\mu^{2},\zeta^{2},\eta,k_{\perp})}{\partial\ln\zeta^{2}}= \frac{\bar{\alpha}_{s}}{2\pi}\int_{0}^{\zeta}\frac{d^{2}l_{\perp}}{l_{\perp}^{ 2}}\left[N(\mu^{2},\zeta^{2},\eta,k_{\perp}+l_{\perp})-N(\mu^{2},\zeta^{2}, \eta,k_{\perp})\right], \tag{20}\] which can be converted into the conventional expression of the CS equation [74] after a Fourier transform, up to leading logarithmic accuracy. Here, \(\mu\) is the factorization scale, and \(\zeta\) is a scale introduced to regularize the light cone divergence. The factorization scale dependence of the gluon TMD in the saturation regime is described by the normal renormalization group equation [39], \[\frac{\partial N(\mu^{2},\zeta^{2},\eta,k_{\perp})}{\partial\ln\mu^{2}}=\bar {\alpha}_{s}\left[\beta_{0}-\frac{1}{2}\ln\frac{\zeta^{2}}{\mu^{2}}\right]N( \mu^{2},\zeta^{2},\eta,k_{\perp}), \tag{21}\] with \(\beta_{0}=\frac{11}{12}-\frac{N_{f}}{6N_{c}}\) and \(N_{f}=3\) in this work. By choosing the factorization scale \(\mu\) to be \(\zeta\), one can combine the CS equation and the renormalization group equation together. The combined evolution equation reads \[\frac{\partial N(Q^{2},\eta,k_{\perp})}{\partial\ln Q^{2}}=\frac{\bar{\alpha}_ {s}}{2\pi}\int_{0}^{Q}\frac{d^{2}l_{\perp}}{l_{\perp}^{2}}\left[N(Q^{2},\eta, k_{\perp}+l_{\perp})-N(Q^{2},\eta,k_{\perp})\right]+\bar{\alpha}_{s}\beta_{0}N(Q^{2}, \eta,k_{\perp}), \tag{22}\] where \(N(Q^{2},\eta,k_{\perp})\equiv N(\mu^{2}=Q^{2},\zeta^{2}=Q^{2},\eta,k_{\perp})\). Following the standard procedure, the above evolution equation can be cast into a folded equation, \[\frac{\partial}{\partial\ln Q^{2}}\frac{N(Q^{2},\eta,k_{\perp})}{ \Delta_{s}(Q^{2})}=\frac{\bar{\alpha}_{s}}{2\pi}\int_{\Lambda_{\rm cut}}^{Q} \frac{d^{2}l_{\perp}}{l_{\perp}^{2}}\frac{N(Q^{2},\eta,k_{\perp}+l_{\perp})}{ \Delta_{s}(Q^{2})}, \tag{23}\] with the Sudakov form factor given by \[\Delta_{s}(Q^{2})=\exp\left[-\int_{Q_{0}^{2}}^{Q^{2}}\frac{dt}{t} \frac{\bar{\alpha}_{s}(t)}{2}\left(\ln\frac{t}{\Lambda_{\rm cut}^{2}}-2\beta_ {0}\right)\right]. \tag{24}\] The Sudakov form factor is simply the probability of evolving from \(Q_{0}\) to \(Q\) without branching. Eq. 23 can be integrated to give an integral equation for \(N(Q^{2},\eta,k_{\perp})\) in terms of the gluon TMD at the initial scale \(Q_{0}\): \[N(Q^{2},\eta,k_{\perp})=N(Q_{0}^{2},\eta,k_{\perp})\Delta_{s}(Q^ {2})+\int_{Q_{0}^{2}}^{Q^{2}}\frac{dt}{t}\frac{\Delta_{s}(Q^{2})}{\Delta_{s}( t)}\frac{\bar{\alpha}_{s}(t)}{2\pi}\int_{\Lambda_{\rm cut}}^{Q}\frac{d^{2}l_{ \perp}}{l_{\perp}^{2}}N(t,\eta,k_{\perp}+l_{\perp}). \tag{25}\] With the derived folded CS and renormalization group equation, we are ready to introduce the Monte Carlo implementation of the \(k_{t}\) resummation formulated in the framework of the CGC effective theory. ### Forward evolution As a consistency check, we first present the formulation of the forward evolution scheme. The combined CS and renormalization group equation can be solved using the forward evolution approach. We lay out the main procedures in the following.
For a given virtuality scale \(Q_{i}\), either after several steps of evolution or at the initial condition, we first generate the value of a higher virtuality scale \(Q_{i+1}\), where the next branching occurs. Following the conventional method, this can be achieved by solving the following equation, \[\mathcal{R}=\exp\left[-\int_{Q_{i}^{2}}^{Q_{i+1}^{2}}\frac{dt}{t }\bar{\alpha}_{s}(t)\left(\frac{1}{2}\ln\frac{t}{\Lambda_{\rm cut}^{2}}-\beta_ {0}\right)\right], \tag{26}\] where the argument of the running coupling \(\alpha_{s}\) is simply chosen to be the virtual mass squared. Once \(Q_{i+1}\) is generated, the transverse momentum of the radiated gluon, \(l_{\perp,i+1}\), can be determined according to the following equation, \[\mathcal{R}=\frac{1}{\mathcal{C}}\int_{\Lambda_{\rm cut}}^{l_{ \perp,i+1}}\frac{d^{2}l^{\prime}_{\perp}}{l^{\prime 2}_{\perp}}, \tag{27}\] where the normalization factor reads \(\mathcal{C}=\int_{\Lambda_{\rm cut}}^{Q_{i+1}}\frac{d^{2}l^{\prime}_{\perp}}{l^{\prime 2}_{\perp}}\). The four momenta of the radiated gluon and the \(t\)-channel gluon can be determined from momentum conservation and the on-shell condition. We will discuss the reconstruction of kinematics in more detail in the next subsection. The generated cascade needs to be re-weighted. This is because unitarity is no longer preserved beyond the leading double logarithmic approximation. We have included the leading single logarithmic contribution in the algorithm employed here, which leads to the increase of the gluon number density after each splitting. The weighting factor is given by \[\mathcal{W}_{\rm CS}(Q_{i+1}^{2},Q_{i}^{2})=\frac{\int_{Q_{i}^{2 }}^{Q_{i+1}^{2}}\frac{dt}{t}\alpha_{s}(t)\ln\frac{t}{\Lambda_{\rm cut}^{2}} }{\int_{Q_{i}^{2}}^{Q_{i+1}^{2}}\frac{dt}{t}\alpha_{s}(t)\left[\ln\frac{t}{ \Lambda_{\rm cut}^{2}}-2\beta_{0}\right]}. \tag{28}\] If the single logarithmic contribution associated with the \(\beta_{0}\) term in the denominator is neglected, the weighting factor reduces to 1. With these re-weighted parton cascades, one can reconstruct the \(t\)-channel gluon \(k_{\perp}\) distribution at different scales and compare with the analytical and numerical solutions of Eq. 22. It is straightforward to numerically solve Eq. 22, while the analytical solution of Eq. 22 can also be easily obtained in impact parameter space. After Fourier transforming back to momentum space, the evolved gluon TMD distribution reads \[N(Q^{2},\eta,k_{\perp})=\int\frac{d^{2}b_{\perp}}{(2\pi)^{2}}e^{ ik_{\perp}\cdot b_{\perp}}e^{-S(\mu_{b}^{2},Q^{2})}\int d^{2}l_{\perp}e^{-il_{ \perp}\cdot b_{\perp}}N(\eta,l_{\perp}), \tag{29}\] where \(N(\eta,l_{\perp})\) is the gluon distribution evolved with the GLR equation, or the initial condition computed in the MV model. The Sudakov factor at one loop level in impact parameter (\(b_{\perp}\)) space consists of a perturbative part and a non-perturbative part. It is given by \[S(\mu_{b}^{2},Q^{2})=S_{pert}(\mu_{b*}^{2},Q^{2})+S_{NP}(b_{\perp}^{2},Q^{2}). \tag{30}\] The perturbative Sudakov factor reads \[S_{pert}(\mu_{b*}^{2},Q^{2})=\frac{N_{c}}{2\pi}\int_{\mu_{b*}^{2}}^{Q^{2}}\frac {d\mu^{2}}{\mu^{2}}\alpha_{s}(\mu)\left[\ln\frac{Q^{2}}{\mu^{2}}-2\beta_{0} \right], \tag{31}\] where \(\mu_{b*}^{2}\) is defined as \(\mu_{b*}^{2}=4e^{-2\gamma_{E}}/b_{\perp*}^{2}\), with \(b_{\perp*}=\frac{b_{\perp}}{\sqrt{1+b_{\perp}^{2}/b_{\rm max}^{2}}}\) and \(b_{\rm max}=1.5\;{\rm GeV}^{-1}\).
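The scale sampling of Eq. (26) can be implemented by one-dimensional root finding, as in the following sketch; the one-loop coupling anticipates Eq. (32) below, while the infrared cutoff value and the hard scale are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

LAMBDA_QCD_SQ = 0.0578                     # GeV^2, as quoted below Eq. (32)
LAMBDA_CUT_SQ = 0.25                       # GeV^2, illustrative IR cutoff (assumed)
BETA0 = 11.0 / 12.0 - 3.0 / (6.0 * 3.0)    # N_f = 3, N_c = 3

def abar(t):
    # bar(alpha)_s(t) = alpha_s(t) N_c / pi with the one-loop coupling of Eq. (32)
    return 1.0 / (BETA0 * np.log(t / LAMBDA_QCD_SQ))

def sudakov_exponent(t_lo, t_hi):
    # exponent of Eq. (26): int dt/t bar(alpha)_s(t) (0.5 ln(t/Lcut^2) - beta_0)
    f = lambda t: abar(t) / t * (0.5 * np.log(t / LAMBDA_CUT_SQ) - BETA0)
    return quad(f, t_lo, t_hi)[0]

def sample_next_Q2(Q2_i, Q2_hard=169.0, rng=np.random.default_rng(3)):
    # draw the next branching scale Q_{i+1}^2 by inverting Eq. (26)
    target = -np.log(rng.uniform())
    if sudakov_exponent(Q2_i, Q2_hard) < target:
        return None                        # no branching below the hard scale
    return brentq(lambda t: sudakov_exponent(Q2_i, t) - target, Q2_i, Q2_hard)

print(sample_next_Q2(9.0))                 # one step starting from Q_0 = 3 GeV
```

In the generator, each accepted step would additionally carry the weight of Eq. (28).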
To compare with the Monte Carlo result on the same footing, we simply neglect the non-perturbative Sudakov factor \(S_{NP}\) in the numerical calculation. The behaviour at large \(b_{\perp}\) is regulated by \(N(\eta,b_{\perp})\), which is the Fourier transform of \(N(\eta,l_{\perp})\). In this work, we use the one-loop running coupling, which reads \[\alpha_{s}(\mu^{2})=\frac{1}{\beta_{0}\frac{N_{c}}{\pi}\ln(\mu^{2}/\Lambda_{ \rm QCD}^{2})}, \tag{32}\] with \(\Lambda_{\rm QCD}^{2}=0.0578\;{\rm GeV}^{2}\). We present the \(t\)-channel gluon \(k_{\perp}\) distribution constructed from the generated parton cascade and compare it with the numerical solution of the CS-renormalization group equation for the fixed coupling case in the left panel of Fig. 3. In our estimation, the MV model is employed to provide the gluon distribution at the initial scale \(Q_{0}=3\) GeV. In the formulation of TMD evolution, all soft-radiated gluons carry exactly zero longitudinal momentum fraction. In contrast, all radiated soft gluons carry finite longitudinal momentum fractions in the parton branching algorithm. This presents an important advantage of the Monte Carlo method compared with the conventional analytical approach. Keeping longitudinal momentum conservation exact in the parton splitting process is often crucial to correctly account for the phenomenology near the threshold region [41]. However, to make the comparisons in a consistent way, we did not change the longitudinal momentum fraction of the \(t\)-channel gluon after each branching in our algorithm. In the right panel of Fig. 3, we compare the Monte Carlo simulation result with both the numerical solution of the CS-renormalization group equation and the analytical solution for the running coupling case at the scale \(Q=13\) GeV. It is clear to see from the right panel of Fig. 3 that our algorithm yields the same \(k_{\perp}\) distribution as the numerical result. On the other hand, it differs from the analytical result. Such a discrepancy is expected, because the non-perturbative part of the CS kernel is treated differently in the analytical approach. In addition, the argument of the running coupling used in the parton branching algorithm and the numerical solution is the hard scale \(Q\), whereas the scale of the running coupling is \(\mu_{b}\) in the analytical approach. Since the analytical result can describe the relevant phenomenology very well, one should use it as guidance to model the non-perturbative part of the Sudakov factor, which will be introduced in the Monte Carlo algorithm in future work. Alternatively, one could also use a relatively large infrared cutoff value \(\Lambda_{\rm cut}\) to mimic the effect of the non-perturbative Sudakov factor. We leave this for a future study.

Figure 3: Comparison of the gluon \(k_{\perp}\) distributions obtained from the forward evolution approach with the numerical solutions of the combined CS-renormalization group equation at different scales (Color online). The left panel: the fixed coupling case. The right panel: the running coupling case.

### Backward evolution In this subsection, we will outline the essential steps of the Monte Carlo implementation of the backward evolution based on the folded CS-renormalization group evolution equation. Unlike forward evolution, which can be considered as a way of solving the evolution equation, backward evolution requires the evolved parton distributions to be pre-generated; they are used to guide the backward evolution. In most parton branching algorithms, the \(k_{t}\) resummation is achieved by using the modified Sudakov factor incorporating the collinear Parton Distribution Functions (PDFs). However, in the saturation regime, the \(k_{t}\) resummation has to be formulated in terms of the unintegrated gluon distribution. The main procedures are summarized as follows.
The modified Sudakov factor in the backward evolution approach is different from that in the forward evolution approach. It reads \[\Pi_{s}(Q_{i+1},Q_{i};k_{\perp,i+1})=\frac{\Delta_{s}(Q_{i+1}^{2})N(Q_{i}^{2},\eta,k_{\perp,i+1})}{\Delta_{s}(Q_{i}^{2})N(Q_{i+1}^{2},\eta,k_{\perp,i+1})}. \tag{33}\] An alternative way to compute the modified Sudakov factor is given by \[\Pi_{s}(Q_{i+1},Q_{i};k_{\perp,i+1})=\exp\left[-\int_{Q_{i}^{2}}^{Q_{i+1}^{2}}\frac{dt}{t}\frac{\bar{\alpha}_{s}(t)}{2\pi}\int_{\Lambda_{\rm cut}}^{\sqrt{t}}\frac{d^{2}l_{\perp}}{l_{\perp}^{2}}\frac{N(t,\eta,k_{\perp,i+1}+l_{\perp})}{N(t,\eta,k_{\perp,i+1})}\right]. \tag{34}\] It describes the probability for a gluon to evolve backward from \(Q_{i+1}\) to \(Q_{i}\) without branching. The transverse momentum dependent gluon distribution appearing in Eq. 33 and Eq. 34 has to be pre-generated by numerically solving the combined CS-renormalization group equation. The backward evolution starts from the \(t\)-channel gluon with the highest virtuality \(Q_{i}\). The hard scale of the partonic scattering process is denoted as \(Q_{i+1}\). We first have to sample \(k_{\perp,i+1}\) according to the following distribution \[\mathcal{R}=\frac{1}{\mathcal{C}}\int_{\Lambda_{\text{cut}}}^{k_{\perp,i+1}}d^{2}k^{\prime}_{\perp}N(Q_{i+1}^{2},\eta,k^{\prime}_{\perp}), \tag{35}\] with \(\mathcal{C}=\int_{\Lambda_{\text{cut}}}^{Q_{i+1}}d^{2}k^{\prime}_{\perp}N(Q_{i+1}^{2},\eta,k^{\prime}_{\perp})\) being the normalization factor. The rapidity \(\eta\) is fixed by the external kinematics. The next quantity to be generated by the parton cascade algorithm is the value of the virtuality \(Q_{i}\). Following the standard backward evolution strategy, \(Q_{i}\) is obtained using the backward-type Sudakov factor. We can sample a \(Q_{i}\) by solving the following equation, \[\mathcal{R}=\Pi_{s}(Q_{i+1},Q_{i};k_{\perp,i+1}). \tag{36}\] As the virtual mass of the \((i+1)\)th \(t\)-channel gluon, \(Q_{i}\) also serves as the hard probe scale at which the \(i\)th \(t\)-channel gluon's transverse momentum is measured. The transverse momentum of the radiated gluon \(l_{\perp,i}\) is thus sampled by solving the following equations \[\mathcal{R}=\frac{1}{\mathcal{C}}\int_{\Lambda_{\text{cut}}}^{l_{\perp,i}}\frac{d^{2}l^{\prime}_{\perp}}{l^{\prime 2}_{\perp}}N(Q_{i}^{2},\eta,k_{\perp,i+1}+l^{\prime}_{\perp}), \tag{37}\] \[\mathcal{C}=\int_{\Lambda_{\text{cut}}}^{Q_{i}}\frac{d^{2}l^{\prime}_{\perp}}{l^{\prime 2}_{\perp}}N(Q_{i}^{2},\eta,k_{\perp,i+1}+l^{\prime}_{\perp}). \tag{38}\] The longitudinal momentum fraction of the radiated gluon is determined through the on-shell condition, \[|Q_{i}^{2}|\approx\frac{z_{i}l^{2}_{\perp,i}}{1-z_{i}}+|k^{2}_{\perp,i+1}|, \tag{39}\] which is valid in the strong ordering region \(|Q_{i-1}^{2}|\ll|Q_{i}^{2}|\ll|Q_{i+1}^{2}|\). The minus component of the emitted gluon can be fixed accordingly. The \(i\)th \(t\)-channel gluon's transverse momentum is trivially obtained: \(k_{\perp,i}=k_{\perp,i+1}-l_{\perp,i}\). The virtual mass \(Q_{i-1}\) of the \(i\)th \(t\)-channel gluon is computed with Eq. 36. However, the \(t\)-channel gluons' four momenta can be determined only after the whole cascade is generated. The minus component of the \(t\)-channel gluon that is directly attached to the nuclear target is set to be 0. From this initial condition, the four momenta of the \(t\)-channel gluons are reconstructed retrospectively by momentum conservation.
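The backward sampling steps of Eqs. (33)-(39) can be sketched along the same lines, reusing the helpers from the forward-evolution sketch above. Here `n_tmd(q2, kperp)` stands for an interpolator of the pre-generated \(N(Q^{2},\eta,k_{\perp})\), `n_max` for an upper bound of it, and `Q02` for the starting scale squared; all three are assumed inputs rather than part of the paper's code.

```python
# Q02, n_tmd(q2, kperp), and n_max are assumed inputs; sudakov_exponent, LCUT2,
# np, and brentq are reused from the forward-evolution sketch above.

def delta_s(q2):
    """Forward Sudakov factor Delta_s(Q^2) relative to the starting scale (cf. Eq. 26)."""
    return np.exp(-sudakov_exponent(Q02, q2))

def sample_previous_virtuality(q2_ip1, kperp, n_tmd, rng):
    """Invert R = Pi_s(Q_{i+1}, Q_i; k_perp), Eqs. (33) and (36), for Q_i^2."""
    r = rng.uniform()
    def pi_s(q2_i):
        return (delta_s(q2_ip1) * n_tmd(q2_i, kperp)) / (delta_s(q2_i) * n_tmd(q2_ip1, kperp))
    if pi_s(Q02) > r:        # no branching down to the starting scale
        return None
    return brentq(lambda q2: pi_s(q2) - r, Q02, q2_ip1 * (1.0 - 1e-9))

def sample_radiated_lperp(q2_i, kvec_ip1, n_tmd, rng, n_max):
    """Eqs. (37)-(38) via accept-reject: log-uniform proposal in l^2, uniform azimuth."""
    while True:
        l2 = LCUT2 * (q2_i / LCUT2) ** rng.uniform()
        phi = 2.0 * np.pi * rng.uniform()
        lvec = np.sqrt(l2) * np.array([np.cos(phi), np.sin(phi)])
        if rng.uniform() * n_max < n_tmd(q2_i, np.linalg.norm(kvec_ip1 + lvec)):
            return lvec

def momentum_fraction(q2_i, l2, k2):
    """Solve the on-shell condition of Eq. (39) for z_i."""
    return (q2_i - k2) / (q2_i - k2 + l2)
```

After each accepted step, the transverse momentum is updated as \(k_{\perp,i}=k_{\perp,i+1}-l_{\perp,i}\) and the event weight is multiplied by the factor of Eq. (40) below.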
As argued in the previous subsection, the generated event has to be re-weighted after each branching since unitarity is not preserved at the single leading logarithmic accuracy. In the backward evolution approach, the re-weighting function reads \[\mathcal{W}_{\text{CS,back}}(Q_{i+1}^{2},Q_{i}^{2})=\frac{\int_{Q_{i}^{2}}^{Q_{i+1}^{2}}\frac{dt}{t}\alpha_{s}(t)\left[\ln\frac{t}{\Lambda_{\text{cut}}^{2}}-2\beta_{0}\right]}{\int_{Q_{i}^{2}}^{Q_{i+1}^{2}}\frac{dt}{t}\alpha_{s}(t)\ln\frac{t}{\Lambda_{\text{cut}}^{2}}}. \tag{40}\] We repeat the procedure outlined above until \(Q_{i}^{2}\) reaches a minimal cut-off scale at which the TMD evolution stops. The TMD evolution is driven by the soft gluon radiations which carry the vanishing longitudinal momentum fraction \(1-z_{i}\to 0\). In the practical Monte Carlo implementation, the cut-off is chosen to be \(|Q_{i}^{2}|>|l_{\perp,i}^{2}|+|k_{\perp,i+1}^{2}|\), or equivalently \(z_{i}>0.5\). Meanwhile, \(|Q_{i}^{2}|\) is also required to be larger than the saturation scale \(Q_{s}^{2}\). If these two conditions cannot be met simultaneously, we terminate the TMD evolution and start the backward small \(x\) evolution.

Figure 3: Comparison of the gluon \(k_{\perp}\) distributions obtained from the forward evolution approach with the numerical solutions of the combined CS-renormalization group equation at different scales (Color online). The left panel: the fixed coupling case. The right panel: the running coupling case.

We test the backward evolution algorithm against the numerical method as shown in Fig. 4. The MV model result is applied at the initial scale \(Q_{0}=3\) GeV. The gluon \(k_{\perp}\) distribution at the high scale \(Q=13\) GeV is obtained by numerically solving the combined CS-renormalization group equation. The cascade is generated starting from the scale \(Q=13\) GeV and evolved down to the initial scale with the backward approach. The \(t\)-channel gluon \(k_{\perp}\) distribution reconstructed from the cascade is compared with the numerical results at different scales. Gluon \(k_{\perp}\) distributions are presented in the left panel of Fig. 4 for the fixed coupling case, and in the right panel of Fig. 4 for the running coupling case. It is evident that the \(k_{\perp}\) distributions obtained from the Monte Carlo method are the same as the numerical results. We conclude that the backward evolution algorithm passes this consistency check as expected. ## IV Conclusion In this work, we extended the small \(x\) initial state parton branching algorithm developed in the previous paper to include the kinematic constraint effect. In the small \(x\) limit, the kinematic constraint leads to a stronger suppression of soft gluon emissions than that caused by the angular ordering along the chain. The coherent branching effect is thus effectively implemented in the parton branching algorithm once the kinematic constraint is imposed. This is a nontrivial extension in the sense that the weighting factor and the way of sampling the radiated gluon's transverse momenta are drastically altered. The \(t\)-channel gluon \(k_{\perp}\) distributions constructed from both the forward scheme and the backward scheme are shown to reproduce the numerical solutions of the kinematic constrained GLR equation. We also formulated a parton branching algorithm that enables us to resum both large \(k_{t}\) logarithms and small \(x\) logarithms following a two-step evolution picture.
In the backward-approach description, the cascade first develops by radiating soft gluons that carry vanishing longitudinal momentum fractions. At this first stage of the evolution, the parton branching is simulated with the Sudakov factor which we obtained from the folded CS equation and the renormalization group equation. The transverse momentum dependent gluon distribution, instead of the gluon PDF, is used to guide the evolution path toward the most populated regions of \((Q^{2},k_{\perp})\). When the virtual mass of the \(t\)-channel gluon is dominated by its transverse momentum or is of the order of the saturation scale, the parton branching starts being generated according to the non-Sudakov form factor derived from the small \(x\) evolution equation. The joint \(k_{t}\) and small \(x\) resummation thus has been achieved in the Monte Carlo simulation by implementing such a two-step evolution.

Figure 4: Comparison of the gluon \(k_{\perp}\) distributions obtained from the backward approach with the numerical solutions of the CS-renormalization group equation at different scales (Color online).

Our study represents an important step towards practical applications of the parton shower generator in simulating scattering processes that involve multiple well-separated hard scales, such as di-jet production in eA collisions at the EIC. The next step is to construct a full hadron-level Monte Carlo generator with the hadronization being performed using multi-purpose generators such as PYTHIA [75]. We also plan to integrate the algorithm into the eHIJING framework [76] aiming at the simulation of events in eA collisions for the whole \(x\) range accessible at the EIC in the future. _Acknowledgments:_ We thank Hai-tao Li and Shan-shan Cao for helpful discussions. This work has been supported by the National Natural Science Foundation of China under Grant No. 1217511. Y.S. is supported by the China Postdoctoral Science Foundation under Grant No. 2022M720082. S.Y.W. is also supported by the Taishan fellowship of Shandong Province for junior scientists.
2302.01623
Majorana CP violating phases
The two Majorana CP-violating phases cannot be determined by experiments on neutrino oscillations. It is difficult, if not almost impossible, to measure the two Majorana CP-violating phases, since they are only sensitive to lepton-number-violating processes. One must make some assumption about the structure of the neutrino mass matrix and the flavor mixing mechanism hidden behind it in phenomenological models in order to determine the Majorana CP-violating phases. Two models based on the symmetry of the neutrino mass matrix are proposed in this paper. The Majorana CP-violating phases and the effective Majorana neutrino mass $m_{ee}$ are computed in the two models. Using the limit on the absolute neutrino mass scale from cosmological observations, the numerical values of the Majorana CP-violating phases and the effective Majorana neutrino mass $m_{ee}$ are obtained.
Chao-Shang Huang
2023-02-03T09:50:36Z
http://arxiv.org/abs/2302.01623v2
# Majorana CP violating phases ###### Abstract In recent years much progress has been made in experiments on neutrino oscillations. However, the absolute neutrino mass scale and the two Majorana CP-violating phases cannot be determined by experiments on neutrino oscillations. The absolute neutrino mass scale can be determined or constrained by cosmological observations or experiments on double-beta decays with no neutrino (\(0\nu 2\beta\)). At the present stage it is difficult, if not almost impossible, to measure the two Majorana CP-violating phases, since they are only sensitive to lepton-number-violating processes. One must make some assumption about the structure of the neutrino mass matrix and the flavor mixing mechanism hidden behind it in phenomenological models in order to determine the two Majorana CP-violating phases [1]. The magic neutrino mass matrix is advantageous for examining symmetries of the neutrino mixing matrix, and it, as well as its phenomenological implications, has been studied in the literature [2; 3; 4]. Within the framework of the seesaw mechanism and modular (for example, A4) flavor symmetry, the magic neutrino mass matrix can be obtained [4]. Although the experimentally measured neutrino mixing matrix does not directly lead to a magic neutrino mass matrix, it is quite possible that in the real world the neutrino mass matrix is magic while the mixing matrix is the experimentally measured one. In this letter we determine the two Majorana CP-violating phases, using data from experiments on neutrino oscillations and the limit on the absolute neutrino mass scale from cosmological observations, under the assumption that the Majorana neutrino mass matrix is magic. An \(n\times n\) matrix A will be called magic if the row sums and the column sums are all equal to a common number \(\alpha\) [2]. In the flavoured basis, where the charged lepton mass matrix is diagonal, the magic Majorana neutrino mass matrix can be expressed as \[M_{\nu}=\left(\begin{array}{ccc}a&b&c\\ b&d&a+c-d\\ c&a+c-d&b-c+d\end{array}\right) \tag{1}\] where \[a = m1c12^{2}c13^{2}e1+m2s12^{2}c13^{2}e2+m3s13^{2}ed,\] \[b = -m1c12c13(s12c23+c12s23s13ed1)e1+m2s12c13(c12c23-s12s23s13ed1)e2+m3c13s23s13ed2,\] \[c = m1c12c13(s12s23-c12c23s13ed1)e1-m2s12c13(c12s23+s12c23s13ed1)e2+m3c13c23s13ed2,\] \[d = m1(s12c23+c12s23s13ed1)^{2}e1+m2(c12c23-s12s23s13ed1)^{2}e2+m3c13^{2}s23^{2} \tag{2}\] with \(e1=e^{i\xi 1},e2=e^{i\xi 2},ed=e^{-2i\delta},ed1=e^{i\delta},ed2=e^{-i\delta}\), and \(\xi 1,\xi 2\) and \(\delta\) being the two Majorana CP-violating phases and the Dirac CP-violating phase, respectively, since \[m_{\alpha\beta}\equiv\sum_{i=1}^{3}m_{i}U_{\alpha i}U_{\beta i},\;(\mbox{for }\alpha,\beta=e,\mu,\tau\mbox{ and }i=1,2,3) \tag{3}\] with the neutrino mixing matrix \(U_{PMNS}\) [5].
It is straightforward from Eq.(1) to obtain the following two equations: \(m_{\mu\tau}=a+c-d,and\ m_{\tau\tau}=b-c+d.\) Solving them, we obtain \[e1 = (m3(s12(c23-s23)(c13^{2}ed2s13+eded1s13^{3}+c13^{3}(c23+s23)+c13ed 1ed2s13^{2}(c23+s23))\] \[-c12(-2c13c23ed2s13s23-eds13^{2}(c23+s23)+c13^{2}(c23^{3}+c23^{2} s23+c23s23^{2}+s23^{3}))))/\] \[(c12^{2}c13ed1^{2}s12s13^{2}s23^{2}+c12^{2}c23ed1^{3}s12s13^{3}s23 ^{2}+c12^{2}ed1^{2}s13^{2}s23^{3}+m^{12}(c13c23^{3}s12^{3}\] \[+c23^{3}ed1s13^{2}s13+c12c23^{2}s12^{2}s23)+m1(c12^{2}c13c23^{2} s12+c12^{2}c23^{3}ed1s13+c12^{3}c23^{3}ed12s13^{2}\] \[+c12c23^{3}ed1^{2}s12^{2}s13^{2}-c12c23^{2}s12^{2}s23+2c12^{3}c13c2 3ed1s13s23-c12^{2}c23^{2}ed1s12s13s23\] \[+2c12c13c23ed1s12^{2}s13s23-c23^{2}ed1s12^{3}s13s23+c12^{3}c23^{2} ed1^{2}s13^{2}s23+c12c23^{2}ed1^{2}s12^{3}s23\] \[-c12^{2}c13s12s23^{2}-c13s12^{3}s23^{2}+c12^{2}c23ed1s12s13s23^{2}+c23 ed1s12^{3}s13s23^{2}+c12^{3}c23ed1^{2}s13^{2}s23^{2}\] \[-c12^{2}c13ed1^{2}s12s13^{2}s23^{2}+c12c23ed1^{2}s12^{3}s23^{2}-c12 ^{2}c23ed1^{3}s12s13s23^{2}-c12^{2}ed1s12s13s23^{3}\] \[-ed1s12^{3}s13s23^{3}+c12ed1^{2}s12^{2}s13^{2}s23^{3}-c12^{3}c13^{2} (c23+s23)-c12c13^{2}s12^{2}(c23+s23))), \tag{4}\] \[e2 = m3(c13^{2}c23^{2}+c13c23ed2s13-c13ed2s13s23-c13^{2}s23^{2})/(m2(c1 2c23+c13s12-c23ed1s12s13-c12s23 \tag{5}\] \[-ed1s12s13s23)(c12c23+c23ed1s12s13+c12s23-ed1s12s13s23))+e1((-c23 ^{2}m^{2}s12^{2}-c12^{2}ed1^{2}s13^{2}s23^{2}\] \[+m1(c12c13c23s12-c12^{2}c13c23ed1s13+c12c^{2}c23^{2}ed1s13^{2}+c12 c13s12s23+c12^{2}c13ed1s13s23\] \[-4c12c23ed1s12s13s23+s12^{2}s23^{2}))/(m2(c12c23+c13s12-c23ed1s12 s13-c12s23-ed1s12s13s23)\] \[(c12c23+c23ed1s12s13+c12s23-ed1s12s13s23))).\] The newest data in experiments on neutrino oscillations is given in Table 2 in ref[6]. We divide them as four sets as follows: without the addition of tabulated SK-atm (\(\Delta\chi\))\({}^{2}\) data: 1)op1 NO(normal order of neutrino masses m1,m2,m3), 2)op2 IO(inverse order of neutrino masses m1,m2,m3), with the addition of tabulated SK-atm (\(\Delta\chi\))\({}^{2}\) data: 3)op3 NO, 4)op4 IO. And we use the best fit values in numerical computations. The definition of op is op=(s12, s23, s13, c12, c23, c13,\(\delta\),\(\Delta m^{2}_{21}\),\(\Delta m^{2}_{31}\)) and the unit of mass is eV in the latter. 
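Note that the two magic conditions are linear in \(e1\) and \(e2\), so instead of evaluating the closed forms of Eqs. (4) and (5), one can also solve the corresponding \(2\times 2\) complex linear system numerically. A minimal Python sketch under the standard PDG parametrization (the function names are ours, not from the paper):

```python
import numpy as np

def pmns(s12, s23, s13, delta):
    """PDG parametrization of U_PMNS (without the Majorana phase factors)."""
    c12, c23, c13 = np.sqrt(1 - s12**2), np.sqrt(1 - s23**2), np.sqrt(1 - s13**2)
    ep, em = np.exp(1j * delta), np.exp(-1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep, c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep, -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

def majorana_phases(m, u):
    """Solve the magic conditions m_mutau = a + c - d and m_tautau = b - c + d,
    which are linear in e1 = e^{i xi1} and e2 = e^{i xi2} (cf. Eqs. (1)-(3))."""
    A = m[0] * np.outer(u[:, 0], u[:, 0])    # coefficient of e1 in each m_ab
    B = m[1] * np.outer(u[:, 1], u[:, 1])    # coefficient of e2
    C = m[2] * np.outer(u[:, 2], u[:, 2])    # xi3 = 0 piece
    res = lambda M: np.array([M[1, 2] - M[0, 0] - M[0, 2] + M[1, 1],
                              M[2, 2] - M[0, 1] + M[0, 2] - M[1, 1]])
    e1, e2 = np.linalg.solve(np.column_stack([res(A), res(B)]), -res(C))
    return np.angle(e1), np.angle(e2)
```

The solutions \(e1\) and \(e2\) have unit modulus only when the magic assumption is exactly compatible with the input masses and mixing parameters, which is how the absolute neutrino mass scale enters the result.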
With the newest data from the neutrino oscillation experiments as input, Eqs. (4) and (5) reduce to \[e1 = ((2769.06-31.9403i)m3)/((0.882948+0.469472i)(-21.8371-33.2006m1)+(2899.56-322.186m1)m1 \tag{6}\] \[-(0.970296+0.241922i)(-223.627-14.754m1)m1-(0.743145+0.669131i)(-1.+1.m1)),\] \[e2 = ((0.00140998-0.0000714374i)(e1((126.869+31.6319i)m1+(848.825-135.895m1)m1+(0.882948\] (7) \[+0.469472i)(-9.21071+6.91819m1))-(129.966+3.7407i)m3)))/m2.\] for 1)op1; \[e1 = ((2540.6-254.204i)m3)/((-0.829038-0.559193i)(-22.0334-32.5576m1)+(2877.42-313.697m1)m1 \tag{8}\] \[+(0.292372-0.956305i)(-219.66-14.2373m1)m1-(0.777146-0.62932i)(-1.+1.m1)),\] \[e2 = ((1.89812-0.519259i)(e1((0.00746221+0.00503332i)+(0.774974+0.115323i)m1-0.128151m1^{2})\] (9) \[-(0.157733+0.0156887i)m3))/m2.\] for 2)op2; \[e1 = ((3.81986-5.16451i)m3)/((0.-0.0321921i)-0.173648(0.00233256-0.00233256m1) \tag{10}\] \[+(0.642403-5.39249i)m1+(0.0555058+0.766044i)m1^{2}-0.642788(0.0420237-0.766044((0.+0.00466513i)\] \[-(0.+0.00466513i)m1)-6.83682m1+1.m1^{2})),\] \[e2 = -(1/m2)(0.000849633-0.000108407i)(e1((-0.173648+0.984808i)(10.1911-12.4824m1)\] (11) \[-(134.699+160.528i)m1+m1(-1129.13+242.508m1))-(133.341+11.7756i)m3).\] for 3)op3; \[e1 = ((2563.16-313.226i)m3)/(((-0.961262-0.275637i)(-18.0161-43.4234m1)+(2974.46-428.713m1)m1 \tag{12}\] \[+(0.139173-0.990268i)(-275.406-23.7961m1)m1-(0.406737-0.913545i)(-1.+1.m1)),\] \[e2 = -(1/m2)(0.000957955-0.000183178i)(e1((-0.961262-0.275637i)(10.1911-12.4824m1)\] (13) \[+(29.1644-207.516i)m1+m1(-1129.13+242.508m1))-(145.361+15.2224i)m3).\] for 4)op4. We now compute the absolute neutrino mass scale constrained by cosmological observations. For NO, \(m=(m1,m2,m3)=(m1,(a1+m1^{2})^{1/2},(a2+m1^{2})^{1/2})\), where \(a1=\Delta m^{2}_{21},\,a2=\Delta m^{2}_{31}\). The upper limit on \(sm=m1+m2+m3\), 0.09 eV [7], leads to \(m1=0.01745\), \(m2=0.01946\), \(m3=0.05309\) for 1)op1 and \(m1=0.01744\), \(m2=0.01945\), \(m3=0.05311\) for 3)op3. For IO, \(m=(m1,m2,m3)=(m1,(a1+m1^{2})^{1/2},(a2+m2^{2})^{1/2})\), where \(a1=\Delta m^{2}_{21},\,a2=\Delta m^{2}_{32}\). Due to the negative a2, there are no solutions for m, and there are still no solutions for m1 and m2 even for m3=0. Therefore, we take \(sm=0.12\) [8] and obtain \(m3=0.0149670\), \(m2=0.0528697\), \(m1=0.0521633\) for 2)op2, and \(m3=0.0149548\), \(m2=0.0528758\), \(m1=0.0521694\) for 4)op4.
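Extracting the absolute masses from the cosmological bound is a one-dimensional root-finding problem. A minimal Python sketch for the NO case (the splitting values in the comment are round illustrative numbers, not the exact fit values used here):

```python
import numpy as np
from scipy.optimize import brentq

def masses_no(m1, dm21sq, dm31sq):
    """Normal ordering: m = (m1, (a1 + m1^2)^{1/2}, (a2 + m1^2)^{1/2})."""
    return np.array([m1, np.sqrt(dm21sq + m1**2), np.sqrt(dm31sq + m1**2)])

def lightest_mass_no(sum_m, dm21sq, dm31sq):
    """Solve m1 + m2 + m3 = sm for the lightest mass m1 (normal ordering)."""
    return brentq(lambda m1: masses_no(m1, dm21sq, dm31sq).sum() - sum_m, 0.0, sum_m)

# e.g. lightest_mass_no(0.09, 7.4e-5, 2.5e-3) gives m1 close to 0.0174 eV,
# comparable to the value quoted in the text for 1)op1.
```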
From the Table 2, predicted \(|m_{ee}|/10^{-2}\) is smaller than or equal to 5.086 in all cases and is agreed with the upper limit, 0.061-0.165, obtained in the experiments[9]. The Majorana CP-violating phases depend on the absolute neutrino mass scale heavily, as can be seen from Eqs.(6-9) and Table 1, and the absolute neutrino mass scale is not certain by cosmological observations nowadays. Therefore, it is inspired to show dependence of the Majorana CP-violating phases on the absolute neutrino mass scale. We take the case of NO 1)op1 as en example, make a direct computation and list results in Table 3. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \(sm\) & \multicolumn{2}{c|}{0.09} & \multicolumn{5}{c}{0.12} \\ \hline \hline \(op\) & 1)op1 & 2)op3 & 1)op1 & 2)op2 & 3)op3 & 4)op4 \\ \hline \hline \(m_{ee}/10^{-2}\) & 1.814 + 0.0863 i & 1.797 + 0.0971 i & 3.077 + 0.0395 i & 4.919 - 1.293 i & 3.067 + 0.0425 i & 4.932 - 1.231 i \\ \hline \hline \(|m_{ee}|/10^{-2}\) & 1.816 & 1.800 & 3.077 & 5.086 & 3.067 & 5.084 \\ \hline \end{tabular} \end{table} Table 2: The effective Majorana neutrino masses \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \(dm\) & \multicolumn{2}{c|}{0.09} & \multicolumn{5}{c}{0.12} \\ \hline \hline \(op\) & 1)op1 & 2)op3 & 1)op1 & 2)op2 & 3)op3 & 4)op4 \\ \hline \hline \(\xi 1/^{\circ}\) & 14.13 & 15.59 & 6.110 & 13.60 & 6.820 & 12.43 \\ \hline \hline \(\xi 2/^{\circ}\) & 14.52 & 15.67 & 7.035 & 17.63 & 7.933 & 17.81 \\ \hline \end{tabular} \end{table} Table 1: The Majorana CP-violating phases \begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \(dm\) & \multicolumn{2}{c|}{0.09} & \multicolumn{5}{c}{0.12} \\ \hline \hline \(op\) & 1)op1 & 2)op3 & 1)op1 & 2)op2 & 3)op3 & 4)op4 \\ \hline \hline \(\xi 1/^{\circ}\) & 14.13 & 15.59 & 6.110 & 13.60 & 6.820 & 12.43 \\ \hline \hline \(\xi 2/^{\circ}\) & 14.52 & 15.67 & 7.035 & 17.63 & 7.933 & 17.81 \\ \hline \end{tabular} \end{table} Table 1: The Majorana CP-violating phases \begin{table} \begin{tabular}{c|c c c c c c c c} \hline \(dm\) & \multicolumn{2}{c|}{0.09} & \multicolumn{5}{c}{0.12} \\ \hline \hline \(op\) & 1)op1 & 2)op3 & 1)op1 & 2)op2 & 3)op3 & 4)op4 \\ \hline \hline \(\xi 1/^{\circ}\) & 14.13 & 15.59 & 6.110 & 13.60 & 6.820 & 12.43 \\ \hline \(\xi 2/^{\circ}\) & 14.52 & 15.67 & 7.035 & 17.63 & 7.933 & 17.81 \\ \hline \end{tabular} \end{table} Table 1: The Majorana CP-violating phases In summary, in the frame of the magic Majorana neutrino mass matrix, we have obtained the Majorana CP-violating phases only using the nowadays knowledge from experiments and observations. By using gotten the Majorana CP-violating phases, we have also computed the effective Majorana neutrino mass \(m_{ee}\) which is agreed with the upper limit, 0.061-0.165, obtained in the neutrinoless double beta decay (\(0\nu\beta\beta\)) experiments. In particular, we have pointed that the Majorana CP-violating phases depend on the absolute neutrino mass scale heavily. The improved upper bound on the effective neutrino masse \(m_{\beta}\) for \(\beta\) decays experiment performed at KATRIN is reported to be smaller than \(0.8eV\) at 90% confidence level. Future experiments hope to reach a goal of 40 meV[10]. It is expected that the absolute neutrino mass scale shall be determined more accurately by experiments and then the Majorana CP-violating phases can be computed more accurately in the near future. ###### Acknowledgements. This research was supported in part by Projects No. 11875306, No. 11875062, and No. 
###### Acknowledgements. This research was supported in part by Projects No. 11875306, No. 11875062, and No. 12275335 supported by the National Natural Science Foundation of China.
2303.09156
Ground-State Phase Diagram of the Kitaev-Heisenberg Model on a Three-dimensional Hyperhoneycomb Lattice
The Kitaev model, which hosts a quantum spin liquid (QSL) in the ground state, was originally defined on a two-dimensional honeycomb lattice, but can be straightforwardly extended to any tricoordinate lattices in any spatial dimensions. In particular, the three-dimensional (3D) extensions are of interest as a realization of 3D QSLs, and some materials like $\beta$-Li$_{2}$IrO$_{3}$, $\gamma$-Li$_2$IrO$_3$, and $\beta$-ZnIrO$_{3}$ were proposed as candidates. However, the phase diagrams of the models for those candidates have not been fully elucidated, mainly due to the limitation of numerical methods for 3D frustrated quantum spin systems. Here we study the Kitaev-Heisenberg model defined on a 3D hyperhoneycomb lattice, by using the pseudofermion functional renormalization group method. We show that the ground-state phase diagram contains the QSL phases in the vicinities of the pristine ferromagnetic and antiferromagnetic Kitaev models, in addition to four magnetically ordered phases, similar to the two-dimensional honeycomb case. Our results respect the four-sublattice symmetry inherent in the model, which was violated in the previous study. Moreover, we also show how the phase diagram changes with the anisotropy in the interactions. The results provide a reference for the search of the hyperhoneycomb Kitaev materials.
Kiyu Fukui, Yasuyuki Kato, Yukitoshi Motome
2023-03-16T08:41:49Z
http://arxiv.org/abs/2303.09156v1
Ground-State Phase Diagram of the Kitaev-Heisenberg Model on a Three-dimensional Hyperhoneycomb Lattice ###### Abstract The Kitaev model, which hosts a quantum spin liquid (QSL) in the ground state, was originally defined on a two-dimensional honeycomb lattice, but can be straightforwardly extended to any tricoordinate lattices in any spatial dimensions. In particular, the three-dimensional (3D) extensions are of interest as a realization of 3D QSLs, and some materials like \(\beta\)-Li\({}_{2}\)IrO\({}_{3}\), \(\gamma\)-Li\({}_{2}\)IrO\({}_{3}\), and \(\beta\)-ZnIrO\({}_{3}\) were proposed as candidates. However, the phase diagrams of the models for those candidates have not been fully elucidated, mainly due to the limitation of numerical methods for 3D frustrated quantum spin systems. Here we study the Kitaev-Heisenberg model defined on a 3D hyperhoneycomb lattice, by using the pseudofermion functional renormalization group method. We show that the ground-state phase diagram contains the QSL phases in the vicinities of the pristine ferromagnetic and antiferromagnetic Kitaev models, in addition to four magnetically ordered phases, similar to the two-dimensional honeycomb case. Our results respect the four-sublattice symmetry inherent in the model, which was violated in the previous study. Moreover, we also show how the phase diagram changes with the anisotropy in the interactions. The results provide a reference for the search of the hyperhoneycomb Kitaev materials. ## 1 Introduction The quantum spin liquid (QSL), which is a quantum disordered state in magnets with fascinating features such as quantum entanglement and fractional excitations, has been studied intensively from both theoretical and experimental points of view [1, 2, 3, 4]. Despite the long history of research, well-established examples of the QSL are limited, and the realization of the QSL in most of the candidate models and materials is still under debate. The celebrated Kitaev model has brought a revolution to this situation [5]. Despite strong frustration arising from the bond-dependent anisotropic interactions on a two-dimensional (2D) honeycomb lattice, the model is exactly solvable, and the ground state is proven to be a QSL with fractional excitations of itinerant Majorana fermions and localized \(Z_{2}\) fluxes; thus, it provides a rare example of exact QSLs in more than one dimension. Moreover, since the feasibility of the model was proposed for spin-orbit coupled Mott insulators [6], a number of intensive searches for candidate materials have been carried out from both theoretical and experimental perspectives [7, 8, 9, 10, 11, 12], for instance, for Na\({}_{2}\)IrO\({}_{3}\) [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], \(\alpha\)-Li\({}_{2}\)IrO\({}_{3}\) [15, 17, 23], and \(\alpha\)-RuCl\({}_{3}\) [25, 26, 27]. In recent years, a number of new candidates have been proposed, such as cobalt compounds [28, 29, 30, 31, 32, 33, 34], iridium ilmenites [32, 33, 34], and \(f\)-electron compounds [35, 36, 37, 38, 39]. While the Kitaev model was originally introduced on the 2D honeycomb lattice, it can be extended to any tricoordinate lattices in any spatial dimensions in a straightforward manner, and in all cases the ground state is an exact QSL. A representative is an extension to a three-dimensional (3D) hyperhoneycomb lattice with space group \(Fddd\), shown in Fig. 1(a), which belongs to a series of extensions of the 2D honeycomb lattice to 3D, dubbed the harmonic honeycomb lattices [40].
Although the ground state of the 3D hyperhoneycomb Kitaev model is an exact QSL apparently similar to the 2D honeycomb case [41], the finite-temperature properties are qualitatively different: One of the two crossovers found in the 2D case, which is associated with the \(Z_{2}\) fluxes [42], is replaced by a finite-temperature phase transition in the 3D cases [43, 44, 45, 46, 47]. This is caused by proliferation of the \(Z_{2}\) fluxes, whose excitations form closed loops under the local constraints between the fluxes sharing edges on the 3D lattice [48]. Similar finite-temperature phase transitions were also found for other 3D extensions of the Kitaev model [49, 50, 51]. On the materials side, \(\beta\)-Li\({}_{2}\)IrO\({}_{3}\), where the edge-sharing IrO\({}_{6}\) octahedra form the 3D hyperhoneycomb network, was initially synthesized and has been investigated intensively as a candidate for the 3D Kitaev QSL [52]. The dominant Kitaev-type interactions in this material were confirmed by first-principles calculations [53, 54]. Although the compound shows a phase transition to a magnetically ordered phase at about 40 K [52, 55], the order can be suppressed by the application of a magnetic field [56] and pressure [57, 58, 59]. Recently, a new candidate \(\beta\)-ZnIrO\({}_{3}\) was discovered, and is attracting interest because it does not show any sign of magnetic phase transitions down to 2 K at zero magnetic field and ambient pressure [60]. Furthermore, an \(f\)-electron compound \(\beta\)-Na\({}_{2}\)PrO\({}_{3}\), which was synthesized in the past in a different context [61], was theoretically proposed as a candidate with antiferromagnetic Kitaev interactions [37]. For understanding the fundamental properties of such 3D hyperhoneycomb Kitaev magnets, there have been many theoretical efforts on extensions of the Kitaev model [48, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]. For instance, the Kitaev model with an additional Heisenberg interaction, dubbed the Kitaev-Heisenberg model, was studied by the Luttinger-Tisza method for classical spins [62] and by the graph projected entangled-pair states (gPEPS) method for quantum spins [69]. However, a comprehensive understanding of the phase diagram and the stability of the QSLs in the 3D cases is still lacking, mainly due to the limited number of efficient theoretical methods for 3D frustrated quantum spin systems. In this paper, we present our numerical results on the ground state of the Kitaev-Heisenberg model on the hyperhoneycomb lattice obtained by the pseudofermion functional renormalization group (PFFRG) method [73, 74]. The PFFRG is a powerful numerical method which enables us to perform large-scale calculations for frustrated quantum spin models in any spatial dimensions. Examining the instabilities toward magnetically ordered states by calculating the spin susceptibility, we elucidate the ground-state phase diagram for both the isotropic and anisotropic models. For the isotropic case, we find QSL phases around the two pristine Kitaev cases without the Heisenberg interactions, in addition to the four magnetically ordered phases: the ferromagnetic (FM), Neel antiferromagnetic (AFM), zigzag AFM, and stripy AFM phases. The results look similar to the 2D honeycomb case, but differ from the previous study by the gPEPS method for the 3D hyperhoneycomb case [69]. We confirm that our results respect the four-sublattice symmetry inherent in the model [13, 17, 75, 76, 62], which was violated in the previous result.
Meanwhile, by introducing the anisotropy in the interactions, we show that the QSL region is reduced and replaced by the other magnetically ordered state, similar to the previous results obtained by the density matrix renormalization group (DMRG) for the 2D honeycomb case [77]. The structure of this paper is as follows. In Sect. 2, we introduce the Kitaev-Heisenberg model on the hyperhoneycomb lattice and briefly review the previous studies. In Sect. 3, we present the essence of the PFFRG method and the calculation conditions. We show our results for the ground-state phase diagram for the isotropic case in Sect. 4.1 and the anisotropic case in Sect. 4.2. Finally, Sect. 5 is devoted to the summary and perspectives. ## 2 Model We study the Kitaev-Heisenberg model defined on the hyperhoneycomb lattice as a minimal model for the hyperhoneycomb candidate materials. The Hamiltonian is given by \[\mathcal{H}=\sum_{\mu=x,y,z}\sum_{\langle i,j\rangle_{\mu}}J_{\mu}\left[2\sin\varphi\,S_{i}^{\mu}S_{j}^{\mu}+\cos\varphi\,\mathbf{S}_{i}\cdot\mathbf{S}_{j}\right], \tag{1}\] where the summation of \(\langle i,j\rangle_{\mu}\) runs over pairs of nearest-neighbor sites \(i\) and \(j\) connected by a \(\mu\) bond, and \(S_{i}^{\mu}\) is the \(\mu\) component of the \(S=1/2\) spin operator at site \(i\): \(\mathbf{S}_{i}=(S_{i}^{x},\,S_{i}^{y},\,S_{i}^{z})\). The first and second terms in Eq. (1) represent the Kitaev and Heisenberg interactions, respectively; the ratio of these interactions is parametrized by \(\varphi\in[0,2\pi]\), and the overall strength is given by \(J_{\mu}\). A schematic picture of the model is shown in Fig. 1(a), in which the \(x\), \(y\), and \(z\) bonds are represented by blue, green, and red, respectively. Note that the \(z\) bond is crystallographically inequivalent to the other two on the 3D hyperhoneycomb lattice, while the \(x\) and \(y\) bonds are related to each other by \(C_{2}\) symmetry around the \(z\) bonds. In Eq. (1), the Kitaev interaction is FM for \(1<\varphi/\pi<2\), while it is AFM for \(0<\varphi/\pi<1\). Meanwhile, the Heisenberg interaction is FM for \(1/2<\varphi/\pi<3/2\), while AFM for \(0\leq\varphi/\pi<1/2\) and \(3/2<\varphi/\pi\leq 2\). There are four special values of \(\varphi\): \(\varphi/\pi=0\), \(1/2\), \(1\), and \(3/2\). When \(\varphi/\pi=1/2\) and \(3/2\), the Heisenberg interaction vanishes and the Hamiltonian describes the pristine AFM and FM Kitaev models, respectively, whose ground states are QSLs [41]. Meanwhile, when \(\varphi/\pi=0\) and \(1\), the Kitaev interaction vanishes and the Hamiltonian corresponds to the AFM and FM Heisenberg models, respectively. In these cases, the system has the SU(2) symmetry, and the Neel AFM and FM orders are realized in the ground state, respectively. In addition, due to the four-sublattice symmetry [13, 17, 75, 76, 62], there are two more hidden SU(2) points at \(\varphi/\pi=3/4\) and \(7/4\), corresponding to \(\varphi/\pi=0\) and \(1\), respectively. The Kitaev-Heisenberg model was first introduced as an effective model defined on a 2D honeycomb lattice for the 2D candidate materials such as Na\({}_{2}\)IrO\({}_{3}\) and \(\alpha\)-Li\({}_{2}\)IrO\({}_{3}\).

Figure 1: (Color online) (a) Schematic picture of the 3D hyperhoneycomb lattice. The blue, green, and red bonds represent the \(\mu=x\), \(y\), and \(z\) bonds, respectively, in the Kitaev-Heisenberg model in Eq. (1). The gray arrows represent the primitive lattice vectors \(\mathbf{a}_{1}=(-1/\sqrt{2},\,1/\sqrt{2},\,-\sqrt{2})\), \(\mathbf{a}_{2}=(-1/\sqrt{2},\,1/\sqrt{2},\,\sqrt{2})\), and \(\mathbf{a}_{3}=(\sqrt{2},\,2\sqrt{2},\,0)\) in the \(xyz\) coordinate shown in the inset. (b) Brillouin zones for the hyperhoneycomb lattice. The inner red polygon indicates the first Brillouin zone, while the outer black one indicates the Brillouin zone up to the twelfth one. (c)-(f) Spin configurations for the four magnetically ordered states appearing in the Kitaev-Heisenberg model.
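Before reviewing the earlier phase-diagram studies, it may help to make the parametrization of Eq. (1) concrete. The following minimal Python sketch (ours, not from the paper) builds the two-site bond term and checks that the Heisenberg part vanishes at the pristine Kitaev point \(\varphi/\pi=1/2\):

```python
import numpy as np

# Spin-1/2 operators S^x, S^y, S^z as half the Pauli matrices
SX = np.array([[0, 1], [1, 0]]) / 2
SY = np.array([[0, -1j], [1j, 0]]) / 2
SZ = np.array([[1, 0], [0, -1]]) / 2
SPIN = {"x": SX, "y": SY, "z": SZ}

def bond_hamiltonian(phi, mu, j_mu=1.0):
    """Two-site bond term of Eq. (1): J_mu [2 sin(phi) S_i^mu S_j^mu + cos(phi) S_i . S_j]."""
    kitaev = np.kron(SPIN[mu], SPIN[mu])
    heisenberg = sum(np.kron(s, s) for s in (SX, SY, SZ))
    return j_mu * (2 * np.sin(phi) * kitaev + np.cos(phi) * heisenberg)

# phi/pi = 0.5 is the pure AFM Kitaev point: only the S^z S^z term survives on a z bond.
assert np.allclose(bond_hamiltonian(np.pi / 2, "z"), 2 * np.kron(SZ, SZ))
```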
Its phase diagram was calculated by using a variety of methods: the exact diagonalization (ED) [13, 17], the DMRG [77, 78], the slave-particle mean-field approximation [79], the tensor network method [80], the cluster mean-field approximation [81], the high-temperature expansion [82], the quantum Monte Carlo method [83], and the PFFRG method [84, 85]. These previous studies showed that the ground-state phase diagram contains four magnetically ordered phases, the FM, Neel AFM, zigzag AFM, and stripy AFM phases, in addition to the QSL phases in the narrow regions around the two Kitaev points at \(\varphi/\pi=1/2\) and \(3/2\), where the ground states are the exact QSLs [13, 17, 62, 75, 76]. The model defined on a 3D hyperhoneycomb lattice was introduced, motivated by the synthesis of the hyperhoneycomb candidate \(\beta\)-Li\({}_{2}\)IrO\({}_{3}\) [62]. For the model with classical spins, the ground-state phase diagram was calculated by the Luttinger-Tisza method, and was found to contain the four magnetically ordered phases, similar to the 2D honeycomb case [62]. Meanwhile, for the quantum spin \(S=1/2\) case, the gPEPS calculation showed that the phase diagram contains the QSL phases in the vicinity of the two Kitaev cases in addition to the four magnetically ordered phases [69]. However, the QSL phases appear in slightly different regions compared to the 2D case: In the 2D honeycomb model, the QSL phases extend from each Kitaev point to both sides of the FM and AFM Heisenberg interactions [17, 80, 81, 83, 85], but they are found almost only on one of the two sides in the gPEPS result for the 3D hyperhoneycomb model. It should be noted that the four-sublattice symmetry appears to be violated in the gPEPS result, suggesting that the accuracy is insufficient. ## 3 Method In this study, we try to elucidate the ground-state phase diagram of the \(S=1/2\) hyperhoneycomb model in Eq. (1) by using the PFFRG method. The PFFRG provides a powerful numerical method for frustrated quantum spin systems [73, 74], and has been applied to various models with the Heisenberg [73, 74], \(XXZ\) [86, 87], Kitaev-like [88, 84, 85, 89, 90], off-diagonal [91, 92], long-range dipolar [93, 94, 95], and SU(2)\(\times\)SU(2) interactions [96, 97].
It was applied to 2D systems in the early stage, but later its usefulness was proved for 3D systems as well [89, 98, 99, 100, 101, 102]. ## 4 Results ### Isotropic case Figure 2 shows the ground-state phase diagram of the isotropic Kitaev-Heisenberg model in Eq. (1) obtained by the PFFRG method. We determine the phase boundaries between the magnetically ordered phases from the values of the critical cutoff energy scale \(\Lambda_{\rm c}\), where the spin susceptibility \(\chi^{\mu\mu,\Lambda}({\bf k_{\rm max}})\) shows an anomaly. The typical \(\Lambda\) dependences in each region are shown in Fig. 3. We find that, despite the crystallographic inequivalence between the \(z\) and \(x\) bonds on the hyperhoneycomb lattice mentioned in Sec. 2, \(\chi^{zz,\Lambda}({\bf k_{\rm max}})\) and \(\chi^{xx,\Lambda}({\bf k_{\rm max}})\) show almost the same \(\Lambda\) dependences with anomalies at the same \(\Lambda_{\rm c}\) indicated by the black arrows in Fig. 3.
Figure 4 presents the \({\bf k}\) dependences of \(\chi^{zz,\Lambda}({\bf k})\) and \(\chi^{xx,\Lambda}({\bf k})\) at \(\Lambda_{\rm c}\) for the same values of \(\varphi\) as in Fig. 3, which show distinct peaks at \({\bf k_{\rm max}}\), corresponding to each magnetic ordering. In addition to the four ordered phases, we find the Kitaev QSL phase in the two regions including the pristine AFM and FM Kitaev cases at \(\varphi/\pi=0.5\) and \(1.5\), respectively, as indicated by red in Fig. 2. Figure 5(a) shows the \(\Lambda\) dependences of \(\chi^{zz,\Lambda}({\bf k_{\rm max}})\) and \(\chi^{xx,\Lambda}({\bf k_{\rm max}})\) for the FM Kitaev case. For the AFM case, we obtain the same result since the FM and AFM Kitaev models are equivalent under the gauge transformation [5]. Both \(\chi^{zz,\Lambda}({\bf k_{\rm max}})\) and \(\chi^{xx,\Lambda}({\bf k_{\rm max}})\) show no obvious anomaly down to \(\Lambda_{\rm min}\), suggesting that the ground state is the QSL, consistent with the exact solution [41]. Figures 5(b) and 5(c) show the \({\bf k}\) dependences of \(\chi^{zz,\Lambda}({\bf k})\) and \(\chi^{xx,\Lambda}({\bf k})\) at \(\Lambda_{\rm min}\), respectively. In this FM Kitaev case, \(\chi^{zz,\Lambda}({\bf k})\) (\(\chi^{xx,\Lambda}({\bf k})\)) shows the maximum at \(k_{x}+k_{y}=0\) (\(-k_{y}+k_{z}=0\)) with arbitrary \(k_{z}\) (\(k_{x}\)). The results indicate that \(\chi^{zz,\Lambda}({\bf k})\) and \(\chi^{xx,\Lambda}({\bf k})\) are well approximated by \(\propto\cos[(k_{x}+k_{y})/\sqrt{2}]+\) const. and \(\propto\cos[(-k_{y}+k_{z})/\sqrt{2}]+\) const., respectively. This means that the spin correlations are negligible beyond nearest neighbors, as in the 2D honeycomb case [109], which is also consistent with the exact solution [41]. We find similar behaviors even in the presence of small Heisenberg interactions for \(0.4625\lesssim\varphi/\pi\lesssim 0.5375\) and \(1.2625\lesssim\varphi/\pi\lesssim 1.6375\), as shown in Fig. 2. The latter region around the FM Kitaev point is considerably wider than the former around the AFM Kitaev point, as seen in the 2D honeycomb case [17, 80, 81, 83, 85]. We note that the previous PFFRG results for the 2D honeycomb case tend to overestimate the QSL regions compared to the ED or DMRG results [84, 85], presumably due to differences in the numerical methods and the system sizes. The same is likely to be true for the present 3D hyperhoneycomb case. In any case, our results show that the Kitaev QSL phases extend from each Kitaev point to both sides of the FM and AFM Heisenberg interactions, as in the previous studies for the 2D honeycomb model [17, 80, 81, 83, 85]. This is qualitatively different from the previous results using the gPEPS method, where those extend to almost only one of the two sides [69]. For further comparison with the previous gPEPS study, we examine the four-sublattice symmetry which the model in Eq. (1) satisfies [13, 17, 75, 76, 62]. Under the transformation, \(\sin\varphi\), \(\cos\varphi\), and \(\varphi\) in Eq. (1) are transformed as \[(\sin\varphi^{\prime},\cos\varphi^{\prime}) = \mathcal{N}(\sin\varphi+\cos\varphi,-\cos\varphi), \tag{4}\] \[\varphi^{\prime} = \arctan[-\tan\varphi-1], \tag{5}\] where \(\mathcal{N}\) is the normalization so that \(\sin^{2}\varphi^{\prime}+\cos^{2}\varphi^{\prime}=1\): \(\mathcal{N}=\{(\sin\varphi+\cos\varphi)^{2}+\cos^{2}\varphi\}^{-1/2}\).
Hence, the value of \(\Lambda_{\rm c}\) at \(\varphi\) is transformed into \(\Lambda_{\rm c}^{\prime}\) at \(\varphi^{\prime}\) as \[\Lambda_{\rm c}^{\prime}(\varphi^{\prime})=\mathcal{N}\Lambda_{\rm c}(\varphi). \tag{6}\] The values of \(\Lambda_{\rm c}^{\prime}(\varphi^{\prime})\) obtained from our numerical estimates of \(\Lambda_{\rm c}(\varphi)\) are plotted by the dashed line in Fig. 2. We find that \(\Lambda_{\rm c}^{\prime}(\varphi^{\prime})\) almost agrees with the original \(\Lambda_{\rm c}(\varphi)\). We note considerable deviations especially for \(0.0\leq\varphi/\pi\lesssim 0.5\), but they can be attributed to the finite frequency grid in our PFFRG calculations [85]. Thus, our results respect the required four-sublattice symmetry, whereas the previous gPEPS results, which predicted largely asymmetric QSL regions, do not. It is worth noting that the ground-state phase diagram in Fig. 2 is very similar to that of the 2D honeycomb case obtained by the PFFRG method [85], which is shown in the top strip of Fig. 2. This is presumably because (i) both the 2D honeycomb and 3D hyperhoneycomb lattices are tricoordinated, (ii) the spin correlations are nonzero only between the nearest neighbors in the Kitaev limit [109], and (iii) all the ordered states induced by the Heisenberg interaction have commensurate ordering vectors. Indeed, the classical energies and the ground-state phase diagram obtained by the Luttinger-Tisza method are exactly the same for the two lattices [70].

\begin{table} \begin{tabular}{l c c} \hline \hline phase & \({\bf k_{\rm max}}\) for \(\chi^{zz,\Lambda}({\bf k})\) & \({\bf k_{\rm max}}\) for \(\chi^{xx,\Lambda}({\bf k})\) \\ \hline Neel AFM & \(\left(\pm 2\frac{\sqrt{2}}{3}\pi,\pm\frac{\sqrt{2}}{3}\pi,0\right)\) & \(\left(\pm 2\frac{\sqrt{2}}{3}\pi,\pm\frac{\sqrt{2}}{3}\pi,0\right)\) \\ zigzag AFM & \(\left(\pm\frac{\sqrt{2}}{3}\pi,\pm\frac{\sqrt{2}}{3}\pi,0\right)\) & \(\left(\pm\frac{\sqrt{2}}{3}\pi,\pm\frac{\sqrt{2}}{3}\pi,0\right)\) \\ FM & \((0,0,0)\) & \((0,0,0)\) \\ stripy AFM & \((0,0,\pm\sqrt{2}\pi)\) & \((\pm\sqrt{2}\pi,0,0)\) \\ \hline \hline \end{tabular} \end{table} Table 1: The locations of \({\bf k_{\rm max}}\) in each magnetically ordered phase for the isotropic Kitaev-Heisenberg model. \({\bf k_{\rm max}}\) is the wave vector at which the spin susceptibilities become maximum in the reciprocal space.

A difference between the 2D honeycomb and 3D hyperhoneycomb cases is expected to be pronounced at finite temperature. For the 2D honeycomb case, the transition temperature is strictly zero in the four SU(2) cases with \(\varphi/\pi=0.0\), 0.75, 1.0, and 1.75 because of the Mermin-Wagner theorem [110], while it becomes finite for the 3D hyperhoneycomb case. In addition, in the Kitaev QSL regions, a topological transition by loop proliferation of the flux excitations is expected to occur at finite temperature in the 3D case [43, 44, 45, 46, 47], whereas it is absent and only a crossover is left in the 2D case [42]. In this respect, it is interesting to note that \(\Lambda_{\rm c}\) can be regarded as an estimate of the transition temperature \(T_{\rm c}\), by assuming a relation between the energy scale \(\Lambda\) and temperature \(T\) as \(T\simeq\frac{\pi}{2}\Lambda\), which holds for large \(\Lambda\) and \(T\) [98, 99, 92]. Indeed, the PFFRG results for the 2D honeycomb model with large spin \(S=50\) qualitatively reproduced the onset temperature of the quasi-long-range order obtained by classical Monte Carlo simulations [85].
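The symmetry check based on Eqs. (4)-(6) above is simple to implement; a minimal Python sketch (the function names are ours):

```python
import numpy as np

def four_sublattice_map(phi):
    """Eqs. (4)-(5): (sin phi', cos phi') = N (sin phi + cos phi, -cos phi)."""
    s, c = np.sin(phi) + np.cos(phi), -np.cos(phi)
    norm = 1.0 / np.hypot(s, c)                 # N of Eq. (4)
    phi_prime = np.arctan2(norm * s, norm * c) % (2 * np.pi)
    return phi_prime, norm

def map_lambda_c(phi, lambda_c):
    """Eq. (6): Lambda_c'(phi') = N Lambda_c(phi)."""
    phi_prime, norm = four_sublattice_map(phi)
    return phi_prime, norm * lambda_c

# Hidden SU(2) point: the AFM Heisenberg case phi = 0 maps to phi'/pi = 3/4.
print(four_sublattice_map(0.0)[0] / np.pi)   # -> 0.75
```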
Thus, we may conclude that \(\Lambda_{\rm c}\) in Fig. 2 gives an estimate of \(T_{\rm c}\) for the 3D hyperhoneycomb Kitaev-Heisenberg model. Then, an intriguing issue is whether the PFFRG can predict the finite-temperature topological transition expected to occur in the Kitaev QSL regions. The value of \(T_{\rm c}\) was estimated at \(\sim 0.0078\) in our energy units [43], which corresponds to \(\Lambda_{\rm c}\sim 0.005\). This value is smaller than the minimum value of \(\Lambda\), \(\Lambda_{\rm min}\simeq 10^{-2}\), in Fig. 5. We further calculate the susceptibilities down to \(\Lambda\sim 0.0014\), but do not find any anomaly. This suggests that the finite-temperature topological transition by loop proliferation leaves no trace in the \(\Lambda\) dependence in the present PFFRG calculation. This is presumably a limitation of the correspondence between \(T\) and \(\Lambda\) in the zero-temperature PFFRG. Alternatively, this might be due to the fact that our one-loop PFFRG method incorporates only up to two-body interactions, whereas the flux is a ten-body quantity in the hyperhoneycomb lattice.

Figure 4: (Color online) \({\bf k}\) dependences of \(\chi^{\mu\mu,\Lambda_{\rm c}}({\bf k})\) for the isotropic Kitaev-Heisenberg model at (a) and (b) \(\varphi/\pi=0.2\) (Neel AFM), (c) and (d) \(\varphi/\pi=0.7\) (zigzag AFM), (e) and (f) \(\varphi/\pi=1.1\) (FM), and (g) and (h) \(\varphi/\pi=1.7\) (stripy AFM). (a), (c), (e), and (g) are for \(\chi^{zz,\Lambda_{\rm c}}({\bf k})\), and (b), (d), (f), and (h) are for \(\chi^{xx,\Lambda_{\rm c}}({\bf k})\). The left and right panels are the data plotted on the \([hhl]\) and \([h\bar{k}0]\) planes, respectively. The inner rectangle and hexagon in the left and right panels, respectively, indicate the first Brillouin zone, while the outer hexagon and octagon in the left and right panels, respectively, indicate the zone up to the twelfth one; see Fig. 1(b).

### Anisotropic case

Finally, we investigate the effect of anisotropy in the magnetic interactions on the ground-state phase diagram. In this section, we consider the region \(1.5\leq\varphi/\pi\leq 2.0\), where the Kitaev and Heisenberg couplings are FM and AFM, respectively, and the effect of anisotropy was studied for the 2D honeycomb model [77]. Assuming \(J_{x}=J_{y}\), we parametrize the anisotropy as \[J_{x}=J_{y}=(3-J_{z})/2, \tag{7}\] and change \(J_{z}\in[0,3]\); the system becomes disconnected one-dimensional chains of \(x\) and \(y\) bonds in the limit of \(J_{z}\to 0\), while it becomes independent dimers of \(z\) bonds in the limit of \(J_{z}\to 3\). Figure 6 shows the ground-state phase diagram obtained by the PFFRG method. The data are limited to the region of \(0.5\leq J_{z}\leq 2.25\) because outside this region the PFFRG method cannot correctly detect anomalies of the susceptibilities due to the small energy scales in the anisotropic cases. When introducing the anisotropy, the QSL region is narrowed and replaced by the stripy AFM, while the phase boundary between the stripy and Neel AFM is almost intact. In the large \(J_{z}\) and \(\varphi\) region, the Neel AFM phase is replaced by the dimer phase, where the spin susceptibility does not show any anomaly except for a broad hump in the \(\Lambda\) dependence. These results are qualitatively similar to the previous ones for the 2D honeycomb case [77]. For comparison, we plot the phase boundaries in the anisotropic limits in Fig. 6.
The two filled squares at \((\varphi/\pi,J_{z})=(1.5,0.0)\) and \((1.5,3.0)\) indicate that the QSL is unstable against infinitesimally small Heisenberg interactions in both limits, as shown for the 2D case [77]. The phase boundary between the QSL and stripy AFM states in our results should be extrapolated to these two points, despite the lack of data in the anisotropic regions. Meanwhile, the filled pentagon at \((\varphi/\pi,J_{z})=(1.75,3.0)\) indicates the phase boundary between the stripy AFM and dimer states in the limit of \(J_{z}\to 3\), which is obtained by comparing the energies of the triplet and singlet states for a two-site dimer. The open square in the opposite limit of \(J_{z}\to 0\) indicates the phase boundary between the stripy and Neel AFM states, numerically estimated for the 2D case [77]. This agrees well with the estimate by the Luttinger-Tisza method for classical spins, as indicated by the vertical dashed line in Fig. 6. The phase boundary in our results appears to be consistent with these two limits.

Figure 5: (Color online) (a) Spin susceptibilities \(\chi^{zz,\Lambda}({\bf k}_{\rm max})\) and \(\chi^{xx,\Lambda}({\bf k}_{\rm max})\) as functions of \(\Lambda\) for the isotropic FM Kitaev model (\(\varphi/\pi=1.5\)). \({\bf k}\) dependences are plotted in (b) for \(\chi^{zz,\Lambda}({\bf k})\) and (c) for \(\chi^{xx,\Lambda}({\bf k})\) at \(\Lambda=\Lambda_{\rm min}\). The notations are common to those in Fig. 4.

Figure 6: (Color online) Ground-state phase diagram of the anisotropic Kitaev-Heisenberg model in Eq. (1) with Eq. (7) on the plane of \(\varphi/\pi\) and \(J_{z}\). The symbols on the top and bottom of the phase diagram indicate the phase boundaries expected in the anisotropic limits of \(J_{z}\to 0\) and \(J_{z}\to 3\); see the main text for the details. The red dashed line indicates the phase boundary obtained by the Luttinger-Tisza method for classical spins.

## 5 Summary and perspectives

To summarize, we have studied the Kitaev-Heisenberg model defined on the 3D hyperhoneycomb lattice by using the PFFRG method. We clarified the ground-state phase diagram for the model with isotropic interactions by changing the ratio between the Kitaev and Heisenberg interactions. We identified the regions of two QSL phases around the two pristine Kitaev cases, in addition to the four magnetically ordered phases: the Neel AFM, zigzag AFM, FM, and stripy AFM phases. Our results respect the four-sublattice symmetry, and are similar to those of the 2D honeycomb model obtained by the PFFRG method [85], in contrast to the previous study by the gPEPS method [69]. We also investigated the effect of the spatial anisotropy and showed that the QSL phase is the most stable for the isotropic case and shrinks when the anisotropy is introduced. These results are also qualitatively similar to those of the 2D honeycomb model [77]. Our results provide a reference for not only the understanding of the existing candidate materials but also the search and design of hyperhoneycomb Kitaev materials. The candidate material \(\beta\)-Li\({}_{2}\)IrO\({}_{3}\) shows an incommensurate non-coplanar magnetic order at low temperature [52, 55], which does not appear in our phase diagram in Fig. 2. This suggests the importance of additional interactions beyond the Kitaev-Heisenberg model. Indeed, recent experimental results show that a symmetric off-diagonal interaction, called the \(\Gamma\) interaction, is not negligible [11, 12, 13].
In this compound, it was also shown that an external pressure destroys the magnetic order and stabilizes a dimer state [57, 58]. Since the hyperhoneycomb lattice is not crystallographically isotropic, the pressure may enhance or reduce the anisotropy of the interactions, and hence, an extension of our phase diagram in Fig. 6 could be useful for understanding such a transition. An external magnetic field also suppresses the magnetic order [56], which urges theoretical studies including the effect of magnetic fields, as intensively discussed for the 2D case. It would also be intriguing to understand the QSL-like state in the new candidate \(\beta\)-ZnIrO\({}_{3}\) [60]. All these extensions can be handled by the PFFRG method and are left as subjects for future study. ## Acknowledgments The authors thank T. Misawa, J. Nasu, and T. Okubo for fruitful discussions. Parts of the numerical calculations have been done using the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo, the Information Technology Center, the University of Tokyo, and the Center for Computational Science, University of Tsukuba. This work was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Nos. 19H05825 and 20H00122.
2310.11621
The Impact of Patchy Reionization on Ultra-faint Dwarf Galaxies
We investigate how patchy reionization affects the star formation history (SFH) and stellar metallicity of ultra-faint dwarf galaxies (UFDs). Patchy reionization refers to varying ultraviolet (UV) background strengths depending on a galaxy's environment. Recent observations highlight the significance of this effect on UFDs, as UFDs can have different SFHs depending on their relative position with respect to their host halo during the period of reionization. However, most cosmological hydrodynamic simulations do not consider environmental factors such as patchy reionization, and the effect of reionization is typically applied homogeneously. Using a novel approach to implement patchy reionization, we show how SFHs of simulated UFDs can change. Our cosmological hydrodynamic zoom-in simulations focus on UFD analogs with M_vir ~ 10^9 solar masses and M_star < 10^5 solar masses at $z=0$. We find that patchy reionization can weaken the effect of reionization by two orders of magnitude up to $z=3$, enabling late star formation in half of the simulated UFDs, with quenching times $\sim$460 Myr later than those with homogeneous reionization. We also show that halo merger and mass assembly can affect the SFHs of simulated UFDs, in addition to patchy reionization. The average stellar iron-to-hydrogen ratio, [Fe/H], of the simulated UFDs with patchy reionization increases by 0.22-0.42 dex. Finally, our findings suggest that patchy reionization could be responsible for the extended SFHs of Magellanic UFDs compared to non-Magellanic UFDs.
Jaeeun Kim, Myoungwon Jeon, Yumi Choi, Hannah Richstein, Elena Sacchi, Nitya Kallivayalil
2023-10-17T23:14:35Z
http://arxiv.org/abs/2310.11621v1
# The Impact of Patchy Reionization on Ultra-faint Dwarf Galaxies

###### Abstract

We investigate how patchy reionization affects the star formation history (SFH) and stellar metallicity of ultra-faint dwarf galaxies (UFDs). Patchy reionization refers to varying ultraviolet (UV) background strengths depending on a galaxy's environment. Recent observations highlight the significance of this effect on UFDs, as UFDs can have different SFHs depending on their relative position with respect to their host halo during the period of reionization. However, most cosmological hydrodynamic simulations do not consider environmental factors such as patchy reionization, and the effect of reionization is typically applied homogeneously. Using a novel approach to implement patchy reionization, we show how SFHs of simulated UFDs can change. Our cosmological hydrodynamic zoom-in simulations focus on UFD analogs with \(M_{\rm vir}\sim 10^{9}\,M_{\odot}\), \(M_{*}\lesssim 10^{5}\,M_{\odot}\) at \(z=0\). We find that patchy reionization can weaken the effect of reionization by two orders of magnitude up to \(z=3\), enabling late star formation in half of the simulated UFDs, with quenching times \(\sim\)460 Myr later than those with homogeneous reionization. We also show that halo merger and mass assembly can affect the SFHs of simulated UFDs, in addition to patchy reionization. The average stellar iron-to-hydrogen ratio, [Fe/H], of the simulated UFDs with patchy reionization increases by 0.22-0.42 dex. Finally, our findings suggest that patchy reionization could be responsible for the extended SFHs of Magellanic UFDs compared to non-Magellanic UFDs.

galaxies: formation - galaxies: dwarf - galaxies: star formation - methods: numerical

Jaeeun Kim, Myoungwon Jeon, Yumi Choi, Hannah Richstein, Elena Sacchi

## 1 Introduction

In the context of hierarchical \(\Lambda\)CDM models of structure formation, where small galaxies are expected to form first and progressively grow into massive ones, it is essential to understand the formation and evolution of low-mass galaxies to obtain a comprehensive understanding of galaxy formation (e.g., Padmanabhan, 1993). Ultra-faint dwarf galaxies (UFDs), which are known as the smallest unit of galaxies with the lowest luminosity (\(L<10^{5}L_{\odot}\)) and stellar mass (\(10^{2}\,M_{\odot}<M_{*}<10^{5}\,M_{\odot}\)) in the universe, have been considered as the fundamental building blocks of massive galaxies (reviewed in Simon, 2019, also see Tolstoy et al., 2009; McConnachie, 2012). Additionally, due to their shallow potential wells, UFD systems are highly vulnerable to internal and external feedback effects, making them an excellent laboratory for studying how feedback mechanisms change star formation activities in small systems (e.g., Stinson et al., 2007; Sawala et al., 2010; Simpson et al., 2013; Agertz and Kravtsov, 2015; Jeon et al., 2017; Wheeler et al., 2019; Agertz et al., 2020; Rey et al., 2020; Gutcke et al., 2022; Sanati et al., 2023). The star formation histories (SFHs) of UFDs are generally governed by physical processes such as photoionization heating from stars, supernova feedback (SNe), which releases energy when stars die, and global heating caused by cosmic reionization.
Interestingly, a common trait observed in the SFHs of UFD galaxies is that they likely formed the majority of their stars (about 80%) prior to the onset of reionization, which is followed by a suppression of their star formation (e.g., Brown et al., 2014; Weisz et al., 2014). This implies that reionization played a crucial role globally in quenching the star formation of UFD galaxies, along with SN feedback, which dissipates dense gas that is eligible for star formation. Without SN feedback, SFHs of the UFDs can extend to as late as \(z\sim 2\); otherwise, they are truncated at an early stage around \(z\sim 6\) (e.g., Simpson et al., 2013; Jeon et al., 2017). However, despite the importance of cosmic reionization in shaping UFD SFHs, current implementations of reionization in simulations typically adopt a simple spatially uniform UV background model (Faucher-Giguere et al., 2009; Haardt and Madau, 2012, hereafter HM2012). To be specific, the UV background intensity used in simulations of UFD analogs is applied uniformly without taking into account the local UV fields originating from their host galaxies. According to recent research by Sacchi et al. (2021), the SFHs of UFDs associated with the Magellanic Clouds (MCs) could offer evidence of patchy reionization. To clarify, the MCs consist of the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC), both being the largest satellite galaxies of the Milky Way (MW). Patchy reionization refers to the non-uniform reionization process, which varies based on the individual environments of UFDs, their proximity to their host galaxy, and the intensity of UV photon emissions from the host halo during the era of reionization. The Magellanic UFDs could present an ideal opportunity to investigate the impact of patchy reionization, primarily because recent data from Gaia DR2 proper motions suggest that they entered the MW's virial radius relatively recently, less than 3.5 billion years ago (e.g., Patel et al., 2020). This implies that Magellanic UFDs might have been situated at a considerable distance from the host halo during the reionization period, leading to a weak influence from the MW halo. Sacchi et al. (2021) conducted a comparative analysis of the SFHs of Magellanic UFDs, primarily linked to the LMC and having recently entered the MW's halo, with those of long-standing UFD satellites of the MW, which they refer to as non-Magellanic UFDs. Their findings revealed that Magellanic UFDs tend to exhibit more extended SFHs, lasting \(\sim\)600 Myr longer, in contrast to non-Magellanic UFDs. Given that both Magellanic and non-Magellanic UFD satellites share similar stellar masses (\(M_{*}\sim 10^{3}\,M_{\odot}\)), and a significant fraction of their stars (\(\sim 80\%\)) formed prior to reionization, the discrepancy in their SFHs could be attributed to the effects of patchy reionization. Specifically, Magellanic UFDs may have been located farther from the MW's progenitor halo during the reionization era, experiencing a weaker reionization impact compared to non-Magellanic UFDs, thus explaining their extended SFHs. As such, the effect of reionization on UFDs could vary depending on the environment in which they were located at the time of reionization, leaving an imprint on their SFH. In this study, we investigate how the local UV fields, which are mainly determined by connection with the host halo, can shape the SFHs of satellite UFDs through cosmological hydrodynamic zoom-in simulations.
Several previous studies have investigated how the timing and strength of reionization affect individual dwarf galaxies by adopting uniform UV background radiation (e.g., Simpson et al., 2013; Bose et al., 2018; Garrison-Kimmel et al., 2019; Pereira-Wilson et al., 2023). Simpson et al. (2013), for example, utilized the table provided by HM2012 by varying the period of reionization, \(\Delta z\), during which the intensity of reionization increases from zero to full strength. They discovered that when \(\Delta z\) was set to \(z=7-6\), the resulting stellar mass of dwarf galaxies with a mass of \(M_{\rm vir}=10^{9}\,M_{\odot}\) at \(z=0\) could be larger by one order of magnitude than when \(\Delta z\) was set to \(z=9-8.9\). It should be mentioned, however, that the HM2012 method may overestimate the ionization emissivity at high redshifts (e.g., \(z>10\)) by extrapolating the UV background radiation that was derived for lower redshifts. This could lead to premature and excessive heating of the intergalactic medium (IGM) at high redshifts (e.g., Puchwein et al., 2015; Onorbe et al., 2017), which may significantly suppress star formation in low-mass galaxies like UFD systems. An alternative approach to implementing the effect of reionization is to solve radiative transfer equations self-consistently to trace UV photons, but this method is computationally expensive and is usually conducted on a large scale with lower mass resolution (e.g., Pawlik and Schaye, 2008; Dixon et al., 2018; Rosdahl et al., 2018). Such large-volume simulations that have focused on studying the large-scale reionization history, resultant galaxy luminosity functions, and the escape of ionizing radiation have not placed much emphasis on the detailed evolution of small satellite galaxies such as UFDs. Among these, the SPHINX simulation (Rosdahl et al., 2018) achieved high spatial resolution (\(\sim 10\) pc) and addressed the radiative transfer aspect in the context of UFD formation. This paper takes a unique approach to model the influence of reionization on UFDs by applying local UV fields from host galaxies rather than the traditional approach of using a uniform and homogeneous reionization effect. To avoid computationally expensive calculations such as radiative transfer, pre-calculated local UV fields from dark-matter-only simulations are utilized in cosmological hydrodynamic simulations of UFD analogs. In particular, we choose a halo pair consisting of a target UFD and its host halo, analogous to the MW and its satellites, from dark-matter-only simulations. Then, we calculate the strength of local UV fields from the host as a function of the distance between the target UFD and the host and the spectral energy distribution (SED) of the host galaxy. Furthermore, we test a scenario where reionization happens later than previously thought by incorporating a transition redshift, \(z_{\rm t}\), which marks the point at which the impact of the overall UV radiation in the universe becomes stronger than that of the local UV radiation from a host galaxy. Regarding the timing of reionization completion, while it is widely accepted to occur at \(z=6\)(e.g., Becker et al., 2001; Fan et al., 2006), recent spectroscopic studies of Lyman alpha emitters and distant quasars suggest a late reionization scenario, where reionization is completed up to \(z=5.5-6\), which potentially leads to prolonged SFHs (Becker et al., 2015; Choudhury et al., 2015; McGreer et al., 2015; Mesinger et al., 2015). 
To better understand whether this delayed reionization can result in extended SFHs in UFD galaxies, we carry out simulations by altering the value of \(z_{\rm t}\) from \(z=5.8\) to \(z=5.5\). The paper is structured as follows. In Section 2, we describe the numerical methodology used in this study, while in Section 3, we present the simulation results. Our main conclusions are summarized in Section 4. Unless stated otherwise, all distances are given in physical units for consistency.

## 2 Numerical Methodology

### Simulation Set Up

We have utilized a modified version of the N-body and Smoothed Particle Hydrodynamics (SPH) code GADGET (Springel et al., 2001; Springel, 2005) to perform a set of hydrodynamic zoom-in simulations. The adopted cosmological parameters include a matter density parameter of \(\Omega_{\rm m}=1-\Omega_{\Lambda}=0.265\), a baryon density of \(\Omega_{\rm b}=0.0448\), a present-day Hubble expansion rate of \(\rm H_{0}=71\,km\ s^{-1}\ Mpc^{-1}\), a spectral index of \(n_{\rm s}=0.963\), and a normalization of \(\sigma_{8}=0.8\) (Komatsu et al., 2011; Planck Collaboration, 2016). The initial conditions are generated using the cosmological initial conditions code MUSIC (Hahn & Abel, 2011). A preliminary dark-matter-only simulation using \(128^{3}\) particles in a \(L=6.25h^{-1}\) comoving Mpc box is carried out to select target halos. The target halos chosen represent UFD analogs with a mass of \(M_{\rm vir}\sim 10^{9}\,M_{\odot}\) at \(z=0\), and their physical properties are listed in Table 1. It is important to emphasize that we choose UFD analogs that are isolated and situated at a considerable distance from the host halo throughout their evolution. Our primary purpose is to study how patchy reionization affects the SFHs of these UFD analogs, particularly during high-\(z\). By focusing on isolated UFD analogs, we can exclude the potential impact of environmental factors, such as ram pressure stripping, as they come closer to the MW. Next, we have conducted four consecutive refinements to the region enclosing two times the virial radius of the target UFD halo at \(z=0\). In the most refined region, the final resolution is \(2048^{3}\), with dark matter (DM) and gas-particle masses of \(m_{\rm DM}\approx 2500\,M_{\odot}\) and \(m_{\rm SPH}\approx 500\,M_{\odot}\), respectively. The softening length of DM and stellar particles is fixed at 40 pc at all redshifts in our simulations. However, for gas particles, we utilize an adaptive softening length with a minimum value of \(\epsilon_{\rm gas,min}=2.8\) pc. At each time step, we solve the non-equilibrium rate equations for the primordial chemistry of eleven atomic and molecular species (H, H+, H-, H2, H2+, He, He+, He++, e-, D, D+). Besides primordial cooling, we have incorporated metal cooling processes with carbon, oxygen, silicon, magnesium, neon, nitrogen, and iron. The cooling rates for these elements in the simulation are determined using the photo-ionization package CLOUDY (Ferland et al., 1998).

### UV background

#### 2.2.1 Global reionization

This section describes the first of the two approaches we have utilized to implement the UV background in our simulations: the homogeneous and flash-like cosmic UV/X-ray background proposed by Haardt & Madau (2012). This approach involves the use of redshift-dependent photoionization and photo-heating rates for H I, He I, and He II to mimic the process of reionization.
This effect is uniformly applied to all galaxies within the simulation box, regardless of their position relative to their host galaxy. From this point on, we will refer to the homogeneous reionization effect as global reionization (GR). In our simulations, we start by introducing the UV background at \(z=7\) and progressively increase it to the full strength at \(z=6\) to prevent abrupt and significant heating of gas particles. Then, we maintain it at the constant 100% strength of HM2012 until \(z=0\). #### 2.2.2 Patchy reionization The second approach is a patchy UV background, which takes into account environmental factors specific to each target UFD halo, such as the proximity of ionizing sources and the distance between them, which may alter the reionization effect experienced by the target halo. To achieve the most accurate results in calculating the effect of patchy reionization (PR) for our UFD analogs, it would be ideal to track the trajectory of ionizing photons emitted from the surrounding galaxies and apply the resulting heating and ionization effects on the target dwarf galaxy. However, carrying out a hydrodynamic simulation that self-consistently solves radiative transfer equations necessitates a significant amount of computational power. To overcome this computational challenge, we have devised a novel approach where we pre-calculate the impact of local UV fields generated by the surrounding galaxies using DM-only simulations. This pre-calculated information is then utilized in our cosmological hydrodynamic simulations. In summary, obtaining pre-calculated UV fields involves three steps. Firstly, a DM-only simulation is performed to determine the environmental factors, such as the distance between a target UFD analog and the surrounding DM halos, as well as their halo masses. Secondly, we employ the abundance-matching approach proposed by Behroozi et al. (2013) to estimate the stellar mass of the surrounding halos. Finally, to attain the photoionization and photoheating rates from the neighboring galaxies, we use the Starburst99 package (Leitherer et al., 1999) to derive the SED. Below, we provide a detailed explanation for each step. The first step of the process includes carrying out a dark-matter-only simulation with \(128^{3}\) particles using the same simulation setup and cosmology as described in Section 2.1. Following this, a target UFD analog is selected, and the distance between the target UFD analog and the surrounding halos, including its host halo, as well as the mass evolution of the surrounding halos, are tracked. Figure 1 displays the DM morphology at \(z=0\), which highlights a pair consisting of the target UFD (\(M_{\rm vir}\sim 10^{9}\,M_{\odot}\)) and its host halo (\(M_{\rm vir}\sim 2\times 10^{12}\,M_{\odot}\)). In the second step, we estimate the stellar masses of the surrounding halos using the abundance matching technique (Behroozi et al., 2013). This is a widely used approach for assigning galaxy stellar mass or luminosity to halos generated in N-body simulations without having to perform hydrodynamical simulations. In the final step, we adopt the Starburst99 package to derive the SED of the simulated galaxies. To do this, we utilize standard parameters, employing the Geneva evolutionary tracks without rotation as a model for stellar evolution, and black body spectra as a model for the atmosphere. 
Starburst99 generates a synthetic spectrum of galaxies based on their stellar masses, which were obtained in the previous step using the abundance matching technique. Since the massive OB stars formed in the neighboring galaxies are the primary source of ionizing photons, and their main sequence lifetime is approximately 10 Myr, we divide the stellar mass evolution of the surrounding galaxies into time intervals that are similar to the lifetime of OB stars. We then estimate the change in stellar mass (\(\Delta M_{*}\)) due to newly formed stars in the surrounding galaxies. We calculate the local UV fields exerted by the neighboring galaxies onto the target UFD analog by plugging the distance between them and the synthetic spectrum of the surrounding galaxies into equations (1) and (2).

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
Halo & \(M_{\rm vir}\) & \(r_{\rm vir}\) & \(M_{*}\) & \(<\) [Fe/H] \(>\) & \(M_{\rm gas}\) & SF\({}_{\rm trun}\) & \(z_{\rm sf,end}\) \\
 & [\(10^{9}\,M_{\odot}\)] & [kpc] & [\(10^{5}\,M_{\odot}\)] & - & [\(10^{6}\,M_{\odot}\)] & - & - \\
\hline
Halo1-GR & 0.74 & 18.5 & 0.14 & -3.28 & 1.5 & yes & 7.036 \\
\hline
Halo2-GR & 1.02 & 20.8 & 0.38 & -2.43 & 2.0 & yes & 6.998 \\
\hline
Halo3-GR & 1.05 & 20.7 & 0.24 & -2.81 & 2.6 & yes & 7.106 \\
\hline
Halo4-GR & 1.09 & 21.0 & 0.14 & -2.82 & 3.2 & yes & 7.037 \\
\hline
Halo5-GR & 1.40 & 22.9 & 0.63 & -2.33 & 2.0 & yes & 6.917 \\
\hline
Halo6-GR & 1.84 & 25.0 & 0.50 & -2.37 & 7.0 & yes & 6.895 \\
\hline \hline
\end{tabular}
Note. – Column (1): the name of halos. Column (2): virial mass (in units of \(10^{9}\,M_{\odot}\)). Column (3): virial radius (in kpc). Column (4): stellar mass (in \(10^{5}\,M_{\odot}\)). Column (5): average stellar iron-to-hydrogen ratios. Column (6): gas mass (in \(10^{6}\,M_{\odot}\)). Column (7): whether star formation is truncated after reionization. Column (8): the time when star formation is completed.
\end{table}
Table 1: Physical quantities of the simulated UFD analogs at \(z=0\) in the GR runs, assuming homogeneous reionization.

The rates of photoionization and photoheating are given by

\[\mathrm{k}_{\mathrm{ion,i}}=\int_{\nu_{\mathrm{min}}}^{\infty}\frac{\mathrm{F}_{\nu}\sigma_{\nu}}{h\nu}\,\mathrm{d}\nu, \tag{1}\]

\[\Gamma_{\mathrm{i}}=n_{\mathrm{n}}\int_{\nu_{\mathrm{min}}}^{\infty}\mathrm{F}_{\nu}\sigma_{\nu}\left(1-\frac{\nu_{\mathrm{min}}}{\nu}\right)\,\mathrm{d}\nu. \tag{2}\]

Here, \(\sigma_{\nu}\) refers to the ionization cross-section (Osterbrock & Ferland, 2006), \(n_{\mathrm{n}}\) is the number density of the respective neutral species, while \(\nu_{\mathrm{min}}\) represents the ionization threshold frequency, such as \(h\nu_{\mathrm{min}}=13.6\) eV, \(h\nu_{\mathrm{min}}=24.6\) eV, and \(h\nu_{\mathrm{min}}=54.4\) eV for H I, He I, and He II, respectively. The ionizing flux, \(\mathrm{F}_{\nu}\), incident upon the target UFD analog, originating from the surrounding galaxies, is computed by taking into account the distance between the target halo and the neighboring galaxies and their SEDs. Self-shielding of the dense gas is implemented by attenuating the UV background based on \(\exp{(-N_{\mathrm{HI}}\bar{\sigma}_{\mathrm{ion}})}\), where \(N_{\mathrm{HI}}=xn_{\mathrm{HI}}\) (Simpson et al., 2013). Here, \(x\) represents the SPH kernel size, \(n_{\mathrm{HI}}\) denotes the neutral hydrogen number density, and \(\bar{\sigma}_{\mathrm{ion}}\) refers to the frequency-averaged photoionization cross-section for HI.
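To make the rate calculation concrete, here is a minimal Python sketch of Eqs. (1) and (2), assuming a simple \(\nu^{-3}\) power-law H I cross-section, inverse-square flux dilution with an escape fraction, and the \(\exp(-N_{\rm HI}\bar{\sigma}_{\rm ion})\) self-shielding factor; the function names and SED inputs are illustrative, not the authors' actual pipeline.

```python
import numpy as np

H_PLANCK = 6.626e-27                # Planck constant [erg s]
EV = 1.602e-12                      # 1 eV [erg]
NU_MIN_HI = 13.6 * EV / H_PLANCK    # H I ionization threshold frequency [Hz]

def sigma_HI(nu):
    """Approximate H I photoionization cross-section [cm^2]:
    ~6.3e-18 (nu/nu_min)^-3 above threshold, zero below."""
    return np.where(nu >= NU_MIN_HI, 6.3e-18 * (NU_MIN_HI / nu) ** 3, 0.0)

def local_uv_rates(nu, L_nu, d_cm, f_esc=0.3, n_HI=0.0, x_kernel=0.0):
    """Photoionization rate k_ion [s^-1] (Eq. 1) and photoheating rate
    Gamma [erg s^-1 cm^-3] (Eq. 2) exerted by a host galaxy on the target.

    nu       : frequency grid [Hz] (strictly positive, increasing)
    L_nu     : host specific luminosity on that grid [erg s^-1 Hz^-1]
    d_cm     : host--UFD separation [cm]
    f_esc    : escape fraction of ionizing photons
    n_HI     : local neutral hydrogen number density [cm^-3]
    x_kernel : SPH kernel size [cm]; the column is N_HI = x_kernel * n_HI
    """
    F_nu = f_esc * L_nu / (4.0 * np.pi * d_cm ** 2)   # incident flux
    sig = sigma_HI(nu)
    sig_bar = sig[sig > 0].mean()                     # crude frequency average
    shield = np.exp(-x_kernel * n_HI * sig_bar)       # exp(-N_HI * sigma_bar)
    k_ion = shield * np.trapz(F_nu * sig / (H_PLANCK * nu), nu)                # Eq. (1)
    gamma = shield * n_HI * np.trapz(F_nu * sig * (1.0 - NU_MIN_HI / nu), nu)  # Eq. (2)
    return k_ion, gamma
```

Under these assumptions the rates scale as \(f_{\rm esc}/d^{2}\): halving the host distance quadruples both, which is the geometric effect that makes a UFD's position during reionization matter.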
Similarly, a simple approach to self-shielding is applied for He I and He II as well. It should be noted that the halo masses of the surrounding halos range from approximately \(M_{\mathrm{vir}}\approx 10^{10}\,M_{\odot}\) to \(M_{\mathrm{vir}}\approx 10^{12}\,M_{\odot}\) at \(z=0\). While we consider roughly 40 surrounding galaxies, including a host galaxy within the simulated box, we find that the impact of the host halo on the target UFD analog is likely to outweigh the effects of other smaller halos. Therefore, we will only focus on the influence of the host halo going forward. Additionally, we conduct simulations to examine whether the trajectories of UFD analogs in relation to the MW-like host halo remain consistent between simulations with hydrodynamics and the DM-only simulation. We verify that the difference in distance between the two sets of simulations is less than 1%, indicating that the DM-only simulation aligns with the hydro simulations in terms of proximity to the host halo.

#### 2.2.3 Transition time, \(z_{\mathrm{t}}\)

In the PR implementation discussed in the preceding section, we only take star-forming galaxies into consideration as the ionizing sources of reionization. However, HM2012 suggests that hard UV-emitting quasars should also be considered as a source of reionization alongside star-forming galaxies. The precise contribution of these two ionizing sources as a function of redshift remains uncertain, although it is anticipated that quasars will be the dominant source at lower redshifts (\(z<5\)) (e.g., Wyithe & Loeb, 2003; Haardt & Madau, 2012). To take into account the influence of quasars on the reionization process, which is beyond the scope of this study, we combine our patchy UV background approach with the homogeneous reionization provided by HM2012. To be specific, we apply our patchy reionization implementation at relatively higher redshifts, where star-forming galaxies are the primary source of reionization and the impact of quasars is insignificant. We then designate a transition time, \(z_{\mathrm{t}}\), where we switch from the patchy reionization approach to the homogeneous reionization model proposed by HM2012. The transition period, ranging from \(z=5.5\) to \(z=5.8\), is inspired by late reionization scenarios, which suggest that the end of reionization could extend from \(z=6\) to as late as \(z=5.5\), as proposed in various studies (Becker et al., 2015; Choudhury et al., 2015; McGreer et al., 2015; Mesinger et al., 2015). Given the significant impact of GR, such that star formation in UFD analogs is immediately suppressed upon its introduction, the transition time can be regarded as marking the end of reionization. Indeed, this combined approach enables an exploration of how patchy reionization might impact the SFHs of the simulated UFD analogs and provides insights into when reionization reaches its completion.

Figure 1: Dark matter morphology at \(z=0\), with each panel depicting a smaller scale from left to right. Left: final snapshot from a dark-matter-only preliminary run that includes an MW-like halo, marked as a large white circle, and a UFD analog with masses of \(M_{\mathrm{vir}}\sim 2\times 10^{12}\,M_{\odot}\) and \(M_{\mathrm{vir}}\sim 10^{9}\,M_{\odot}\) at \(z=0\), respectively. Middle: the zoom-in of the region surrounding the UFD analogs. Right: detailed look of Halo6-GR (see Table 1). The virial radius of the halo is \(\sim\) 30 kpc at \(z=0\).
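The switching logic at the transition time reduces to a one-line decision; in the sketch below, `patchy_rates` and `hm2012_rates` are hypothetical lookups standing in for the pre-calculated local UV fields of Section 2.2.2 and the tabulated HM2012 background, respectively.

```python
def uv_background_rates(z, z_t, patchy_rates, hm2012_rates):
    """Return (photoionization, photoheating) rates for a target UFD at
    redshift z: the host-driven patchy field while star-forming galaxies
    dominate reionization (z > z_t), and the homogeneous HM2012 background
    once quasars matter (z <= z_t), down to z = 0."""
    if z > z_t:
        return patchy_rates(z)   # patchy reionization (PR) epoch
    return hm2012_rates(z)       # homogeneous (GR) epoch
```

Moving \(z_{\rm t}\) from 5.8 to 5.5 only changes where this switch fires, which is exactly the experiment carried out in Section 3.3.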
Moreover, we incorporate an escape fraction, \(f_{\rm esc}\), as a free parameter, which represents the fraction of ionizing photons from star-forming galaxies that can escape and propagate into the IGM. The estimate of \(f_{\rm esc}\) varies with halo mass and redshift, and theoretical studies have estimated the average value of \(<f_{\rm esc}>\) to be 0.1-0.2 at high redshifts (\(z\gtrsim 5\)) (e.g., Kimm & Cen, 2014; Bouwens et al., 2015; Mitra et al., 2015; Khaire et al., 2016; Ma et al., 2020), with some studies suggesting a somewhat higher value of \(<f_{\rm esc}>\sim 0.4\) (e.g., Yajima et al., 2011). Increasing the escape fraction has a stronger impact on reionization, resulting in the heating of gas within low-mass halos and the consequent suppression of star formation in such systems. Observational studies have extensively investigated the constraints on the escape fraction, particularly for galaxies at lower redshifts (e.g., Choi et al., 2020; Mestric et al., 2021; Naidu et al., 2022). Notably, Choi et al. (2020) proposed a value of \(\sim\)0.25 for \(f_{\rm esc}\) based on the SED analysis of resolved stars in NGC 4214. Despite being a dwarf galaxy at a low redshift, NGC 4214 exhibits properties that make it a suitable analog for studying the ionizing sources responsible for reionization. The value of \(f_{\rm esc}\) can also be influenced by the binary population within galaxies (e.g., Ma et al., 2016; Rosdahl et al., 2018). In particular, Rosdahl et al. (2018), who investigated the impact of binary population on \(f_{\rm esc}\) and the reionization history, proposed that considering binary stars leads to about three times higher \(f_{\rm esc}\) at the observed 1500 Å luminosity than a single stellar population. However, due to the challenge of constraining the frequency of binary stars, we have not accounted for the effect of binaries in our SED models, which would yield harder resulting SEDs. Choi et al. (2020) also did not incorporate stellar rotation or binary evolution in their SED models. In our simulations involving patchy UV fields, we adopt a fixed value of \(f_{\rm esc}=0.3\) for the range \(z_{\rm t}<z<7\), considering that star-forming galaxies are the primary contributors to reionization during this epoch. This choice is reasonable for the host galaxies involved in the reionization process.

### Star formation

We include the formation of Population III (Pop III) stars, metal-free first-generation stars, and Population II (Pop II) stars, which form from gas clouds enriched by metals from the SN explosions of Pop III stars. The transition from Pop III to Pop II star formation occurs when the metallicity of the gas cloud exceeds a critical value of \(Z_{\rm crit}=10^{-5.5}\,{\rm Z}_{\odot}\) (Omukai, 2000; Schneider & Omukai, 2010; Safranek-Shrader et al., 2016). For a detailed description of the star formation recipe and associated stellar feedback, we refer readers to Jeon et al. (2017). In short, star formation is triggered when the hydrogen number density of a gas particle surpasses the threshold value of \(n_{\rm H,th}=100\) cm\({}^{-3}\). The gas particle is then transformed into a collisionless star particle with a mass of \(m_{\rm star}=500\,M_{\odot}\). Instead of treating each star particle as an individual star, we consider it a single stellar cluster. We assume that the initial mass function (IMF) for Pop III stars is top-heavy, \(\phi_{\rm PopIII}(m)=dN/d\log m\approx m^{-\alpha}\), with a slope \(\alpha=1.0\), covering the mass range of \([m_{0},m_{1}]=[10,150]\,M_{\odot}\).
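Since \(dN/d\log m\propto m^{-1}\) implies \(dN/dm\propto m^{-2}\), this top-heavy Pop III IMF can be sampled in closed form by inverting its cumulative distribution; the helper below is a minimal sketch, not part of the authors' code.

```python
import numpy as np

def sample_popIII_masses(n, m0=10.0, m1=150.0, rng=None):
    """Draw n stellar masses [Msun] from the top-heavy Pop III IMF,
    dN/dlog m ~ m^-1 (i.e. dN/dm ~ m^-2) on [m0, m1], by inverting the
    analytic CDF F(m) = (1/m0 - 1/m) / (1/m0 - 1/m1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)
    return 1.0 / (1.0 / m0 - u * (1.0 / m0 - 1.0 / m1))
```

With these bounds, about 80% of the draws land in the 10-40 \(M_{\odot}\) CCSN window and only \(\sim\)0.5% in the 140-150 \(M_{\odot}\) PISN tail.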
For Pop II stars, we adopt the Chabrier IMF within the mass range of \([m_{0},m_{1}]=[0.1,100]\,M_{\odot}\). We implement a stochastic conversion of gas particles to star particles based on Schmidt's law (Schmidt, 1959), where stars are formed at a rate of \(\dot{\rho}_{*}=\rho/\tau_{*}\). Here, \(\rho\) is the gas density, and the star formation timescale, \(\tau_{*}\), is given by \(\tau_{*}=\tau_{\rm ff}/\epsilon_{\rm ff}\), where \(\tau_{\rm ff}\) corresponds to the free-fall time, and \(\epsilon_{\rm ff}\) denotes the star formation efficiency per free-fall time. During each numerical time interval of \(\Delta t\), the conversion from gas to star particle only occurs when a randomly generated number between 0 and 1 is less than the minimum of \(\Delta t/\tau_{*}\) and 1. We set the star formation efficiency, \(\epsilon_{\rm ff}\sim 0.01\), for both Pop III and Pop II stars, and the star formation timescale is given by

\[\tau_{*}=\frac{\tau_{\rm ff}(n_{\rm H})}{\epsilon_{\rm ff}}\sim 400\,{\rm Myr}\left(\frac{n_{\rm H}}{100\ {\rm cm^{-3}}}\right)^{-1/2}, \tag{3}\]

where the free fall time is \(\tau_{\rm ff}=[3\pi/(32G\rho)]^{1/2}\) (a short numerical sketch of this stochastic conversion is given below). The level of star formation activity in galaxies is regulated by various factors, including their mass and the potency of associated SN feedback in suppressing subsequent star formation. This interplay between galaxy mass and the effectiveness of SN feedback can lead to diverse SFHs, largely characterized by either continuous or bursty star formation. It is important to highlight that in this study, we have not considered the impact of photoionization heating from Pop III and Pop II stars, primarily due to the considerable computational cost. The influence of radiative feedback from these local sources remains a topic of ongoing debate. For instance, Hopkins et al. (2020) suggests that the stellar mass of small galaxies (with a halo mass of \(M_{\rm vir}=2-4\times 10^{9}\,M_{\odot}\)) is predominantly shaped by external UVB radiation, while local sources have a negligible effect. Conversely, Agertz et al. (2020) have presented contrasting results, suggesting that photoionization heating from stars could reduce the stellar mass by a factor of 5-10 for dwarf galaxies with a halo mass of \(M_{\rm vir}=10^{9}\,M_{\odot}\).

### Chemical feedback

We account for chemical enrichment through the contribution of winds from asymptotic giant branch (AGB) stars and the explosions of CCSNe and Type Ia SNe, following the implementation described in Wiersma et al. (2009). At each simulation time step, we estimate the masses of 11 individual elements produced by dying stars and release them into the neighboring medium. These elements undergo diffusive mixing in both the interstellar medium (ISM) and the IGM. The initial masses of Pop III stars determine their nucleosynthetic yields and remnant masses. For instance, Pop III stars with initial masses between \(10\,M_{\odot}\) and \(40\,M_{\odot}\) end their lives in core-collapse supernovae (CCSNe), while those with masses between \(140\,M_{\odot}\) and \(260\,M_{\odot}\) end their lives in pair-instability supernovae (PISNe). We adopt the nucleosynthetic yields and remnant masses for CCSNe of Pop III stars from Heger and Woosley (2010) and for PISNe from Heger and Woosley (2002). Pop II stars undergo mass loss through AGB or SN in their final stage. We incorporate metallicity-dependent tables ranging from \(Z=0.0004\) to \(Z=1.0\) (Portinari et al., 1998) to determine the yield and evolution of these stars.
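As promised above, here is a minimal sketch of the stochastic gas-to-star conversion of Section 2.3 (Eq. 3). It assumes a pure-hydrogen density, \(\rho\approx n_{\rm H}m_{\rm H}\), which is why it returns \(\tau_{*}\approx 500\) Myr rather than exactly the quoted \(\sim\)400 Myr at \(n_{\rm H}=100\) cm\({}^{-3}\); the function names are illustrative only.

```python
import numpy as np

G_CGS = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.673e-24        # hydrogen atom mass [g]
SEC_PER_MYR = 3.156e13

def tau_star_myr(n_H, eps_ff=0.01):
    """Star formation timescale tau_* = tau_ff / eps_ff of Eq. (3),
    with tau_ff = sqrt(3 pi / (32 G rho)) and rho ~ n_H * m_H."""
    tau_ff = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * n_H * M_H)) / SEC_PER_MYR
    return tau_ff / eps_ff

def spawns_star(n_H, dt_myr, n_H_th=100.0, rng=None):
    """A gas particle above the density threshold is converted into a
    500 Msun star particle when a uniform draw falls below min(dt/tau_*, 1)."""
    if n_H < n_H_th:
        return False
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform() < min(dt_myr / tau_star_myr(n_H), 1.0)
```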
Intermediate-mass stars (\(0.8\,M_{\odot}\lesssim m_{*}\lesssim 8\,M_{\odot}\)) can lose up to 60% of their mass during the terminal AGB stage, and these yields are taken from Marigo (2001). CCSNe from massive stars (\(m_{*}\gtrsim 8\,M_{\odot}\)) release significant amounts of metals, and Type Ia SNe are expected to occur for stars with masses in the range \(3\,M_{\odot}\lesssim m_{*}\lesssim 8\,M_{\odot}\). Due to uncertainties in the detailed evolution of Type Ia SNe, we use an empirical delay time function expressed in terms of e-folding times (e.g., Barris and Tonry, 2006; Forster et al., 2006). Metals from dying Pop II stars are also transported to the IGM and ISM via diffusion-based mixing, as described by Greif et al. (2009), where ejected metals disperse into neighboring gas particles (\(N_{\rm ngb}=48\)). The initial metallicity of the surrounding gas is given below.

\[Z_{\rm i}=\frac{m_{\rm metal,i}}{m_{\rm SPH}+m_{\rm metal,i}}, \tag{4}\]

where \(m_{\rm metal,i}\) represents the mass of metal assigned to one of the neighboring gas particles, and \(m_{\rm SPH}\) is the mass of a gas particle.

### Thermal feedback

When a star reaches the end of its life and undergoes an SN explosion, we release the energy from the explosion to neighboring gas particles as thermal energy. However, it is well known that an over-cooling problem can arise if the SN energy is deposited onto too much surrounding gas, resulting in the thermal energy being radiated away (e.g., Stinson et al., 2007). To avoid the over-cooling problem associated with SN explosions, we use the method suggested by Dalla Vecchia and Schaye (2012), where a temperature increase of more than \(10^{7.5}\) K is guaranteed by limiting the number of neighboring particles that receive SN thermal energy. To achieve this, we deposit the energy from the SN explosion onto a single neighboring particle (\(N_{\rm ngb}=1\)) to ensure that the effect of the explosion is preserved. The total SN energy per unit solar mass, \(\epsilon_{\rm SN}\), is calculated using the adopted IMF for Pop III and Pop II stars, assuming that each SN releases \(10^{51}\) erg of energy. This is expressed as \(\epsilon_{\rm SN}=n_{\rm SN}\times 10^{51}\) erg, where \(n_{\rm SN}\) is the number of SNe per unit mass. \(n_{\rm SN}\) is obtained by integrating the IMF, \(\phi(m)\), over the mass range from \(m_{0}\) to \(m_{1}\). Here, \(m_{0}=8\,M_{\odot}\) and \(m_{1}=100\,M_{\odot}\) are the minimum and maximum initial mass of stars that can undergo SN, respectively. For Pop III stars, the resultant value is \(\epsilon_{\rm SN,PopIII}=5.56\times 10^{49}\) erg \(\,{M_{\odot}}^{-1}\), while for Pop II stars, it is \(\epsilon_{\rm SN,PopII}=1.73\times 10^{49}\) erg \(\,{M_{\odot}}^{-1}\).

## 3 Results

In this section, we present the results of our simulations, focusing on how patchy reionization affects the SFHs and stellar metallicities of our simulated UFD analogs. Specifically, we compare two scenarios: one with a homogeneous UV background throughout the entire cosmic history until \(z=0\), and the other incorporating patchy reionization effects on the galaxy analogs. Section 3.1 examines the fundamental properties of our simulated UFD analogs with a homogeneous UV background. Section 3.2 investigates the impact of patchy reionization on the SFHs of our simulated galaxies. Section 3.3 analyzes how the duration of star formation varies depending on the transition time, \(z_{\rm t}\), from patchy to homogeneous reionization.
Finally, we compare the physical properties of our simulated UFDs with those of observed UFDs in the MW and discuss the implications of patchy reionization on UFDs in Section 3.4.

### Basic properties (GR)

Figure 2 exhibits the evolution of the simulated UFD analogs using a homogeneous UV background from the initial star formation activities until \(z=0\). The basic properties of the simulated galaxies at \(z=0\) are provided in Table 1. The four panels, arranged clockwise from the upper left, illustrate the mass assembly for the virial and gas masses, the maximum hydrogen number density, the cumulative SFH, and the star formation rate (SFR). All physical quantities in each panel are calculated using the particles found within the virial radius of the halo at a given time. A grey-shaded region in each panel denotes the reionization period, during which a homogeneous UV background is introduced at \(z=7\), gradually ramping up to its full value by \(z=6\). Notably, all the simulated analogs have a virial mass of \(M_{\rm vir}\approx\) 1-2 \(\times 10^{9}\,M_{\odot}\) at \(z=0\). We find that the total gas mass within the halos tends to decrease after the UV background reaches its full strength, and the degree of gas loss due to reionization depends on how massive a halo is when the onset of reionization is initiated (\(z=7\)). As shown in the upper-left panel of Figure 2, four halos (Halo1-GR, Halo2-GR, Halo3-GR, Halo4-GR) attain gas masses of \(\sim 5\times 10^{6}\,M_{\odot}\) at \(z=7\). In contrast, Halo5-GR, the most massive halo at \(z=7\), having a total mass of \(\sim\) 1.6 \(\times\)\(10^{8}\,M_{\odot}\), tends to preserve a gas mass of \(M_{\rm gas}\sim\) 1.3 \(\times\)\(10^{7}\,M_{\odot}\), which is three times larger than those of the other four halos. On the other hand, Halo2-GR loses roughly 90% of its gas mass (\(M_{\rm gas}\sim 2\times 10^{6}\,M_{\odot}\) at \(z=0\)) between \(z=6\) and \(z=0\). In addition to the total gas mass, the process of reionization also disperses dense gas clouds within the halos. The upper-right panel of Figure 2 displays the maximum hydrogen number density of gas particles within each halo, compared to the density threshold for star formation, \(n_{\rm H,th}=100\) cm\({}^{-3}\), represented by a solid gray horizontal line. Gas particles above this density threshold can be transformed into star particles with a mass of \(m_{\rm star}\). The maximum gas densities of all halos decrease significantly by two orders of magnitude from \(z=7\) to \(z=6\). The UV background effectively disperses the dense gas particles, making it difficult to form new stars, resulting in no star formation in all halos below \(z\approx 7\).

Figure 2: The evolution of the simulated UFD analogs in the GR runs. upper-left: the total mass (long-dashed line) and gas mass (short-dashed line) of the UFD analogs. upper-right: the maximum hydrogen number density. lower-left: the star formation rate. lower-right: the cumulative star formation history. The quantities presented in all panels are computed using particles residing within the virial radius of the halos. The grey-shaded regions in each panel denote the period when a homogeneous UV background field is introduced at \(z=7\), with its strength gradually increasing up to the full value by \(z=6\). The evolution of each UFD analog realization is depicted using different colors and line types. Due to the effects of reionization and SN feedback, all simulated galaxies experience truncated star formation. The upper-right panel shows that there is no dense gas above the density threshold, \(n_{\rm H,th}=100\) cm\({}^{-3}\), suggesting that no further star formation has occurred since the epoch of reionization.

Interestingly, Halo2-GR, Halo4-GR, and Halo5-GR can retain the gas particles with \(n_{\rm H}\sim 1-10\) cm\({}^{-3}\) for \(\sim 250\) Myr at \(z\lesssim 6\). This is due to the relatively higher virial masses of these halos, with \(M_{\rm vir}\sim 2.0\times 10^{8}\,M_{\odot}\), \(\sim 1.6\times 10^{8}\,M_{\odot}\), and \(\sim 1.8\times 10^{8}\,M_{\odot}\) for Halo2-GR, Halo4-GR and Halo5-GR, respectively, at \(z=6\). On the other hand, Halo1-GR, which has the lowest mass with \(M_{\rm vir}\sim 8.6\times 10^{7}\,M_{\odot}\) at \(z=6\), shows a significant reduction in the maximum hydrogen number density by five orders of magnitude between \(z=7\) and \(z=6\). This quenching trend by reionization is reflected as a truncated SFH in the bottom-right panel of Figure 2. The SFHs in this panel represent the cumulative fraction of stars formed until a given time among all stars in each halo at \(z=0\). For instance, all simulated halos exhibit a ratio of unity at \(z\sim 7\), implying that all stars in each halo are formed prior to reionization. Our findings are consistent with previous studies (e.g., Brown et al., 2014; Weisz et al., 2014; Jeon et al., 2017), confirming that low-mass progenitor halos of the UFD analogs are vulnerable to internal and external processes, such as SN feedback and reionization, giving rise to short SFHs. It should be emphasized that the simulated galaxies are the result of the merging of multiple progenitor halos, indicating that they have several small-mass progenitors at high redshifts. Figure 3 illustrates the progenitor halos, i.e., the halos in which the stars found within the virial radius at \(z=0\) formed before merging with the primary halo. Only the progenitors that contribute more than 5% of the final stellar mass of the simulated galaxies are shown, with a filled circle representing the primary halo, the most massive halo among the progenitors. The fraction of stellar mass formed by \(z=7\) relative to the final stellar mass of the UFD analogs at \(z=0\) is presented below the halo name in each panel of Figure 3. The percentage is 100% in all panels, which is due to the truncated SFHs at \(z=7\). Our analysis shows that the simulated galaxies in Halo2-GR, Halo3-GR, and Halo6-GR have approximately three progenitors, while the stars in Halo1-GR, Halo4-GR, and Halo5-GR are primarily (\(\gtrsim 75\%\)) formed in a single primary halo. The difference between a multiple-progenitor and a single-progenitor group is attributed to the rate of halo mass growth. The primary halos of the single-progenitor group experience more rapid mass assembly than those of the multiple-progenitor group. For example, during the period of \(z\sim 10-7\), the primary halo of Halo4-GR increases its halo mass from \(M_{\rm vir}\sim 4.8\times 10^{6}\,M_{\odot}\) to \(M_{\rm vir}\sim 9.2\times 10^{7}\,M_{\odot}\), along with an increase in gas mass, resulting in a substantial amount of star formation compared to the relatively less massive progenitor halos. UFDs are known to be systems with low metallicity, with \(\rm[Fe/H]\lesssim-2\) (e.g., Martin et al., 2007; Norris et al., 2010; Kirby et al., 2013; Simon, 2019).
Figure 3: The fraction of stars formed within the progenitor halos of the UFD analog is shown in each panel as a function of halo masses at \(z=7\). Only progenitor halos that have formed stars contributing to more than 5% of the stellar mass of the UFD analog at \(z=0\) are displayed. The value in the upper corner of each panel represents the proportion of stars formed at \(z=7\) compared to the total number of stars formed by \(z=0\), implying that all stars formed by \(z=7\). The primary halo, the most massive halo at a given epoch, is depicted as a filled blue circle. This suggests that the number of progenitor halos contributing to the total stellar mass can vary depending on how fast the primary halo grows.

Due to the combined effects of reionization and SN feedback, which lead to short SFHs, the chances of these systems being enriched are limited. The metal-poor nature of UFD systems is depicted in Figure 4. Within both the upper and bottom panels, the estimated \(\rm[Fe/H]\) values of in-situ stars and externally originated stars are shown by black and grey inverted triangles, respectively, plotted as a function of their formation time. Note that in-situ stars refer to stars that are formed within the primary progenitor halo. We focus especially on two specific cases: in Halo2-GR, stars form in multiple progenitors that merge at a later time, while in Halo4-GR, stars are primarily formed in situ within the primary progenitor halo. In both halos, the [Fe/H] values of all stars show a wide range from [Fe/H] \(\approx-6\) to [Fe/H] \(\approx-2\) (upper panels). However, the metallicity of in-situ stars tends to display a narrow metallicity range of \(-3\lesssim\) [Fe/H] \(\lesssim-2\) (bottom panels). This trend is expected as star formation can proceed steadily in a relatively massive progenitor halo, increasing the metallicity of in-situ stars over time. This is because in the primary halos, the gas is not entirely expelled by the feedback of stars, and even if it is dispersed, it rapidly recollapses, allowing star formation to continue. Consequently, new stars can form with increased metallicity before the metals associated with the gas are diffused. Meanwhile, in relatively low-mass progenitor halos, star formation is more likely to be quenched by SN feedback, making it difficult to form stars with higher metallicities ([Fe/H] \(\gtrsim-2\)). Furthermore, we observe that the formation of extremely low-metallicity stars ([Fe/H] \(\lesssim-5\)) is solely a result of external metal enrichment. This occurs when gas is contaminated by metals from nearby halos, allowing for the formation of low-metallicity stars in the absence of previous metal-free star formation. Although the metallicity trend mentioned above is applicable to both simulations, Halo2-GR and Halo4-GR, the metallicity of Halo4-GR (right panels) increases from [Fe/H] \(\sim-2.9\) to \(-2.4\) and shows a narrower range than Halo2-GR due to the insufficient time for gas enrichment in the halo. This is because star formation begins later in Halo4-GR than in Halo2-GR. Specifically, Halo2-GR starts forming the first Pop II star at \(z=13.55\), whereas in Halo4-GR, the first Pop II star is formed at \(z=8.65\), around 270 Myr later. However, reionization at \(z\sim 7\) halts star formation in both simulations, resulting in a rather short metal enrichment history in Halo4-GR.
Figure 4: The metallicity evolution of stars as a function of their formation time in Halo2-GR (left panel) and Halo4-GR (right panel). All the stars found within the virial radius of the halos at \(z=0\) are shown in the upper panels, while the bottom panels show only the in-situ stars in the primary progenitor of a halo. Among all the stars in the upper panels, we differentiate the in-situ stars by coloring them in black. As depicted in the run Halo2-GR, stars with relatively low metallicity ([Fe/H] \(<-2\)) usually form in progenitor halos that later merge with the primary halo. The evolution of the [Fe/H] in the primary halo shows a rising trend with redshift, as seen in the case of Halo2-GR. At \(z\approx 13\), the value of [Fe/H] is [Fe/H] \(\approx-3\), which increases to [Fe/H] \(\approx-2\) at \(z\approx 6\).

### Patchy reionization effects on the SFHs and stellar metallicity

This section highlights the results of our galaxy simulations that are impacted by the spatially non-uniform reionization process. The intensity of this patchy reionization is ascertained by considering the environmental factors of each simulated galaxy, corresponding to the galaxy analogs listed in Table 1. As explained in Section 2.2, we calculate the impact of patchy reionization by taking into account the stellar mass of the host halo and its distance as a function of cosmic time. The resulting photoionization and photoheating rates, covering the range from \(z=7\) to \(z=0\), are illustrated in Figure 5. Although we consider the effects of PR on H I, He I, and He II, only the results for H I are presented in the upper panels of Figure 5 because we confirm that the effect of PR on the other species exhibits similar trends to that of H I. The black solid line shows the values obtained from the GR runs, while the colored solid lines represent the estimates from the PR runs. The intensity of PR is, on average, two orders of magnitude lower than that of GR up to \(z=3\), indicating that the effect of reionization on the simulated galaxies is relatively weaker when using the PR implementation. The calculated values should be regarded as the lower limit because we do not account for the complete photon flux originating from galaxies beyond the spatial scales we have modeled. Our findings indicate that the distant nature of these galaxies renders their impact insignificant, but the overall degree of reionization increases by a factor of \(\sim 5-10\) by considering galaxies on a larger scale (refer to Appendix B for details). As described in Section 2.2, ionizing sources contributing to reionization include radiation from both star-forming galaxies and quasars, with quasars being the predominant source at lower redshifts compared to star-forming galaxies. Given that we solely consider star-forming galaxies as the ionizing source, it is crucial to improve our PR estimates to account for the impact of quasars at low redshifts. As a result, we employ a hybrid method that merges GR and PR methods, wherein the reionization process transitions from PR to GR at a designated redshift. The lower panels of Figure 5 show the intensity of the combined GR+PR reionization. From this point onward, we will refer to GR+PR reionization as PR, and we have selected the transition time from PR to GR to occur at \(z=5.8\). In the following two sections, we will explore the effects of PR on the SFH and stellar metallicity of the simulated galaxies.
Figure 5: The strength of patchy reionization in terms of hydrogen photoionization rate (left) and photoheating rate (right) exerted by a host galaxy onto a target UFD analog during the period from \(z=7\) to \(z=3\). The values derived from GR and PR runs are represented by solid black and colored lines, respectively. The PR values are, on average, two orders of magnitude lower than those of GR from the onset of the first star formation in UFD analogs (\(z\gtrsim 10\)) until around \(z\sim 3\).

#### 3.2.1 Star formation histories

In Figure 6, we present the resulting SFHs for both GR (left panels) and PR (right panels) implementations by showing the cumulative SFHs (top panels) and the maximum hydrogen number density (bottom panels) for our simulated halos. Furthermore, on the right panels, we overplot the values from the GR runs as grey lines to enable easy comparison.

Figure 6: Cumulative SFHs (top panel) and the maximum hydrogen number density of the gas (bottom panel) of the simulated UFD analogs, comparing the runs using GR (left panel) and PR (right panel) implementations. The evolution of each UFD analog is represented by different colors and line types. The simulations with GR implementation indicate complete cessation of star formation due to reionization, whereas three runs (Halo2-PR, Halo4-PR, and Halo5-PR) using PR exhibit prolonged SFHs compared to those of GR runs. Among the runs adopting PR implementation, Halo5-PR shows the most extended SFH, forming stars until around \(z\sim 4.45\). The late star formation phase due to patchy reionization lasts for 280 Myr, 180 Myr, and 550 Myr in Halo2-PR, Halo4-PR, and Halo5-PR, respectively. The lower right panel indicates that halos with extended SFH tend to retain dense gas particles with a threshold density of \(n_{\rm H,th}=100\) cm\({}^{-3}\), which can facilitate late star formation due to the weaker UV field compared to that of GR.

Similar to the GR runs, where all halos encounter a complete halt of star formation as a consequence of reionization, most halos except Halo5-PR experience a temporary suppression of star formation with the patchy UV background. However, the intensity of patchy reionization is two orders of magnitude weaker than in the GR scenario: unlike the GR cases, where the gas density fails to recover enough for star formation, in the PR cases the gas density bounces back to a level that allows star formation to resume. We refer to this star formation taking place below \(z=7\) in the PR runs as late star formation. We observe that Halo2-PR, Halo4-PR, and Halo5-PR form 60%, 35%, and 80% of the total stellar mass before \(z=7\) and undergo late star formation up to \(z=4.45\) after reionization. Notably, Halo5-PR exhibits a unique characteristic in its SFH, which extends 550 Myr since \(z=7\), significantly longer than the 280 Myr in Halo2-PR and 180 Myr in Halo4-PR (see Table A1). Furthermore, in contrast to Halo2-PR and Halo4-PR, where late star formation commences at \(z\sim 6\) after a temporary quenching period from \(z=7\) to \(z=6\), Halo5-PR manages to sustain star formation even when subjected to a stronger reionization effect than the other two cases, Halo2-PR and Halo4-PR. As depicted in the bottom panels of Figure 6, the maximum hydrogen number density drops to \(n_{\rm H,max}=0.1\)-\(1.0\) cm\({}^{-3}\) in the GR runs as soon as the reionization effect is introduced. On the other hand, in the PR runs, whether complete quenching occurs or late star formation is observed depends on the halo mass at the onset of reionization, corresponding to \(z=7\). For instance, we find that three halos, Halo2-PR, Halo4-PR, and Halo5-PR, with masses of \(M_{\rm vir}\sim 7.6\times 10^{7}\,M_{\odot}\), \(\sim 9.7\times 10^{7}\,M_{\odot}\), and \(\sim 1.6\times 10^{8}\,M_{\odot}\) at \(z=7\), respectively, exhibit late star formation. In contrast, relatively less massive halos, such as Halo1-PR and Halo3-PR, undergo complete quenching. Notably, Halo5-PR is the most massive halo at \(z=7\) among the three halos (Halo2-PR, Halo4-PR, and Halo5-PR). By maintaining high-density gas particles within its virial radius, Halo5-PR can sustain continuous star formation. We find that stars produced during the late star formation period contribute to 40% (Halo2-PR), 65% (Halo4-PR), and 20% (Halo5-PR) of the total stellar mass in each respective run. Interestingly, there seems to be a negative correlation between the duration of late star formation and the fraction of stars formed during that period, with Halo5-PR showing the lowest fraction of late star formation; however, this correlation is not physically meaningful. The fraction of stars formed through late star formation within a given halo is determined by a complex interplay of factors, including the halo mass at the onset of reionization, the total duration of star formation activity, and the burstiness of star formation. It is important to note that the galaxies in Halo2-PR, Halo4-PR, and Halo5-PR attain comparable halo masses at \(z=5\). Nonetheless, the fraction of stars formed in late star formation can differ significantly based on when the halo growth occurs, either before or after reionization. For example, Halo2-PR and Halo4-PR form a substantial amount of stars after reionization, whereas in the case of Halo5-PR, as mentioned earlier, even though the galaxy exhibits the most extended SFH until \(z\sim 4.45\), the majority of its stars (\(\sim 80\%\)) have already formed prior to reionization. Furthermore, while Halo4-PR has a shorter duration of late star formation of \(\sim\)180 Myr, around 100 Myr less than that of Halo2-PR, a larger fraction of stars are formed in the late phase in Halo4-PR (roughly 65%) compared to Halo2-PR, which forms about 40% of its stars below \(z=7\). This difference can be ascribed to the galaxy in Halo4-PR commencing star formation at a later redshift (\(z\approx 8\)) relative to the galaxy in Halo2-PR. Moreover, the galaxy in Halo4-PR undergoes a phase of bursty star formation at \(z=6\), during which stars with a total mass of \(M_{*}\approx 1.1\times 10^{4}\,M_{\odot}\) form within a 10 Myr span. Consequently, the duration of late star formation in Halo4-PR is shorter than that of Halo2-PR, due to the strong stellar feedback from the bursty star formation event that interrupts subsequent star formation. It should be noted that using the PR approach in simulating UFD analogs does not guarantee extended SFHs. Achieving late star formation becomes challenging if the progenitor halo's mass is relatively small, leading to complete quenching despite the weak intensity from PR. For example, as illustrated in Figure 5, the PR intensity applied to the galaxy in Halo3-PR is comparable to that of Halo5-PR between \(z=7\) and \(z=6\).
However, the halo mass of Halo3-PR (\(M_{\rm vir}\sim 7.0\times 10^{7}\,M_{\odot}\)) is 2.3 times less massive than that of Halo5-PR at \(z=7\), which makes it difficult for the galaxy to maintain gas particles able to form stars after reionization. Consequently, the mass of the progenitor halo at the time of onset of reionization is critical for an extended SFH.

#### 3.2.2 Stellar metallicity

To investigate whether the prolonged SFHs resulting from weaker PR intensities are reflected in the stellar metallicity of the simulated galaxies, we present Figure 7. This figure illustrates the stellar metallicity as a function of the formation time of stars located within the virial radius of the simulated halos using PR at \(z=0\). In particular, we compare the [Fe/H] values for all stars (gray and black) within the virial radius of all the simulated galaxies employing PR to those formed internally within the primary progenitor halos (black).

Figure 7: The same as Figure 4, but it displays the metallicity for stars from all PR runs. The blue-shaded region in each panel indicates the period where the patchy reionization effect is applied, and the blue inverted triangles within this region represent the stars formed during the late star formation phase in all PR runs. It is interesting to note that the stars formed during the late phase of star formation are likely to exhibit metallicity levels similar to or even lower than those of stars with the highest metallicity formed prior to reionization.

We find that almost all stars are metal-poor, ranging from \(\rm[Fe/H]=-2\) to \(\rm[Fe/H]=-5\), and the in-situ stars (black) also exhibit a similar metallicity range to that of all stars. We denote the stars formed during late star formation with blue inverted triangles in each panel. The metallicity of these late-forming stars, ranging between \(-3\lesssim\rm[Fe/H]\lesssim-2\), is comparable to that of in-situ stars formed at \(z\sim 7\), just before a patchy UV field is introduced. We observe two distinct characteristics of stars resulting from late star formation. Firstly, all stars originating from the late star formation phase are formed internally within the halo rather than being accreted stars. This occurs because only the primary halo can sustain star formation under the influence of reionization, while less massive progenitors experience a total cessation of star formation due to reionization. Secondly, even though these newly formed in-situ stars originate at relatively lower redshifts, they tend to display metallicities similar to or lower than the peak [Fe/H] of stars formed prior to reionization. This is attributed to the temporary quenching experienced by the simulated galaxies, even with the weak impact of patchy reionization, which causes dense gas and metals to dissipate. Given that the late star formation in Halo2-PR and Halo4-PR commences later than in Halo5-PR, in-situ stars from the late star formation at \(z\sim 6\) in Halo2-PR and Halo4-PR exhibit lower metallicities ([Fe/H] \(\sim\) -2.7) than those in Halo5-PR by approximately 0.6 dex. This implies that the longer it takes for the late star formation to begin, the more the dense gas in the primary progenitor halo disperses. Consequently, it becomes difficult to form high-metallicity stars ([Fe/H] \(\sim-2\)).
Once the late star formation starts, the peak [Fe/H] of in-situ stars increases from [Fe/H] \(=-2.7\) to [Fe/H] \(=-2.2\) over time in both Halo2-PR and Halo4-PR runs, due to the increase in metals from SN explosions occurring within the halo.

In Figure 8, we show the metallicity distribution function (MDF) of the simulated galaxies, where we categorize Pop II stars from each halo, spanning from [Fe/H] \(=-3.9\) to [Fe/H] \(=-1.1\), with intervals of 0.2 dex. Specifically, we compare the MDF of Pop II stars from GR runs, depicted in orange, with those from PR runs, illustrated in blue. First, the MDF of galaxies with a stellar mass larger than \(M_{*}>10^{4}\,M_{\odot}\) at \(z=0\) tends to peak at [Fe/H] \(\sim-2\). Among the three runs with extended SFHs due to patchy reionization (Halo2-PR, Halo4-PR, and Halo5-PR), we observe no significant difference in the MDF except for Halo4-PR. Only in the run Halo4-PR do we find that the MDF shifts as a result of the patchy reionization effect, extending towards a higher metallicity by 0.22 dex. The cause of this minimal change in the MDF, shown in Halo2-PR and Halo5-PR, is that the simulated galaxies experience a temporary quenching due to weak patchy reionization rather than continuously forming stars. Consequently, metals are expelled along with the gas, reducing the gas metallicity when the halo is replenished to form subsequent stars.

Figure 8: The normalized metallicity distribution function of stars is shown for each simulated galaxy, with the range spanning from [Fe/H] \(=-3.9\) to [Fe/H] \(=-1.1\) and an interval of 0.2 dex. The orange (blue) histogram exhibits the simulated halo adopting the GR (PR) implementation. In each halo, the peak of the MDF of stars ranges from [Fe/H] \(=-2.5\) to [Fe/H] \(=-2\). The MDFs of halos with PR implementation, especially Halo2-PR, Halo4-PR, and Halo5-PR, display a shift towards higher metallicity compared to those with GR. This shift is attributed to the extended star formation period in the PR runs, which leads to the formation of stars with higher metallicity.
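The MDF construction just described reduces to a normalized histogram in fixed 0.2 dex bins. A minimal sketch is given below; the input array `feh` (one [Fe/H] value per Pop II star particle) is a hypothetical placeholder, not a data product of the paper.

```python
# A minimal sketch of the MDF binning described above: Pop II stars binned
# in [Fe/H] from -3.9 to -1.1 in 0.2 dex intervals, normalized to unit sum.
import numpy as np

def metallicity_distribution(feh):
    edges = np.linspace(-3.9, -1.1, 15)      # 14 bins of width 0.2 dex
    counts, edges = np.histogram(feh, bins=edges)
    mdf = counts / counts.sum()              # normalized fraction per bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, mdf
```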
### 3.3 Patchy UV background with transition time

In this section, the question we aim to address is how the duration of late star formation and stellar metallicity may change if we adopt a transition time for the PR to GR implementation down to \(z_{\rm t}=5.5\), which is later than the fiducial value of \(z_{\rm t}=5.8\). To investigate this, we have conducted simulations on the same runs as in the PR cases, but with a gradual change in the transition time from \(z_{\rm t}=5.8\) to \(z_{\rm t}=5.5\), in steps of 0.1 in redshift. We explore the effects of a delayed transition time for two primary reasons: (1) there is no consensus on when reionization is completed, and (2) recent observational studies have proposed a late reionization scenario, indicating that reionization may have been completed between \(z=5.5\) and \(z=6\) (e.g., Becker et al., 2015; Choudhury et al., 2015; McGreer et al., 2015; Mesinger et al., 2015). As such, by incorporating a delayed transition time, we are also able to examine the implications of a late reionization scenario.

The properties of the simulated halos with delayed transition times are summarized in Table 1. As explained in Section 3.2, the notation PR-z5.x in the halo name signifies that the transition from PR to GR takes place at \(z=5.x\). For example, in the PR-z5.5 run, the transition from PR to GR occurs at a redshift lower by 0.3 compared to the PR-z5.8 run, resulting in a longer application of the PR effect. It is important to note that all physical quantities in Table 1 are calculated based on the particles within the virial radius of each halo at \(z=3\). We halted all simulations adopting PR at \(z=3\) because no more late star formation is expected in the simulated halos beyond this period, as demonstrated in the GR runs.

Figure 9 presents the SFHs of the simulated halos as a function of the transition time. Notably, we find that in line with the fiducial PR runs, the relatively less massive halos during the onset of reionization (Halo1-PR, Halo3-PR, Halo6-PR) still fail to achieve sufficient density for star formation to take place, even when the patchy reionization period is prolonged by postponing the transition time.

Figure 9: The cumulative SFHs of the simulated galaxies are shown as the transition time decreases from \(z_{\rm t}=5.8\) to \(z_{\rm t}=5.5\). As the completion of reionization is delayed, the halos tend to form more stars through late star formation, and the duration of the late star formation generally increases. Consequently, the stars produced by late star formation contribute to an increase in stellar mass. For example, when the transition time is changed from \(z_{\rm t}=5.8\) to \(z_{\rm t}=5.5\), the stellar mass of Halo2-PR and Halo4-PR increases by up to 31% and 63%, respectively. Furthermore, our findings suggest that the extent of late star formation in the simulated galaxies is not only dependent on the transition time from PR to GR implementation but also on the nature of the late star formation, whether it occurs in episodic bursts or as a continuous process.

Meanwhile, the impact of the late transition time is evident in the runs Halo2-PR, Halo4-PR, and Halo5-PR as follows. As expected, when the period of patchy reionization is extended by delaying the transition from PR to GR, the simulated galaxies tend to form more stars through late star formation. For example, with the transition time of \(z_{\rm t}=5.8\) (\(z_{\rm t}=5.5\)), Halo2-PR in Figure 9 accounts for 40% (50%) of stars from late star formation, so more stars are formed in Halo2 by late reionization. Consequently, the stars produced by late star formation contribute to an increase in stellar mass. For instance, when adopting \(z_{\rm t}=5.5\), the stellar mass of Halo2-PR increases by up to 31%, compared to that of Halo2-PR-z5.8 (refer to Table 1).

We observe a slight correlation between the transition time and the duration of late star formation. As indicated in Table 1, by postponing the transition time from \(z_{\rm t}=5.8\) to \(z_{\rm t}=5.5\), the duration of late star formation in Halo4-PR can be extended by approximately 240 Myr, while Halo2-PR experiences a minimal change in duration, at around 15 Myr. Interestingly, the duration of late star formation and the resulting stellar masses are not always proportional, as evidenced by the weak correlation in Halo4-PR. To be specific, the stellar mass of Halo4-PR-z5.6 is similar to that of Halo4-PR-z5.7, with a difference in halo mass of less than 3%. However, Halo4-PR-z5.6 has a late star formation duration that is shorter than Halo4-PR-z5.7 by approximately 140 Myr. This difference is ascribed to whether the galaxy undergoes bursty or continuous star formation. In Halo4-PR-z5.6, stars with a total stellar mass of \(M_{*}=1.2\times 10^{4}\,M_{\odot}\) are formed during a short period of 4.4 Myr, causing the galaxy to experience a more episodic and bursty star formation compared to Halo4-PR-z5.7. Stellar feedback from such bursty star formation significantly reduces the gas density by two orders of magnitude, effectively suppressing further star formation. Overall, the extent of the late star formation of the simulated galaxies depends not only on the transition time from PR to GR implementation but also on the nature of their late star formation, whether occurring in episodic bursts or as a continuous process.
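To put the transition redshifts into perspective, the sketch below converts each \(z_{\rm t}\) into the extra cosmic time during which the weak patchy UV field remains in place relative to the fiducial run. The Planck15 cosmology is an assumption for illustration and may differ from the cosmology adopted in the simulations.

```python
# Illustrative conversion of the PR-to-GR transition redshifts into the
# additional time (relative to the fiducial z_t = 5.8) during which the
# weaker patchy UV field is applied. Assumes a Planck15 cosmology.
from astropy.cosmology import Planck15 as cosmo

z_fiducial = 5.8
for z_t in (5.8, 5.7, 5.6, 5.5):
    extra_myr = (cosmo.age(z_t) - cosmo.age(z_fiducial)).to_value("Myr")
    print(f"z_t = {z_t}: weak PR field applied {extra_myr:.0f} Myr longer")
```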
To examine the primary factor shaping the characteristics of SFHs, we carry out additional simulations employing various star formation random seeds. Our results reveal that the trend, in which a delayed transition results in a more prolonged SFH, remains consistent, albeit with slight variations in the duration of late star formation owing to the stochastic nature of star formation.

Since Halo2-PR and Halo4-PR have similar masses at the onset of reionization, it is useful to compare their evolution while keeping the transition time constant, given that the intensity of patchy reionization in Halo4-PR is one-third that of Halo2-PR. Our comparison reveals that when \(z_{\rm t}=5.5\) is employed, the SFH of Halo4-PR extends for 90 Myr longer than that of Halo2-PR. Furthermore, it would be meaningful to consider Halo2-PR and Halo4-PR as non-Magellanic and Magellanic UFDs, respectively, as proposed by Sacchi et al. (2021). Assuming that Magellanic UFDs may have been situated farther from the host halo during reionization, the strength of reionization could be weaker than that applied to non-Magellanic satellites that had already entered the host environment at that time. Consequently, Magellanic systems could be expected to exhibit longer SFHs, similar to Halo4-PR.

To explore how the stellar metallicity may vary based on the transition time, we compare the MDF of Pop II stars in Halo4-PR-z5.8 (filled cyan) and Halo4-PR-z5.5 (blue) in Figure 10. Notably, it is evident that the relatively high metallicity stars with \(\rm[Fe/H]\gtrsim-2.3\) are formed only in the Halo4-PR-z5.5 run during late star formation, resulting in a median metallicity of \(\rm[Fe/H]=-2.24\), which is greater than the median value of \(\rm[Fe/H]=-2.51\) shown in Halo4-PR-z5.8. As discussed in Section 3.2.2, if the late star formation resulting from weak patchy reionization is only brief, it may be difficult to discern a significant difference in the subsequent stellar metallicity. This is because metals are expelled alongside the gas during the short quenching period, leading to the formation of stars with metallicities that are either lower or similar to those formed prior to reionization. On the other hand, if the duration of late star formation is prolonged by postponing the transition time, the patchy reionization effect becomes more apparent in the emerging stellar metallicity.

### 3.4 Comparison with observations and theoretical work

So far, we have demonstrated the impact of patchy reionization on the SFHs and stellar metallicity of UFD galaxy analogs while taking into account environmental conditions. In this section, we will compare our simulated UFD analogs, which incorporate a transition redshift of \(z_{\rm t}=5.5\), with observed UFD satellites in the MW. Additionally, we will compare our findings with other theoretical studies that have investigated the metallicity of UFDs. The objective is to provide novel insights into star formation, stellar metallicity, and the galactocentric distance of observed UFD satellites through the implementation of our patchy reionization model.
#### 3.4.1 Star formation histories of the observed UFDs

Figure 11 depicts a comparison of the cumulative fractional SFH obtained from simulated galaxies with those of Magellanic UFDs, including Horologium I, Phoenix II, and Reticulum II, along with their averaged SFH denoted as MC mean (Sacchi et al., 2021). We have specifically selected the simulation runs that incorporate patchy reionization with a transition time of \(z_{\rm t}=5.5\), as these runs demonstrate the most extended SFHs.

Figure 10: The normalized MDF of stars from Halo4-PR is shown to compare the effect of transition time, with the cyan-filled MDF corresponding to \(z_{\rm t}=5.8\), and the blue MDF representing the run adopting \(z_{\rm t}=5.5\). As the transition time is moved to lower redshift, the fraction of late star formation in Halo4-PR increases from 65% to 80%. This contributes to the shift towards higher metallicity in the MDF, as previously discussed in Figure 8. For Halo4-PR, we have found that delaying the transition time from \(z_{\rm t}=5.8\) to \(z_{\rm t}=5.5\) results in a mean stellar metallicity increase of 0.2 dex.

Based on the reconstructed SFHs of observed Magellanic UFDs, it is likely that they formed a significant proportion of stars (mean value of \(\sim 80\%\)) before \(z=7\), while the simulated galaxies tend to display a broader range in terms of the fraction of stars formed prior to \(z=7\), which spans from 20% to 70%. In addition, the observed Magellanic UFDs are expected to continue forming the remaining 20% of stars for a more extended duration, up to a redshift of \(z=1.2\), while the simulated galaxies complete their star formation by \(z\approx 3.56\).

Table 2 shows a summary of the differences between the simulated galaxies and observed Magellanic UFDs in terms of the quenching time, including \(\tau_{50}\) and \(\tau_{90}\), which represent the time required to form 50% and 90% of the final stellar mass (e.g., Weisz et al., 2019), and SF\({}_{\rm end}\), which indicates the point at which star formation is entirely quenched; all three are quoted as lookback times. The table reveals that the quenching time, \(\tau_{90}\), is similar for Magellanic UFDs and PR-z5.5 runs with \(\tau_{90}=12.68\) Gyr ago and \(\tau_{90}=12.55\) Gyr ago, respectively, implying that 90% of stars are formed by \(z\approx 5\) in both cases. However, the timing of the most rapid star formation is different between the two, with Magellanic UFDs forming stars early, represented as \(\tau_{50}=13.5\) Gyr ago, while it occurs relatively late, showing \(\tau_{50}=12.7\) Gyr ago for the PR-z5.5 runs. As discussed in Section 3.2.1, the fraction of stars formed before cosmic reionization is greater when the halo mass is higher, particularly in the early stages of its evolution, which is evident in the case of Halo 5. Conversely, for smaller halo masses during the early stages of evolution, such as Halo 2 and Halo 4, the impact of patchy reionization is more significant. The implication is that patchy reionization has the potential to enable galaxies that would have otherwise experienced complete quenching at \(z\sim 7\) with the GR model to form more stars instead. This, in turn, leads to a smaller value of \(\tau_{50}\).

Magellanic UFDs exhibit a more prolonged star formation period compared to the PR runs. For instance, Magellanic UFDs with SF\({}_{\rm end}=8.5\) Gyr ago have SFHs that are extended by at least 3 Gyr compared to those of the simulated galaxies. The mean value for SF\({}_{\rm end}\) of the PR runs with \(z_{\rm t}=5.5\) is SF\({}_{\rm end}=12.5\) Gyr ago.
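The quenching-time statistics used above follow directly from the cumulative stellar mass growth of a galaxy. A minimal sketch is given below; the input arrays are hypothetical placeholders, not the paper's analysis code.

```python
# A minimal sketch of the quenching-time statistics: tau_50 and tau_90 are
# the lookback times by which 50% and 90% of the final stellar mass has
# formed, and SF_end is the lookback time of the last star formation event.
import numpy as np

def quenching_times(t_form_lookback, m_star):
    """t_form_lookback: formation lookback times (Gyr) of star particles."""
    t = np.asarray(t_form_lookback)
    m = np.asarray(m_star)
    order = np.argsort(t)[::-1]          # oldest (largest lookback time) first
    t_sorted, m_sorted = t[order], m[order]
    frac = np.cumsum(m_sorted) / m_sorted.sum()
    tau50 = t_sorted[np.searchsorted(frac, 0.5)]
    tau90 = t_sorted[np.searchsorted(frac, 0.9)]
    sf_end = t_sorted[-1]                # youngest star marks the end of SF
    return tau50, tau90, sf_end
```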
It is possible that the difference in SF\({}_{\rm end}\) between Magellanic UFDs and the simulated galaxies could be due to weaker reionization experienced by the former from the surrounding galaxies. This suggests that the strength of patchy reionization applied in our PR-z5.5 runs might be greater than what was experienced by the Magellanic UFDs. Furthermore, as discussed in Section 3.2.1, comparing Halo2 and Halo4, the duration of star formation can also depend on whether the star formation is continuous or bursty while patchy reionization is taking place, even for halos of the same mass. In other words, the more bursty the star formation, the greater the impact of the powerful SN explosion effect, resulting in complete quenching and, consequently, a shorter duration of star formation.

\begin{table} \begin{tabular}{c c c c} \hline \hline Galaxy & \(\tau_{50}\) (Gyr ago) & \(\tau_{90}\) (Gyr ago) & SF\({}_{\rm end}\) (Gyr ago) \\ \hline non-MC mean & 13.40 \(\pm\) 0.06 & 12.68 \(\pm\) 0.23 & 11.5 \\ MC mean & 13.49 \(\pm\) 0.09 & 12.06 \(\pm\) 0.72 & 8.5 \\ \hline GR mean & 13.15 & 13.01 & 12.96 \\ PR-z5.5 mean & 12.82 & 12.55 & 12.28 \\ \hline \end{tabular} Note. – Column (1) specifies the names of each UFD group. Columns (2), (3), and (4) represent the times at which UFDs reach 50%, 90%, and 100% of their final stellar mass, respectively. \end{table} Table 2: The average quenching times and completion times of star formation in ultra-faint dwarf galaxies (UFDs)

Figure 11: Cumulative fractional SFHs of the simulated UFD analogs, alongside those of observed MW UFD satellites, represented by different colors. The PR runs are distinguished by colored lines if they exhibit extended SFHs, while gray is used for PR runs without extended SFHs. Here, we focus on the cumulative SFHs of Magellanic UFDs, which are believed to have recently entered the MW environment and are, therefore, suitable samples for studying the effects of patchy reionization. Compared to our simulations, the observed UFDs tend to have formed a majority of their stars (\(\sim 90\%\)) prior to reionization, but they also exhibit extended star formation that continues for at least 3 Gyr longer than what is found in our simulated halos.

Although our simulations may not perfectly replicate the extent of star formation in the observed Magellanic UFDs, the fact that patchy reionization can extend the duration of star formation to a greater extent than homogeneous reionization is in line with the findings of Sacchi et al. (2021). For instance, the quenching times of the simulated UFDs with extended SFHs (Halo2-PR, Halo4-PR, and Halo5-PR), on average, are \(\tau_{90}=12.55\) Gyr ago, compared to \(\tau_{90}=13.01\) Gyr ago for the runs with homogeneous reionization, resulting in a difference of 460 Myr. The observed difference in quenching time between non-Magellanic and Magellanic UFDs, approximately 600 Myr, aligns with our findings and lends support to theoretical models suggesting that the SFHs of MW satellite galaxies may exhibit hints of patchy reionization during the early Universe. In summary, our simulation results suggest that the extended SFHs of Magellanic UFDs, as reconstructed through a color-magnitude diagram, may be attributed to a non-uniform reionization effect or halo mass.
Galaxies within massive halos can overcome the suppression of star formation by reionization, leading to longer star formation duration. However, given the negligible difference in stellar masses between Magellanic and non-Magellanic UFDs, the prolonged SFHs of Magellanic UFDs are more likely due to patchy reionization effects. The fact that approximately 90% of the stellar mass of Magellanic UFDs is estimated to have formed before reionization, and the remaining 10% during the period of patchy reionization, suggests that the intensity of patchy reionization was moderate, and star formation was not bursty. Had star formation been bursty, the dense gas would have been dissipated by the powerful SN feedback, leading to shortened SFHs. Alternatively, it could be interpreted that the transition from patchy to global reionization occurred at a late time. However, such a scenario would result in higher stellar masses than those observed in Magellanic UFDs.

#### 3.4.2 Stellar metallicity

Figure 12 presents a comparison of the estimated stellar mass and averaged [Fe/H] relation of our simulated UFDs, indicated both by blue and red filled stars, with those from other hydrodynamic simulations (Jeon et al., 2017; Wheeler et al., 2019; Agertz et al., 2020) and observations (Kirby et al., 2013; Sacchi et al., 2021). Our simulation results show that the estimated [Fe/H] values range from \(-3\lesssim\) [Fe/H] \(\lesssim-2\) for stellar masses between \(M_{*}=10^{4}\,M_{\odot}\) and \(M_{*}=10^{5}\,M_{\odot}\). We find that the averaged [Fe/H] value of the runs with homogeneous reionization, \(<\) [Fe/H] \(>\)\(=-2.64\), is lower by 0.32 dex than that of the cases with patchy reionization. It is important to note that only three runs (Halo2-PR, Halo4-PR, Halo5-PR) are considered for the patchy reionization cases since only these runs exhibit the impact of patchy reionization.

Figure 12: The relationship between the stellar mass and the mean [Fe/H] of the simulated UFD analogs, in comparison with observations (Kirby et al., 2013; Sacchi et al., 2021) and other theoretical studies (Jeon et al., 2017; Wheeler et al., 2019; Agertz et al., 2020). The PR runs show that the metallicities of our simulated UFDs tend to agree well with those of observed MW satellites in the stellar mass range of \(10^{4}\,M_{\odot}\lesssim M_{*}\lesssim 10^{5}\,M_{\odot}\). However, reproducing the metallicity plateau in the lowest mass galaxy regime (\(M_{*}<10^{4}\,M_{\odot}\)), where \(-3\lesssim\) [Fe/H] \(\lesssim-2\), remains challenging for our simulations. While patchy reionization can contribute to increasing stellar metallicities, it also pushes them to the higher stellar mass regime (\(M_{*}\gtrsim 10^{4}\,M_{\odot}\)).

Theoretical studies tend to predict lower [Fe/H] values for UFDs with stellar masses in the range of \(M_{*}\lesssim 2\times 10^{4}\,M_{\odot}\) compared to the observed values. In order to bridge the gap between observation and theory, several theoretical studies have been conducted. For example, Wheeler et al. (2019) suggested that this discrepancy in metallicity might be due to the neglect of Pop III or a lack of environmental pre-enrichment. In addition, Jaacks et al. (2019) demonstrated that pre-enrichment by Pop III is insufficient to raise the metallicity floor above [Fe/H] \(=-4\), particularly in low-density regions. On the other hand, Applebaum et al. (2021), who adopted the same metallicity floor, successfully reproduced considerably more metal-enriched galaxies than Wheeler et al. (2019).
Furthermore, strong feedback from SNe may also contribute to the low metallicity, as Agertz et al. (2020) showed that SNe feedback could expel enriched gas out of the galaxy, thereby reducing the metallicity of the gas. Our simulation results are also in line with other theoretical works, given that the predicted [Fe/H] values are lower than what is observed. For example, the estimated [Fe/H] for the smallest galaxy in our simulations, with \(M_{*}\approx 10^{4}\,M_{\odot}\), is around [Fe/H] \(=-2.8\), which is on average 0.5 dex lower than the observed values for UFDs with similar stellar masses. While implementing patchy reionization in our simulations can increase the stellar metallicities due to the formation of relatively high-metallicity stars during late star formation, it also leads to an increase in the stellar masses.

In Figure 13, we compare the MDF of our simulated UFDs, shown as a blue histogram, with that of the observed UFDs, specifically Reticulum II and Canes Venatici I. These observed UFDs have estimated stellar masses of \(M_{*}\sim 10^{3}\,M_{\odot}\) and \(M_{*}\sim 2\times 10^{5}\,M_{\odot}\), respectively. The metallicity measurements for Reticulum II and Canes Venatici I are provided by the Stellar Abundances for Galactic Archaeology (SAGA) database (Suda et al., 2008). Both the simulated UFDs and Reticulum II consist mainly of metal-poor stars, spanning a metallicity range from [Fe/H] \(=-3.2\) to [Fe/H] \(=-1.7\). Despite the substantial difference in stellar mass between Reticulum II and the simulated galaxies (with a mean stellar mass of \(<M_{*}>\ =\ 7.6\times 10^{4}\,M_{\odot}\)), the mean metallicity of member stars in Reticulum II (\(<\) [Fe/H] \(>\ =-2.46\)) is comparable to that of the simulated galaxies (\(<\) [Fe/H] \(>\ =-2.32\)).

Figure 13: The MDF of our simulated UFDs (shown in blue) in the PR runs adopting \(z_{\rm t}=5.5\) is compared with that of Reticulum II (gray), one of the Magellanic UFD satellites, and Canes Venatici I (brown), in the form of a normalized histogram. Despite Reticulum II having a stellar mass lower by 1-2 orders of magnitude than the simulated galaxies (with \(<M_{*}>\ =\ 7.6\times 10^{4}\,M_{\odot}\)), its mean stellar metallicity (\(<\) [Fe/H] \(>\ =-2.46\)) is similar to that of the simulated galaxies (\(<\) [Fe/H] \(>\ =-2.32\)). On the other hand, Canes Venatici I, which is about twice as massive as the simulated galaxy (blue), likely exhibits a wide range of metallicities, including stars with relatively high metallicity at [Fe/H] \(=-1\). Achieving such high metallicity stars is a challenging aspect of our study.

As illustrated in Figure 11, Reticulum II, which is one of the Magellanic UFDs expected to show the effect of patchy reionization, is likely to have formed the majority of its stars (\(\sim 90\%\)) before reionization, with only 10% of its stars forming during the late star formation phase. On the contrary, Canes Venatici I, which is approximately twice as massive as the simulated galaxy, is likely to display a wide range of metallicities, including stars with relatively high metallicity at [Fe/H] \(=-1.0\). The formation of relatively high-metallicity stars ([Fe/H] \(\sim-1.0\)) may not always occur at a later epoch when the global cosmic enrichment is achieved. Instead, during the rapid assembly process, even before reionization, high-metallicity stars can form from gas that has not had enough time to mix or diffuse into the surrounding gas. Note, however, that such a scenario is not demonstrated in our simulations. Furthermore, as demonstrated in Figure 7, we find that stars formed during the late star formation phase tend to have metallicities similar to or lower than those formed before reionization. This is because even a weak patchy reionization can temporarily suppress star formation between \(z=7-6\), during which metals in the dense gas are dispersed by reionization, resulting in subsequent stars forming with lower metallicity.
#### 3.4.3 Orbital histories of the observed UFDs

In order to investigate how far the simulated galaxy was from its host halo at the time of reionization, we show the average distance of the simulated galaxy with respect to its host halo at \(z=7\) and \(z=6\), marked as pentagon symbols in Figure 14. We compare these distances with the orbital histories of Magellanic UFD satellites and the LMC itself over the past 6 Gyr. The solid lines in Figure 14 represent the reconstructed orbital histories of selected Magellanic satellites (Horologium I, Phoenix II, and Reticulum II), derived from Gaia DR2 proper motion measurements, by taking into account the combined gravitational effects of the MW, LMC, and SMC (Patel et al., 2020). These results suggest that the LMC is currently on a first infall, long-period orbit and that its recent pericenter occurred 50 Myr ago, assuming an MW mass of \(M_{\rm vir}\lesssim 1.5\times 10^{12}\,M_{\odot}\). Moreover, the farthest distance from the MW over the last 6 Gyr is around 360 kpc. It should be noted that the unit of length used is physical rather than comoving.

To make a more precise comparison with our simulations, it would be useful to have knowledge of the distances between the LMC and its associated satellites and the MW progenitor halo at earlier times than 6 Gyr. However, this is challenging as it requires a sophisticated orbital model that considers the mass evolution of MW-like galaxies, which have acquired roughly 80% of their mass by 6 Gyr (e.g., Santistevan et al., 2020). Moreover, according to Patel et al. (2017), results obtained from further orbit integration earlier than 6 Gyr may deviate from the predictions made by cosmological simulations. Despite this, we can anticipate that the LMC and its satellites were located far away during reionization since they appear to be currently undergoing their first infall (e.g., Busha et al., 2011; Patel et al., 2017, 2020). In particular, Patel et al. (2017) studied LMC analogs in Illustris simulations (e.g., Vogelsberger et al., 2014; Nelson et al., 2015) and calculated the cross time, which refers to the lookback time when the LMC initially crossed the physical \(z=0\) virial radius of the MW while moving inward. Their analysis revealed that 40% of LMC analogs had a cross time of less than 2 Gyr ago, while about 20% had a crossing time of less than 4 Gyr ago. These results suggest that the LMC experienced its first infall relatively recently and was located far away from the MW progenitor halo at earlier times.
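The relevance of galactocentric distance comes down to simple geometric dilution of the host's ionizing output. The toy sketch below illustrates the scaling; the photon output `Q_ion` is a made-up placeholder, not a value from the paper.

```python
# A toy, order-of-magnitude sketch of why distance matters: the local
# ionizing flux seen by a UFD falls off as 1/d^2 from the host galaxy.
# Q_ion (ionizing photons per second) is an assumed placeholder value.
import numpy as np

KPC_CM = 3.0857e21                     # cm per kpc

def local_ionizing_flux(Q_ion, d_kpc):
    """Photon flux [photons s^-1 cm^-2] at distance d_kpc from a source
    emitting Q_ion ionizing photons per second."""
    d_cm = d_kpc * KPC_CM
    return Q_ion / (4.0 * np.pi * d_cm**2)

# Doubling the distance (e.g., 200 kpc -> 400 kpc) weakens the local UV
# field by a factor of four:
print(local_ionizing_flux(1e53, 200.0) / local_ionizing_flux(1e53, 400.0))  # -> 4.0
```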
Based on the reconstructed orbital histories of Magellanic systems, there is a possibility that the UV intensity field used in our simulated galaxy is underestimated when compared to what the LMC satellites received. This is because the UV field in our simulation is mainly attributed to the MW-like host, and the contribution of the LMC is not adequately taken into account. If the LMC satellites were long-term satellites that were captured early on, their proximity to the LMC would have made the UV intensity from the LMC potentially significant. Patel et al. (2020) classified the satellites associated with the LMC into two categories based on the number of passages around the LMC, which depends on the masses of the MW and LMC and the inclusion of the SMC's contribution. Reticulum II and Phoenix II are believed to be recently captured systems as they completed one bound orbit around the MCs in the last 1 Gyr, while Horologium I is classified as a long-term satellite as it completed multiple passages around the MCs in the last 6 Gyr. Considering the above information, it is possible that the local UV field intensity by the LMC may not have had a significant effect on the recently captured LMC satellites because the LMC was relatively more distant at the time of reionization. However, in the case of long-term satellites like Horologium I, the LMC might have contributed to quenching star formation due to its own potentially significant UV intensity.

Figure 14: The direct orbits of the Magellanic UFD satellites and the LMC from the center of the MW, reconstructed using Gaia's proper motion measurements (Patel et al., 2020). For comparison, the red pentagon represents the average distance of the simulated UFD analogs from the host halo at \(z=7\), while the blue pentagon represents the average distance at \(z=6\). Due to the lack of orbital histories of the Magellanic UFDs prior to 6 Gyr ago, it is difficult to determine the location of the observed UFDs during reionization. However, if the Magellanic UFDs were distant from the MW (\(d>400\) kpc) at the end of reionization, they may have been exposed to a weaker local UV field from the MW progenitor, which could have allowed for star formation to continue after reionization.

## 4 Summary and Conclusions

We have conducted a study to examine how patchy reionization affects the star formation histories (SFHs) and stellar metallicity of ultra-faint dwarfs (UFDs) by utilizing a set of cosmological hydrodynamic zoom-in simulations. Our study is motivated by recent findings by Sacchi et al. (2021), who proposed that the contrasting SFHs between long-established Milky Way (MW) UFDs and recently entered ones could be attributed to the influence of patchy reionization. Specifically, the extended SFH observed in the Magellanic UFD satellites, which are estimated to have entered the MW's potential about \(\sim\)3.5 Gyr ago based on their orbital histories reconstructed using Gaia proper motion measurements, is thought to be a result of environmental factors, such as a weaker UV field during the reionization epoch compared to that experienced by the long-established MW satellites.

Although patchy reionization could play a crucial role in shaping the formation and evolution of UFDs, the most common implementation of the reionization effect is simplistic, applying a uniform and instantaneous global heating to all galaxies within the simulation box without accounting for the relative distances between a target UFD and its surrounding galaxies. We employ a novel method to account for the impact of local UV fields on a target UFD analog by surrounding galaxies, particularly its host galaxy. This method involves three steps to calculate the effects of patchy reionization. First, we conduct dark-matter-only simulations to identify a target UFD analog halo and track its distance from surrounding halos, as well as their halo masses. Second, we use the abundance-matching technique to determine the stellar masses of the surrounding halos. Third, we obtain synthetic galaxy spectra for the surrounding galaxies from Starburst99 to derive flux and subsequently calculate the photoionization and photoheating rates for hydrogen and helium atoms. We then perform cosmological hydrodynamic zoom-in simulations using these rates, which are provided in a table as a function of redshift.
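The third step reduces, schematically, to an integral of the incident spectrum against the photoionization cross-section. The sketch below illustrates this for hydrogen with an assumed power-law spectrum standing in for the Starburst99 output and the standard hydrogenic cross-section approximation; it is an illustration of the idea, not the paper's pipeline.

```python
# A schematic sketch of step three: turning an (assumed) incident spectrum
# into an HI photoionization rate. The power-law F_nu stands in for the
# Starburst99 output; sigma ~ nu^-3 is the standard hydrogenic approximation.
import numpy as np

H_PLANCK = 6.626e-27        # erg s
NU_0 = 3.29e15              # Hz, HI ionization edge (13.6 eV)
SIGMA_0 = 6.3e-18           # cm^2, HI cross-section at the edge

def gamma_HI(F_nu0, alpha=1.5):
    """Photoionization rate [s^-1] for an incident specific flux
    F_nu = F_nu0 * (nu/nu_0)^-alpha [erg s^-1 cm^-2 Hz^-1] above the edge."""
    nu = np.logspace(np.log10(NU_0), np.log10(100 * NU_0), 2000)
    F_nu = F_nu0 * (nu / NU_0) ** (-alpha)
    sigma = SIGMA_0 * (nu / NU_0) ** -3
    return np.trapz(F_nu * sigma / (H_PLANCK * nu), nu)
```

Rates of this kind, tabulated as a function of redshift, are what the hydrodynamic runs then interpolate during the simulation.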
To clarify, our study involves a series of simulations that compare two models: one that employs homogeneous global reionization (GR) and another that uses a hybrid approach. The hybrid approach starts with patchy reionization and then switches to the global reionization implementation below a transition redshift \(z_{\rm t}\) (referred to as PR). Our main findings are summarized as follows.

* Global reionization effect
  * We confirm that reionization has a significant effect in suppressing star formation in the simulated UFDs. This is evidenced by the significant decrease in the maximum gas densities of all halos by two orders of magnitude from \(z=7\) to \(z=6\), which hinders the formation of new stars and results in no star formation in all halos below \(z\approx 7\).
  * The amount of gas loss due to reionization depends on the halo mass at the onset of reionization. For instance, Halo5-GR, the most massive halo at \(z=7\), retains a gas mass of \(M_{\rm gas}\sim 1.3\,\times\,10^{7}\,M_{\odot}\), which is three times larger than that of the other halos, while Halo2-GR loses approximately 90% of its gas mass (\(M_{\rm gas}\sim 2\times 10^{6}\,M_{\odot}\) at \(z=0\)) between \(z=6\) and \(z=0\).
  * Our results indicate that stars with higher metallicities (\([{\rm Fe}/{\rm H}]\gtrsim-2\)) are challenging to form due to the combined effects of supernova feedback and reionization. Moreover, we find that stars with extremely low metallicities (\([{\rm Fe}/{\rm H}]\lesssim-5\)) are formed through external metal enrichment.
* Patchy reionization effect
  * Most halos experience a temporary halt of star formation due to the patchy UV field during the PR runs, except for Halo5-PR. However, the reduced intensity of patchy reionization, which is about two orders of magnitude lower than that in the GR scenario, allows the gas density to recover to a level that enables star formation to resume in some PR cases.
  * The occurrence of complete quenching or late star formation depends on the halo mass at the onset of reionization at \(z=7\). For example, halos with higher masses, such as Halo2-PR, Halo4-PR, and Halo5-PR, exhibit late star formation, while less massive halos, like Halo1-PR and Halo3-PR, experience complete quenching.
  * The simulated halos, Halo2-PR, Halo4-PR, and Halo5-PR, undergo late star formation and form a significant portion of their total stellar mass before \(z=7\). Specifically, they form 60%, 35%, and 80% of the total stellar mass, respectively. Halo5-PR is unique in having a significantly longer star formation history, extending 550 Myr since \(z=7\), compared to 280 Myr in Halo2-PR and 180 Myr in Halo4-PR.
  * Multiple factors, such as the halo mass at the time of reionization, the duration of star formation, and the degree of star formation burstiness, influence the fraction of stars formed during late star formation.
  * The in-situ stars formed through late star formation tend to have metallicities similar to or lower than the peak [Fe/H] of stars formed before reionization despite being formed at lower redshifts. This is due to the temporary quenching experienced by the simulated galaxies under the influence of a weak patchy reionization effect, which causes the dissipation of dense gas and the loss of associated metals.
  * Our simulations indicate that by delaying the transition from PR to GR from \(z_{\rm t}=5.8\) to \(z_{\rm t}=5.5\), the SFHs are more extended, with an average extension of 235 Myr. This prolonged period of star formation results in an increase in stellar mass, with a 63% increase observed in Halo4-PR. Moreover, we find that the average metallicity of the UFD increases proportionally to the fraction of late star formation, causing the metallicity distribution function to shift toward higher metallicity by 0.2 dex.
* Comparison with observations
  * We find that \(\tau_{90}\) is similar between Magellanic UFDs and PR-z5.5 runs, indicating that 90% of stars are formed by \(z\approx 5\) in both cases. However, the shape of the SFHs differs between the two, with Magellanic UFDs forming stars earlier than the PR-z5.5 runs, with \(\tau_{50}\) occurring 13.5 Gyr ago and 12.7 Gyr ago, respectively.
  * Our simulations are consistent with the idea proposed by Sacchi et al. (2021) that patchy reionization could substantially extend the duration of star formation in UFDs. The quenching times of UFD analogs with extended SFHs in our PR runs adopting \(z_{\rm t}=5.5\) are 460 Myr later than those in the GR runs, similar to the 600 Myr more recent quenching times of Magellanic UFDs compared to non-Magellanic UFDs.
  * Our simulations do not perfectly reproduce the absolute duration of late star formation observed in Magellanic UFDs. As such, the observed Magellanic UFDs exhibit a longer period of star formation, with star formation cessation (SF\({}_{\rm end}\)) occurring 8.5 Gyr ago, whereas the PR runs indicate a mean SF\({}_{\rm end}\) of 12.5 Gyr ago. This discrepancy in SF\({}_{\rm end}\) could potentially be attributed to the comparatively milder reionization experienced by the Magellanic UFDs within their specific environment.
  * Reproducing the high metallicity plateau of \(-2.7\lesssim\) [Fe/H] \(\lesssim-2.0\) below \(M_{*}=2\times 10^{4}\,M_{\odot}\), which is the mass range of the Magellanic UFDs used for comparison, is challenging in our simulations. While patchy reionization can increase stellar metallicities by forming high-metallicity stars during late star formation, it also leads to an increase in stellar masses.
  * The prolonged SFHs of the Magellanic UFDs with SF\({}_{\rm end}=8.5\) Gyr ago could imply that the strength of the UV field they were subjected to might have been lower than the assumptions made in our PR runs. This discrepancy could suggest that the Magellanic UFDs were positioned farther away than the inferred distance of 400 kpc, based on reconstructed orbital histories at \(z\approx 0.7\).

It is important to emphasize that the weakened UV background resulting from patchy reionization is not the sole explanation for the extended SFHs, and various factors can contribute to differing SFHs of dwarf galaxies. For instance, Rey et al. (2020) demonstrated that UFDs formed in halos with the same halo mass but varying assembly times can exhibit similar stellar masses while having distinct SFHs.
To be specific, their study revealed that star formation can be reignited at later epochs (\(z<1\)) due to a significant increase in dynamical mass during these late periods. However, it is worth noting that the halos in their study have a mass of \(M_{\rm vir}\approx 3\times 10^{9}\,M_{\odot}\) at \(z=0\), which is 2-4 times more massive than those in our research. In a fair comparison, the least massive galaxy among their halos, with a mass of \(M_{\rm vir}\approx 1.4\times 10^{9}\,M_{\odot}\) at \(z=0\), similar to Halo5 in our study, also experiences complete quenching due to reionization. As such, the mass of the halo is one of the crucial determinants of whether stars continue to form under the influence of reionization, leading to a minimum mass below which star formation is likely to be entirely quenched. This threshold varies from \(3\times 10^{9}\,M_{\odot}\) (e.g., Jeon et al., 2017; Rey et al., 2020) to \(M_{\rm vir}=7\times 10^{9}\,M_{\odot}\) (e.g., Fitts et al., 2017; Wheeler et al., 2019).

However, explaining the extended SFHs of the observed Magellanic UFDs by adopting more massive halos presents a significant challenge. This is primarily because the estimated stellar mass of the Magellanic UFDs, which is on the order of a few \(10^{3}\,M_{\odot}\), is too small to be hosted in massive halos with \(M_{\rm vir}\gtrsim 3\times 10^{9}\,M_{\odot}\). Theoretical predictions generally suggest that \(M_{\rm vir}\lesssim 10^{9}\,M_{\odot}\) is required to produce a stellar mass of \(M_{*}\sim 10^{3}\,M_{\odot}\). However, as mentioned earlier, within this mass range, the effects of reionization are likely to result in complete quenching when adopting the conventional UV background (HM2012). Therefore, the extended SFHs observed in the Magellanic UFDs are more likely a consequence of encountering a weak UV field during the reionization epoch rather than being hosted by highly massive halos. Moreover, if massive halos are indeed the cause, an even weaker UV field than what we have adopted would be required to generate the more prolonged SFHs observed in the Magellanic UFDs.

One possible approach to diminish the intensity of the UV field is to consider an alternative escape fraction of ionizing photons from galaxies, rather than the currently adopted value of \(f_{\rm esc}=0.3\). A lower escape fraction would result in more extended SFHs (refer to Appendix C). It has been suggested that there may be a correlation between star formation rate and escape fraction and that the escape fraction may evolve with redshift (e.g., Kuhlen and Faucher-Giguere, 2012). According to Ma et al. (2020), for instance, the average escape fraction of ionizing photons from galaxies increases with halo mass for \(M_{\rm vir}=10^{8}-10^{9.5}\,M_{\odot}\), remains roughly constant, \(f_{\rm esc}=0.1-0.2\), for \(M_{\rm vir}=10^{9.5}-10^{11}\,M_{\odot}\), and then drops above \(M_{\rm vir}=10^{11}\,M_{\odot}\) due to dust attenuation, with an \(f_{\rm esc}\) of less than 0.1 at the massive end.
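The halo-mass dependence quoted above can be captured as a simple piecewise function. The sketch below is illustrative only: the exact shape and normalization within each regime are assumptions, not values from Ma et al. (2020).

```python
# A rough piecewise sketch of the Ma et al. (2020) trend quoted above:
# f_esc rises with halo mass up to ~10^9.5 Msun, stays ~0.1-0.2 up to
# ~10^11 Msun, then drops due to dust attenuation. Numbers inside each
# regime are assumed for illustration.
import numpy as np

def f_esc(m_vir):
    logm = np.log10(m_vir)
    if logm < 9.5:
        # assumed rise from ~0.05 at 10^8 Msun to ~0.2 at 10^9.5 Msun
        return float(np.interp(logm, [8.0, 9.5], [0.05, 0.2]))
    if logm <= 11.0:
        return 0.15          # roughly constant plateau
    return 0.05              # dust-attenuated massive end (< 0.1)
```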
Finally, we would like to emphasize that we are not claiming that our patchy reionization model surpasses the HM2012 model, primarily because we did not incorporate the influence of galaxies on a large scale. However, it is important to note that previous simulations, especially those concentrated on UFD galaxy formation, have typically employed a flash-like reionization approach that affects all gas particles, resulting in an abrupt cessation of star formation within UFDs. In contrast, our approach in this study considers the unique environments of the simulated UFD analogs, allowing us to test the patchy reionization model as a potential explanation for the extended SFHs observed in Magellanic UFDs. To further validate our predictions regarding the effect of patchy reionization on the SFHs of observed UFDs, additional SFHs for satellite galaxies would be beneficial. The forthcoming SFHs for Magellanic satellites, obtained from deep HST and JWST imaging, will help identify differences between the SFHs of Magellanic UFDs and long-standing satellites of the MW. Additionally, spectroscopic surveys could provide more accurate constraints on metallicities and enable us to estimate more precise ages of stars.

## Acknowledgements

We would like to thank Gurtina Besla for providing valuable comments. We are grateful to Volker Springel, Joop Schaye, and Claudio Dalla Vecchia for permission to use their versions of gadget. J. L. and M. J. are supported by the National Research Foundation (NRF) grant No. 2021R1A2C109491713, funded by the Korean government (MSIT). Y.C. acknowledges support from NASA grant No. HST-GO-16293.

## Appendix A Results of the UFD Analogs with Two Different Reionization Implementations

## Appendix B The strength of patchy reionization on a large scale

## Appendix C The effect of escape fraction on SFHs
2303.06265
A geometric approach to second-order differentiability of convex functions
We show a new, elementary and geometric proof of the classical Alexandrov theorem about the second order differentiability of convex functions. We also show new proofs of recent results about Lusin approximation of convex functions and convex bodies by $C^{1,1}$ convex functions and convex bodies.
Daniel Azagra, Anthony Cappello, Piotr Hajłasz
2023-03-11T01:07:50Z
http://arxiv.org/abs/2303.06265v2
# A geometric approach to second-order differentiability of convex functions

###### Abstract.

We show a new, elementary and geometric proof of the classical Alexandrov theorem about the second order differentiability of convex functions. We also show new proofs of recent results about Lusin approximation of convex functions and convex bodies by \(C^{1,1}\) convex functions and convex bodies.

Key words and phrases: Convex function, convex body, Alexandrov theorem, Lusin property, Lipschitz gradient. 2020 Mathematics Subject Classification: 26B25, 28A75, 41A30, 52A20, 52A27, 53C45. P.H. was supported by NSF grant DMS-2055171.

## 1. Introduction

[The opening of the introduction, including the statements of Theorems 1.1-1.6, is garbled beyond recovery in the source extraction and is omitted here.]

**Theorem 1.7**.: _Let \(S\subset\mathbb{R}^{n}\) be a convex hypersurface. Then the following statements are equivalent:_

1. \(S\) _does not contain any line._
2. _For every_ \(\varepsilon>0\) _there exists a convex hypersurface_ \(S_{\varepsilon}\) _of_ \(\mathbb{R}^{n}\) _of class_ \(C^{1,1}_{\mathrm{loc}}\) _such that_ \(\mathcal{H}^{n-1}\left(S\triangle S_{\varepsilon}\right)<\varepsilon\)_._

**Remark 1.8**.: It follows from the proof that if \(S=\partial W\), where \(W\) is a (possibly unbounded) convex body that contains no lines, then there exists a (possibly unbounded) convex body \(W_{\varepsilon}\subset W\) such that \(S_{\varepsilon}:=\partial W_{\varepsilon}\in C^{1,1}_{\mathrm{loc}}\) and \(\mathcal{H}^{n-1}\left(S\triangle S_{\varepsilon}\right)<\varepsilon\).

From this result we will also deduce the following new generalization of Theorem 1.6 for convex functions defined on arbitrary open convex subsets of \(\mathbb{R}^{n}\).

**Theorem 1.9**.: _Let \(U\subset\mathbb{R}^{n}\) be open and convex, and let \(f:U\to\mathbb{R}\) be a convex function such that \(f\not\in C^{1,1}_{\mathrm{loc}}(U)\)._
_Then, the following statements are equivalent:_

1. _For every_ \(\varepsilon>0\) _there exists a_ \(C^{1,1}_{\mathrm{loc}}\) _convex function_ \(g:U\to\mathbb{R}\) _such that_ \[|\{x\in U:f(x)\neq g(x)\}|<\varepsilon.\]
2. _The graph of_ \(f\) _does not contain any line of_ \(\mathbb{R}^{n+1}\)_._

The original proofs of Theorems 1.4, 1.5 and 1.6 used the Whitney extension theorem for convex functions [5, 6, 3], and the Alexandrov Theorem 1.1. Our proofs presented here are elementary and avoid these tools. As explained above, the proofs are based on a simple geometric idea that is also used in our proof of Alexandrov's theorem. We will prove Theorem 1.5 first and we will use it as a main tool in the proofs of Theorems 1.1 and 1.4. Indeed, the brief description of the proof of Theorem 1.1 presented above is based on the approximation of the epigraph of \(f\) by the convex set \(W(\delta)\) of class \(C^{1,1}\), and this is strictly related to Theorem 1.5.

Except for Section 7, our exposition is elementary and self-contained. We have made an effort to make it accessible to anyone with basic knowledge of real analysis, and no knowledge of convex analysis is required. The paper is structured as follows. In Section 2 we fix notation and recall basic definitions and facts needed to understand the paper. All results mentioned in this section are well known. In Section 3 we prove Theorem 1.5 and then we use it to prove Lemma 3.8, which is a version of Theorem 1.4. This lemma will play a central role in the proofs of Theorems 1.1, 1.4 and 1.6. Theorems 1.1 and 1.4 are proved in Sections 4 and 6, respectively. In Section 5 we prove Theorem 1.2 as a direct consequence of Theorem 1.1. This proof is independent of all other sections of the paper and can be read independently. In Section 7 we present the proofs of Theorems 1.6, 1.7, and 1.9. We made an effort to make different parts of the paper as independent as possible. Section 3 is needed in Sections 4, 6 and 7, but the content of Sections 4, 6 and 7 stands alone and is not dependent on one another. Similarly, Section 5 is independent of any other part of the paper.

## 2. Preliminaries

In this brief section we will explain notation and basic facts needed in the paper. This section will also clarify necessary prerequisites. The definitions and facts presented here are by no means exhaustive; the reader may find missing details in standard textbooks. Balls in \(\mathbb{R}^{n}\) are denoted by \(B(x,r)\) or \(B^{n}(x,r)\). The unit sphere in \(\mathbb{R}^{n}\) centered at the origin is denoted by \(\mathbb{S}^{n-1}\). The interior of a set \(A\) is denoted by \(\operatorname{int}A\). An interval in \(\mathbb{R}^{n}\) with endpoints \(x,y\in\mathbb{R}^{n}\) is denoted by \([x,y]\). The scalar product of vectors \(u,v\in\mathbb{R}^{n}\) is denoted by \(\langle u,v\rangle\). The Lebesgue measure of \(A\subset\mathbb{R}^{n}\) is denoted by \(|A|\). We say that \(x\in\mathbb{R}^{n}\) is a _density point_ of a measurable set \(A\subset\mathbb{R}^{n}\) if \(\frac{|A\cap B(x,r)|}{|B(x,r)|}\to 1\) as \(r\to 0^{+}\). It follows from the Lebesgue differentiation theorem that almost all points \(x\in A\) are density points of \(A\). The Hausdorff measure is denoted by \(\mathcal{H}^{s}\). It follows from the definition that if \(f\) is \(L\)-Lipschitz, then \(\mathcal{H}^{s}(f(A))\leq L^{s}\mathcal{H}^{s}(A)\).
If \(A\subset\mathbb{R}^{n}\), \(\lambda>0\) and \(\lambda A=\{\lambda x:\,x\in A\}\) is the dilation of \(A\) by the factor \(\lambda\), then \(\mathcal{H}^{s}(\lambda A)=\lambda^{s}\mathcal{H}^{s}(A)\). \(\mathcal{H}^{n}\) coincides with the Lebesgue measure in \(\mathbb{R}^{n}\). We say that \(f\in C^{1,1}(U)\) (\(f\in C^{1,1}_{\operatorname{loc}}(U)\)), if \(U\subset\mathbb{R}^{n}\) is open, \(f\in C^{1}(U)\), and the gradient \(\nabla f\) is Lipschitz (locally Lipschitz) on \(U\). If \(f\in C^{1,1}(B^{n}(0,R))\), then it follows that \[|f(y)-f(x)-Df(x)(y-x)|\leq M|y-x|^{2}\quad\text{for all }x,y\in B^{n}(0,R), \tag{4}\] where \(M\) is the Lipschitz constant of \(Df\). Indeed, we can write \(f(y)-f(x)=Df(\xi)(y-x)\) for some \(\xi\in[x,y]\) and (4) follows. This inequality implies that if \(f\in C^{1,1}_{\operatorname{loc}}(U)\), where \(U\subset\mathbb{R}^{n}\) is open, then \[f(y)=f(x)+Df(x)(y-x)+O(|y-x|^{2})\quad\text{for all }x,y\in U. \tag{5}\] We say that the boundary of a bounded domain \(U\subset\mathbb{R}^{n}\) is of class \(C^{1,1}\) if it is locally a graph of a \(C^{1,1}\) function. We use notation \(\nabla f(x)\) for the gradient vector while \(Df(x)\) is the linear derivative. With this notation we have \(Df(x)v=\langle\nabla f(x),v\rangle\). A _convex body_ is a compact convex set \(K\subset\mathbb{R}^{n}\) with non-empty interior. The _convex hull_ of a set \(A\subset\mathbb{R}^{n}\) (defined as the intersection of all convex sets containing \(A\), or equivalently, as the set of all convex combinations of points of \(A\)) is denoted by \(\operatorname{co}(A)\). Every closed and convex set \(W\subset\mathbb{R}^{n}\) is the intersection of all closed half-spaces that contain \(W\). In fact, for every \(x\in\partial W\) there is a half-space \(H_{x}\) such that \(W\subset H_{x}\) and \(x\in T_{x}\cap W\), where \(T_{x}=\partial H_{x}\). The hyperplane \(T_{x}\) is called a _hyperplane supporting_\(W\) at \(x\). Thus for every \(x\in\partial W\), there is a hyperplane supporting \(W\) at \(x\), but such a hyperplane is not necessarily unique. This implies that if \(U\subset\mathbb{R}^{n}\) is open and convex and \(f:U\to\mathbb{R}\) is convex, then for every \(x\in U\) there is \(v\in\mathbb{R}^{n}\) such that \(f(y)\geq f(x)+\langle v,y-x\rangle\) for all \(y\in U\). Indeed, on the right hand side we have an equation of the supporting hyperplane of the convex _epigraph_\(\operatorname{epi}(f)=\{(x,y)\in U\times\mathbb{R}:\,x\in U,\ y\geq f(x)\}\). The set of all such \(v\) is denoted by \(\partial f(x)\) and called the _subdifferential_ of \(f\) at \(x\). Thus \(\partial f(x)\neq\varnothing\) for any \(x\in U\). If in addition \(f\) is differentiable at \(x_{o}\), then \(\partial f(x_{o})=\{\nabla f(x_{o})\}\) i.e., \(f(y)\geq f(x_{o})+Df(x_{o})(y-x_{o})\) meaning that the tangent hyperplane to the graph of \(f\) at \(x_{o}\) is the unique hyperplane supporting the epigraph of \(f\) at \((x_{o},f(x_{o}))\). Convex functions are locally Lipschitz continuous and hence they are differentiable a.e. by the Rademacher theorem, so \(\partial f(x)=\{\nabla f(x)\}\) for almost all \(x\in U\). In fact we will prove the a.e. differentiability of convex functions directly and _without_ any reference to the Rademacher theorem, see Remark 3.10, but we will need the Rademacher theorem in the proof of Theorem 1.1, because we will need to know that the gradient of a convex function \(g\in C^{1,1}\) is differentiable a.e. 
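A quick numerical illustration (not part of the paper) of the subdifferential defined above: for a convex function of one variable, \(\partial f(x)\) is the interval between the one-sided derivatives, so for \(f(x)=|x|\) one gets \(\partial f(0)=[-1,1]\).

```python
# Numerical sketch: for a convex function on the real line, the
# subdifferential at x is the interval [f'_-(x), f'_+(x)] of one-sided
# derivatives. For f(x) = |x| at x = 0 this gives [-1, 1].
def one_sided_derivatives(f, x, h=1e-8):
    right = (f(x + h) - f(x)) / h
    left = (f(x) - f(x - h)) / h
    return left, right

f = abs
print(one_sided_derivatives(f, 0.0))   # ~(-1.0, 1.0), so subdiff f(0) = [-1, 1]
```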
In the last section of the paper we will consider _convex hypersurfaces_. We call the boundary \(\partial V\) of a closed convex set \(V\) with nonempty interior (not necessarily bounded) a _convex hypersurface_, and we say that it is of class \(C^{1,1}_{\rm loc}\) if it is locally a graph of a \(C^{1,1}\) function (if the set \(V\) is unbounded, we will say that \(V\) is an _unbounded convex body_). We call a convex function \(f:\mathbb{R}^{n}\to\mathbb{R}\)_essentially coercive_ if there exists a linear function \(\ell:\mathbb{R}^{n}\to\mathbb{R}\) such that \(\lim_{|x|\to\infty}\left(f(x)-\ell(x)\right)=\infty\); this is equivalent to saying that the epigraph of \(f\) does not contain lines.

## 3. Proof of Theorem 1.5

We will precede the proof with auxiliary results. If \(W\subset\mathbb{R}^{n}\) is a closed convex set, then it is easy to see that for every \(x\in\mathbb{R}^{n}\), there is a unique point denoted by \(\pi_{W}(x)\) such that \[\pi_{W}(x)\in W\qquad\text{ and }\qquad|x-\pi_{W}(x)|=\operatorname{dist}(x,W).\] Clearly, if \(x\not\in W\), then \(\pi_{W}(x)\in\partial W\).

**Lemma 3.1**.: \(\pi_{W}:\mathbb{R}^{n}\to W\) _is \(1\)-Lipschitz._

Proof.: Let \(x,y\in\mathbb{R}^{n}\). By convexity of \(W\), \(t\pi_{W}(x)+(1-t)\pi_{W}(y)\in W\) for all \(t\in(0,1)\) and hence \[|y-\pi_{W}(y)|^{2}=\operatorname{dist}(y,W)^{2}\leq|y-(t\pi_{W}( x)+(1-t)\pi_{W}(y))|^{2}\] \[=|(y-\pi_{W}(y))-t(\pi_{W}(x)-\pi_{W}(y))|^{2}\] \[=|y-\pi_{W}(y)|^{2}-2t\langle y-\pi_{W}(y),\pi_{W}(x)-\pi_{W}(y) \rangle+t^{2}|\pi_{W}(x)-\pi_{W}(y)|^{2}\] which can be simplified to \[2\langle y-\pi_{W}(y),\pi_{W}(x)-\pi_{W}(y)\rangle\leq t|\pi_{W}(x)-\pi_{W}(y )|^{2}.\] Letting \(t\to 0^{+}\) yields \[\langle y-\pi_{W}(y),\pi_{W}(x)-\pi_{W}(y)\rangle\leq 0. \tag{6}\] By switching the roles of \(x\) and \(y\) we also have \[\langle\pi_{W}(x)-x,\pi_{W}(x)-\pi_{W}(y)\rangle=\langle x-\pi_{W}(x),\pi_{W}( y)-\pi_{W}(x)\rangle\leq 0. \tag{7}\] Adding inequalities (6) and (7) yields \[|\pi_{W}(x)-\pi_{W}(y)|^{2}\leq\langle x-y,\pi_{W}(x)-\pi_{W}(y)\rangle\leq|x -y|\,|\pi_{W}(x)-\pi_{W}(y)|\] and hence \(|\pi_{W}(x)-\pi_{W}(y)|\leq|x-y|\).

For a convex body \(K\subset\mathbb{R}^{n}\) and \(r>0\) we define the _inner parallel convex body_ by \[K_{r}:=\{x\in K:\,\operatorname{dist}(x,\partial K)\geq r\}.\]

**Lemma 3.2**.: \(K_{r}\) _is convex for any \(r>0\)._

Proof.: Let \(x,y\in K_{r}\). We need to show that \([x,y]\subset K_{r}\). Clearly, \(\overline{B}(x,r),\overline{B}(y,r)\subset K\) and for any \(z\in[x,y]\), \(\overline{B}(z,r)\subset\operatorname{co}(\overline{B}(x,r)\cup\overline{B}(y,r ))\subset K,\) so \(\operatorname{dist}(z,\partial K)\geq r\), \(z\in K_{r}\), and hence \([x,y]\subset K_{r}\).

Let \(r_{o}=\sup_{x\in K}\operatorname{dist}(x,\partial K)\). Clearly \(K_{r}=\varnothing\) for \(r>r_{o}\). \(K_{r_{o}}\neq\varnothing\), but it has empty interior. However, for \(r\in(0,r_{o})\), \(K_{r}\) has non-empty interior, so \(K_{r}\) is a convex body only for \(r\in(0,r_{o})\). For a convex body \(K\) and \(r>0\) we also define \[K(r):=\bigcup\{\overline{B}(x,r):\,\overline{B}(x,r)\subset K\}. \tag{8}\] It is easy to see that \(K(r)\) is convex and compact (it can be empty). Moreover, if \(K\) contains a ball of radius \(r_{o}\), then for any \(r\in(0,r_{o}]\), \(K(r)\) has non-empty interior and hence \(K(r)\) is a convex body.
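As a sanity check on Lemma 3.1 (illustration only, not part of the paper), for the simplest convex body, the closed unit ball, the nearest-point projection has the explicit formula \(\pi_{W}(x)=x/\max(1,|x|)\), and the 1-Lipschitz bound can be verified numerically on random pairs of points:

```python
# Numerical check of Lemma 3.1 for W = closed unit ball in R^3, where the
# nearest-point projection is pi_W(x) = x / max(1, |x|).
import numpy as np

def proj_unit_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y = 2 * rng.normal(size=3), 2 * rng.normal(size=3)
    lhs = np.linalg.norm(proj_unit_ball(x) - proj_unit_ball(y))
    assert lhs <= np.linalg.norm(x - y) + 1e-12   # 1-Lipschitz bound holds
```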
**Lemma 3.3**.: _If a convex body \(K\) contains a ball of radius \(r_{o}\), then for all \(r\in(0,r_{o})\), \(K_{r}\) is a convex body, and_ \[\mathcal{H}^{n-1}(\partial K_{r})\leq\mathcal{H}^{n-1}(\partial K\cap\partial K (r)). \tag{9}\] Proof.: Clearly, for \(r\in(0,r_{o})\), \(K_{r}\) has non-empty interior, so it is a convex body by Lemma 3.2. Observe that \[\pi_{K_{r}}(\partial K\cap\partial K(r))=\partial K_{r}. \tag{10}\] Indeed, if \(z\in\partial K_{r}\), then there is \(x\in\partial K\), such that \(|x-z|=r\). Therefore, \(x\in\overline{B}(z,r)\subset K\), and hence \(x\in K(r)\). Thus, \(x\in\partial K\cap\partial K(r)\), \(|x-z|=r\geq\operatorname{dist}(x,K_{r})\), and hence \(z=\pi_{K_{r}}(x)\). Now, (9) follows from (10) and the fact that \(\pi_{K_{r}}\) is \(1\)-Lipschitz (Lemma 3.1). The next beautiful result is due to McMullen [12]. While it can be concluded from Alexandrov's theorem, we present here a direct and surprisingly elementary proof which is a small modification of McMullen's argument. In fact, Lemma 3.4 will play an important role in our proof of Alexandrov's theorem. **Lemma 3.4**.: _If \(K\subset\mathbb{R}^{n}\) is a convex body, then \(\lim_{r\to 0^{+}}\mathcal{H}^{n-1}(\partial K\setminus\partial K(r))=0\)._ **Remark 3.5**.: Lemma 3.4 has the following geometric interpretation: for almost all \(x\in\partial K\), there is a closed ball \(\overline{B}\subset K\) touching the boundary of \(K\) at \(x\), i.e., \(x\in\overline{B}\). Proof.: Without loss of generality we may assume that \(\overline{B}(0,r_{o})\subset K\). If \(r\in(0,r_{o})\), then \(0\) belongs to the interior of \(K_{r}\). For \(\lambda>0\) we define \[\lambda K_{r}:=\{\lambda z:\,z\in K_{r}\},\] that is, \(\lambda K_{r}\) is a dilation of \(K_{r}\). For \(r\in(0,r_{o})\), let \[\lambda(r):=\inf\{\lambda>0:\,K\subset\lambda K_{r}\}.\] Clearly, \(K\subset\lambda(r)K_{r}\). It is easy to see that the function \(r\mapsto\lambda(r)\) is non-decreasing and \(\lambda(r)\to 1\) as \(r\to 0^{+}\). Indeed, for any \(\varepsilon>0\), \((1+\varepsilon)^{-1}K\subset\operatorname{int}K\), and hence \(\delta:=\operatorname{dist}((1+\varepsilon)^{-1}K,\partial K)>0\), so for all \(r\in(0,\delta]\) \[(1+\varepsilon)^{-1}K\subset K_{r},\qquad\text{i.e.,}\qquad K\subset(1+ \varepsilon)K_{r}.\] In other words \(1\leq\lambda(r)\leq 1+\varepsilon\) for all \(0<r\leq\delta\) proving that \(\lambda(r)\to 1\) as \(r\to 0^{+}\). It is easy to see that \(\pi_{K}(\partial(\lambda(r)K_{r}))=\partial K\). Indeed, if \(x\in\partial K\) and \(\nu(x)\) is the outer unit normal vector to a supporting hyperplane of \(K\) at \(x\), then there is \(t\geq 0\) such that \(z:=x+t\nu(x)\in\partial(\lambda(r)K_{r})\) and it easily follows that \(\pi_{K}(z)=x\). Since \(\pi_{K}\) is \(1\)-Lipschitz and it maps \(\partial(\lambda(r)K_{r})\) onto \(\partial K\), we have that \[\mathcal{H}^{n-1}(\partial K) \leq\mathcal{H}^{n-1}(\partial(\lambda(r)K_{r}))=\lambda(r)^{n-1 }\mathcal{H}^{n-1}(\partial K_{r})\leq\lambda(r)^{n-1}\mathcal{H}^{n-1}( \partial K\cap\partial K(r))\] \[\leq\lambda(r)^{n-1}\mathcal{H}^{n-1}(\partial K)\to\mathcal{H}^ {n-1}(\partial K)\quad\text{as $r\to 0^{+}$.}\] Therefore, \(\mathcal{H}^{n-1}(\partial K\cap\partial K(r))\to\mathcal{H}^{n-1}(\partial K)\), as \(r\to 0^{+}\). This completes the proof of Lemma 3.4. The following result, was proven in a more general form in the unpublished work [11, Theorem 1, p. 32]. It is also mentioned without any proof or reference in [10]. 
Although a detailed proof can be found in [8, Proposition 2.4.3], the origin of the result is not referenced in this work. **Lemma 3.6**.: _A convex body \(W\) has \(C^{1,1}\) boundary if and only if there is \(r>0\) such that \(W=W(r)\)._ **Remark 3.7**.: In other words a convex body \(W\) has boundary of class \(C^{1,1}\) if and only if there is \(r>0\) such that \(W\) is the union of closed balls of radius \(r\). Proof.: We will only prove the implication from right to left, that is we will prove that if \(W=W(r)\), then \(\partial W\) is of class \(C^{1,1}\). This is the only implication that we need in the proof of Theorem 1.5. For the proof of the implication from left to right, see [8, Proposition 2.4.3]. Thus, we assume that for each \(p\in\partial W\) there is \(h(p)\in W\) such that \(p\in\overline{B}(h(p),r)\subset W\). It follows that the hyperplane \(T_{p}\) tangent to the ball \(\overline{B}(h(p),r)\) at \(p\) is the unique hyperplane supporting \(W\) at \(p\). Note that \(\operatorname{dist}(h(p),\partial W)=r\) implies that \(h(p)\in W_{r}\), so \(|p-h(p)|=r=\operatorname{dist}(p,W_{r})\), and hence \(h(p)=\pi_{W_{r}}(p)\). The inner unit normal vector to \(T_{p}\) is given by \[\nu(p)=\frac{h(p)-p}{r}=\frac{\pi_{W_{r}}(p)-p}{r}\] and Lemma 3.1 implies that the function \(\nu:\partial W\to\mathbb{S}^{n-1}\) is Lipschitz continuous: \[|\nu(p)-\nu(q)|\leq\frac{|\pi_{W_{r}}(p)-\pi_{W_{r}}(q)|+|p-q|}{r}\leq\frac{2 }{r}|p-q|.\] This in turn, implies that the boundary \(\partial W\) is of class \(C^{1,1}\). Indeed, choose any point \(p_{o}\in\partial W\) and choose a Euclidean coordinate system \((x_{1},\dots,x_{n})=(x^{\prime},x_{n})\) such that \(p_{o}=0\) and \(T_{p_{o}}=\{x_{n}=0\}\). Then \(\partial W\) in a neighborhood \(U=B^{n-1}(0,\frac{r}{2})\) of \(p_{o}=0\) is a graph of a function \(x_{n}=f(x^{\prime})\) i.e., \(p(x^{\prime}):=(x^{\prime},f(x^{\prime}))\in\partial W\). Since for \(x^{\prime}\in U\), the graph of \(f\) lies above \(T_{p(x^{\prime})}\) and below \(\overline{B}(h(p(x^{\prime})),r)\), it follows from geometric considerations that for each \(x^{\prime}\in U\) there exists a unique supporting hyperplane at \(p(x^{\prime})\) and hence \(f\) is differentiable at any \(x^{\prime}\in U\). Note also that \(|\nabla f|\leq M\) on \(U\) for some \(M>0\), because the tangent hyperplane to the graph of \(f\) cannot intersect with \(B(h(0),r)\). It remains to show that \(\nabla f\) is Lipschitz continuous in \(U\). The inner unit normal vector in terms of \(\nabla f\) is given by \[\nu(p(x^{\prime}))=\frac{(-\nabla f(x^{\prime}),1)}{\sqrt{1+|\nabla f(x^{\prime} )|^{2}}},\qquad\text{so}\qquad\pi(\nu(p(x^{\prime}))=\frac{-\nabla f(x^{\prime} )}{\sqrt{1+|\nabla f(x^{\prime})|^{2}}},\] where \(\pi:\mathbb{R}^{n}\to\mathbb{R}^{n-1}\), \(\pi(x^{\prime},x_{n})=x^{\prime}\) is the orthogonal projection. Since \(\Psi(\Phi(z))=z\) for all \(z\in\mathbb{R}^{n-1}\), where \(\Psi(z)=-z/\sqrt{1-|z|^{2}}\) and \(\Phi(z)=-z/\sqrt{1+|z|^{2}}\), it follows that \[\nabla f(x^{\prime})=\Psi\left(\frac{-\nabla f(x^{\prime})}{\sqrt{1+|\nabla f (x^{\prime})|^{2}}}\right)=\Psi(\pi(\nu(x^{\prime},f(x^{\prime}))))\quad\text{ for }x^{\prime}\in U.\] This proves Lipschitz continuity of \(\nabla f\) in \(U\), as a composition of Lipschitz functions. The only issue could be the Lipschitz continuity of \(\Psi\): it is a smooth function defined for \(|z|<1\), but it is unbounded. 
However, this does not cause any problems here, because \[\left|\frac{-\nabla f(x^{\prime})}{\sqrt{1+|\nabla f(x^{\prime})|^{2}}}\right| \leq\frac{M}{\sqrt{1+M^{2}}}<1.\] Proof of Theorem 1.5.: Let \(K\subset\mathbb{R}^{n}\) be a convex body. According to Lemma 3.4, for every \(\varepsilon>0\) there is \(\delta_{o}>0\) such that for any \(\delta\in(0,\delta_{o})\), \(\mathcal{H}^{n-1}(\partial K\setminus\partial K(\delta))<\varepsilon/2\). Since \(K(\delta)\subset K\), it is easy to see that \(\pi_{K(\delta)}(\partial K)=\partial K(\delta)\) and hence Lemma 3.1 yields \(\mathcal{H}^{n-1}(\partial K(\delta)\setminus\partial K)\leq\mathcal{H}^{n-1} (\partial K\setminus\partial K(\delta))\). Therefore \(\mathcal{H}^{n-1}(\partial K\triangle\partial K(\delta))<\varepsilon\). Since the boundary of \(K(\delta)\) is of class \(C^{1,1}\) by Lemma 3.6, \(W:=K(\delta)\) satisfies the claim of the theorem. The next result is a direct consequence of Theorem 1.5 and it is a version of Theorem 1.4. We will use Lemma 3.8 in the proofs of Theorems 1.1, 1.4, and 1.6. **Lemma 3.8**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a convex function. Then for any \(R>0\) and \(\varepsilon>0\), there is a convex function \(g\in C^{1,1}(B^{n}(0,R))\) such that \(f\leq g\) and_ \[|\{x\in B^{n}(0,R):\,f(x)\neq g(x)\}|<\varepsilon. \tag{11}\] _Moreover, if \(f(x)=g(x)\), then \(f\) is differentiable at \(x\), \(Df(x)=Dg(x)\) and_ \[f(y)=f(x)+Df(x)(y-x)+O(|y-x|^{2}). \tag{12}\] Proof.: Let \(M:=\sup_{\overline{B}^{n}(0,2R)}f(x)\) and define \[W:=\{(x,y)\in\overline{B}^{n}(0,2R)\times\mathbb{R}:\,f(x)\leq y\leq M+2R\}.\] That is, \(W\) is an \((n+1)\)-dimensional convex body bounded by the graph of \(f\), the cylinder \(\partial B^{n}(0,2R)\times\mathbb{R}\) and the hyperplane \(y=M+2R\). According to Lemma 3.4, there is \(\delta<R\) such that \[\mathcal{H}^{n}(\partial W\setminus\partial W(\delta))<\varepsilon.\] Since \(W(\delta)\) is the union of closed balls of radius \(\delta<R\) that are contained in \(W\), it follows that \[\overline{B}^{n}(0,2R)\times\{M+R\}\subset W(\delta),\] i.e., the intersection of \(W(\delta)\) with the hyperplane \(y=M+R\) is an \(n\)-dimensional closed ball of radius \(2R\). Thus, if \(\pi:\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) is the orthogonal projection, \(\pi(W(\delta))=\overline{B}^{n}(0,2R)\), and hence for \(x\in\overline{B}^{n}(0,2R)\), we can define \[g(x):=\inf\{y:\,(x,y)\in W(\delta)\}.\] That is, the function \(g:\overline{B}^{n}(0,2R)\to\mathbb{R}\) parametrizes the bottom part of the boundary of \(W(\delta)\). According to Lemma 3.6, the boundary of \(W(\delta)\) is of class \(C^{1,1}\), so \(g\in C^{1,1}_{\rm loc}(B^{n}(0,2R))\) and hence \(g\) is a convex function in \(C^{1,1}(B^{n}(0,R))\). Since \(W(\delta)\) is contained in \(W\) and hence in the epigraph of \(f\), it follows that \(g\geq f\). Observe that \[\{x\in B^{n}(0,R):\,f(x)\neq g(x)\}\subset\pi(\partial W\setminus\partial W(\delta))\] and hence \[|\{x\in B^{n}(0,R):\,f(x)\neq g(x)\}|\leq|\pi(\partial W\setminus\partial W(\delta))|\leq\mathcal{H}^{n}(\partial W\setminus\partial W(\delta))<\varepsilon,\] because the orthogonal projection does not increase the Hausdorff measure and \(\mathcal{H}^{n}\) coincides with the Lebesgue measure in \(\mathbb{R}^{n}\). It remains to prove (12). Assume that \(f(x)=g(x)\). There is \(v\in\mathbb{R}^{n}\) such that \[f(x)+\langle v,y-x\rangle\leq f(y)\leq g(y)=g(x)+\langle\nabla g(x),y-x\rangle +O(|y-x|^{2}).\] The left inequality is a consequence of convexity of \(f\).
We also used (5). Since \(f(x)=g(x)\), we have \[\langle v-\nabla g(x),y-x\rangle\leq O(|y-x|^{2}),\] which easily implies that \(v=\nabla g(x)\). Hence \[f(x)+\langle\nabla g(x),y-x\rangle\leq f(y)\leq f(x)+\langle\nabla g(x),y-x \rangle+O(|y-x|^{2})\] yields (12) with \(Df(x)=Dg(x)\). **Corollary 3.9**.: _If \(f:\mathbb{R}^{n}\to\mathbb{R}\) is convex, then it is differentiable a.e. Moreover_ \[f(y)=f(x)+Df(x)(y-x)+O(|y-x|^{2})\qquad\text{for almost all $x\in\mathbb{R}^{n}$}. \tag{13}\] **Remark 3.10**.: To prove (13) we do not need full strength of Lemma 3.8. Namely (13) is guaranteed whenever there is ball in the epigraph of \(f\) that touches the graph of \(f\) at \((x,f(x))\) and it follows from Lemma 3.4 that it is true for almost all \(x\). Indeed, the boundary of the ball is parametrized by a smooth function \(g\geq f\) and the above argument leading to (12) applies verbatim. We leave details to the reader. Note that such a proof of (13) does not use Rademacher's theorem. Moreover, the estimate (13), is stronger than the a.e. differentiability of \(f\) that would follow from an application of Rademacher's theorem. We will not need Corollary 3.9 in the paper. ## 4. Proof of Theorem 1.1 **Lemma 4.1**.: _Suppose that \(f,g:B^{n}(0,R)\to\mathbb{R}\) are convex, \(f\leq g\) and \(g\in C^{1,1}(B^{n}(0,R))\). Then for almost all \(x_{o}\in\{f=g\}\) we have_ \[f(x)=f(x_{o})+Df(x_{o})(x-x_{o})+\frac{1}{2}(x-x_{o})^{T}D^{2}g(x_{o})(x-x_{o} )+o(|x-x_{o}|^{2}). \tag{14}\] **Remark 4.2**.: Note that \(D^{2}g(x_{o})\) in (14) is not a typo. Also we do not need the assumption that \(g\) is convex or \(C^{1,1}\). With a small modification, the proof works under the assumption that \(f\leq g\in C^{1}\) and \(Dg\) is differentiable at \(x_{o}\). Proof.: It follows from Lemma 3.8 that \(f\) is differentiable at every point of the set \(\{f=g\}\) and that \(Df=Dg\) in \(\{f=g\}\). Since \(Dg\) is Lipschitz continuous, \(Dg\) is differentiable a.e. by Rademacher's theorem. Therefore, it suffices to prove the result whenever \(x_{o}\in\{f=g\}\) is a density point of that set and \(Dg\) is differentiable at \(x_{o}\). To simplify notation, without loss of generality, we may assume that \(x_{o}=0\), and we need to prove that \[f(x)-f(0)-Df(0)x-\frac{1}{2}x^{T}D^{2}g(0)x=o(|x|^{2}).\] Since \(f(0)=g(0)\) and \(Df(0)=Dg(0)\), the left hand side equals \[(f(x)-g(x))+\left(g(x)-g(0)-Dg(0)x-\frac{1}{2}x^{T}D^{2}g(0)x\right)=(f(x)-g( x))+o(|x|^{2}).\] We used here the fact that \(g\) is twice differentiable at \(0\) (Taylor's theorem with the Peano remainder). Thus it remains to show that \(g(x)-f(x)=o(|x|^{2})\). Since \(0\) is a density point of the set \(\{f=g\}\), for any \(x\) we can find \(y\in\{f=g\}\) such that \(|x-y|=o(|x|)\). Clearly, \(f(y)=g(y)\) and \(Df(y)=Dg(y)\) by Lemma 3.8. Therefore, \[f(x)\geq f(y)+Df(y)(x-y)=g(y)+Dg(y)(x-y),\] where the inequality is a consequence of convexity of \(f\). Since \(f\leq g\), the above inequality and (4) yield \[0\leq g(x)-f(x)\leq g(x)-g(y)-Dg(y)(x-y)\leq M|x-y|^{2}=o(|x|^{2}).\] The proof is complete. Proof of Theorem 1.1.: Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be convex. Let \(R>0\) and \(\varepsilon>0\) and let \(g\) be as in Lemma 3.8. It follows from Lemma 4.1 that for almost all \(x\in\{f=g\}\), (2) is satisfied with \(D^{2}f(x):=D^{2}g(x)\). Hence (2) holds true in \(B(0,R)\) outside a set of measure less than \(\varepsilon\). Since it is true for any \(R>0\) and \(\varepsilon>0\), it follows that (2) is satisfied almost everywhere. ## 5. 
Proof of Theorem 1.2 If \(f\) is twice differentiable at \(0\) (in the Peano sense, as in (2)), then we have \[f(x)=f(0)+Df(0)x+\frac{1}{2}x^{T}D^{2}f(0)x+R(x)=f(0)+Df(0)x+\langle Ax,x \rangle+R(x),\] where \(A=\frac{1}{2}D^{2}f(0)\) and \(R(x)=o(|x|^{2})\). Note that \[a(r):=\sup_{0<|x|\leq 2r}\frac{|R(x)|}{|x|^{2}}\to 0\qquad\text{as }r\to 0^{+}.\] Moreover, \[|R(x)|\leq a\Big{(}\frac{|x|}{2}\Big{)}\,|x|^{2}\leq a(|x|)|x|^{2}.\] Proof of Theorem 1.2.: Let \(f\) be twice differentiable at \(x\) as in (2). We need to prove (3). Without loss of generality we may assume that \(x=0\), and hence we need to prove that \[\lim_{x\to 0}\frac{\sigma_{x}-Df(0)-D^{2}f(0)x}{|x|}=0\quad\text{for any $\sigma_{x}\in\partial f(x)$}.\] For \(x,y\neq 0\), we have \[f(x)=f(0)+Df(0)x+\langle Ax,x\rangle+R(x),\quad f(y)=f(0)+Df(0)y+\langle Ay,y \rangle+R(y).\] Since \(f(x)+\langle\sigma_{x},y-x\rangle\leq f(y)\), we have \[\langle\sigma_{x},y-x\rangle\leq f(y)-f(x)=Df(0)(y-x)+\langle A(x+y),y-x \rangle+R(y)-R(x).\] We used here the fact that \(A\) is symmetric and hence \(\langle Ax,y\rangle=\langle Ay,x\rangle\). Let \[y=x+w,\quad\text{where}\quad w=\sqrt{a(|x|)}\,|x|z,\ |z|=1.\] Then \[\langle\sigma_{x},w\rangle\leq Df(0)w+\langle A(2x+w),w\rangle+R(y )-R(x),\] \[\langle\sigma_{x}-Df(0)-2Ax,w\rangle\leq\langle Aw,w\rangle+R(y )-R(x).\] If \(|x|\) is sufficiently small, then \(a(|x|)\leq 1\) and hence \(|w|\leq|x|\), so \(|y|\leq 2|x|\). Therefore, \[|R(y)|\leq a\Big{(}\frac{|y|}{2}\Big{)}\,|y|^{2}\leq 4a(|x|)|x|^{2},\qquad|R(y )-R(x)|\leq 5a(|x|)|x|^{2}.\] Taking the supremum over all \(z\) with \(|z|=1\) we get \[|\sigma_{x}-Df(0)-2Ax|\sqrt{a(|x|)}|x|\leq|A|a(|x|)|x|^{2}+5a(|x|)|x|^{2},\] and hence \[\frac{|\sigma_{x}-Df(0)-2Ax|}{|x|}\leq(|A|+5)\sqrt{a(|x|)}\to 0\quad\text{as $x \to 0$}.\] Since \(2A=D^{2}f(0)\), the result follows. ## 6. Proof of Theorem 1.4 One of the differences between Lemma 3.8 and Theorem 1.4 is that the function \(g\) in Lemma 3.8 is defined on a ball only and the main step in the proof of Theorem 1.4 will be to show that the function \(g\) from Lemma 3.8 can be extended from a ball \(B^{n}(0,R-\delta)\) to a convex function of class \(C^{1,1}(\mathbb{R}^{n})\). We will do it by gluing the function \(g\) with a quadratic function of the form \(a|x|^{2}-b\) and we need to know how to glue convex functions while maintaining their smoothness. The maximum of two convex functions \[\max\{u,v\}=\frac{u+v+|u-v|}{2}\] is convex, but even if \(u,v\in C^{\infty}\), the maximum \(\max\{u,v\}\) need not be \(C^{1}\). To overcome this difficulty, we will use the so called smooth maximum that was introduced in [2]. Let \(\theta\in C^{\infty}(\mathbb{R})\) be such that \(\theta(t)=|t|\) if and only if \(|t|\geq 1\), \(\theta\) is convex, \(\theta(t)=\theta(-t)\) for all \(t\), and \(1\)-Lipschitz. It easily follows that \(\theta(t)>0\) for all \(t\) and \(|\theta^{\prime}(t)|<1\) if and only if \(|t|<1\). Then, we define the _smooth maximum_ function \(\mathcal{M}:\mathbb{R}^{2}\to\mathbb{R}\) as, \[\mathcal{M}(x,y):=\frac{x+y+\theta(x-y)}{2}.\] It is easy to see that \(\mathcal{M}\) is smooth, convex and \[\mathcal{M}(x,y)=\max\{x,y\}\quad\text{whenever}\quad|x-y|\geq 1. \tag{15}\] It is also not difficult to prove that \(\mathcal{M}(x,y)\) is non-decreasing in \(x\) and \(y\), because partial derivatives of \(\mathcal{M}\) are non-negative, see [2, Lemma 2.1(viii)]. 
This observation and convexity of \(\mathcal{M}\) yield (see [2, Proposition 2.2(i)]) **Lemma 6.1**.: _If \(u,v:U\to\mathbb{R}\) are convex functions defined in an open convex set \(U\subset\mathbb{R}^{n}\), then \(\mathcal{M}(u,v):U\to\mathbb{R}\) is convex._ It is also obvious that if \(u,v\in C^{1,1}_{\rm loc}(U)\), then \(\mathcal{M}(u,v)\in C^{1,1}_{\rm loc}(U)\). We will use the smooth maximum to prove the following extension result. **Proposition 6.2**.: _Let \(h\in C^{1,1}_{\rm loc}(B^{n}(0,R))\) be a convex function. Then, for every \(r\in(0,R)\), there is a convex function \(H\in C^{1,1}(\mathbb{R}^{n})\), such that_ \[H(x)=h(x)\quad\text{whenever}\;|x|\leq r. \tag{16}\] **Remark 6.3**.: If \(h\in C^{k}\), \(k\in\mathbb{N}\cup\{\infty\}\), then \(H\in C^{k}(\mathbb{R}^{n})\). The proof remains the same. Proof.: Choose \(\rho\in(r,R)\) and let \[m:=\inf_{|x|\leq r}h,\qquad M:=\sup_{|x|=\rho}h.\] Then, we can find \(a,b>0\) such that the function \(q(x):=a|x|^{2}-b\) satisfies \[q(x)<m-1\qquad\text{if}\;|x|\leq r \tag{17}\] \[q(x)>M+1\qquad\text{if}\;|x|=\rho, \tag{18}\] and we define \[H(x):=\begin{cases}\mathcal{M}(h(x),q(x))&\text{if}\;|x|\leq\rho,\\ q(x)&\text{if}\;|x|>\rho.\end{cases}\] It follows from (17) that \(h(x)>q(x)+1\) if \(|x|\leq r\), so by (15), we have \(H(x)=\mathcal{M}(h(x),q(x))=h(x)\) if \(|x|\leq r\) and the condition (16) is satisfied. It follows from (18) that there is \(\varepsilon>0\) such that \(q(x)>h(x)+1\) if \(\rho\leq|x|\leq\rho+\varepsilon\) and hence by (15), \(\mathcal{M}(h(x),q(x))=q(x)\) when \(\rho\leq|x|\leq\rho+\varepsilon\). Therefore, the convex functions \(q(x)\in C^{1,1}(\mathbb{R}^{n})\) and \(\mathcal{M}(h(x),q(x))\in C^{1,1}_{\rm loc}(B^{n}(0,R))\) coincide in the annulus \(\rho\leq|x|\leq\rho+\varepsilon\) and hence \(H\) is convex in \(\mathbb{R}^{n}\) with \(H\in C^{1,1}_{\rm loc}(\mathbb{R}^{n})\). Since \(H=q\in C^{1,1}\) outside the compact ball \(\overline{B}^{n}(0,\rho)\), it follows that \(H\in C^{1,1}(\mathbb{R}^{n})\) Proof of Theorem 1.4.: Let \(R>0\) be such that \(|A\setminus\overline{B}^{n}(0,R)|<\varepsilon/2\). According to Lemma 3.8 there is a convex function \(\widetilde{g}\in C^{1,1}(B^{n}(0,2R))\) such that \[|\{x\in B^{n}(0,2R):\,f(x)\neq\widetilde{g}(x)\}|<\frac{\varepsilon}{2}.\] Now, Proposition 6.2 yields a convex function \(g\in C^{1,1}(\mathbb{R}^{n})\) such that \(g(x)=\widetilde{g}(x)\) for \(x\in\overline{B}^{n}(0,R)\) and we have \[|\{x\in A:\,f(x)\neq g(x)\}|\leq|A\setminus\overline{B}^{n}(0,R)|+|\{x\in B^{n }(0,R):\,f(x)\neq\widetilde{g}(x)\}|<\varepsilon.\] ## 7. Proofs of Theorems 1.6, 1.7 and 1.9 This section is less self-contained than the others. The "if" part of Theorem 1.6 is easy and we will not show it here; see [4, Proposition 1.10 and Theorem 2.5]. The "only if" part of Theorem 1.6 is equivalent to the following result. **Theorem 7.1**.: _Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a convex function such that \(\lim_{|x|\to\infty}f(x)=+\infty\). Then for every \(\varepsilon>0\) there exists a convex function \(g:\mathbb{R}^{n}\to\mathbb{R}\) of class \(C^{1,1}_{\mathrm{loc}}(\mathbb{R}^{n})\) such that \(g\geq f\) and \(|\{x\in\mathbb{R}^{n}:f(x)\neq g(x)\}|<\varepsilon\)._ Next, we give a proof of Theorem 7.1 that greatly simplifies the one provided by [4]. Its main ingredients are Lemma 3.8 above and the following lemma (whose elementary proof can be found in [3, Lemma 5.3], which in turn is a refinement of the result of [9]). 
**Lemma 7.2**.: _Let \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) be a continuous function such that \(\lim_{|x|\to\infty}\varphi(x)=+\infty\) and such that for every \(R>0\) there exists \(C_{R}>0\) so that for every \(x,h\in B^{n}(0,R)\) we have_ \[\varphi(x+h)+\varphi(x-h)-2\varphi(x)\leq C_{R}|h|^{2}.\] _Then the function \(F=\mathrm{conv}(\varphi)\) has a similar property: for every \(R>0\) there exists \(C^{\prime}_{R}>0\) such that for every \(x,h\in B^{n}(0,R)\) we have_ \[F(x+h)+F(x-h)-2F(x)\leq C^{\prime}_{R}|h|^{2}.\] _Therefore \(F\in C^{1,1}_{\mathrm{loc}}(\mathbb{R}^{n})\)._ Here \(\mathrm{conv}(\varphi)\) denotes the convex envelope of \(\varphi\), defined as the supremum of all convex functions less than or equal to \(\varphi\). Proof of Theorem 7.1.: By Lemma 3.8, for every \(k\in\mathbb{N}\) we can find a convex function \(g_{k}\in C^{1,1}(B^{n}(0,2k))\) such that \(f\leq g_{k}\) and \[|\{x\in B^{n}(0,2k):\,f(x)\neq g_{k}(x)\}|<\varepsilon/2^{k}.\] For every \(k\in\mathbb{N}\), let \(\theta_{k}:(k-2,k+1)\to[0,\infty)\) be a \(C^{\infty}\) convex function such that: 1. \(\theta_{k}(t)=0\) iff \(k-1\leq t\leq k\); 2. \(\lim_{t\to(k-2)^{+}}\theta_{k}(t)=+\infty\), and 3. \(\lim_{t\to(k+1)^{-}}\theta_{k}(t)=+\infty\). Define \(\varphi_{k}:\mathbb{R}^{n}\to(-\infty,+\infty]\) by \[\varphi_{k}(x)=g_{k}(x)+\theta_{k}(|x|)\text{ if }k-2<|x|<k+1,\text{ and } \varphi_{k}(x)=+\infty\text{ otherwise.}\] When \(n\geq 2\) and \(k\geq 2\), the function \(\varphi_{k}\) is not convex, but we do not need it to be. Note that \(\varphi_{k}(x)=g_{k}(x)\) on the annulus \(A_{k}:=\{x:k-1\leq|x|\leq k\}\) (or ball in the special case \(k=1\)), and consider \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) defined by \(\varphi(x)=\inf_{k\in\mathbb{N}}\varphi_{k}(x).\) It is clear that \[f\leq\varphi\text{ on }\mathbb{R}^{n},\text{ and }\varphi\leq g_{k}\text{ on }A_{k},\text{ for each }k\in\mathbb{N},\] and in particular \(\lim_{|x|\to\infty}\varphi(x)=+\infty\), though \(\varphi\) is finite everywhere. As a matter of fact, it is easily seen that \(\varphi\) is locally the minimum of at most three continuous functions, and therefore it is continuous on \(\mathbb{R}^{n}\). More precisely, for each \(k\in\mathbb{N}\) we have that \[\varphi(x)=\min\{\varphi_{k-1}(x),\varphi_{k}(x),\varphi_{k+1}(x)\}\text{ for every }x\in A_{k}.\] Moreover, since \(\lim_{|x|\to k^{-}}\varphi_{k-1}(x)=+\infty=\lim_{|x|\to k^{+}}\varphi_{k+2}(x)\) and \(\varphi_{k}\) and \(\varphi_{k+1}\) are bounded and \(C^{1,1}\) on a neighborhood of the sphere \(\{x:|x|=k\}\), there exist some \(M_{k},\delta_{k}>0\) such that \(\varphi(x)=\min\{\varphi_{k}(x),\varphi_{k+1}(x)\}\) and \(\varphi_{j}(x+h)+\varphi_{j}(x-h)-2\varphi_{j}(x)\leq M_{k}|h|^{2}\) for all \(k-\delta_{k}\leq|x|\leq k+\delta_{k}\), \(|h|\leq\delta_{k}\), and \(j=k,k+1\). These inequalities easily follow from (4). This implies that \[\varphi(x+h)+\varphi(x-h)-2\varphi(x)\leq M_{k}|h|^{2}\] for all \(k-\delta_{k}\leq|x|\leq k+\delta_{k}\), and \(|h|\leq\delta_{k}\). 
Similarly, there exist \(M^{\prime}_{k},\delta^{\prime}_{k}>0\) such that \(\varphi(x)=\min\{\varphi_{k-1}(x),\varphi_{k}(x),\varphi_{k+1}(x)\}\) and \(\varphi_{j}(x+h)+\varphi_{j}(x-h)-2\varphi_{j}(x)\leq M^{\prime}_{k}|h|^{2}\) for all \(k-1+\delta_{k-1}-\delta^{\prime}_{k}\leq|x|\leq k-\delta_{k}+\delta^{\prime}_{ k}\), \(|h|\leq\delta^{\prime}_{k}\), and \(j=k-1,k,k+1\), implying that \[\varphi(x+h)+\varphi(x-h)-2\varphi(x)\leq M^{\prime}_{k}|h|^{2}\] for all \(k-1+\delta_{k-1}-\delta^{\prime}_{k}\leq|x|\leq k-\delta_{k}+\delta^{\prime}_ {k}\), and \(|h|\leq\delta^{\prime}_{k}\). Since every ball is contained in a finite union of sets \(A_{k}\), these estimates imply that for every \(R>0\) there exist \(C_{R}>0\) and \(\delta_{R}>0\) so that for every \(x\in B^{n}(0,R)\) and \(|h|<\delta_{R}\) we have \[E_{h}(x):=\varphi(x+h)+\varphi(x-h)-2\varphi(x)\leq C_{R}|h|^{2}. \tag{19}\] On the other hand, for \(\delta_{R}\leq|h|\leq R\), we obviously have \(E_{h}(x)\leq 4M\leq\widetilde{C_{R}}|h|^{2}\), where \(M:=\sup_{z\in B(0,2R)}\varphi(z)\) and \(\widetilde{C_{R}}=4M/\delta_{R}^{2}\). So by replacing \(C_{R}\) with \(\max\{C_{R},\widetilde{C_{R}}\}\) we certainly have \[\varphi(x+h)+\varphi(x-h)-2\varphi(x)\leq C_{R}|h|^{2}\quad\text{for all }x,h\in B^{n}(0,R). \tag{20}\] Therefore, (20) and Lemma 7.2, imply that the function \(g:=\operatorname{conv}(\varphi)\) is of class \(C^{1,1}_{\operatorname{loc}}\) (and it obviously satisfies \(f\leq g\leq\varphi\)). Since \(|\{x\in B^{n}(0,2k):\,f(x)\neq g_{k}(x)\}|<\varepsilon/2^{k}\) and \(f\leq g\leq\varphi\leq g_{k}\) on \(A_{k}\), it follows that \(|\{x\in A_{k}:\,f(x)\neq g(x)\}|<\varepsilon/2^{k}\) for every \(k\in\mathbb{N}\), which implies that \(|\{x\in\mathbb{R}^{n}:f(x)\neq g(x)\}|\leq\varepsilon\). Proof of Theorem 1.7.: For the proof of the implication (2)\(\Rightarrow\)(1), see [4, Corollary 1.13]. Regarding the implication (1)\(\Rightarrow\)(2), the same proof as in [4, Corollary 1.13] gives us a (possibly unbounded) convex body \(W_{\varepsilon}\) with boundary \(S_{\varepsilon}\) such that \(W_{\varepsilon}=\frac{1}{t_{0}}g^{-1}(-\infty,t_{0}]\) for some \(t_{0}\in(1,2)\) and some convex function \(g\in C^{1,1}_{\operatorname{loc}}(\mathbb{R}^{n})\) such that \(\mathcal{H}^{n-1}(S\setminus S_{\varepsilon})<\varepsilon/2\), \(S=\partial W\), and \(\mu\leq g\), where \(\mu\) is the Minkowski functional of \(W\), hence \(W_{\varepsilon}\subset W\). Now it suffices to show that \(\pi_{W_{\varepsilon}}(S)=S_{\varepsilon}\),1 because this fact and Lemma 3.1 will imply \(\mathcal{H}^{n-1}(S_{\varepsilon}\setminus S)\leq\mathcal{H}^{n-1}(S\setminus S_ {\varepsilon})\), and hence \(\mathcal{H}^{n-1}\left(S_{\varepsilon}\triangle S\right)<\varepsilon\). Footnote 1: This is very easy if \(W\) is bounded and we used this fact in the proof of Theorem 1.5. Therefore, it remains to show that if \(x\in S_{\varepsilon}\), then there is \(z\in S\) such that \(\pi_{W_{\varepsilon}}(z)=x\). Let \(\nu(x)\) be the unit outward normal to \(S_{\varepsilon}\) at \(x\). It suffices to show that the ray \(R_{x}:=\{x+t\nu(x):\,t\geq 0\}\) intersects \(S\) at some point \(z\), because clearly, \(\pi_{W_{\varepsilon}}(z)=x\). Suppose to the contrary that \(R_{x}\) does not intersect with \(S\) i.e. \(R_{x}\subset\operatorname{int}W\). 
The tangent hyperplane to \(S_{\varepsilon}\) at \(x\) is defined by \(T_{x}:=\{x+v:\langle v,\nu(x)\rangle=0\}\) and clearly \(S_{\varepsilon}\cap F_{x}=\varnothing\), where \(F_{x}:=\{x+v:\langle v,\nu(x)\rangle>0\}\) is an open half-space bounded by \(T_{x}\). Since \(x\in\operatorname{int}W\), there is \(\delta>0\) such that \(D_{2\delta}\subset\operatorname{int}W\), where \(D_{2\delta}:=\{x+v\in T_{x}:\,|v|<2\delta\}\) is the ball in \(T_{x}\) centered at \(x\) and of radius \(2\delta\). Since \(D_{2\delta}\cup R_{x}\subset\operatorname{int}W\), it follows from the convexity of \(W\) that \(C_{x}\subset\operatorname{int}W\), where \(C_{x}:=\{p+t\nu(x):\,p\in\partial D_{\delta},\ t>0\}\) is the lateral surface of a half-cylinder. Since \(S\) does not contain any line, it follows that \(W\) does not contain any line (cf. the argument at the beginning of the proof of Theorem 1.9). Therefore, for any \(p\in R_{x}\), and any unit vector \(v\) parallel to \(T_{x}\), the lines \(L_{p,v}:=\{p+tv:\,t\in\mathbb{R}\}\) must intersect \(S\) in at least one point. Denote the set of all such points in \(S\) by \(A\). Since \(A\subset F_{x}\) and \(F_{x}\cap S_{\varepsilon}=\varnothing\), we have that \(A\subset S\setminus S_{\varepsilon}\). It is easy to see that \(\mathcal{H}^{n-1}(A)=\infty\) and hence \(\mathcal{H}^{n-1}(S\setminus S_{\varepsilon})=\infty\), which is a contradiction. To show that \(\mathcal{H}^{n-1}(A)=\infty\), note that the radial projection \(\pi\) of \(A\) onto \(C_{x}\) along the lines \(L_{p,v}\) is \(1\)-Lipschitz and hence \(\mathcal{H}^{n-1}(A)\geq\mathcal{H}^{n-1}(\pi(A))\). Now, for each pair of antipodal points in each sphere of radius \(\delta\) in \(C_{x}\) that is parallel to \(T_{x}\), at least one belongs to \(\pi(A)\). The mapping \(\Phi:C_{x}\to C_{x}\) that maps points in \(C_{x}\) to antipodal points is an isometry of \(C_{x}\). Hence \(\mathcal{H}^{n-1}(\pi(A))=\mathcal{H}^{n-1}(\Phi(\pi(A)))\). Therefore, \[2\mathcal{H}^{n-1}(A)\geq 2\mathcal{H}^{n-1}(\pi(A))\geq\mathcal{H}^{n-1}(\pi(A)\cup\Phi(\pi(A)))=\mathcal{H}^{n-1}(C_{x})=\infty.\] Proof of Theorem 1.9.: If an unbounded convex body \(V\) contains a line \(L\), then since \(V\) is closed, it is easy to see that \(V\) contains all lines parallel to \(L\) that intersect with \(V\). In particular \(\partial V\) is the union of lines parallel to \(L\). (2)\(\Rightarrow\)(1). Let us define \(V_{f}\) as the closure of the epigraph of \(f\); then \(V_{f}\) is an unbounded convex body in \(\mathbb{R}^{n+1}\). Since the graph of \(f\) does not contain a line, \(\partial V_{f}\) does not contain any line and we may apply Theorem 1.7 and Remark 1.8 to find a \(C_{\mathrm{loc}}^{1,1}\) convex body \(W\subseteq V_{f}\) such that \(\mathcal{H}^{n}(\partial V_{f}\setminus\partial W)<\varepsilon\). Since the projection \(\pi:\mathbb{R}^{n}\times\mathbb{R}\to\mathbb{R}^{n}\) is \(1\)-Lipschitz, it follows that \(g(x):=\inf\{y:(x,y)\in W\}\) is a \(C_{\mathrm{loc}}^{1,1}\) convex function such that \(|\{x\in U:g(x)\neq f(x)\}|<\varepsilon\). Note that \(g\) is finite on all of \(U\). Indeed, let \(A:=\{x\in U:g(x)<\infty\}\); we understand that \(g(x)=\infty\) if the line \(\{(x,t):t\in\mathbb{R}\}\) does not intersect \(W\). The set \(A\) is convex because \(W\) is convex.
If \(A\neq U\) then, for some \(x_{0}\in U\), some number \(c\) and some linear function \(\ell:\mathbb{R}^{n}\to\mathbb{R}\), we have \(\ell(x_{0})>c\geq\ell(x)\) for all \(x\in A\), implying that for all \(x\in U\cap\ell^{-1}(c,\infty)\) the vertical line \(\{(x,t):t\in\mathbb{R}\}\) does not intersect \(W\). But it is easy to see that the set \(\partial V_{f}\cap\{(x,t):x\in\overline{U}\cap\ell^{-1}(c,\infty)\}\) has infinite \(n\)-dimensional Hausdorff measure; therefore it must intersect \(\partial W\). Hence \(g(x)=f(x)<\infty\) for some \(x\in U\cap\ell^{-1}(c,\infty)\), a contradiction. (1)\(\Rightarrow\)(2). Suppose to the contrary that \(f\) satisfies (1) and that the graph of \(f\) contains a line \(L\). Then, as we observed above, the graph of \(f\) is the union of lines parallel to \(L\). Thus, \(U\) is the union of lines parallel to \(\pi(L)\) and clearly, \(f\) is affine on each such line. Now, an argument similar to the proof of [4, Proposition 1.10] yields that if \(g:U\to\mathbb{R}\) is a convex function such that \(|\{x:f(x)\neq g(x)\}|<\infty\), then \(g=f\). Hence \(g\not\in C^{1,1}_{\rm loc}\), because \(f\not\in C^{1,1}_{\rm loc}\), and we arrive at a contradiction with (1).
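As a concrete illustration of what this excludes (a standard example, not taken from the text above), consider \(f(x_{1},x_{2})=|x_{1}|\) on \(U=\mathbb{R}^{2}\). The graph of \(f\) contains the line \(\{(0,t,0):\,t\in\mathbb{R}\}\), and \(f\) is affine on every line parallel to the \(x_{2}\)-axis. By the argument just given, any convex \(g:\mathbb{R}^{2}\to\mathbb{R}\) with \(|\{f\neq g\}|<\infty\) must coincide with \(f\); since \(f\not\in C^{1,1}_{\rm loc}\) (it fails to be differentiable on the line \(x_{1}=0\)), no approximation in the sense of (1) is possible for this \(f\).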
2307.14396
Biological Modelling with Nonlocal Advection Diffusion Equations
The employment of nonlocal PDE models to describe biological aggregation and other phenomena has gained considerable traction in recent years. For cell populations, these methods grant a means of accommodating essential elements such as cell adhesion, critical to the development and structure of tissues. For animals, they can be used to describe how the nearby presence of conspecifics and/or heterospecifics influence movement behaviour. In this review, we will focus on classes of biological movement models in which the advective (or directed) component to motion is governed by an integral term that accounts for how the surrounding distribution(s) of the population(s) impact on a member's movement. We recount the fundamental motivation for these models: the intrinsic capacity of cell populations to self-organise and spatially sort within tissues; the wide-ranging tendency of animals towards spatial structuring, from the formations of herds and swarms to territorial segregation. We examine the derivation of these models from an individual level, illustrating in the process methods that allow models to be connected to data. We explore a growing analytical literature, including methods of stability and bifurcation analysis, and existence results. We conclude with a short section that lays out some future challenges and connections to the modelling of sociological phenomena including opinion dynamics.
Kevin J Painter, Thomas Hillen, Jonathan R Potts
2023-07-26T10:39:20Z
http://arxiv.org/abs/2307.14396v1
# Biological Modelling with Nonlocal Advection Diffusion Equations ###### Abstract The employment of nonlocal PDE models to describe biological aggregation and other phenomena has gained considerable traction in recent years. For cell populations, these methods grant a means of accommodating essential elements such as cell adhesion, critical to the development and structure of tissues. For animals, they can be used to describe how the nearby presence of conspecifics and/or heterospecifics influence movement behaviour. In this review, we will focus on classes of biological movement models in which the advective (or directed) component to motion is governed by an integral term that accounts for how the surrounding distribution(s) of the population(s) impact on a member's movement. We recount the fundamental motivation for these models: the intrinsic capacity of cell populations to self-organise and spatially sort within tissues; the wide-ranging tendency of animals towards spatial structuring, from the formations of herds and swarms to territorial segregation. We examine the derivation of these models from an individual level, illustrating in the process methods that allow models to be connected to data. We explore a growing analytical literature, including methods of stability and bifurcation analysis, and existence results. We conclude with a short section that lays out some future challenges and connections to the modelling of sociological phenomena including opinion dynamics. **Keywords**: Nonlocal PDEs; Interacting Particles; Aggregation, Flocking and Swarming; Sorting; Territory formation ## 1 Introduction A flamboyance of flamingos, a shiver of sharks, a confusion of wildebeest; hundreds of collective nouns have been assigned to define the groups formed by different species. The need for these collective nouns reflects the frequency with which animal groups form across the natural world, from the gathering of a small number of individuals to billions-strong swarms of locusts [181] or a herring shoal that stretches across kilometres [131]. An ability to aggregate is a phenomenon that extends down to the microscopic level, where various bacteria [29, 30] and microorganisms [25] have been observed to organise into aggregates under certain conditions. In the context of our own cells, their capacity to bind and organise is key for the development of many tissues and organs, or their repair following injury. An essential element in the formation of many groups is the triggering of a movement-based response in an individual, according to signals and behaviours of other members. Directly, a cell may touch another cell and pass information through specialised molecules at the cell surface, or a bird may alter its flight path according to the trajectory of a neighbour. Indirectly, cells may alter motility according to a molecular signal deposited by another cell and animals may respond to territorial scent markings of conspecifics. The cumulative effect of these individual-level behaviours can result in self-organisation at the population scale, for example the rounding up of an initially dispersed population into an aggregate or the adoption of some swarm configuration. Scientific interest in self-organising phenomena has a long history, and the field forms a pillar of mathematical biology [149]. Naturally, much of the modelling within this field is indebted to the remarkable work [203] of Alan Turing through his reaction-diffusion model, proposed to explain how morphogenesis could occur.
Turing's model involved only molecular components, and showed how an interplay between reaction and diffusion could break the symmetry of a spatially uniform distribution by amplifying natural stochastic fluctuations into an ordered and patterned state. This not only offered a plausible chemical blueprint for how a tissue could become patterned, but also a mathematical blueprint for determining whether self-organisation can occur in some system. Inspired by the aggregation mounds formed from starving _Dictyostelium discoideum_ cells - the initiating step during a multicellular transformation that serves as a paradigm of self-organisation at the microscopic scale [25] - the celebrated chemotaxis model of Keller and Segel [116] followed Turing's template to illustrate how a system that includes an actively migrating population could also undergo self-organisation. It shows that the positive feedback loop of chemotaxis to a self-secreted attractant could lead to mound formation. Continuous biological movement models are often formulated as an advection-diffusion equation [149], i.e. \[\partial_{t}u(\mathbf{x},t)=\nabla\cdot\left[D\nabla u(\mathbf{x},t)-\mathbf{a}u(\mathbf{x},t)\right]\,, \tag{1.1}\] where \(u(\mathbf{x},t)\) represents the density of some population at position \(\mathbf{x}\in\Omega\subset\mathbb{R}^{n}\) and time \(t\in[0,\infty)\). \(D\) measures the diffusive (undirected) component to movement, while \(\mathbf{a}\) is an \(n\)-dimensional vector that measures the advective (directed) component to movement. Generally, diffusion may be an \(n\times n\) diffusion tensor matrix, e.g. describing some anisotropic spread due to the environment [100]; however, here we will generally take an isotropic diffusion represented by a scalar coefficient \(d\), so that \(D=dI_{n}\) where \(I_{n}\) is the \(n\times n\) identity matrix. The region \(\Omega\) defines the space in which the population moves: this could range from a line if movement is effectively constrained to a one-dimensional geometry (\(n=1\), e.g. cell movement along nano-engineered channels), a two-dimensional surface (\(n=2\), e.g. animal movement across a landscape), to a three-dimensional volume (\(n=3\)). If \(\Omega\) is a bounded domain, then the above model (1.1) will be equipped with appropriate boundary conditions. For the chemotaxis model of Keller and Segel [116], interactions between individuals are indirect: the individual senses (and moves in response to) another individual through following the local gradient of an attractant secreted by the population. As such, the advective velocity is taken to be proportional to the chemoattractant gradient, i.e. \(\mathbf{a}\propto\nabla v\), where \(v\) is the attractant. In other instances of group formation, however, interactions are direct: molecular binding between receptors on adjacent cell surfaces can lead to cells pulling themselves together (adhesion or attraction) or moving away from each other (repulsion); animals may also be drawn to each other or move away following a visual sighting of conspecifics. In all such instances, the interaction range becomes a crucial point for consideration: in the case of cells, this could be the range over which a cell can contact a neighbouring cell through touch, or, for animals, the range over which the perception of conspecifics influences its movement behaviour. Given the existence of an interaction range, an individual has the potential to sense multiple neighbours simultaneously.
It is natural, therefore, to suppose that the movement will be based on some integrated response, i.e. according to the distribution of a population (or populations) across its interaction range. Such considerations have led to the increasing adoption of nonlocal PDE formulations [50]. The focus of attention in the present review will be on models in which the nonlocality appears within the advective term, which is calculated according to an integral that measures the influence of the surrounding population on movement. Specifically, we consider the following pair of non-local models, \[\partial_{t}u =d\Delta u-\mu\nabla\cdot\left[u\,\mathbf{k}_{R}*f\right],\qquad\mathbf{k}_{R}*f\left(\mathbf{x},t\right)=\int_{\Omega}\mathbf{k}_{R}(\mathbf{x},\mathbf{y})f(u(\mathbf{y},t))d\mathbf{y}\,, \tag{1.2a}\] \[\partial_{t}u =d\Delta u-\nu\nabla\cdot\left[u\nabla(w_{R}*g)\right],\qquad w_{R}*g\left(\mathbf{x},t\right)=\int_{\Omega}w_{R}(\mathbf{x},\mathbf{y})g(u(\mathbf{y},t))d\mathbf{y}\,. \tag{1.2b}\] Motivation for these two model forms can be found through a purely phenomenological argument or by applying a more physical-based reasoning. Consider first the formulation (1.2a) and its phenomenological motivation (see top row of Figure 1). Here, the nonlocal advection term is founded on the principle that the population at position \(\mathbf{y}\) influences the movements of those at \(\mathbf{x}\). The induced direction of movement and its magnitude depend on the product of a (vector-valued) function \(\mathbf{k}_{R}(\mathbf{x},\mathbf{y})\) and a (scalar-valued) function \(f(u(\mathbf{y},t))\). Specifically, \(\mathbf{k}_{R}(\mathbf{x},\mathbf{y})\) specifies a dependence on the distance of \(\mathbf{y}\) to \(\mathbf{x}\) and it identifies the direction of interaction. The function \(f(u(\mathbf{y},t))\) defines the dependence on the population size at \(\mathbf{y}\). The integral kernel \(\mathbf{k}_{R}\) is parametrised according to a sampling radius \(R\), representing the interaction range. Net movement results from integrating over all possible positions, and this directly informs the advective velocity at \(\mathbf{x}\). The parameters \(d\in\mathbb{R}^{+}\) and \(\mu\in\mathbb{R}\) describe diffusion and advection coefficients, respectively. The phenomenological motivation for (1.2b) follows a similar reasoning (see bottom row of Figure 1), although the function \(w_{R}(\mathbf{x},\mathbf{y})\) is now scalar-valued, as is the integrated quantity \(w_{R}*g\). This formulation can be interpreted analogously to the taxis-like model, with the population moving according to the gradient of a nonlocal measure of the population; for example, this could be a nonlocally-averaged density distribution. Again, the parameters \(d\in\mathbb{R}^{+}\) and \(\nu\in\mathbb{R}\) represent diffusion and advection coefficients, respectively. A physical reasoning for (1.2a) and (1.2b) follows the consideration of forces and energies; this interpretation takes on particular resonance in the context of cell migration, where translocation of a cell's body stems from forces exerted as it attaches to other cells and the substrate. Model (1.2a) can be derived through a balance between adhesive and repulsive forces that act at the cell surface (e.g. see [37]): interactions between cells centred at \(\mathbf{x}\) and \(\mathbf{y}\) generate local forces, with the net force according to the integral \(\mathbf{k}_{R}*f\).
This quantity hence describes a force density, with units of \(N/m\), and the coefficient \(\mu\) has units of \((sN)^{-1}\). The corresponding term for (1.2b) is \(\nabla(w_{R}*g)\), and \(w_{R}*g\) will carry the units of an energy density (\(J/m\)). Viewed in this light, the advection according to \(\nabla(w_{R}*u)\) defines a movement according to an energy gradient. The summary review of [41] describes the derivations of models (1.2b) according to energy principles. If the underlying principle is a process of energy minimisation (i.e. down the energy gradient) then the parameter \(\nu<0\) and, conventionally, the form (1.2b) is written with the sign of the advection term reversed, i.e. \[\partial_{t}u=d\Delta u+\gamma\nabla\cdot\left[u\nabla(w_{R}*g)\right]\,,\qquad w_{R}*g\left(\mathbf{x},t\right)=\int_{\Omega}w_{R}(\mathbf{x},\mathbf{y})g(u(\mathbf{y},t))d\mathbf{y} \tag{1.3}\] where \(\gamma>0\) indicates energy minimisation. Note that we will adopt this convention in particular in Sections 5.2-5.3, where energy-based analytical methods are utilised.

Figure 1: Illustration of the models (1.2) as formulated to describe grouping or herding, i.e. a tendency to move towards and aggregate at areas of higher population density. (Top row) For (1.2a) each individual within the interaction region (dotted circles) generates a local ‘force’ of attraction (top left); the number and direction will be different according to each individual’s position (points a,b,c). Integrating over the interaction region leads to a net movement, the strength and direction varying with position (top middle). Overall, this generates an advective field that directs population level movement (top right). (Bottom row) For (1.2b) an individual measures the (nonlocal) population density, e.g. by assessing the number of neighbours within the interaction region; at distinct positions (a,b,c), different numbers of neighbours will be detected (bottom left). Across space, this creates a population distribution map (bottom middle). The advective field for the population is according to the gradient of this distribution map (bottom right), e.g. in the direction of increasing gradient to describe a herding phenomenon. The advection fields generated through these two formulations have a similar form.

Since energy differences lead to forces, a natural connection between these model forms is laid bare. A note of caution, though, must be applied when applying physical reasoning to biological particles such as animals or cells: attraction between conspecifics or avoidance of predators are measurable behaviours, but they cannot be directly related to a physical force or energy; similarly, a cell is a highly complex structure and its behaviour is not necessarily determined by the need to minimise energy.
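To make the parallel between the two advective fields concrete, the following minimal sketch evaluates 1-D analogues of \(\mathbf{k}_{R}*f\) from (1.2a) and \(\nabla(w_{R}*g)\) from (1.2b) for a sample density profile. The top-hat kernels, the linear choices \(f(u)=g(u)=u\), the periodic domain and all numerical values are our own illustrative assumptions, not prescriptions of the models above.

```python
import numpy as np

# Compare the two nonlocal advection fields of (1.2a) and (1.2b) on a periodic
# 1-D grid. Assumed (illustrative) ingredients: k_R(x, x+r) = sgn(r) for
# |r| <= R, w_R(x, x+r) = 1 for |r| <= R, and f(u) = g(u) = u.
L, N, R = 10.0, 400, 1.0
dx = L / N
x = np.arange(N) * dx
u = 1.0 + 0.5 * np.exp(-((x - L / 2) ** 2))            # sample density profile
m = int(round(R / dx))
shifts = np.arange(-m, m + 1)                           # grid offsets with |r| <= R

# (1.2a): k_R * f, a direct sum of directed contributions from |r| <= R
field_a = sum(np.sign(k) * dx * np.roll(u, -k) for k in shifts)

# (1.2b): first the nonlocal average w_R * g, then its centred gradient
w_conv = sum(dx * np.roll(u, -k) for k in shifts)
field_b = (np.roll(w_conv, -1) - np.roll(w_conv, 1)) / (2 * dx)

# For smooth bump-like profiles the two fields are strongly correlated.
print(np.corrcoef(field_a, field_b)[0, 1])
```

For this profile the two fields are closely aligned (they differ roughly by a scale factor), a 1-D counterpart of the observation in Figure 1 that the two formulations generate advection fields of similar form.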
However, it is important to note that the formulations (1.2) are less restrictive and can be used to model other forms of interaction, such as repulsive interactions that could lead to an enhanced dispersal. Moreover, the form of these models can be extended to describe heterogeneous populations where the interactions between different populations can be distinct (e.g. see [8, 148, 157, 173]) or incorporated within more complicated models and applied to explain specific phenomena, such as cancer invasion for cellular systems (e.g. see [80, 156, 63]) or dynamics of locust swarms in ecological systems (see [198, 78]). A multi-species generalisation of each of the models (1.2a-1.2b) can easily be formed by extending to \(\mathbf{u}(\mathbf{x},t)=(u_{1}(\mathbf{x},t),\ldots,u_{p}(\mathbf{x},t))\), where \(u_{i}\) denotes the density distribution of the \(i^{th}\) out of \(p\) populations, and considering the systems \[\partial_{t}u_{i}=d_{i}\Delta u_{i}-\sum_{j=1}^{p}\mu_{ij}\nabla \cdot[u_{i}\mathbf{k}_{ij}*f_{ij}] \tag{1.4a}\] \[\mathbf{k}_{ij}*f_{ij}=\int_{\Omega}\mathbf{k}_{ij}(\mathbf{x}, \mathbf{y})f_{ij}(\mathbf{u}(\mathbf{y},t))d\mathbf{y}\quad i=1\ldots p\,,\] \[\partial_{t}u_{i}=d_{i}\Delta u_{i}-\sum_{j=1}^{p}\nu_{ij}\nabla \cdot[u_{i}\nabla(w_{ij}*g_{ij})]\] (1.4b) \[w_{ij}*g_{ij}=\int_{\Omega}w_{ij}(\mathbf{x},\mathbf{y})g_{ij}( \mathbf{u}(\mathbf{y},t))d\mathbf{y}\quad i=1\ldots p\,.\] In model (1.4a) directed movement is now the combined result of \(N\) movement-inducing interactions, where \(\mathbf{k}_{ij}*f_{ij}\) is the nonlocal advection coefficient that defines the movement induced on members of population \(i\) due to interactions with population \(j\): \(\mathbf{k}_{ij}(\mathbf{x},\mathbf{y})\) and \(f_{ij}(\mathbf{u}(\mathbf{y},t))\) are analogous to the functions described above, and parameters \(R_{ij}\), \(d_{i}\), and \(\mu_{ij}\) define the interaction range, diffusion coefficient and advection coefficients, respectively. Note that the \(\mu_{ij}\)'s may be positive or negative, to model inter-species [173] or inter-cellular [157] attraction or repulsion, respectively. Analogous reasoning can be applied to the form (1.4b). In this article we review the increased employment of nonlocal systems of the above form within biological modelling1. In Sections 2 and 3 we outline our motivating biological systems, namely cellular adhesion and other cell-based interactions (Section 2) and ecological interactions between animals (Section 3). We describe the key biology and previous modelling that has motivated models of the form (1.2a) and (1.2b) or their multiple species extensions. In Section 4 we explore the derivations of these models from a microscopic perspective, in particular focussing on cellular adhesion. In Section 5 we consider some of the analysis used to understand these models, including linear stability analysis, bifurcation analysis and global existence. We conclude with some key challenges and future perspectives for the field. ## 2 Nonlocal models for cellular systems ### Adhesion and other cell interactions Cell adhesion is the fundamental mechanism by which a cell attaches to and interacts with its surroundings[3]. Adhesions form through specialised cell surface receptors; their binding across adjacent membranes not only attaches cells together, but also triggers a range of processes from proliferation to migration. 
Figure 2: (a) Cell-cell adhesion naturally leads to accretion, with cells attaching on contact and forming a cluster or aggregation. (b) Sorting dynamics in adhesive populations, as predicted by the DAH. In a mixture of two distinct cell populations, three principal parameters can be identified: two self-adhesion strengths (\(S_{u}\), \(S_{v}\)) and one cross-adhesion strength (\(C\)). The DAH predicts that different arrangements will arise according to the relationship between these parameters: for example, in a mixture of cells in which \(S_{u}>C>S_{v}\), the \(u\) population (red) becomes encapsulated by the \(v\) (blue) population. (c) CPM simulation (implemented via Compucell3D) showing encapsulation for a parameter setting in which adhesive interactions satisfy the aforementioned relationship.

Of the various families of adhesion molecules, cadherins play a particularly prominent role within cell-cell adhesion processes (e.g. see [192]): E-cadherins, for example, form tight adhesive junctions between epithelial cell types; N-cadherins are more commonly associated with transient adhesive interactions between motile mesenchymal cells. Adhesion is critical for the organisation and maintenance of tissue structure. Naturally, cell-cell adhesion can lead to an accretion process, whereby contact between cells leads to attachment and the formation of a clustered population, Figure 2(a). Moreover, classic experiments indicate a role for adhesion in regulating the spatial organisation of different populations within a tissue [199]. In the differential adhesion hypothesis (DAH) [185, 201] cell sorting is suggested to result from distinct cell surface tensions, deriving in turn from the strength of adhesive interactions. The precise relationship leads to different configurations, see Figure 2(b), and experiments [76] for cell lines that express different levels of cadherins are consistent with this theory. More recently, measurements of the forces within adhesive aggregates [5, 201] have resulted in revision of the DAH to the differential interfacial tension hypothesis (DITH [28]): cell cortical contraction machinery and cell-cell adhesion combine to regulate interfacial tension, and sorting results from rearrangements that lead to a tissue-level minimisation of interfacial tension. Nevertheless, adhesion remains the driving force within the sorting and arrangement of tissues. Cell-to-cell contacts, though, can also trigger repulsion. For example, contact inhibition of locomotion (CIL) [1] forms a contact-mediated response which not only leads to cessation of cell motion, but also repolarisation and reversal of the direction of motion [40]. Cell-to-cell contacts can also lead to asymmetric responses, where the two cells display contrasting responses. One such example arises in the pigmentation of zebrafish, where interacting xanthophores and melanophores engage in a chase and run [107, 218] interaction, contact between them resulting in the melanophore moving away from the pursuing xanthophore. Other instances of contact-mediated responses that can range from attraction to repulsion include those triggered through Eph/Ephrin interactions [35] or the chase and run dynamics observed in cultures of neural crest and placode cells [195]. A complex set of migration responses that follow direct contacts have been observed among cells of the immune system, impacting on a range of processes that include inflammation and tumour progression [141].
Biological cells are small, with average diameters of the order of ten microns, and contact-based interactions occur at a similarly local level. However, contacts can also be formed at considerably greater distances than the mean cell diameter. First, the cell bodies can be highly deformable, where frequent protrusions of the membrane - pseudopodia [51] - locally extend parts of the membrane far beyond the average diameter. Second, a diversity of more specialised membrane protrusions have been identified [121, 180, 21] - variously termed cytonemes, tunnelling nanotubes, microtubes - that in some cases extend to the order of 100s of microns. Thus, a contact can be achieved between cells separated by multiple cell diameters, and a non-local description is warranted. ### Models for adhesion and tissue dynamics #### 2.2.1 Individual level models for adhesion and sorting Agent-based modelling (ABM) forms a natural approach for adhesive cell populations [184, 188]. The first broadly successful _in silico_ replications of cell sorting can be attributed to Graner and Glazier [91, 86], where a Potts model2 was extended to model adhesion. In the model subsequently dubbed the Cellular Potts Model (CPM), each biological cell occupies multiple grid cells spread across a lattice, therefore giving each cell a shape, volume, and boundary. Evolution of the shape is probabilistically determined via a hypothesised energy functional; the aim is to minimise an energy determined by adhesive contacts along shared surfaces. Selecting relationships in line with the DAH leads to the predicted cell sorting pattern [91, 86]; see Figure 2(c) for a CPM simulation in which adhesion relationships conspire to sort two populations into an encapsulated configuration. Footnote 2: A model of statistical mechanics, originally used to understand spin configurations in ferromagnets. Other ABMs have also been shown to be capable of describing adhesion and sorting dynamics [205], sitting at various levels of detail: cells modelled as deformable ellipsoids [161, 160] with centres and semi-axes evolving according to the forces generated by adhesive interactions with other cells and the substrate; on-lattice methods (e.g. cellular automata type, see [60]); off-lattice centre-based models, where equations of motion describe the position and velocity of a cell's centre and the cell forms a hard or soft sphere that interacts with nearby cells (e.g. [103, 130, 48]); vertex-based models [73] which feature cell boundaries described by a polyhedron with dynamic vertices. Many of these ABMs form the basis of computational platforms for simulating cellular and tissue dynamics - CellSys3 [103], CompuCell3D4 [191], Chaste5 [140], PhysiCell6 [82] - and their capacity to predict adhesion and sorting phenomena is regarded as a point of calibration between these diverse methodologies [152]. Footnote 3: [https://www.hoehme.com/software/tisim](https://www.hoehme.com/software/tisim) Footnote 4: [https://compucell3d.org/](https://compucell3d.org/) Footnote 5: [https://www.cs.ox.ac.uk/chaste/](https://www.cs.ox.ac.uk/chaste/) Footnote 6: [http://physicell.org/](http://physicell.org/) ### Continuous models for adhesion and sorting #### 2.3.1 Local formulations The representation of a cell population via a continuous density distribution eliminates the issue of scale inherent to agent-based models, where simulating very large cell numbers remains a computational challenge.
Moreover, a well-posed differential equation system gives access to a wealth of analytical methods (stability and bifurcation analysis, asymptotic approaches, travelling wave analyses) that can yield deeper understanding of the dynamics. One simple approach to include adhesion has been based on a classic advection-diffusion equation of the form (1.1), where the diffusion and/or advection coefficients depend on the local population density, i.e. the pointwise density. Such models have been proposed on phenomenological grounds (e.g. see [104]), or following a derivation from an underlying random walk description of movement (e.g. see [6, 110]) - see Section 4.1. These models capture certain features of adhesive populations - for example, restricted motility in regions of high adhesiveness - and are both analytically straightforward and simple to incorporate into models. Nevertheless, they have not been shown to allow more complicated sorting behaviour. Moreover, as discussed in greater detail below, the derived diffusion coefficients can sometimes become negative and result in a loss of regularity (for example, [6, 110]). The effects of cell-cell adhesion have also been incorporated in a phenomenological manner into various models for tumour growth (for example [38, 39, 57, 216, 56]), via the incorporation of a surface tension force at the tumour-tissue surface.

#### 2.3.2 Nonlocal formulations

Successful ABM approaches for cell sorting are inherently nonlocal: a cell spread across multiple lattice sites in a CPM, or centre-based approaches where the attractive and repulsive interactions form over an interaction range. This nonlocality can be incorporated into a continuum description using a nonlocal (or integral) PDE formulation. In the context of cell adhesion, the first models to adopt this approach (Footnote 7) were formulated to describe the aggregation of a single homogeneous population in [182] and for multiple cell populations in [8] to explore sorting via differential adhesion; closely related nonlocal models, though, have a biomodelling history that dates back at least as far as the 1970s (for example, see [115, 139, 150, 93]).

Footnote 7: As far as we are aware.

The simplest motivation for these models is founded on phenomenological reasoning. Suppose \(u(\mathbf{x},t)\) denotes the cellular density at position \(\mathbf{x}\) in space and \(t\) in time. Ignoring (for simplicity) cellular growth or death and employing standard mass conservation arguments (e.g. see [149]) leads to the balance equation \[\partial_{t}u(\mathbf{x},t)=-\nabla\cdot\mathbf{J}(\mathbf{x},t)\,,\] where \(\mathbf{J}(\mathbf{x},t)\) denotes the population flux arising from movement. The flux can be decomposed into different terms - for example, a diffusive element to describe undirected movement and an advective component for directed movement - and we arrive at (1.1). Regarding the advective component, suppose that a cell at \(\mathbf{x}\) interacts with another cell at \(\mathbf{y}\), and that this interaction generates movement; this could be the result of forming adhesive bonds that draw the two cells together. The net movement response follows from summing over all possible interactions and we then postulate an interactive flux proportional to this sum, i.e. \[\mathbf{J}_{\mbox{\it interaction}}\propto u(\mathbf{x},t)\int\mathbf{k}_{R}(\mathbf{x},\mathbf{y})f(u(\mathbf{y},t))\,d\mathbf{y}\,,\] where \(\mathbf{k}_{R}\) and \(f(u(\mathbf{y},t))\) are as described following (1.2).
Adding to the above a standard (Fickian) diffusive flux, \(\mathbf{J}_{\mbox{\it diffusion}}=-d\nabla u\), leads to (1.2a). A basic model to describe a homogeneous adhesive population sets \(\mathbf{r}=\mathbf{y}-\mathbf{x}\), \[\mathbf{k}_{R}(\mathbf{x},\mathbf{x}+\mathbf{r})=\chi_{|\mathbf{r}|<R}\vec{\mathbf{e}}_{r}\quad\mbox{and}\quad f(u(\mathbf{x}+\mathbf{r},t))\propto u(\mathbf{x}+\mathbf{r})\,, \tag{2.1}\] where \(\vec{\mathbf{e}}_{r}\) denotes the unit vector in the direction of \(\mathbf{r}\), and \(\chi_{|\mathbf{r}|<R}\) is the indicator function of the interaction ball \(|\mathbf{r}|<R\). This stipulates (i) that only those cells within an interaction range \(R\) impact on movement, i.e. those within contact range for adhesive binding; and (ii) that the strength of interaction increases linearly with the density of cells at \(\mathbf{x}+\mathbf{r}\), since a higher cell density implies a greater likelihood of forming adhesive bonds. Consequently, we obtain \[\partial_{t}u=d\Delta u-\mu(R)\nabla\cdot\left(u\int_{B_{R}^{n}}u(\mathbf{x}+\mathbf{r},t)\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right)\,, \tag{2.2}\] where \(B_{R}^{n}\) is the \(n\)-dimensional ball of radius \(R\). The coefficient \(\mu>0\) is a measure of the adhesive strength; switching to \(\mu<0\) turns the interaction into a repelling one, e.g. see [157] in the context of CIL. We note that the function \(\mathbf{k}_{R}\) is often normalised, e.g. according to the volume of the interaction space, and we therefore place a dependency on \(R\) in the parameter \(\mu\) for generality. Other natural choices would be to assume that the strength of interaction decreases with increasing separation, due to the reduced likelihood of forming a contact: for example, the magnitude of \(\mathbf{k}_{R}\) decreasing exponentially with the distance \(|\mathbf{r}|\). Nonlinear choices for \(f\) are also logical, e.g. forms to reflect an upper bound on the adhesive pull that can be generated, see below.

#### 2.3.3 Capacity for self-organisation and sorting

A key strength of the model (2.2) lies in its capacity for self-organisation (see Section 5.1 for more details): for \(\mu<\mu_{crit}\), a dispersed population remains dispersed, see Figure 3(a), while for \(\mu>\mu_{crit}\) it becomes concentrated into a tight aggregate, see Figure 3(b). Under the basic model (2.2), the population evolves into a highly concentrated aggregate (Footnote 8), even for \(\mu\gtrsim\mu_{crit}\). This can be attributed to the lack of any mechanism that reins in the amount of adhesive pull that can be generated.

Footnote 8: For a discussion of global existence, see Section 5.2.

Adding further detail to the model assumptions can help prevent over-accumulation within the aggregates. For example, if \(f(u)\) is taken to be a saturating function (which can be motivated naturally through adhesive receptor occupancy, see Section 4), then \[\partial_{t}u=d\Delta u-\mu(R)\nabla\cdot\left(u\int_{B_{R}^{n}}\frac{u(\mathbf{x}+\mathbf{r},t)}{\kappa+u(\mathbf{x}+\mathbf{r},t)}\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right)\;. \tag{2.3}\]

Figure 3: Self-organisation in a nonlocal model for adhesion, homogeneous population. The initial distribution sets a 'loose aggregate', the spatial extent of which is indicated by the dashed line in each frame. (a) Dispersal scenario for (2.2), with \(d=R=1\) and \(\mu=3.5/\pi\); (b) Aggregation for (2.2), with \(d=R=1\) and \(\mu=4/\pi\). (c) Aggregation for (2.3), for \(d=R=\kappa=1\) and \(\mu=13.5/\pi\); (d) Aggregation for (2.4), for \(d=R=\kappa=1\) and \(\mu=13.5/\pi\). The overall domain \(\Omega\) is of size 10\(\times\)10. We refer to [79, 81] for details of the numerical implementation.
This leads to aggregations that are capped at lower densities, see Figure 3(c). Other possible modifications include the addition of 'volume-filling' (e.g. see [157, 43]), or adapting diffusion to a density-dependent and degenerate form (e.g. see [144, 32, 148, 33, 43]). The addition of the latter to (2.3) leads to \[\partial_{t}u=d\nabla\cdot\left[u\nabla u-\mu(R)\left(u\int_{B_{R}^{n}}\frac{u(\mathbf{x}+\mathbf{r},t)}{\kappa+u(\mathbf{x}+\mathbf{r},t)}\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right)\right]. \tag{2.4}\] This adaptation limits diffusive spread at the cluster boundary, the aggregate taking on a compact form with a sharp interface, Figure 3(d).

As noted earlier, nonlocal formulations can be easily extended to include multiple populations, see (1.4). A natural question, therefore, is whether cell sorting can be replicated under a nonlocal formulation. Consider two populations \(u\) and \(v\) and assume equivalently simple forms to (2.1); then a basic model to describe cell sorting can be stated by the equations \[\partial_{t}u =d_{u}\Delta u-\nabla\cdot\left(u\int_{B_{R}^{n}}\left(S_{u}u(\mathbf{x}+\mathbf{r},t)+Cv(\mathbf{x}+\mathbf{r},t)\right)\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right)\,, \tag{2.5a}\] \[\partial_{t}v =d_{v}\Delta v-\nabla\cdot\left(v\int_{B_{R}^{n}}\left(S_{v}v(\mathbf{x}+\mathbf{r},t)+Cu(\mathbf{x}+\mathbf{r},t)\right)\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right)\,. \tag{2.5b}\] In this model \(S_{u}\), \(S_{v}\) and \(C\) represent the \(u\)-\(u\) self-adhesion strength, the \(v\)-\(v\) self-adhesion strength, and the \(u\)-\(v\) cross-adhesion strength, respectively. Note that the interaction ranges are the same (and equal to \(R\)) and cross interactions are symmetrical, although such assumptions can be relaxed and repelling interactions can also be introduced (for example, see [157, 47]). Unfortunately, this basic formulation (2.5) proves overly simple to capture the nuances of cell sorting. As for the basic homogeneous model (2.2), the linear choices for the nonlocal terms lead to excessive attraction and the populations become highly concentrated, see Figure 4, top row. The model, as such, is unsatisfactory when it comes to resolving the subtly distinct cell sorting patterns shown in Figure 2(b). Consequently, 'successful' nonlocal models [8, 81, 157, 148, 43] that are more broadly capable of replicating the spectrum of arrangements predicted by the DAH include modifications to the various terms in model (2.5).

Figure 4: Cell sorting in a nonlocal heterogeneous two population model for adhesion. Initially, the two populations are mixed within a loose aggregate, left column. First row shows a simulation of the basic model (2.5) under \(S_{u}=4,S_{v}=1,C=2\). Second to fifth rows show simulations of the advanced model (2.6) under the following scenarios: 'mixing' (\(S_{u}=S_{v}=C=8\), second row); 'encapsulation' (\(S_{u}=10,S_{v}=4,C=6\), third row); 'partial sorting' (\(S_{u}=10,S_{v}=8,C=3\), fourth row); 'complete sorting' (\(S_{u}=S_{v}=10,C=0\), fifth row). All other parameters set at \(d_{u}=d_{v}=R=\kappa_{u}=\kappa_{v}=1\). The domain \(\Omega\) is of size 10\(\times\)10.
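Before turning to these modifications, we note that the dynamics of the basic nonlocal models are straightforward to explore numerically. The following minimal Python sketch (an illustrative discretisation, not the scheme of [79, 81]) integrates a one-dimensional analogue of (2.2) on a periodic domain with centred finite differences and explicit Euler time stepping; the grid, time step, and adhesive strength are invented values, with \(\mu\) placed above the threshold \(\mu_{crit}=d/(\bar{u}R^{2})\) that follows from the linear stability analysis of Section 5.1.

```python
import numpy as np

# 1D analogue of the basic adhesion model (2.2) on a periodic domain:
#   u_t = d u_xx - mu * d/dx [ u K(u) ],  K(u)(x) = int_0^R [u(x+y) - u(x-y)] dy,
# solved with centred differences and explicit Euler time stepping.
L, N = 10.0, 200
dx = L / N
d, R, mu = 1.0, 1.0, 2.0        # mu above mu_crit = d/(ubar R^2) = 1 for ubar = 1
dt, T = 1e-4, 5.0

rng = np.random.default_rng(0)
u = 1.0 + 0.01 * rng.standard_normal(N)   # dispersed state plus a small perturbation
mR = int(round(R / dx))                   # grid points per interaction range

def K(u):
    # Riemann-sum approximation of the odd nonlocal integral (np.roll gives periodicity)
    return dx * sum(np.roll(u, -k) - np.roll(u, k) for k in range(1, mR + 1))

for _ in range(int(T / dt)):
    flux = u * K(u)
    dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (d * lap - mu * dflux)

print(f"mass {u.sum() * dx:.3f} (conserved), max density {u.max():.2f}")
```

With these settings the small perturbation grows into one or more concentrated aggregates, mirroring Figure 3(b); taking \(\mu\) below \(\mu_{crit}\) instead recovers dispersal as in Figure 3(a), while replacing the integrand by the saturating form in (2.3) caps the aggregate density.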
Such modifications have included adding biologically-meaningful features such as a limitation or saturation of the adhesive pull (see [8, 81]), introducing volume-filling effects that prevent cell aggregation beyond a critical (packed) level (see [157, 43]), or modifying diffusion terms to include total population pressure effects (see [148, 33, 43]). To provide one concrete example, by adapting the saturating functional forms above and including population-pressure effects to create sharply segregated boundaries (see [148, 43]), we have \[\partial_{t}u =\nabla\cdot\left[d_{u}u\nabla(u+v)-u\int_{B_{R}^{n}}\frac{S_{u}u(\mathbf{x}+\mathbf{r},t)+Cv(\mathbf{x}+\mathbf{r},t)}{\kappa_{u}+u(\mathbf{x}+\mathbf{r},t)+v(\mathbf{x}+\mathbf{r},t)}\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right]\,, \tag{2.6a}\] \[\partial_{t}v =\nabla\cdot\left[d_{v}v\nabla(u+v)-v\int_{B_{R}^{n}}\frac{S_{v}v(\mathbf{x}+\mathbf{r},t)+Cu(\mathbf{x}+\mathbf{r},t)}{\kappa_{v}+u(\mathbf{x}+\mathbf{r},t)+v(\mathbf{x}+\mathbf{r},t)}\vec{\mathbf{e}}_{r}\,d\mathbf{r}\right]\,. \tag{2.6b}\] This more 'advanced' sorting model is capable of replicating the nuances of cellular sorting under different adhesive relationships: for two populations it can generate the full spectrum of arrangements from mixed to complete sorting, see Figure 4.

Summarising, nonlocal models are capable of reaching two touchstones of adhesive behaviour: (i) capturing the adhesive or sticky-like properties of cells in close contact, and (ii) replicating cell-sorting phenomena for heterogeneous adhesive populations as predicted by the DAH. At this point we return to our earlier implication that local formulations are incapable of adequately describing adhesion and sorting dynamics, stressing that this applies to 'naive' local formulations. In fact, various local models can be shown to exhibit sorting. One method (though not directly describing adhesion) is through extension of a chemotaxis framework: effectively, a 'differential chemotaxis' system in which two populations have distinct chemotactic responses to multiple chemical factors (e.g. [155, 120]), so the interactions are indirectly mediated. Directly relevant to adhesion, an intriguing (fourth order) local model has been recently formulated in [70] and demonstrates an impressive capacity to simulate the range of cell sorting patterns described here: we return to this in the discussion.

### 2.4 Further applications to cellular systems

Classic cell sorting experiments [199] were first performed using embryonic cell populations, naturally leading to a conjecture that adhesion and sorting are fundamental during embryonic development (for a historical retrospective, see [186]). Consequently, a principal application for nonlocal models of cell adhesion lies in developmental processes. In fact, the first nonlocal model for adhesion [182] was proposed in the context of self-organisation of scale cells during lepidoptera (moth and butterfly) wing morphogenesis. Nonlocal adhesion models have subsequently been developed, as described above, to show fundamental cell sorting (see [8, 81, 148, 43]), somitogenesis (Footnote 9) [9], skeletal morphogenesis (Footnote 10) [87, 22], aspects of neural development [132, 200], and vasculogenesis (Footnote 11) [209]. Notably, some of these applications have been directly formulated alongside experimental data, linking predictions formed from models to targeted experiments.
For example, a nonlocal model of adhesion was formulated [87, 22] to describe mesenchymal cell movements, which indicated a crucial aggregating role for adhesion during early skeletal morphogenesis. Experimental-theoretical studies that feature nonlocal adhesion models have also been used to understand brain development, in particular the crucial role of N-cadherin mediated adhesion in the positioning of neuronal populations during mammalian cortex development [132] and the visual centre of the fly _Drosophila melanogaster_ [200].

Footnote 9: A fundamental early embryonic stage of segmented animals, whereby mesoderm tissue is sequentially discretised into blocks of cells along the head to tail axis.

Footnote 10: The embryonic process during which the skeleton is formed.

Footnote 11: Formation of the primitive vasculature network.

Abnormal regulation of adhesive processes may be a factor in various pathologies, in particular cancers [108]. For example, a point of significant focus has been the epithelial-mesenchymal transition (EMT), where upregulation of N-cadherin accompanied by downregulation of E-cadherin allows cells to adopt a more migratory form, linked to increased invasiveness and metastasis [127]. Many mathematical models have been developed to address the roles played by cell-cell (and cell-matrix) adhesion during invasion, and a growing number (e.g. [80, 118, 156, 63, 23, 24, 102, 190]) have applied nonlocal formulations: to understand how adhesion alters the shape of cancer invasion (e.g. [80, 156]); the role of cell-cell adhesion during glioma growth (e.g. [118, 190]); the shaping of different forms of tumour infiltration patterns in ductal carcinomas (see [63]); and two-population models featuring cancer populations at different states of mutation (see [23, 24]). Other points of application for nonlocal models of adhesion and cell interactions include wound healing (e.g. [64, 65, 212, 213]) and modelling the interactions between liver cells [92].

## 3 Nonlocal models for ecological systems

### 3.1 Swarms, flocks, and herds

Swarming, herding, and flocking phenomena are perhaps the most obvious examples of collective behaviour in ecological systems [189]. The central idea is that animals, like cells, often exhibit social interactions that cause them to aggregate. At their most basic level, social interactions may simply cause animals to be found in a particular area of space at some point in time, rather than using all the available area [197]. At a more advanced level, these interactions can cause a very wide range of complex patterns to emerge, famously exemplified by starling murmurations, but present throughout the animal kingdom [12, 189]. An enormous number of models have been formulated to understand collective animal movements [208, 17], a substantial proportion of which are based on systems of 'interacting particles' (Footnote 12): the position of each agent is governed by a dynamic (usually, stochastic) equation featuring terms that account for how the trajectories of neighbours influence movement (well known models include those in [7, 178, 95, 207, 54, 53, 59, 145]). Typically, the interactions lying at the heart of these models are formulated according to the 'first principles of swarming' [45]. At the shortest range, interactions are often repulsive, as animals will want to avoid physical contact. At a slightly longer range, animals will align their movements with one another.
Finally, if animals become too far apart, they have a tendency to move towards one another to maintain the group cohesion (attraction). These three zones of nonlocal interactions (Footnote 13) combine to give both stationary and moving aggregations, as well as a vast swathe of spatio-temporal patterns, mimicking many of those that have been observed in nature (see [189, 208, 17]).

Footnote 12: In probability theory, the term 'interacting particle system' has a specific definition in the context of continuous time Markov jump processes. When we refer to interacting particles within this article, we will often slip into a slightly broader sense: complex systems composed of agents that interact with each other according to their relative positions and/or velocities.

Footnote 13: One of the earliest and most influential models explicitly built along these principles - the 'Boids' model of Reynolds [178] - was developed with the main aim of generating realistic flocking-like behaviour for the computer graphics industry, rather than the more elementary aim of understanding movement dynamics; numerous interactive online simulators of this model exist, e.g. [https://boids.cubedhuang.com/](https://boids.cubedhuang.com/). A particularly notable branch that evolved from that work was the application of swarming models to optimization, i.e. particle swarm optimization [117].

A smaller - but still substantial - literature has approached the same central problem of swarming and animal movement via a continuous framework, using ideas that surround nonlocal advection (see [142, 197, 68, 173, 210]). In fact, the earliest nonlocal biological aggregation models were developed to describe swarming-like behaviour (see [115, 139, 150, 142, 124]) and were based on the nonlocal PDE (1.2a). For example, in [142] even or odd forms of interaction kernels were explored for their capacity to generate drift-type (coherent movement of the swarm) or aggregation-type (cohesion of the swarm) behaviour. A further branch of nonlocal PDE methods is founded on hyperbolic kinetic transport equations (see [68, 67, 19]). In these models, the nonlocal terms do not enter the advection terms, but rather the turning behaviour of the population; consequently, they benefit from a closer description of individual behaviour and can, for instance, explicitly incorporate the above principles of swarming commonly used in particle models. However, these models represent significant and non-trivial extensions of Equations (1.2)-(1.4) - although it is possible to connect them [31] - and are more challenging to explore from an analytical and numerical perspective. As such, we do not go into details; instead we refer the reader to a recent book [67] that summarises developments in this area.

### 3.2 Home ranges and territories via stigmergy and memory

As well as the visually-impressive examples of collective movement, aggregation phenomena can also occur over longer spatial and temporal scales, becoming apparent as one observes animal locations over a period of time. For example, by plotting locations over an increasing time window, it often transpires that animals do not use as much of the available area as their locomotive capabilities allow. Instead they confine themselves to a smaller area called a home range, which they may maintain for a season or even a whole lifetime [34, 26]. This causes the spatial distribution of the animal to tend to a stationary, non-constant distribution, such as can be modelled by Equation (1.2b) or variants thereof [27].
Home ranges can emerge due to a range of biological processes. For example, animals may tend to re-visit locations remembered to be good for foraging [179]. Once they have memory of sufficiently many locations to meet their foraging needs, they may decide to stay in the vicinity of those locations (see [206, 137]). Additionally, they may need to construct a central place near to where they forage, such as a den or nest site, for reproductive purposes. The requirement to return to this central place then provides yet another mechanism of locational aggregation [143]. Finally, animals may leave traces of their past locations in the landscape (e.g. through scent marks) and use these as markers to keep them in their home range: a process called stigmergy [194]. In any of these cases, the decisions of the animal to move will tend to be spatially non-local, due to the animals' ability to sense their surroundings as they move, through sight, smell, or memory of target locations (see [168, 14, 69]).

To model these biological processes, it is common to couple a nonlocal advection-diffusion equation for the location distribution to an ordinary differential equation (ODE) modelling the process of memory or stigmergy. The recent review of [210] gives a thorough exposition of these processes, but perhaps the simplest example is \[\partial_{t}u =d\Delta u-\nu\nabla\cdot(u\nabla w_{R}*m), \tag{3.1}\] \[\partial_{t}m =\alpha u-\delta m, \tag{3.2}\] where \(u(\mathbf{x},t)\) is the probability distribution of the animal and \(m(\mathbf{x},t)\) denotes the cognitive map [210], which models either the density of marks left on the terrain or the amount of memory the animal has about location \(\mathbf{x}\) at time \(t\). Other notation is as in Equation (1.2b).

Territoriality provides another reason why animals may confine themselves in space over long periods of time. Here, the presence of neighbouring conspecifics forces animals into a confined space (see [2, 171]). There are various mechanisms by which this can happen, but from a modelling perspective they fall into two categories. The first is via stigmergy: indirect interactions mediated by some form of marks on the terrain, such as urine, faeces, or a trail [143, 174]. In this case, animals avoid the marks left by others in the recent past, and usually these marks decay over time. The second is via memory of direct interactions, such as displays or fights [119]. Animals remember the locations of these displays or fights and may tend to avoid them in the near future [172]. In either case, as with home range formation, the movement of animals in response to these interactions is usually spatially non-local. These territorial mechanisms can be modelled using exactly the multi-population system in Equation (1.4b) with \(\nu_{ij}<0\) for \(i\neq j\) to model mutual avoidance, and \(\nu_{ii}\geq 0\). However, as with home range models, it is often valuable to model the process of memory or stigmergy explicitly via ODEs. A simple example can be given by combining the ideas behind Equations (3.1)-(3.2) with those of Equation (1.4b), as follows \[\partial_{t}u_{i} =d_{i}\Delta u_{i}-\nabla\cdot(u_{i}\nabla\sum_{j=1}^{p}\nu_{ij}w_{R}*m_{j}), \tag{3.3}\] \[\partial_{t}m_{i} =\alpha u_{i}-\delta m_{i}, \tag{3.4}\] where \(m_{i}(\mathbf{x},t)\) denotes the cognitive map of species \(i\), and models the marks left by individuals from territorial unit \(i\), whilst \(\alpha\) and \(\delta\) are constants.
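To make the coupling concrete, the following is a minimal one-dimensional sketch of (3.1)-(3.2) on a periodic domain (an illustrative explicit discretisation with invented parameter values; a quasi-steady argument, \(m\approx(\alpha/\delta)u\), suggests the uniform state is unstable roughly when \(\bar{u}\nu\alpha/\delta>d\), and the values below satisfy this).

```python
import numpy as np

# Minimal 1D sketch of the home-range model (3.1)-(3.2), periodic domain.
L, N = 10.0, 200
dx = L / N
x = np.arange(N) * dx
d, nu, alpha, delta, R = 1.0, 5.0, 1.0, 1.0, 1.0   # nu large enough for aggregation
dt, T = 1e-4, 10.0

# Normalised top-hat kernel w_R on the periodic grid (distance measured with wrap-around)
dist = np.minimum(x, L - x)
w_hat = np.fft.rfft(np.where(dist <= R, 1.0 / (2 * R), 0.0))

def conv(f):                      # periodic convolution w_R * f
    return np.fft.irfft(np.fft.rfft(f) * w_hat, n=N) * dx

def ddx(f):                       # centred first derivative
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):                       # three-point Laplacian
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

rng = np.random.default_rng(0)
u = 1.0 + 0.01 * rng.standard_normal(N)   # near-uniform animal distribution
m = np.zeros(N)                           # cognitive map starts empty

for _ in range(int(T / dt)):
    u, m = (u + dt * (d * lap(u) - nu * ddx(u * ddx(conv(m)))),
            m + dt * (alpha * u - delta * m))

print(f"density range [{u.min():.2f}, {u.max():.2f}]: a pronounced peak marks a home range")
```

The same loop extends directly to the territorial system (3.3)-(3.4) by giving each population its own map \(m_{i}\) and taking the cross-coefficients \(\nu_{ij}\) negative.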
However, more complicated versions can be considered that include extra biological realism [172, 174, 210].

### 3.3 A general framework for non-local interactions in ecology

As well as territory formation, the multi-species case from Equation (1.4) enables a variety of other ecological phenomena to be modelled over timescales where births and deaths are negligible (e.g. for mammals and birds, this may be over a season or year) [173, 83]. For example, the movements of co-existing predators and prey can be modelled by assuming prey advect away from predators and predators towards prey [61]. Likewise, competing species may advect away from one another and mutualistic animals may have a tendency to move towards one another. In forager and scrounger interactions, the latter follow the former to exploit their foraging efforts (e.g. see [193]). In ecosystems consisting of many species, there will be a complex network of such interactions that can cause a wide range of emergent patterns (Figure 5c-e). As a consequence, Equation (1.4b) has been proposed as a key study system for understanding spatial distributions of interacting groups of animals that may emerge over such timescales [173]. These groups of animals may be territorial groups, populations, or whole species (but we often just use 'species' for all such groups for simplicity and consistency with the rest of this review). The overall aim is to be able to provide links between the network of interactions between moving species (Figure 5b) and their pattern formation properties. For example, Figure 5a shows the predictions of linear stability analysis for four different systems of three populations (model (1.4b) for \(i=1,2,3\)) shown schematically in Figure 5b. This gives a simple categorisation into 'no patterns' (all eigenvalues having negative real parts), 'stationary patterns' (the dominant eigenvalue is real and positive), or 'fluctuating patterns' (the dominant eigenvalue is non-real with positive real part). However, further away from the linear stability regime, patterns in three-population systems can be quite complex and varied, including stationary patterns of aggregation and segregation (Figure 5c), travelling-wave-like solutions (Figure 5d), perpetual irregular oscillations (Figure 5e), and more [173, 84].

Figure 5: Patterns for example three-species model ecosystems of the form in Equation (1.4b). Panel (a) gives the linear pattern formation regimes for systems described by Panel (b). In each system, an arrow from \(u_{i}\) to \(u_{j}\) means that \(u_{i}\) is attracted to \(u_{j}\). An arrow away from \(u_{i}\) in the opposite direction from \(u_{j}\) means \(u_{i}\) avoids \(u_{j}\). So, for example, the top-left graph in Panel (b) might model two mutualist predator species living alongside a single prey species. Panels (c-e) give numerical examples of the patterns that can form in a three-species system. In Panel (c), the system tends to a steady state where \(u_{1}\) and \(u_{3}\) aggregate together but are segregated from \(u_{2}\). Panels (d) and (e) give example spatio-temporal patterns for \(u_{1}\) with a three-species system. In all panels, \(d_{1}=d_{2}=d_{3}=\nu_{21}=\nu_{31}=\nu_{32}=1\) and \(\nu_{13}=-1\). In Panels (c-e), \(\nu_{23}=-4\). Panels (c), (d), and (e) have \(\nu_{12}=-4\), \(\nu_{12}=3.3\), and \(\nu_{12}=4\), respectively.
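The multi-species framework is equally amenable to direct simulation. The sketch below assumes an advective structure for (1.4b) analogous to (3.3), with smoothed densities \(w_{R}*u_{j}\) entering in place of the maps \(m_{j}\) - an assumption consistent with the description above, though kernel choices and normalisations vary between studies - and simulates two mutually avoiding populations (\(\nu_{12}=\nu_{21}<0\)); parameter values are illustrative rather than those of Figure 5.

```python
import numpy as np

# Two mutually avoiding populations on a periodic 1D domain (illustrative parameters):
#   du_i/dt = d_i u_i'' - ( u_i * (nu_ij * (w_R conv u_j))' )'   for i != j
L, N = 10.0, 200
dx = L / N
x = np.arange(N) * dx
d1 = d2 = 1.0
nu12 = nu21 = -4.0                # mutual avoidance
R, dt, T = 1.0, 1e-4, 10.0

dist = np.minimum(x, L - x)
w_hat = np.fft.rfft(np.where(dist <= R, 1.0 / (2 * R), 0.0))

def conv(f):
    return np.fft.irfft(np.fft.rfft(f) * w_hat, n=N) * dx

def ddx(f):
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

rng = np.random.default_rng(1)
u1 = 1.0 + 0.01 * rng.standard_normal(N)
u2 = 1.0 + 0.01 * rng.standard_normal(N)

for _ in range(int(T / dt)):
    u1, u2 = (u1 + dt * (d1 * lap(u1) - ddx(u1 * ddx(nu12 * conv(u2)))),
              u2 + dt * (d2 * lap(u2) - ddx(u2 * ddx(nu21 * conv(u1)))))

overlap = (u1 * u2).sum() * dx / (u1.sum() * u2.sum() * dx**2 / L)
print(f"normalised overlap: {overlap:.2f}  (values well below 1 indicate segregation)")
```

Under mutual avoidance the mixed state is unstable and the populations segregate into complementary bands; richer sign combinations, as in Figure 5, produce the aggregation, travelling-wave, and oscillatory patterns described above.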
## 4 Derivations from the individual level and connecting to data

### 4.1 Random walks

When Karl Pearson coined the term 'random walk' in 1905 [163], the central question involved biological movement: if, within a particular time step, each mosquito moves some distance at a randomly chosen angle, can we estimate the distribution of a mosquito infestation? Fundamental work by Patlak [162] extended the question to include biases from the environment and persistence. Across the last few decades a vast number of studies have aimed to connect the random walk movements performed by individuals to population level measures and distributions, for both cell and animal movement (e.g. see [154, 18, 202, 52]). Specifying a position-jump random walk (PJRW, see [52, 151, 158, 187, 184]) forms a particularly well-trodden path. In the context of the present review, this approach can be used to motivate both local and nonlocal models for aggregation [37]. To illustrate this, we first lay down a general formalism.

Let us consider the probability that a random walker has its centre at position \(\mathbf{x}\) at time \(t\). If we have a population of independent walkers, this probability can be equated with the population density \(u(\mathbf{x},t)\), and we maintain this notion. Note that the definition in terms of the centre implicitly assumes that the walker can have some finite extent, i.e. it is not necessarily a point object. For now we shall avoid any discussion of boundary conditions and assume an individual can move anywhere in space: movement is within \(\Omega=\mathbb{R}^{n}\). The time-continuous master equation for the PJRW has the following form [154, 106, 204] \[\partial_{t}u(\mathbf{x},t)=\lambda\int_{\Omega}[T(\mathbf{x},\mathbf{y})u(\mathbf{y},t)-T(\mathbf{y},\mathbf{x})u(\mathbf{x},t)]\,d\mathbf{y}, \tag{4.1}\] where \(T(\mathbf{x},\mathbf{y})\) is a probability density function for a jump from \(\mathbf{y}\in\mathbb{R}^{n}\) to \(\mathbf{x}\in\mathbb{R}^{n}\), and \(\lambda>0\) is a rate parameter. Note that \(T\) can depend on \(t\), but we omit this dependency from the notation. We remark that individuals can remain at their current location through setting \(T(\mathbf{x},\mathbf{x})>0\), which we refer to as a zero-length jump.

We follow the approach of [37] and rewrite the integral kernel \(T(\mathbf{x},\mathbf{y})\) according to the jump heading \(\mathbf{z}=\mathbf{x}-\mathbf{y}\). Specifically, \[T_{\mathbf{y}}(\mathbf{z}):=T(\mathbf{y}+\mathbf{z},\mathbf{y})=T(\mathbf{x},\mathbf{y}),\qquad\mathbf{z}=\mathbf{x}-\mathbf{y},\] where we assume that \[T_{\mathbf{y}}\geq 0,\qquad T_{\mathbf{y}}\in L^{1}(\mathbb{R}^{n}),\quad\|T_{\mathbf{y}}\|_{1}=1.\] \(T_{\mathbf{y}}\) can be split into even and odd components, \[E_{\mathbf{y}}(\mathbf{z})=\frac{1}{2}\left(T_{\mathbf{y}}(\mathbf{z})+T_{\mathbf{y}}(-\mathbf{z})\right),\qquad O_{\mathbf{y}}(\mathbf{z})=\frac{\mathbf{z}}{2|\mathbf{z}|}\left(T_{\mathbf{y}}(\mathbf{z})-T_{\mathbf{y}}(-\mathbf{z})\right). \tag{4.2}\] Then \[T_{\mathbf{y}}(\mathbf{z})=\begin{cases}E_{\mathbf{y}}(\mathbf{z})+O_{\mathbf{y}}(\mathbf{z})\cdot\frac{\mathbf{z}}{|\mathbf{z}|}&\text{if }\mathbf{z}\neq 0\\ E_{\mathbf{y}}(\mathbf{z})&\text{if }\mathbf{z}=0\end{cases} \tag{4.3}\] with an even part \(E_{\mathbf{y}}\in L^{1}\) and an odd part \(O_{\mathbf{y}}\in L^{1}\), which satisfy \[E_{\mathbf{y}}(\mathbf{z})=E_{\mathbf{y}}(-\mathbf{z})\quad\text{and}\quad O_{\mathbf{y}}(\mathbf{z})=O_{\mathbf{y}}(-\mathbf{z}).
\tag{4.4}\] We employ this decomposition in the general master equation (4.1) and make two further assumptions. First, that transition rates do not depend on the increment \(\mathbf{z}\), just the starting location \(\mathbf{y}\): this describes a myopic random walk. Second, non-zero-length jumps are small and of fixed length \(h\ll 1\), and Taylor expansions can therefore be applied. Details of the expansions can be found in [37] where, in the limit as \(h\to 0\) and \(\lambda\to\infty\), we arrive at the advection-diffusion equation \[\partial_{t}u(\mathbf{x},t)+\nabla\cdot(\mathbf{a}(\mathbf{x},t)u(\mathbf{x},t))=\Delta(D(\mathbf{x},t)u(\mathbf{x},t))\,. \tag{4.5}\] We denote by \(\mathbb{S}^{n-1}\) the \(n-1\) dimensional unit sphere in \(\mathbb{R}^{n}\). The advection velocity is given by \[\mathbf{a}(\mathbf{x},t)=\lim_{h\to 0,\lambda\to\infty}\frac{\lambda h^{n}}{n}|\mathbb{S}^{n-1}|\;O_{\mathbf{x}}\,,\] and the diffusion coefficient by \[D(\mathbf{x},t)=\lim_{h\to 0,\lambda\to\infty}\frac{\lambda h^{n+1}}{2n}|\mathbb{S}^{n-1}|\;E_{\mathbf{x}}\,.\] Particular care must be paid to the limit scalings, as they suggest different powers of \(h\): for the limits to simultaneously exist, the odd part must be small (i.e. \(O_{\mathbf{x}}\sim h\)) with respect to the even part. If the odd part is of order one or larger, the diffusion term vanishes and a pure drift equation (a drift-dominated case) is derived. When the odd part is of order \(h^{2}\) or smaller, the drift term vanishes and a diffusion-dominated case arises. The value of separating \(T\) with respect to its odd and even parts becomes clear: the even component \(E_{\mathbf{x}}\) enters the diffusion term, while the odd component \(O_{\mathbf{x}}\) determines the advection term. Generally the odd and even parts can involve nonlocal terms that represent sensing up to a certain radius. We will return to this in the next section but one.

#### 4.1.1 Local models

We illustrate the above scaling through an interesting local case, which leads to taxis-type models. To introduce dependency according to some controlling factor, we take the standard assumption [187] of supposing that the jump probability distribution explicitly depends on a control species, which we denote \(c(\mathbf{x},t)\). For simplicity, we will restrict in this section to a symmetrical case where we set \(T_{\mathbf{y}}(\mathbf{z})=f(c(\mathbf{y},t))\) for all non-zero length jumps (i.e. \(T\) depends only locally on \(\mathbf{y}\) through \(f(c(\mathbf{y},t))\)). When movement occurs, all headings are chosen with equal probability, but this probability varies with the local level of the control species \(c\). There is no odd component to \(T\) and the limiting equation (4.5) in this case is of the form \[\partial_{t}u=d\Delta(f(c)u)=d\nabla\cdot[f(c)\nabla u+uf^{\prime}(c)\nabla c]. \tag{4.6}\] Therefore - despite an absence of directionality to the jump - a taxis-like process emerges at the macroscopic level: advection according to the gradient of \(c\). The control species can be distinctly interpreted according to the movement process. For example, it may simply define a fixed environmental variability, e.g. regions where movement is easier or more difficult. It could also change according to the distribution of the population - for example, a scent deposited by an animal or a chemical released by a cell - and therefore be defined by an evolution equation such as (3.2). We refer to [99, 15] for detailed reviews on chemotaxis models.
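The emergence of taxis from purely undirected jumps can be verified directly against the underlying walk. In the sketch below (an illustrative discrete construction with a hypothetical control field, not reproduced from [187]), walkers on a ring jump to a random neighbour with probability \(f(c)\) evaluated at their current site; the stationary state of (4.6) on a periodic domain satisfies \(f(c)u=\) constant, i.e. \(u\propto 1/f(c)\), and the chain's stationary distribution should match this prediction.

```python
import numpy as np

# Myopic walk on a ring: jump left/right with probability f(c) at the *departure* site.
# The macroscopic limit (4.6) predicts a stationary density proportional to 1/f(c).
N = 100
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
c = 1.0 + 0.5 * np.sin(x)          # a hypothetical, fixed control species
f = 1.0 / (1.0 + c)                # jump probability decreases with c ('stickiness')

# One-step transition matrix: stay with probability 1 - f, else move to a random neighbour
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 1 - f[i]
    P[i, (i - 1) % N] = f[i] / 2
    P[i, (i + 1) % N] = f[i] / 2

pi = np.full(N, 1.0 / N)           # stationary distribution via power iteration
for _ in range(100_000):
    pi = pi @ P
pi /= pi.sum()

pred = (1 / f) / (1 / f).sum()     # PDE prediction u ~ 1/f(c), normalised
print("max deviation from the 1/f(c) prediction:", np.abs(pi - pred).max())
```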
Using cell adhesion as a case study, a simple but naive approach would be to directly equate the control species with the population density. Specifically, we consider \(c\equiv u\) and hence obtain the density-dependent diffusion equation \[\partial_{t}u=\nabla\left[D(u)\nabla u\right]\quad\text{with}\quad D(u)=d\left(f(u)+uf^{\prime}(u)\right)\,. \tag{4.7}\] Considering the 'stickiness' property of adhesion, a logical choice for \(f(u)\) would be a decreasing function that reflects a reduced capacity to move as a cell forms adhesive attachments with its neighbours. For example, the choice \(f(u)=\frac{1}{\kappa+u}\) results in \(D(u)=\frac{d\kappa}{(\kappa+u)^{2}}\): this reduces diffusivity in regions of higher population density, and corresponds with certain choices [104] in macroscopic (phenomenological) approaches to adhesion.

Derivations of local models for adhesion that rely on the PJRW framework have been considered previously (e.g. see [6, 110, 113]). While more sophisticated than the above - for example, more complicated jump probabilities or accounting for correlations in movement - they essentially lead to the same result of a density-dependent diffusion equation. Clear advantages lie in that they can lead to models that can be fitted against experimental data (e.g. obtained from cell assays [112, 111]), and that the derived PDE form is relatively tractable, both analytically and numerically. However, while density-dependent diffusion captures one expected consequence of adhesion, it is more questionable in the context of self-organisation or cell sorting phenomena. The possibility of biological aggregation within both the underlying discrete master equation and its corresponding continuous model has been considered in various studies (for example see [126, 159, 105, 6]), and for (4.7) it is straightforward to use linear stability analysis (see Section 5.1) to show that this depends on the shape of \(f(u)\): instability of the uniform steady state, and hence self-organising capacity, requires \(f(u)+uf^{\prime}(u)<0\). This is not possible for \(f(u)=\frac{1}{\kappa+u}\), but can be satisfied when \(f(u)=\frac{1}{(\kappa+u)^{q}}\) for \(q>1\). However, at this point the PDE (4.7) becomes ill-posed and impractical for application.

#### 4.1.2 Nonlocal models

Intuitively, it is the pointwise nature of the dynamics that proves problematic in the above. The random walker responded only to the strictly local information acquired at its centre: it is a point particle, and the population can potentially become trapped at singular locations of 'infinite stickiness'. A cell or organism, though, has a spatial extent and, even if interacting only through direct contact, will interact across some volume of space. This naturally leads to the question of how one can extend derivations from PJRWs in a manner that retains this nonlocality. We will again use cell adhesion as a case study and follow the approach in [37]. As noted earlier, the formation of adhesion bonds between membranes leads to the generation of (local) forces that draw cells together; cellular membranes are highly dynamic, extending and retracting protrusions that span shorter (e.g. lamellipodia) and longer (e.g. filopodia) ranges. Adhesive attachments, therefore, can create forces at a position \(\mathbf{x}+\mathbf{r}\) that act to displace a cell centred at \(\mathbf{x}\), where the displacement \(\mathbf{r}\) potentially spans several mean cell diameters.
The method in [37] is to consider a biased random walk where the bias results from summing over all possible local forces that can impact on the cell centred at \(\mathbf{x}\), which enter the odd component of \(T\) in (4.3). Following the scaling, one obtains a nonlocal advection velocity of the form \[\mathbf{a}(\mathbf{x})=\underbrace{\mu}_{\text{adhesive strength}}\int\underbrace{N_{b}(u(\mathbf{x}+\mathbf{r},t))}_{\text{number of bonds}}\underbrace{S(u(\mathbf{x}+\mathbf{r},t))}_{\text{free space}}\underbrace{\omega(|\mathbf{r}|)}_{\text{cell extension}}\underbrace{\vec{\mathbf{e}}_{r}}_{\text{direction}}\,d\mathbf{r}\,. \tag{4.8}\] In the above, \(\mu\) denotes an adhesive strength per adhesion bond, \(\mathbf{r}\) denotes the direction and length of the cell extension, \(N_{b}(u(\mathbf{x}+\mathbf{r},t))\) denotes the bound adhesion receptors that are generated with cells at location \(\mathbf{x}+\mathbf{r}\), \(S(u(\mathbf{x}+\mathbf{r},t))\) indicates the amount of free space available for cells to extend into this area, \(\omega(|\mathbf{r}|)\) denotes the ability of a cell to express adhesion receptors a distance \(|\mathbf{r}|\) away from its centre, and \(\vec{\mathbf{e}}_{r}\) accounts for the fact that bonds generated at \(\mathbf{x}+\mathbf{r}\) bias movement in that direction.

The formulation in (4.8) is rather general, therefore admitting varying degrees of biological detail. For example, assuming compact support for the cell extension, no space limitation (\(S=1\)), and using mass action kinetics to set the number of bonds to be proportional to the cell density (\(N_{b}(u)\propto u\)), one essentially arrives at a model of the form (2.2). If, rather, one takes the adhesion binding to be governed by a Michaelis-Menten type binding mechanism, \(N_{b}(u(\mathbf{x}))\propto\frac{u(\mathbf{x})}{\kappa+u(\mathbf{x})}\), then we arrive at a model similar to that specified in (2.3). Consequently, through an explicit derivation from a PJRW it is possible to motivate and clarify the implicit assumptions that underlie various nonlocal models for adhesion, in particular those originally developed with phenomenological reasoning and applied to various phenomena (Section 2.4). More generally, given that the integral (4.8) will typically be a nonlinear function of the cell density \(u(\mathbf{x}+\mathbf{r})\) and the ability to form attachments varies with the distance from the cell centre, one can straightforwardly obtain the general formulation in (1.2a).

#### 4.1.3 Step selection functions: connecting to data on organism movement

The formalism of a PJRW also allows for relatively straightforward parameterisation of advection-diffusion equations based on data, an approach that has been used both for experimental data obtained for cell systems (say, using cellular assays, e.g. [111]) and locational data for animals (e.g. [176, 169]). Taking the example of animal movement, these data typically arrive as a time series of locations. If this time series is relatively low frequency, e.g. of the order of one location every few minutes or hours, we might use the function \(T(\mathbf{x},\mathbf{y})\) (Equation 4.1) to model movement between successive measured locations, from \(\mathbf{y}\) to \(\mathbf{x}\) (see [74]). Alternatively, if the time series is very high frequency, e.g. many locations per second, which is increasingly common [214], \(T(\mathbf{x},\mathbf{y})\) can be used to model movements between successive places where the animal makes a turn [147].
In this latter case, we are more accurately modelling the behavioural decisions of animals, as they will likely turn for a reason [215]. Either way, a huge amount of ecological insight has been gained in recent years by fitting functions that describe a position-jump process to time series of animal location data (e.g. see [75, 196, 72]). Moreover, further understanding can be gained by scaling these processes up to distributions of broad-scale space use patterns via advection-diffusion equations, using similar techniques to those described in Section 4.1 [169]. The specific position-jump model that has gained particular interest from the ecological community goes under the name 'step selection function' (SSF) and has the following form (Footnote 14) \[T(\mathbf{x},\mathbf{y})=\frac{\psi(\mathbf{x},\mathbf{y})w(\mathbf{x},\mathbf{y})}{\int_{\Omega}\psi(\mathbf{x},\mathbf{y})w(\mathbf{x},\mathbf{y})d\mathbf{x}}, \tag{4.9}\] where \(\psi(\mathbf{x},\mathbf{y})\) represents something about the organism's movement capability, often a distribution of 'step lengths' \(|\mathbf{x}-\mathbf{y}|\), and \(w(\mathbf{x},\mathbf{y})\) is a 'weighting function' which encapsulates anything that covaries with movement.

Footnote 14: The nomenclature in the literature is not always consistent here. Sometimes SSF refers to Equation (4.9), sometimes to the numerator of this equation, and sometimes just to the function \(w(\mathbf{x},\mathbf{y})\).

Typically, \(w(\mathbf{x},\mathbf{y})\) is written in the following exponential form \[w(\mathbf{x},\mathbf{y})=\exp[\boldsymbol{\beta}\cdot\mathbf{Z}(\mathbf{x},\mathbf{y})], \tag{4.10}\] where \(\mathbf{Z}(\mathbf{x},\mathbf{y})\) is a vector of functions, each of which represents a movement covariate, and \(\boldsymbol{\beta}\) is a vector denoting the relative contribution of each covariate to movement. In many practical examples of step selection, \(\mathbf{Z}(\mathbf{x},\mathbf{y})\) are simply static environmental features measured at the end point of the step (so \(\mathbf{Z}(\mathbf{x},\mathbf{y})\) can be written as \(\mathbf{Z}(\mathbf{x})\)) (see [196, 72]). However, they can also represent features along a step, such as barriers [21], or dynamic quantities such as memory [136] or the presence of other organisms [170]. Memory processes lead to self-interaction, which may give rise to a single-species aggregation-type equation (1.2). If co-moving animals or interacting populations are being modelled, it is necessary to write a different step selection function for each entity (individual or population) [175]. These coupled step selection functions then lead to multi-species equations, like Equation (1.4) [176].

A reason for the popularity of the functional form in Equations (4.9-4.10) is that parametrisation can be done simply and quickly using conditional logistic regression. Details of this technique are given elsewhere (see [74, 10]), but in short it involves first approximating the integral in the denominator of Equation (4.9) by sampling from \(\psi\), and then recognising the resulting function as the likelihood of a case-control study where the samples are the controls; a minimal illustration is sketched below. Although there are many empirical studies using step selection functions to infer information about animal movement (e.g. see [196, 72]), there are far fewer that take the next step of deriving the associated advection-diffusion equation to understand broad-scale space use patterns [169].
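As a self-contained illustration of this case-control construction (with a hypothetical covariate \(Z(x)=\sin x\), a Gaussian step-length kernel \(\psi\), and invented parameter values; applied analyses typically rely on dedicated conditional logistic regression routines), the following sketch simulates steps from (4.9)-(4.10) in one dimension and recovers \(\beta\) by maximising the sampled likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of step-selection fitting in 1D: Gaussian step-length kernel psi and
# weighting w(x) = exp(beta * Z(x)) for a single hypothetical covariate Z.
rng = np.random.default_rng(42)
beta_true, sigma, n_steps, n_controls = 1.5, 1.0, 500, 20
Z = lambda x: np.sin(x)                                 # illustrative covariate field

# Simulate movement data from T(x, y) proportional to psi(x - y) w(x), cf. (4.9)-(4.10)
y = 0.0
cases, controls = [], []
for _ in range(n_steps):
    cand = y + sigma * rng.standard_normal(200)         # draws from psi
    p = np.exp(beta_true * Z(cand)); p /= p.sum()
    x_obs = rng.choice(cand, p=p)                       # observed endpoint
    ctrl = y + sigma * rng.standard_normal(n_controls)  # control endpoints, also from psi
    cases.append(Z(x_obs)); controls.append(Z(ctrl))
    y = x_obs

cases, controls = np.array(cases), np.array(controls)

def neg_log_lik(beta):
    # Case-control likelihood: w(case) / [w(case) + sum over w(controls)] per step
    num = beta * cases
    denom = np.logaddexp(num, np.log(np.sum(np.exp(beta * controls), axis=1)))
    return -(num - denom).sum()

fit = minimize(lambda b: neg_log_lik(b[0]), x0=[0.0])
print(f"true beta = {beta_true}, fitted beta = {fit.x[0]:.2f}")
```

Increasing the number of control points per step sharpens the approximation of the integral in the denominator of (4.9).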
Perhaps the reason for this scarcity is that such studies combine empirically-driven questions with relatively advanced mathematical analysis, and thus require strong interdisciplinary collaborations between applied mathematicians, empirical ecologists, and statisticians. The flip-side is that there is huge, fertile ground for mathematicians to collaborate with those ecologists involved in step selection studies, enhancing their data analysis and answering new scientific questions [170].

### 4.2 Derivations from interacting particle system models

As mentioned earlier, many of the ABM-based approaches to cellular and animal aggregation phenomena fall into the broad class of systems of 'interacting particles'. Deriving continuous models from such systems forms a very large field, and a growing literature has emerged in which nonlocal models related to (1.2) are obtained. It is significantly beyond the scope of the present article to provide a comprehensive examination of this literature. Rather, we provide a few apposite examples and refer to others (e.g. [45, 146]) for a more general review. To provide some context, we consider the following concrete example in one dimension (Footnote 16); we refer to [146, 77] for more details. Let the position \(x_{i}(t)\) of agent \(i\) in a population of size \(N\) at time \(t\) be determined by the stochastic differential equation

Footnote 16: We note that this particular example comes from a model formulated for opinion dynamics, rather than biological aggregation. However, the underlying principles are the same: a tendency to converge, whether in position or opinion, when agents are sufficiently close.

\[dx_{i}=-\frac{1}{N}\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})dt+\sigma dW_{i}(t)\,. \tag{4.11}\] In (4.11), the \(W_{i}\)'s denote independent Brownian motions and model uncertainty in the particle position (with strength \(\sigma\)). Interactions are incorporated through the summed term, where \(a_{ij}\) gives the strength of interaction between agents \(i\) and \(j\); the \(1/N\) factor averages across all possible interactions. This general form can be tailored to describe an attraction process between sufficiently close individuals - e.g. as relevant for cell adhesion - by setting the interaction to be a function of the distance of separation, \(|x_{i}-x_{j}|\), with compact support: i.e. no attraction above a critical interaction range.

To obtain a continuous model, one can consider the following empirical probability measure for the positions of all agents at time \(t\): \[u^{N}(t)=\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}(t)}(dx)\,,\] where \(\delta_{x}(dx)\) is the Dirac measure with point mass at position \(x\). Through application of mean field asymptotic theory, it can be shown [77] that as \(N\to\infty\) the probability measure \(u^{N}\) (weakly) converges to a deterministic density \(u\), which under certain conditions is governed by a nonlocal PDE of the form (1.2a).

A number of other derivations from IPS models have also led to equations related to (1.2). In one paper [138], the starting point was an off-lattice centre-based model (see Section 2.2.1), in which the motion of each particle is governed by Newton's second law of motion under viscous forces, forces from self-propulsion, and forces from interactions. The latter allowed adhesion-type interactions to be included, which followed the standard assumption of varying with the degree of separation.
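The clustering tendency encoded in (4.11) can be checked directly at the particle level. The sketch below uses an Euler-Maruyama discretisation with the illustrative compactly supported choice \(a_{ij}=a_{0}\) for \(|x_{i}-x_{j}|<R\) (and zero otherwise); all parameter values are invented.

```python
import numpy as np

# Euler-Maruyama integration of the interacting particle system (4.11)
rng = np.random.default_rng(3)
N, a0, R, sigma = 200, 5.0, 1.0, 0.1
dt, T = 1e-3, 5.0

x = rng.uniform(-3, 3, N)                  # dispersed initial positions
for _ in range(int(T / dt)):
    diff = x[:, None] - x[None, :]         # pairwise separations x_i - x_j
    a = a0 * (np.abs(diff) < R)            # compactly supported attraction a_ij
    drift = -(a * diff).mean(axis=1)       # -(1/N) sum_j a_ij (x_i - x_j)
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

gaps = np.diff(np.sort(x))
print(f"{1 + int((gaps > 0.5).sum())} clump(s); position std {x.std():.3f} (initially ~1.7)")
```

For large \(N\), histograms of the particle positions approximate the density governed by the corresponding nonlocal PDE.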
Within that framework, a hierarchical system of \(N\) nonlocal PDEs was obtained to describe the distribution of a population of \(N\) interacting cells and, again following a mean-field approximation, a nonlocal aggregation model of the form (1.2a) results. Nonlocal aggregation models of the form (1.2b) can also be motivated from an IPS (e.g. see [144, 32]). The motivation in [144] lay in the aggregating tendency of ants (_Polyergus rufescens_), with each ant's position evolving according to a stochastic differential equation driven by Brownian motion and an interaction drift; drift dominated over random wandering when other individuals entered an ant's interaction range. Both aggregating and repelling effects were included, with attraction operating when another individual enters an attracting range and repulsion when the two become too close. Assuming a large population \(N\), then in the limit \(N\to\infty\) the following equation was derived for the population density: \[\partial_{t}u=d\Delta u+\nabla\cdot\left[u\nabla u-u\nabla\left(w*u\right)\right]\,,\] where \(d\) follows from the Brownian motion, the density-dependent (degenerate diffusion) and nonlocal drift terms follow from the repulsion-attraction interaction, and \(w\) derives from the aggregation interaction kernel. The above essentially combines the formulation (1.2b) with an additional degenerate diffusion term, as previously described in Section 2.3.3.

## 5 Analytical properties

### 5.1 Linear stability analyses

A linear stability analysis can be used to demonstrate basic criteria for aggregation from a dispersed initial state, i.e. self-organisation properties. We first consider the formulation (1.2a) and, for maximum clarity, utilise the simple assumptions that lead to (2.2) and constrain to a one-dimensional infinite domain; the latter restriction circumvents the complications that arise from specific boundary conditions. Consequently, (2.2) becomes \[\partial_{t}u=d\partial_{xx}u-\mu(R)\partial_{x}\left[u\left(\int_{0}^{R}u(x+y,t)dy-\int_{0}^{R}u(x-y,t)dy\right)\right]\,, \tag{5.1}\] while under equivalent assumptions the formulation (1.2b) becomes \[\partial_{t}u=d\partial_{xx}u-\nu(R)\partial_{x}\left[u\partial_{x}\left(\int_{-R}^{R}u(x+y,t)dy\right)\right]\,. \tag{5.2}\] Assuming that the population is initially distributed about a uniform steady state \(\bar{u}\), we perform a Turing-type stability analysis (e.g. [149]) by linearising about the uniform steady state and looking for solutions to the linearised equation of the form \(e^{ikx+\sigma t}\), with mode \(k\) and eigenvalue \(\sigma\). This yields the characteristic equations for the eigenvalue-wavenumber relationship \[\sigma=-dk^{2}+2\bar{u}\mu(R)(1-\cos(kR))\quad\text{and}\quad\sigma=-dk^{2}+2\bar{u}\nu(R)k\sin(kR) \tag{5.3}\] for (5.1) and (5.2), respectively. Inhomogeneous perturbations of the steady state grow if there are unstable wavenumbers \(k\neq 0\), i.e. those for which \(\Re(\sigma(k))>0\). Straightforward inspection of the above reveals that this hinges on the competition between stabilising (diffusion) and destabilising (aggregation) processes. In particular, the parameter regions in which self-organisation occurs (Footnote 17) are given by \[\bar{u}\mu(R)R^{2}>d\quad\text{and}\quad 2\bar{u}\nu(R)R>d \tag{5.4}\] for the formulations (5.1) and (5.2), respectively.

Footnote 17: Note that this is under the infinite domain assumption, thereby allowing patterns to grow with unbounded wavelengths.
While phenomenologically similar, these two conditions are subtly distinct according to the relationships between the strength and range parameters. Commonly, the nonlocal terms in models of type (1.2) are normalised, e.g. according to a measure of the size of the interaction space: in the context of (5.1-5.2), it is standard to choose \(\mu(R)=\mu_{0}/2R\) and \(\nu(R)=\nu_{0}/2R\). Under this choice, the instability conditions for the interaction strength (\(\mu_{0}\) or \(\nu_{0}\)) and interaction range (\(R\)) have some clear distinctions for the two models (5.1-5.2) and become \[\bar{u}\mu_{0}R>2d\quad\text{and}\quad\bar{u}\nu_{0}>d,\quad\text{respectively.}\] The condition for (5.2) is independent of \(R\), while for (5.1) the capacity for self-organisation is lost as the interaction range decreases. We illustrate the parameter spaces in Figure 6(a). Characteristic equation curves for particular parameter values illustrate these behaviours: large \(R\) and sufficient \(\mu_{0}\) or \(\nu_{0}\) allows patterning for both models; correspondingly, a finite range of unstable wavenumbers is observed (Figure 6(b,c), black curves). Decreasing \(R\), the range of unstable wavenumbers either expands for (5.2) (Figure 6(e)) or shrinks and disappears for (5.1) (Figure 6(d)).

Further insights are obtained through expanding \(u(x\pm y)\) inside (5.1-5.2) and truncating at different orders. The simplest nontrivial case (using \(\mu=\mu_{0}/2R\), \(\nu=\nu_{0}/2R\)) leads to the second order local approximations \[\partial_{t}u=\partial_{x}\left[\left(d-\frac{\mu_{0}R}{2}u\right)\partial_{x}u\right]\quad\text{and}\quad\partial_{t}u=\partial_{x}\left[(d-\nu_{0}u)\partial_{x}u\right] \tag{5.5}\] for (5.1) and (5.2), respectively. These immediately recall the density-dependent diffusion forms derived in Section 4.1.1. Instability regions for these equations are identical to those defined by (5.4); however, this coincides with the region in which the models become ill-posed (negative diffusion), which manifests through corresponding characteristic equations whereby all wavenumbers are unstable, see red curves in Figure 6(b-e). The expansions can also be truncated at higher order terms, and in particular the fourth order approximations become \[\partial_{t}u=\partial_{x}\left[\left(d-\frac{\mu_{0}R}{2}u\right)\partial_{x}u-\frac{\mu_{0}R^{3}}{24}u\partial_{xxx}u\right] \tag{5.6}\] and \[\partial_{t}u=\partial_{x}\left[(d-\nu_{0}u)\,\partial_{x}u-\frac{\nu_{0}R^{2}}{6}u\partial_{xxx}u\right] \tag{5.7}\] for (5.1) and (5.2), respectively. Instability regions are again those defined by (5.4). However, we now note that the destabilising second order term is countered by a stabilising fourth order term. The characteristic equations in this case generate finite ranges of unstable wavenumbers (blue curves, Figure 6(b-e)), with curves closely following those of the nonlocal model (black curves). The distinct limiting behaviours as \(R\to 0\) become clear from (5.6)-(5.7): the fourth order approximation to (5.1) implies convergence to a simple diffusion equation, with constant (and nonnegative) diffusion coefficient \(d\); the fourth order approximation to (5.2), however, converges to a density-dependent form (second equation in (5.5)) with potential ill-posedness. We note that the fourth order approximations to nonlocal models have been studied in detail, e.g. in [182] for one variable models and in [70] for two variable models (see also Discussion and Challenges).
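These behaviours can be read off numerically from (5.3). The following snippet evaluates both dispersion relations under the stated normalisations and reports the unstable wavenumber band (illustrative parameter values; for these choices the unstable set is a single interval):

```python
import numpy as np

# Dispersion relations (5.3) with mu = mu0/(2R) and nu = nu0/(2R)
d, ubar, mu0, nu0 = 1.0, 1.0, 3.0, 3.0
k = np.linspace(1e-6, 20, 200001)

for R in (2.0, 0.5):
    sig = {"(5.1)": -d * k**2 + 2 * ubar * (mu0 / (2 * R)) * (1 - np.cos(k * R)),
           "(5.2)": -d * k**2 + 2 * ubar * (nu0 / (2 * R)) * k * np.sin(k * R)}
    for name, s in sig.items():
        ku = k[s > 0]
        if ku.size:
            print(f"R={R}: {name} unstable for k in ({ku.min():.2f}, {ku.max():.2f})")
        else:
            print(f"R={R}: {name} stable for all wavenumbers")
```

Running it confirms the asymmetry: reducing \(R\) below \(2d/(\bar{u}\mu_{0})\) removes the unstable band for (5.1), while the band for (5.2) widens as \(R\) shrinks.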
Stability analyses can, of course, be extended to explore pattern formation in multi-species models, for example those formulated to simulate adhesion-driven cell sorting. Scenarios under which pattern formation can occur will inevitably become more complicated within such models, as there are more potential routes to pattern formation (e.g. through the self interactions or through the cross interactions). We refer to [157, 173] for examples of stability analyses in multi-species situations.

Figure 6: (a) Parameter spaces for self-organisation as predicted by linear stability analysis, for (5.1) and (5.2) under \(\mu(R)=\mu_{0}/2R\) and \(\nu(R)=\nu_{0}/2R\), respectively. (b-e) Representative curves for the characteristic equations, corresponding to the points highlighted in (a): (b-c) formulation (5.1) and its second and fourth order approximations; (d-e) (5.2) and its second and fourth order approximations. (f-g) Simulations of (5.1) in 1D, for: (f) \((\alpha,R)=(1,3)\); (g) \((\alpha,R)=(0.1,21)\); density maps show the population density (white = low density, purple = density \(\geq 4\bar{u}\)), with inset figures showing the profile at the two times indicated by the dashed lines. For all plots, other parameters are set at \(d=\bar{u}=1\).

### 5.2 Global existence and boundedness

Our above observation of ill-posed local models that can follow from approximations of (1.2) leads to questions regarding the local and global existence of solutions: numerical solutions suggest that aggregates can become highly concentrated (e.g. Figure 3(b)), but still appear to approach a bounded form. Does the presence of the nonlocal term lead to existence of solutions? This has formed a key point of inquiry for a number of publications (e.g. see [123, 183, 20, 64, 49, 71, 101]) related to (1.2). For (1.2a), perhaps the most general theory [101] considers the following form of system \[\partial_{t}u=d\Delta u-\mu\nabla\cdot\left(u\int_{B_{R}(\mathbf{x})}f(u(\mathbf{x}+\mathbf{r},t))\frac{\mathbf{r}}{|\mathbf{r}|}\omega(|\mathbf{r}|)d\mathbf{r}\right), \tag{5.8}\] where \(B_{R}(\mathbf{x})\) denotes the ball of radius \(R>0\) around \(\mathbf{x}\).

**Theorem 5.1** (Corollary 2.4 in [101]).: _Assume:_

* (A1) \(f\in C^{2}(\mathbb{R}^{n})\) _and there exists a value_ \(b>0\) _such that_ \(f(u)=0\) _for all_ \(u\geq b\)_;_
* (A2) \(\omega\in L^{1}(\mathbb{R}^{n})\)_,_ \(\omega\geq 0\)_;_
* (A3) _for_ \(p\geq 1\) _let_ \(u_{0}\in X_{p}:=C^{0}(\mathbb{R}^{n})\cap L^{\infty}(\mathbb{R}^{n})\cap L^{p}(\mathbb{R}^{n})\) _be non-negative._

_Then there exists a unique, global solution_ \[u\in C^{0}([0,\infty);X_{p})\cap C^{2,1}(\mathbb{R}^{n}\times(0,\infty))\] _of (5.8) in the classical sense, with \(u(\mathbf{x},0)=u_{0}(\mathbf{x})\), \(\mathbf{x}\in\mathbb{R}^{n}\)._

We remark that while the above immediately implies global existence of solutions in \(n\) dimensions for a large class of formulations, it does not yet cover some standard choices. The oft-used formulation (2.2) is particularly delicate as, formally, (A1) states that \(f\) can only be linear up to a bounded density, becoming zero beyond some higher density. From the point of view of practical application this is sufficient, as we would naturally expect a bound to arise from physical or biological constraints, e.g. space limitations or saturation of receptors. Nevertheless, covering the case \(f(u)=u\) without that explicit assumption remains an open problem.
The same Theorem 5.1 can also be used in the context of other aggregation models, and in particular we refer to the formulation based on energy minimisation, (1.3). To see this, we first note the connection of (5.8) to the energy-based formulation by supposing there exists some \(W(|\mathbf{r}|)\) such that \(\nabla W(|\mathbf{r}|)=\frac{\mathbf{r}}{|\mathbf{r}|}\omega(|\mathbf{r}|)\). Recalling that \(\mathbf{r}=\mathbf{y}-\mathbf{x}\), straightforward calculations (shown in Appendix A) reveal that (5.8) can be rewritten as \[\partial_{t}u=d\Delta u+\mu\nabla\cdot\left(u\nabla(W*f(u))\right)\,. \tag{5.9}\] Therefore, we can apply Theorem 5.1 straightforwardly: **Corollary 5.1**.: _Consider the model (5.9) where \(\mu>0\) and \(W(|\mathbf{r}|)\) is a potential, a function of the distance of the interaction \(|\mathbf{r}|=|\mathbf{y}-\mathbf{x}|\). Suppose \(f\) satisfies the same conditions as (A1), the initial condition satisfies (A3) and \(W\) satisfies_ * _(W1)_ \(W(|\mathbf{r}|)\in L^{\infty}\)_, and_ \(W(|\mathbf{r}|)\) _has compact support inside a ball_ \(B_{R}(0)\)_._ * _(W2) There exists a scalar function_ \(\omega(|\mathbf{r}|)\) _such that_ \(\nabla W(|\mathbf{r}|)=\frac{\mathbf{r}}{|\mathbf{r}|}\omega(|\mathbf{r}|)\)_._ * _(W3)_ \(\omega(|\mathbf{r}|)\in L^{1}\) _and_ \(\omega(|\mathbf{r}|)\geq 0\)_._ _Then equation (5.9) has a unique global classical solution_ \[u\in C^{0}([0,\infty);X_{p})\cap C^{2,1}(\mathbb{R}^{n}\times(0,\infty)).\] **Proof**. The proof follows immediately from Theorem 5.1 by replacing \(\nabla W\) with \(\frac{\mathbf{r}}{|\mathbf{r}|}\omega(|\mathbf{r}|)\). Corollary 5.1 is the first existence result for aggregation models (1.2b) with general nonlinear response functions \(f(u)\). However, the condition (W3) is quite restrictive. Since we require \(\omega(|\mathbf{r}|)\geq 0\), condition (W2) imposes that the drift is always towards the origin, where the origin corresponds to the location of the probing individual. Hence the forces are always attractive. Examples of attractive potentials are shown in Figure 7 (left), and include the linear potential \(W_{TH}\) and the exponential potential (also called a Moore potential or Laplace kernel) \(W_{E}\), \[W_{TH}(|\mathbf{r}|)=\min\left\{\frac{1}{R}|\mathbf{r}|-1,0\right\},\qquad W_{ E}(|\mathbf{r}|)=-\exp\left(-\frac{4|\mathbf{r}|}{R}\right),\] where \(R\) represents an interaction range parameter. The exponential kernel has unbounded support, but converges to zero quickly for larger \(|\mathbf{r}|\); the factor of four ensures that it is close to zero for \(|\mathbf{r}|=R\). Other purely attractive potentials include the Gaussian kernel and the Hegselmann-Krause potential used in opinion dynamics (e.g. see [129, 83, 125, 94]). In the cases described above the potential is strictly increasing for small values of \(|\mathbf{r}|\), hence indicating an attractive force towards the origin. Indeed, the corresponding kernels \(\omega(|\mathbf{r}|)\) are nonnegative, see Figure 7 (right). As a point of note, under the linear potential \(W_{TH}\) we obtain a so-called top-hat kernel, e.g. as previously used in (2.2). In the swarming literature it is quite common to consider potentials that describe both attractive and repulsive effects. In such cases \(W\) is no longer monotonic, and hence \(\omega\) changes sign: when \(W\) is increasing, \(\omega>0\), and we are in an attracting region; if \(W\) is decreasing, \(\omega<0\), and we are in a repelling region. 
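Since admissibility of a potential in Corollary 5.1 essentially reduces to checking the sign of \(\omega=W^{\prime}\), this is straightforward to verify numerically. A minimal sketch (with \(R=5\), as in Figure 7; the grid and tolerance are assumptions), recovering the kernels of the two attractive potentials above:

```python
import numpy as np

R = 5.0                                     # interaction range, as in Figure 7
r = np.linspace(1e-3, 3 * R, 3000)
potentials = {
    "W_TH": np.minimum(r / R - 1.0, 0.0),   # linear (top-hat) potential
    "W_E": -np.exp(-4.0 * r / R),           # exponential (Laplace) potential
}
for name, W in potentials.items():
    omega = np.gradient(W, r)               # omega(|r|) = W'(|r|), cf. (W2)
    print(name, "purely attractive:", bool((omega >= -1e-9).all()))
```

A non-monotone potential, such as the attraction-repulsion example introduced next, fails this check on its repelling region.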
One simple example of an attraction-repulsion potential, also shown in Figure 7, is given by \[W_{AR}(|\mathbf{r}|)=\cos\left(\frac{\pi|\mathbf{r}|}{R}\right)\,.\] This stipulates a repelling region for interaction distances up to \(R\), and an attracting region from \(R\) to \(2R\). Note that the attraction-repulsion potential has a minimum at \(|\mathbf{r}|=R\), the point at which repulsion switches to attraction; in this context, \(R\) can be regarded as the preferred distance between individuals. Other examples of attractive and repulsive potentials are discussed in [42] and include the generalized Kuramoto model, the Onsager model for liquid crystals, and the Barre-Degond-Zatorska model.

### Bifurcation analysis

There are two principal techniques that have been used to analyse bifurcations in models of the type in Equations (1.2) and (1.4): weakly nonlinear analysis (WNLA) and Crandall-Rabinowitz bifurcation theory (CRBT). Both techniques are useful for separating bifurcations into sub- and super-critical regimes. However, CRBT relies on steady-state formulations, whereas WNLA can reveal the criticality of bifurcations whereby the dominant eigenvalue is non-real and so solutions just beyond the bifurcation point oscillate in time. On the other hand, CRBT can be used to understand the global nature of branches [97], whereas WNLA is intrinsically local in its formulation [133]. We give examples here of both techniques, first CRBT then WNLA, applied to our models of interest, exemplifying valuable outcomes and important considerations when applying them.

#### 5.3.1 Crandall-Rabinowitz type bifurcation analysis

Bifurcation analyses that use the Crandall-Rabinowitz framework [55, 177] (alongside methods from equivariant bifurcation theory [89]) have been carried out in a recent monograph [97]. To illustrate, we consider a particularly simple setting in one dimension, for the interval domain \([0,L]\) with a possibly non-linear adhesion function \(f(u)\): \[\partial_{t}u=\partial_{xx}u-\mu\partial_{x}\left[u\int_{-1}^{1}f(u(x+r,t)) \frac{r}{|r|}\omega(|r|)dr\right], \tag{5.10}\] where \(\omega(|r|)\geq 0\), \(\omega\in L^{1}(0,1)\cap L^{\infty}(0,1)\), and \(\|\omega\|_{L^{1}(0,1)}=\frac{1}{2}\). In (5.10) we implicitly assume that the integral kernel has compact support in the interval \([-1,1]\) and that \(d=1\), i.e. an assumed _a priori_ rescaling of space and time that normalises the interaction range and diffusion coefficient to \(1\). Note that we set \(L>2\), so that the sensing region cannot touch both boundaries simultaneously. We equip \([0,L]\) with periodic boundary conditions \[u(0,t)=u(L,t),\qquad\partial_{x}u(0,t)=\partial_{x}u(L,t),\] with the integral wrapped around in a natural way. The interaction strength parameter \(\mu\) is taken as the bifurcation parameter. 
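Before analysing (5.10), it is instructive to see its patterns emerge numerically. The following is a minimal sketch, not one of the optimised schemes cited in the Discussion (e.g. FFT-based convolution): explicit Euler time stepping with centred differences, \(f(u)=u\), the top-hat kernel \(\omega=\tfrac{1}{2}\) on \([0,1]\), and an interaction strength chosen beyond the onset of instability; all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative periodic solver for (5.10) with f(u) = u and the top-hat
# kernel omega = 1/2 on [0, 1].
L, N, mu = 10.0, 256, 3.0
dx = L / N
m = int(round(1.0 / dx))                  # grid points per unit sensing range
j = np.arange(-m, m + 1)
w = 0.5 * np.sign(j) * dx                 # sgn(r) * omega(|r|) * dr

def K(u):
    """Nonlocal term: int_{-1}^{1} u(x + r) sgn(r) omega(|r|) dr."""
    return sum(wj * np.roll(u, -jj) for jj, wj in zip(j, w))

def ddx(v):
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)      # periodic d/dx

def lap(v):
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2  # periodic d2/dx2

rng = np.random.default_rng(1)
u = 1.0 + 1e-2 * rng.standard_normal(N)   # small perturbation of ubar = 1
dt = 0.2 * dx**2                          # heuristic explicit stability bound
for _ in range(int(30.0 / dt)):           # integrate to t = 30
    u = u + dt * (lap(u) - mu * ddx(u * K(u)))
print(f"mass error: {abs(u.mean() - 1.0):.1e}, max density: {u.max():.2f}")
```

Starting from a small random perturbation of \(\bar{u}=1\), mass is conserved up to floating point error while aggregates of several times the mean density form.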
We define the Fourier-sine coefficients of the sensing function \(\omega\) as \[M_{n}(\omega)=\int_{0}^{1}\sin\left(\frac{2\pi nr}{L}\right)\omega(|r|)dr.\] As the monograph[97] shows in detail, a number of properties can be identified for the following turning operator \[K[u](x)=\int_{-1}^{1}f(u(x+r,t))\frac{r}{|r|}\omega(|r|)dr\,.\] Specifically, \(K\) is found to be skew adjoint, \(K[1]=0\) and, for the specific case \(f(u)=u\), maps sine and cosine functions as follows: \[K\left[\sin\left(\frac{2\pi nx}{L}\right)\right] = 2M_{n}(\omega)\cos\left(\frac{2\pi nx}{L}\right),\] \[K\left[\cos\left(\frac{2\pi nx}{L}\right)\right] = -2M_{n}(\omega)\sin\left(\frac{2\pi nx}{L}\right).\]

Figure 7: Examples of interaction potentials (left) and the corresponding forces (right), using \(R=5\). Here we consider the top hat potential \(W_{TH}\), the exponential potential \(W_{E}\), and the attraction-repulsion potential \(W_{AR}\).

Moreover, if \(u(x)\) is a steady state of (5.10), then \(u^{\prime}(x)=0\) if and only if \(K[u]=0\), \(u^{\prime\prime}(x)\leq 0\) implies \(K^{\prime}[u]\leq 0\), and \(K^{\prime}[u]\geq 0\) implies \(u^{\prime\prime}(x)\geq 0\). In this context we can view \(K[u]\) as a non-local derivative and \(K^{\prime}[u]\) as a non-local curvature of the solution. The symmetries of \(K[u]\) are also shown [97] to possess crucial properties. \(K\) has \(O(2)\) symmetry and, as a consequence, bifurcation branches arise at discrete points through the following theorem. **Theorem 5.2** (see [97]).: _Consider a constant steady state \(\bar{u}\) of (5.10) with \(f^{\prime}(\bar{u})\neq 0\). For each \(n=1,2,3,\dots\) with \(M_{n}(\omega)>0\) there exists a bifurcation value and eigenfunction as_ \[\mu_{n}=\frac{n\pi}{L\bar{u}f^{\prime}(\bar{u})M_{n}(\omega)},\qquad e_{n}(x) =\cos\left(\frac{2\pi nx}{L}\right).\] For a linear interaction function \(f(u)=u\) it is also possible to identify the type of bifurcation via higher order expansions around the bifurcation value \(\mu_{n}\). Specifically: **Theorem 5.3** (see [97]).: _If \(f(u)=u\), then the type of bifurcation at \(\mu_{n}\) is given by the sign of_ \[\beta_{n}=\frac{M_{2n}(\omega)-M_{n}(\omega)}{M_{2n}(\omega)-2M_{n}(\omega)}.\] _If \(\beta_{n}>0\) then the bifurcation at \(\mu_{n}\) is supercritical and for \(\beta_{n}<0\) it is subcritical._ Notably, the type of bifurcation turns out to be entirely determined by the Fourier sine modes of the sensing function \(\omega(r)\). **Example.** As an example, consider \(f(u)=u\) and a top-hat kernel \[\omega(r)=\frac{1}{2}\chi_{[-1,1]}(r).\] Then the Fourier sine coefficients of \(\omega\) are \[M_{n}(\omega)=\frac{L}{2\pi n}\sin^{2}\left(\frac{n\pi}{L}\right)\] and the bifurcation values are \[\mu_{n}=\frac{2\pi^{2}n^{2}}{L^{2}\bar{u}\sin^{2}\left(\frac{n\pi}{L}\right)}.\] If \(n/L\) is an integer then \(\sin(n\pi/L)=0\) and the corresponding bifurcation value does not exist. Otherwise, all \(\mu_{n}\) are well defined. The type of bifurcation is given by the sign of \[\beta_{n}=2\left(1-\cot^{2}\left(\frac{n\pi}{L}\right)\right),\] which, indeed, can be positive or negative. As in the previous subsection, a close relationship can be observed between model (5.10) and those formulated from an energy-based approach. Given the sensing function \(\omega(r)\), we define a potential \[W(r):=V(r)\chi_{[-1,1]}(r),\qquad\text{with}\quad V^{\prime}(r)=\omega(r) \tag{5.11}\] Then for smooth solutions model (5.10) is equivalent to \[\partial_{t}u=\partial_{xx}u+\mu\partial_{x}[u\partial_{x}(W*f(u))]. 
\tag{5.12}\] As such, the bifurcation result of Theorem 5.2 can be straightforwardly extended to this case: **Corollary 5.2**.: _Consider (5.12) where the potential is given by (5.11). Then, for each \(n=1,2,3,\ldots\) with \(M_{n}(\omega)>0\), there exists a bifurcation value and eigenfunction given by_ \[\mu_{n}=\frac{n\pi}{L\bar{u}f^{\prime}(\bar{u})M_{n}(\omega)},\qquad e_{n}(x)= \cos\left(\frac{2\pi nx}{L}\right).\] As a point of remark, in [42] the bifurcations of (5.12) were considered only for the linear case \(f(u)=u\). For that case, the bifurcation value at equilibrium \(\bar{u}=\frac{1}{L}\) was expressed as \[\mu_{n}^{*}=-\frac{(2L)^{1/2}}{\tilde{W}(n)},\] where \[\tilde{W}(n)=\sqrt{\frac{2}{L}}\int_{0}^{L}W(x)\cos\left(\frac{2\pi nx}{L} \right)dx\] denotes the Fourier cosine coefficient of the potential \(W\). We can directly compute that \[\tilde{W}(n)=-\frac{\sqrt{2L}}{\pi n}M_{n}(\omega),\] which implies \(\mu_{n}=\mu_{n}^{*}\): a satisfying confirmation of our results. Note that in [42] bifurcation analysis was also extended to arbitrary space dimensions, exceeding what has currently been performed for formulations of type (1.2a).

#### 5.3.2 Weakly nonlinear analysis and conservation laws

We observed above that bifurcations emerge at well-defined strictly positive wavenumbers, which is a rather typical behaviour for many reaction-diffusion systems [149]. In most cases, weakly nonlinear analysis (WNLA) can be used to reveal a Stuart-Landau equation governing the amplitude of the patterns close to the bifurcation point. However, when the PDE possesses a conservation law, i.e. \(\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}udx=0\), the situation is rather more complicated. In particular, the wavenumber that is destabilised first can be arbitrarily close to the origin, often meaning that the Stuart-Landau formalism is insufficient for capturing the dynamics of the amplitude of patterns [58, 133]. Such a situation is pertinent here, as Equations (1.2) and (1.4) can all possess conservation laws under certain boundary conditions (e.g. periodic). To explain this in more detail, it is valuable to look at a specific example. To this end, we consider a recently-studied symmetric 2-species version of Equation (1.4) given by[85] \[\begin{split}\partial_{t}u_{1}&=\partial_{xx}u_{1}+ \gamma\partial_{x}\left(u_{1}\partial_{x}(K*u_{2})\right),\\ \partial_{t}u_{2}&=\partial_{xx}u_{2}+\gamma\partial _{x}\left(u_{2}\partial_{x}(K*u_{1})\right),\end{split} \tag{5.13}\] defined on \(x\in\left[-\frac{L}{2},\frac{L}{2}\right]\) for \(L>2\), with \(\text{Supp}(K)=\left[-1,1\right]\) and periodic boundary conditions. Let \(\bar{\mathbf{u}}=(\bar{u}_{1},\bar{u}_{2})\) be the constant steady state. In the case \(\gamma>0\), we can think of this as modelling two mutually-avoiding populations with identical advective and diffusive properties, for example territorial groups of animals. For \(\gamma<0\), this models mutually-attractive populations, for example symbiotic animal species, or cell-types that have mutual adhesive tendencies. As is standard in WNLA, the authors[85] first decompose space and time into short and long scales. Specifically, they define \(X=\epsilon x\) and \(T=\epsilon^{2}t\). Then they look for solutions of the form[133] \[\mathbf{u}(x,t)=\mathbf{\bar{u}}+A(X,T)e^{iq_{c}x}+A^{*}(X,T)e^{-iq_{c}x}+B(X,T), \tag{5.14}\] where \(q_{c}\) is the first wavenumber to be destabilised as \(\gamma\) passes through the bifurcation threshold. 
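Before examining the role of the conservation law, we note that the top-hat example above lends itself to a quick numerical check. The following minimal sketch tabulates the bifurcation values and their criticality, assuming the formulas for \(M_{n}\), \(\mu_{n}\) and \(\beta_{n}\) quoted from [97]; the domain length \(L=10\) and \(\bar{u}=1\) are illustrative assumptions.

```python
import numpy as np

def M(n, L):
    """Fourier-sine coefficients of the top-hat kernel: L/(2 pi n) sin^2(n pi / L)."""
    return L / (2 * np.pi * n) * np.sin(n * np.pi / L) ** 2

L, ubar = 10.0, 1.0                       # illustrative domain length and mean
for n in range(1, 6):
    mu_n = n * np.pi / (L * ubar * M(n, L))            # Theorem 5.2, f'(ubar) = 1
    beta_n = (M(2 * n, L) - M(n, L)) / (M(2 * n, L) - 2 * M(n, L))  # Theorem 5.3
    kind = "supercritical" if beta_n > 0 else "subcritical"
    print(f"n={n}: mu_n = {mu_n:.3f} ({kind})")
```

Subcritical modes flag regimes in which patterns are expected to appear at finite amplitude rather than growing continuously from \(\bar{u}\), a theme that reappears in the conservation-law analysis below.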
In situations where there is no conservation law, and the zero mode is stable close to the bifurcation point, there is no need to include the term \(B(X,T)\). However, the conservation law means that the zero mode always has an eigenvalue of zero, so it can be unstable to spatial perturbations on the slow-time, long-space scale (i.e. in \((X,T)\) coordinates). It should be noted that the amplitudes \(A\) and \(B\) depend on the macroscopic time and space scales, while the mode \(e^{iq_{c}x}\) depends on the microscale. In particular, the authors showed [85] that if \(\bar{u}_{1}\neq\bar{u}_{2}\), \(A\) is governed by the Stuart-Landau equation \[A_{T}=q_{c}^{2}A-\Lambda|A|^{2}A, \tag{5.15}\] and \(B=0\), whenever \(\gamma\) is in the linearly unstable regime. However, if \(\bar{u}_{1}=\bar{u}_{2}\) then there is a different system of amplitude equations \[A_{T} =q_{c}^{2}A-\Lambda|A|^{2}A+\frac{q_{c}^{2}}{\bar{u}_{1}}AB, \tag{5.16}\] \[B_{T} =\eta B_{XX}-\frac{1}{\bar{u}_{1}}(|A|^{2})_{XX}, \tag{5.17}\] where \(\eta\) is a function of \(\gamma_{c}\), \(\bar{u}_{1}\), and \(\hat{K}(0)\), with \(\hat{K}(q)\) denoting the Fourier-cosine coefficient of \(K(x)\) \[\hat{K}(q)=\int_{-1}^{1}K(x)\cos(qx)\text{d}x. \tag{5.18}\] In Equations (5.15) and (5.16), \(\Lambda\) controls the criticality of the bifurcation in \(A\), and is a function of \(\hat{K}(q_{c})\), \(\hat{K}(2q_{c})\), \(\bar{u}_{1}\), \(\bar{u}_{2}\), and \(\gamma_{c}\) (see [85] for precise functional forms of \(\Lambda\) and \(\eta\)). In the \(\bar{u}_{1}=\bar{u}_{2}\) case, due to the contribution of the function \(B(X,T)\), branches that bifurcate supercritically in \(A(X,T)\) can be unstable. Indeed, the following proposition holds. **Proposition 5.1**.: _Suppose \(\bar{u}_{1}=\bar{u}_{2}\). If \(\gamma\) is in the linearly unstable regime and \(\Lambda>0\) then small amplitude patterns to System (5.13) exist. These solutions are unstable if_ \[\Lambda<\frac{\bar{u}_{1}^{2}}{q_{c}^{2}\eta}. \tag{5.19}\] This means that, in the case \(0<\Lambda<\frac{\bar{u}_{1}^{2}}{q_{c}^{2}\eta}\), we have a supercritical bifurcation, but unlike the Stuart-Landau situation, stable patterns do not grow continuously as the bifurcation point is crossed. Rather, numerical solutions show a discontinuous jump to a higher amplitude than the supercritical branch predicts[85]. This case study shows the importance of accounting for the zero mode in bifurcation analysis of nonlocal advection-diffusion equations. Whilst we have only shown this in a single example, it is reasonable to expect that unstable supercritical branches may be a phenomenon observed more generally.

## 6 Discussion and Challenges

To conclude, we outline a number of outstanding issues regarding modelling with nonlocal advection, and provide a few potential ways forward that could be fruitful in the coming years. **Existence results.** A large existence theory has been developed which covers a relatively broad spectrum of models that lie in the forms (1.2-1.4). However, the generalised structure of these equations leads to a vast spectrum of models, and an all-encompassing theory is not yet available. For example, functions \(\mathbf{k}\) (or \(w\)) can vary from positive to negative and systems with multiple species can admit a wide spectrum of cross interactions. 
**Steady states, stability and bifurcation structure.** Dynamically, models (1.2-1.4) are capable of an extremely rich variety of patterning, including stationary aggregate patterns, oscillating structures, and travelling wave dynamics. Classical Turing-type stability analyses of nonlocal models have generally focused on one spatial dimension; intriguingly, however, recent extensions[109] to higher dimensions indicate a dimensionally-dependent self-organising capacity, with patterning possible in higher dimensions for a formulation incapable of self-organisation in one dimension. Studies into long time behaviours have primarily relied on simulations; however, this alone is far from satisfactory: transients can persist over long timescales and become confused with stationary solutions. As an example, referring to Figure 6(f-g), a coarsening sequence is observed in which aggregates collapse over time: is the long time outcome a single aggregate? Expanding analytical methods, such as energy functional approaches [46, 84], would have high value in generating a more nuanced understanding of steady states and bifurcation structures. **Boundary effects.** In a nonlocal model, individuals inside some domain \(\Omega\) may conceivably sense information from outside \(\Omega\). The act of writing the nonlocal term at or close to a boundary therefore requires thought, as its support may extend beyond the domain of definition of the model. One can sidestep this through wrapping the nonlocal term around the domain, via the imposition of periodic boundary conditions[83]. Another approach is to alter the definition of the nonlocal term, in a mathematically consistent way, as it approaches the boundary[98]. More broadly, the potential range of boundary conditions is immense and requires consideration on an application-to-application basis. For example, for adhesive populations one could allow the external space to exert varying levels of 'stickiness', or be actively repelling, according to tissue structure; in the case of multiple populations, different populations may respond distinctly near the interface. Non-standard boundary conditions can strongly influence patterning within classical models (e.g., for reaction-diffusion systems see [62, 167]), and it is natural to expect a similarly powerful impact of boundary conditions on the aggregation models considered here. **Local formulations.** Widespread adoption of nonlocal models is hindered by the analytical and numerical challenges they pose. While efficient numerical methods have been developed (Fast Fourier Transforms for the integral calculation [79], positivity-preserving finite volume methods [44], pseudospectral methods [88]), formulating local models with similar properties could assist both numerics and analysis. As noted, second order local models can be derived from random walk models [6, 110], and formal analyses[66] have investigated convergence between local and nonlocal forms. However, the potential illposedness of local forms remains an issue. 
Fourth order local equations provide a promising avenue, and in [70] the following two-species local model for sorting was derived from an underlying nonlocal system: \[\partial_{t}u = -\nabla\cdot\left[u\nabla\left(\mu\Delta u+\beta\Delta v+\gamma u+ \delta v\right)\right]\,,\] \[\partial_{t}v = -\nabla\cdot\left[v\nabla\left(\beta\Delta u+\Delta v+\delta u+v \right)\right]\,.\] The parameters above relate to those in the nonlocal interaction terms, and the model was shown to be capable of reproducing a similar range of sorting dynamics to those of nonlocal models. Overall, derivation and exploration of well-behaved local models is of importance. **Structured populations.** Population heterogeneity in nonlocal models is typically restricted to two-state systems, i.e. two populations with distinct properties. Discretisation into distinct subpopulations is often an approximation within biological systems: for example, studies[122] of invasive breast cancer cells indicate invading cells lie on a continuum of intermediate states from epithelial to mesenchymal; individual-to-individual variation of 'animal personality'[211, 165, 114] plays an important role in collective animal movements. Instead of extending the number of subpopulations in (1.4), subtle variation could be treated through a structured population framework: extending to a density \(u(\mathbf{x},p,t)\) where \(p\) represents the phenotype state, and choosing interaction terms to describe how different phenotypes influence the dynamics [164]. **Applications to sociological systems.** This review has concentrated on nonlocal PDEs motivated by biological systems, in particular the spatiotemporal structuring of animals and cells. Naturally, the models and methods have applications beyond those areas, in particular to sociological systems. Perhaps the most germane example here would be crowds and traffic: an area that has witnessed much modelling with techniques ranging from agent-based to continuous (e.g. see [16, 90]). Concepts of stigmergy also cross to social systems, for example gang territoriality, where agent-based modelling[13] has shown that territories can emerge indirectly through graffiti rather than direct conflict. Nonlocal models directly related to equation (1.2a) have been derived from agent-based models in the context of opinion dynamics (e.g. see [11, 77, 166, 88]), where movement through physical space becomes a movement across opinion space and aggregation corresponds to consensus. Undoubtedly, numerous problems may benefit from the frameworks considered here. **Testing predictions.** Mathematical modelling of biological pattern formation is often inspired by the attempt to understand patterns already observed in biological systems: as examples, here we have described nonlocal models formulated to reproduce the observed patterns from cell sorting or territory formation. However, one can also use models to predict patterns that could be observed. For example, the multi-species Equations (1.4) display rich pattern formation properties that ought to be observable in natural systems, if the models contain a sufficiently accurate representation of the underlying interactions. Patterns emerging from the model that have not yet been identified in the real world can be viewed as predictions: do these patterns actually emerge in distributions if movement data is collected and/or analysed appropriately? 
If so, this would lead to new knowledge on the variety of patterns that can form spontaneously in populations of moving organisms. If not, this would inform us of missing features in our models, and deepen our understanding of the drivers of organism space use. **Connecting to data.** Testing predictions demands techniques for connecting models and data. Beyond those reported here, machine learning algorithms (e.g. see [128]) allow trajectory data to be translated into interaction kernels for ABMs, which can then be scaled to PDEs. However, deciding on the most appropriate technique for the data and question at hand is far from straightforward. To give an example from animal ecology, there are broadly two classes of techniques for fitting PDE models to data that are currently applied. The first starts by building a PDE model based on qualitative aspects of behaviour that have been observed. Then the emergent patterns from numerical solutions of the PDE model are fitted to location data, to uncover the underlying behavioural processes in a more quantitative way. This is exemplified in studies of mechanistic home range analysis[143]. The second approach follows that described in Section 4.1.3, where a movement kernel (a.k.a. a position jump process) such as that in Equation (4.9) is fitted to a time series of location data. Then the PDE model is derived from this movement kernel [169]. The comparison between emergent patterns in the model and in the data then serves as a kind of 'goodness-of-fit' test for the model, which can help uncover missing covariates of animal movement decisions [170]. Whilst this contrast in techniques has been known in the literature for some time[171], these two approaches would benefit from unification to achieve the maximum scientific benefit from analysing a given dataset. **Collective cell movement.** The analysis of collective cell movement forms a highly active area of research, from embryonic development to cancer invasion processes (e.g. see [217, 134]), and a large number of modelling approaches have been developed (e.g. [36, 4]). Often, migrating cells extend long thin protrusions into their environment, possibly conferring an element of nonlocal sensing: for example, the formation of numerous lengthy filopodia appears to play an important role during effective migration of neural crest cells [135], while long thin 'tumour microtubules' play an apparently crucial role by facilitating invasion and growth of certain brain tumours (e.g. [153, 96]). Mathematical analysis of models capable of incorporating potential nonlocal impacts, as discussed here, promises new biological insight. The growth of mathematical biology in recent decades has been spectacular, crossing scales and disciplines. However, the trade-off is fragmentation: mathematical ecology, mathematical oncology, etc., form their own fields; collaborative networks have become specialised; and keeping pace with developments in other fields becomes a challenge. Despite this, the common language of mathematics remains. A key aim of this review has been to demonstrate this, showing the connection between nonlocal models used in ecological and cellular systems and suggesting the two fields can mutually benefit from their ongoing development. **Acknowledgements**: KJP is a member of INdAM-GNFM and acknowledges 'Miur-Dipartimento di Eccellenza' funding to the Dipartimento di Scienze, Progetto e Politiche del Territorio (DIST). 
JRP acknowledges support of the Engineering and Physical Sciences Research Council (EPSRC) grant EP/V002988/1. TH is supported through a discovery grant of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2017-04158.

## Appendix A Correspondence between models

We demonstrate the calculations that show the translation between (5.8) and (5.9). Specifically, we assume there exists a potential \(W(|\mathbf{r}|)\) such that \[\nabla_{\mathbf{r}}W(|\mathbf{r}|)=\frac{\mathbf{r}}{|\mathbf{r}|}\omega(| \mathbf{r}|).\] (A.1) Substituting (A.1) into (5.8) and noting \[\mathbf{y}=\mathbf{x}+\mathbf{r},\ \mathbf{r}=\mathbf{y}-\mathbf{x},\ d \mathbf{y}=d\mathbf{r},\ \nabla_{\mathbf{y}}=\nabla_{\mathbf{r}}\,,\] \[\partial_{t}u = d\Delta u-\mu\nabla\cdot\left(u\int_{B_{R}(x)}f(u(\mathbf{x}+\mathbf{ r}))\nabla_{\mathbf{r}}W(|\mathbf{r}|)d\mathbf{r}\right)\] \[= d\Delta u-\mu\nabla\cdot\left(u\int_{B_{R}(0)}f(u(\mathbf{y}))\nabla_ {\mathbf{y}}W(|\mathbf{y}-\mathbf{x}|)d\mathbf{y}\right)\] \[= d\Delta u+\mu\nabla\cdot\left(u\int_{B_{R}(0)}f(u(\mathbf{y}))\nabla_ {\mathbf{x}}W(|\mathbf{y}-\mathbf{x}|)d\mathbf{y}\right)\] \[= d\Delta u+\mu\nabla\cdot(u(\nabla_{\mathbf{x}}W)*f(u))\] \[= d\Delta u+\mu\nabla\cdot(u\nabla(W*f(u))).\] The above shows that energy minimisation corresponds to attractive interactions between individuals. Note that where subscripts are not included, \(\nabla\equiv\nabla_{\mathbf{x}}\) and \(\Delta\equiv\Delta_{\mathbf{x}}\).
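The identity (A.1) can also be checked symbolically for a concrete potential. Below is a minimal sympy sketch for the exponential potential \(W_{E}\) of Section 5.2, restricted to \(x,y>0\) to avoid the non-smooth point at the origin; the grid of symbols and the two-dimensional setting are illustrative assumptions.

```python
import sympy as sp

# Check grad W_E(|r|) = (r/|r|) * omega(|r|) with omega >= 0 (attraction),
# working in 2D with x, y > 0 to avoid the non-smooth point at the origin.
x, y, R = sp.symbols('x y R', positive=True)
r = sp.sqrt(x**2 + y**2)
W = -sp.exp(-4 * r / R)                     # exponential potential W_E
omega = sp.simplify(sp.diff(W, x) * r / x)  # candidate omega(|r|)
print(omega)                                # (4/R) * exp(-4 r / R), nonnegative
assert sp.simplify(sp.diff(W, y) - (y / r) * omega) == 0
```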
2303.10407
Inverting log blow-ups in log geometry
In the category of log schemes, it is unclear how to define the blow-ups for non-strict closed immersions. In this article, we introduce the notion of divided log spaces. We obtain the category of divided log spaces by locally inverting log blow-ups in the category of log schemes. We show that blow-ups exist for closed immersions of log smooth divided log spaces. This is an ingredient of the motivic six-functor formalism for log schemes.
Doosung Park
2023-03-18T12:52:49Z
http://arxiv.org/abs/2303.10407v2
# Inverting log blow-ups in log geometry ###### Abstract. In the category of log schemes, it is unclear how to define the blow-ups for non-strict closed immersions. In this article, we introduce the notion of divided log spaces. We obtain the category of divided log spaces by locally inverting log blow-ups in the category of log schemes. We show that blow-ups exist for closed immersions of log smooth divided log spaces. This is an ingredient of the motivic six-functor formalism for log schemes. Key words and phrases: log schemes, log blow-ups, divided log spaces 2020 Mathematics Subject Classification: 14A21

## 1. Introduction

For a regular embedding \(Z\to X\) of schemes, the deformation to the normal cone construction, denoted \(\operatorname{D}_{Z}X\), plays a central role in intersection theory. For example, if \(X\) is a smooth scheme over a field \(k\), Fulton-MacPherson [5] used this construction for the diagonal morphism \(X\to X\times_{k}X\) to define the intersection product. The notion of log schemes, introduced by Fontaine-Illusie and further developed by Kato [8], can be thought of as the notion of "schemes with boundaries". The extra structure of boundaries is helpful for the compactification and degeneration problems in algebraic geometry. For example, log geometry has been applied to compactifying moduli spaces and the proof of the \(C_{st}\)-conjecture in \(p\)-adic Hodge theory. To develop intersection theory or motivic homotopy theory of log schemes, the deformation to the normal cone construction for log schemes is desirable. However, there is a technical difficulty: If \(X\) is a log smooth fs log scheme over a field \(k\) whose log structure is nontrivial, then the diagonal morphism \(X\to X\times_{k}X\) is a closed immersion that is not strict. It is unclear how to define the blow-ups for non-strict closed immersions in the category of fs log schemes because a non-strict closed immersion is not defined by a sheaf of ideals. Hence the construction of \(\operatorname{D}_{X}(X\times_{k}X)\) in this category is unclear. The purpose of this article is to introduce the notion of _divided log spaces_. The category of divided log spaces \(\mathbf{lSpc}\) is obtained by locally inverting log blow-ups in the category of fs log schemes \(\mathbf{lSch}\). More precisely, we consider a full subcategory \(\mathbf{lFan}\) of \(\mathbf{lSch}\) in Definition 2.5 such that globally inverting log blow-ups in \(\mathbf{lFan}\) is reasonable. Then we define a divided log space in Definition 4.1 as a presheaf on \(\mathbf{lFan}\) satisfying certain conditions whose formulation is similar to that of algebraic spaces. Any morphism of divided log spaces behaves like an exact morphism. In particular, a non-strict closed immersion of fs log schemes becomes a strict closed immersion of divided log spaces. For applications, we construct the open complements (resp. blow-ups) for non-strict closed immersions \(Z\to X\) of fs log schemes (resp. log smooth fs log schemes) in the category of divided log spaces. This allows us to construct \(\mathrm{D}_{Z}X\), which is a crucial ingredient in the author's forthcoming articles [14] and [15].

### Organization of the article

In Section 2, we explain the three types of covers in log geometry: dividing covers, Zariski covers, and dividing Zariski covers. The associated topologies are called the dividing, Zariski, and dividing Zariski topologies. We study several properties of dividing Zariski sheaves. In Section 3, we consider properties of representable morphisms of sheaves. 
For example, we define representable strict closed immersions, representable log smooth morphisms, and representable Zariski covers of sheaves. We also provide various basic lemmas about representable morphisms, which are used in later sections. The definition of divided log spaces appears in Section 4. A sheaf \(\mathcal{X}\) is called a divided log space if the diagonal morphism is a representable strict closed immersion, and if there exists a representable Zariski cover \(\mathcal{Y}\to\mathcal{X}\) such that \(\mathcal{Y}\) is representable. The purpose of Section 5 is to glue divided log spaces. Our method for this is to introduce the notion of Zariski equivalence relations, which is an analog of etale equivalence relations in the theory of algebraic spaces. In Section 6, we explain properties of morphisms of divided log spaces, which do not need to be representable. For example, we define closed immersions, log smooth morphisms, and Zariski covers of divided log spaces. We show that closed immersions of divided log spaces are representable strict closed immersions. In Section 7, we introduce several topologies on the category of divided log spaces. We also compare sheaves on the category of divided log spaces and sheaves on the category of fs log schemes. In Sections 8 and 9, we define the open complements and blow-ups of closed immersions of divided log spaces using universal properties. The open complements always exist, but we only show the existence of the blow-ups in the case of closed immersions of log smooth divided log spaces. We combine these two notions for the deformation to the normal cone construction. We also show that the open complements are closed under pullbacks and the blow-ups are closed under pullbacks along log smooth morphisms. In the appendices, we collect several results in log geometry.

### Related work

Kato introduced _algebraic valuative log spaces_ in [7], and this also does a similar job of locally inverting log blow-ups. One technical advantage of our divided log spaces is that the definition resembles that of algebraic spaces. Hence we can imitate many proofs in the literature on algebraic spaces to develop our theory. Kato gave in [7, Proposition 1.4.2] a description of the Hom group in the category of algebraic valuative log spaces, whose proof was left to the reader. Assuming this, we expect that there is a fully faithful functor from the category of divided log spaces into the category of algebraic valuative log spaces since we have a similar result in Proposition 2.17. We leave an investigation of the comparison to the interested reader. **Notation**.: Throughout this article, we fix a noetherian fs log scheme \(B\) of finite Krull dimension. Our standard reference for the notation and terminology in log geometry is Ogus's book [12]. The coproduct \(M\oplus_{P}N\) of saturated monoids is taken in the category of saturated monoids, and the fiber product \(X\times_{S}Y\) of saturated log schemes is taken in the category of saturated log schemes. **Acknowledgements**.: This research was conducted in the framework of the DFG-funded research training group GRK 2240: _Algebro-Geometric Methods in Algebra, Arithmetic and Topology_.

## 2. Dividing Zariski topology

We want a topology that is finer than the Zariski topology and does the job of locally inverting log blow-ups in the category of sheaves. The dividing Zariski topology defined below is suited for this. 
**Definition 2.1**.: Let \(f\colon Y\to X\) be a quasi-compact morphism of fs log schemes. 1. We say that \(f\) is a _dividing cover_ if \(f\) is a universally surjective proper log etale monomorphism. 2. We say that \(f\) is a _Zariski cover_ if \(f\) is surjective and of the form \(\amalg_{i\in I}Y_{i}\to X\) with finite \(I\) such that each \(Y_{i}\to X\) is an open immersion. 3. We say that \(f\) is a _dividing Zariski cover_ if \(f\) is universally surjective and of the form \(\amalg_{i\in I}Y_{i}\to X\) with finite \(I\) such that each \(Y_{i}\to X\) is a log etale monomorphism. Recall that a morphism \(f\colon Y\to X\) in a category with fiber products is a monomorphism if and only if the diagonal morphism \(Y\to Y\times_{X}Y\) is an isomorphism. **Definition 2.2**.: Let \(\{Y_{i}\to X\}_{i\in I}\) be a family of quasi-compact morphisms of fs log schemes with finite \(I\). 1. The family is called a _Zariski covering family_ if \(\amalg_{i\in I}Y_{i}\to X\) is a Zariski cover. 2. The family is called a _dividing Zariski covering family_ if \(\amalg_{i\in I}Y_{i}\to X\) is a dividing Zariski cover. Every dividing cover is a dividing Zariski cover. Every pullback of a dividing (resp. dividing Zariski) cover is again a dividing (resp. dividing Zariski) cover. Every composition of dividing (resp. dividing Zariski) covers is again a dividing (resp. dividing Zariski) cover. **Definition 2.3**.: The topology on the category of quasi-compact fs log schemes generated by dividing (resp. dividing Zariski) covers is called the _dividing_ (resp. _dividing Zariski_) _topology_. Let \(div\) (resp. \(dZar\)) be the shorthand for the dividing (resp. dividing Zariski) topology. **Definition 2.4**.: We refer to [12, Definition II.1.9.2] for the definition of the category of fans. For a fan \(\Sigma\), let \(\mathbb{T}_{\Sigma}\) be the fs log scheme whose underlying scheme is the toric variety over \(\operatorname{Spec}(\mathbb{Z})\) associated with \(\Sigma\) and whose log structure is the compactifying log structure [12, Definition III.1.6.1] associated with the open immersion from the torus \(\mathbb{G}_{m}^{n}\), where \(n\) is the rank of \(\Sigma\). Every morphism of fans \(\theta\colon\Delta\to\Sigma\) induces a morphism of fs log schemes \(\mathbb{T}_{\theta}\colon\mathbb{T}_{\Delta}\to\mathbb{T}_{\Sigma}\). We say that \(\theta\) is a _subdivision_ if the associated homomorphism of lattices and the associated map of supports \(|\Delta|\to|\Sigma|\) are isomorphisms. In this case, \(\mathbb{T}_{\theta}\) is a proper birational morphism. **Definition 2.5**.: For an fs log scheme \(X\), a _fan chart of \(X\)_ is a fan \(\Sigma\) together with a strict morphism \(X\to\mathbb{T}_{\Sigma}\). Let \(\mathbf{lSch}/B\) be the category of noetherian fs log schemes of finite Krull dimension over \(B\). Let \(\mathbf{lFan}/B\) be the full subcategory of the category \(\mathbf{lSch}/B\) consisting of disjoint unions \(\amalg_{i\in I}X_{i}\) with finite \(I\) such that each \(X_{i}\) admits a fan chart. The dividing topology and dividing Zariski topology can be restricted to \(\mathbf{lFan}/B\). Any fs log scheme admits a fan chart Zariski locally by [12, Theorem III.1.2.7(1)]. Hence any \(X\in\mathbf{lSch}/B\) admits a Zariski cover \(Y\to X\) with \(Y\in\mathbf{lFan}/B\). **Definition 2.6**.: Let \(f\colon Y\to X\) be a morphism of fs log schemes. 
A _fan chart of \(f\)_ is a triple \((\Sigma,\Delta,\theta\colon\Delta\to\Sigma)\) such that the induced diagram \[\begin{array}{ccc}Y&\xrightarrow{\;f\;}&X\\ \downarrow&&\downarrow\\ \mathbb{T}_{\Delta}&\xrightarrow{\;\mathbb{T}_{\theta}\;}&\mathbb{T}_{\Sigma}\end{array}\] commutes, the vertical morphisms are strict, and \(\theta\) is a morphism of fans. If \(X\) has a fan chart, then \(f\) has a fan chart Zariski locally on \(Y\) by [12, Theorem III.1.2.7(1)]. **Example 2.7**.: Every log blow-up [12, Definition III.2.6.2] is a dividing cover, see [2, Example A.11.1]. Let \(\theta\colon\Sigma^{\prime}\to\Sigma\) be a subdivision of fans. We have isomorphisms \[\mathbb{T}_{\Sigma^{\prime}}\simeq\mathbb{T}_{\Sigma^{\prime}\times_{\Sigma} \Sigma^{\prime}}\simeq\mathbb{T}_{\Sigma^{\prime}}\times_{\mathbb{T}_{\Sigma} }\mathbb{T}_{\Sigma^{\prime}},\] so the induced morphism \(\mathbb{T}_{\theta}\colon\mathbb{T}_{\Sigma^{\prime}}\to\mathbb{T}_{\Sigma}\) is a monomorphism. Zariski locally, \(\mathbb{T}_{\theta}\) is of the form \(\mathbb{A}_{\eta}\colon\mathbb{A}_{Q}\to\mathbb{A}_{P}\) for some injective homomorphism \(\eta\colon P\to Q\) of fs monoids such that \(\eta^{\mathrm{gp}}\colon P^{\mathrm{gp}}\to Q^{\mathrm{gp}}\) is an isomorphism. Hence [12, Corollary IV.3.1.10] shows that \(\mathbb{T}_{\theta}\) is log etale. Since \(\theta\) is a subdivision, \(\mathbb{T}_{\theta}\) is proper. As a consequence of [2, Lemma A.11.4], \(\mathbb{T}_{\theta}\) is universally surjective. Hence we have shown that \(\mathbb{T}_{\theta}\) is a dividing cover. If \(X\to\mathbb{T}_{\Sigma}\) is any morphism of quasi-compact fs log schemes, then the projection \(X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\to X\) is a dividing cover too. **Proposition 2.8**.: _We have the following list of synonyms of morphisms of fs log schemes:_ 1. _exact proper monomorphism_ \(=\) _strict closed immersion,_ 2. _exact log etale monomorphism_ \(=\) _open immersion,_ 3. _exact dividing cover_ \(=\) _isomorphism,_ 4. _exact dividing Zariski cover_ \(=\) _Zariski cover._ Proof.: (1) Consequence of Proposition B.3 and [4, Corollaire IV.18.12.6]. (2) Consequence of Proposition B.3 and [4, Theoreme IV.17.9.1]. (3) Consequence of (1) and (2). (4) Consequence of (2). The case of log etale monomorphisms in the below result is [2, Proposition A.11.5]. **Proposition 2.9**.: _Let \(f\colon Y\to X\) be a quasi-compact morphism of fs log schemes. Assume that \(X\) admits a fan chart \(\Sigma\). If \(f\) is a monomorphism (resp. proper monomorphism, resp. log etale monomorphism, resp. dividing cover, resp. dividing Zariski cover), then there exists a subdivision \(\Sigma^{\prime}\) of \(\Sigma\) such that the pullback_ \[f^{\prime}\colon Y\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}} \to X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\] _is a strict monomorphism (resp. closed immersion, resp. open immersion, resp. isomorphism, resp. Zariski cover)._ Proof.: Combine [13, Proposition 4.2.3] (a variant of [12, Theorem III.2.6.7]) and Propositions B.3 and 2.8. This immediately implies the following. **Corollary 2.10**.: _Let \(f\colon Y\to X\) be a dividing cover of quasi-compact fs log schemes. If \(X\) admits a fan chart \(\Sigma\), then \(f\) admits a refinement of the form \(X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\to X\) for some subdivision \(\Sigma^{\prime}\) of \(\Sigma\)._ **Remark 2.11**.: Let \(\theta\colon\Sigma^{\prime}\to\Sigma\) be a subdivision of fans. Combine toric resolution of singularities [3, Theorem 11.1.9] and De Concini-Procesi's Theorem [11, pp. 
39-40] to see that there exists a refinement \(\Sigma^{\prime\prime}\to\Sigma\) of \(\theta\) that is a composition of star subdivisions. The induced morphism \(\mathbb{T}_{\Sigma^{\prime\prime}}\to\mathbb{T}_{\Sigma}\) is a log blow-up. Hence Proposition 2.9 is valid if we further require that \(\mathbb{T}_{\Sigma^{\prime}}\to\mathbb{T}_{\Sigma}\) is a log blow-up. Using this, one can check that the dividing Zariski topology on the category of quasi-compact fs log schemes is equivalent to the _Zariski valuative topology_[10, Definition 2.23], which is the smallest Grothendieck topology containing morphisms of the form \(f\colon\amalg_{i\in I}Y_{i}\to X\) with finite \(I\) such that \(f\) is universally surjective and each \(Y_{i}\to X\) is a composition of open immersions and log blow-ups. Due to [10, Remark 2.24], the dividing Zariski topology has enough points. The next result justifies our choice of the terminology "dividing Zariski." **Proposition 2.12**.: _Let \(\mathcal{F}\) be a presheaf on \(\mathbf{lFan}/B\). Then \(\mathcal{F}\) is a dividing Zariski sheaf if and only if \(\mathcal{F}\) is both a dividing sheaf and a Zariski sheaf._ Proof.: The only if direction is trivial. For the if direction, assume that \(\mathcal{F}\) is both a dividing sheaf and a Zariski sheaf. Let \(Y\to X\) be a dividing Zariski cover in \(\mathbf{lFan}/B\). By Proposition 2.9, there exists a dividing cover \(X^{\prime}\to X\) such that the projection \(Y^{\prime}:=Y\times_{X}X^{\prime}\to X^{\prime}\) is a Zariski cover. Since \(\mathcal{F}\) is a Zariski sheaf, we obtain \[\mathcal{F}(X^{\prime})\xrightarrow{\sim}\mathrm{Eq}(\mathcal{F}(Y^{\prime}) \rightrightarrows\mathcal{F}(Y^{\prime}\times_{X^{\prime}}Y^{\prime})).\] Use the assumption that \(\mathcal{F}\) is a dividing sheaf to obtain \[\mathcal{F}(X)\xrightarrow{\sim}\mathrm{Eq}(\mathcal{F}(Y)\rightrightarrows \mathcal{F}(Y\times_{X}Y)).\] This shows that \(\mathcal{F}\) is a dividing Zariski sheaf. The dividing sheafification and dividing Zariski sheafification admit explicit descriptions as follows. **Proposition 2.13**.: _Let \(\mathcal{F}\) be a presheaf on \(\mathbf{lFan}/B\). For every \(X\in\mathbf{lFan}/B\) admitting a fan chart \(\Sigma\), there exists an isomorphism_ \[a_{\text{div}}\mathcal{F}(X)\simeq\operatorname*{colim}_{\Sigma^{\prime}\to \Sigma}\mathcal{F}(X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}), \tag{2.1}\] _where the colimit runs over the category of the subdivisions of \(\Sigma\)._ Proof.: Let \(L_{div}\mathcal{F}\) be the separated presheaf associated with \(\mathcal{F}\) for the dividing topology, which is defined using [1, Section II.3.0.5]. For a subdivision \(\Sigma^{\prime}\) of \(\Sigma\), we set \(X_{\Sigma^{\prime}}:=X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\). By Corollary 2.10, we have isomorphisms \[L_{div}\mathcal{F}(X)\simeq\operatorname*{colim}_{\Sigma^{\prime}\to\Sigma} \operatorname{Eq}(\mathcal{F}(X_{\Sigma^{\prime}})\rightrightarrows\mathcal{ F}(X_{\Sigma^{\prime}}\times_{X}X_{\Sigma^{\prime}}))\simeq\operatorname*{colim}_{ \Sigma^{\prime}\to\Sigma}\mathcal{F}(X_{\Sigma^{\prime}}). \tag{2.2}\] Suppose that \(Y\to X\) is a dividing cover in \(\mathbf{lFan}/B\). Use Proposition 2.9 and the above description (2.2) to obtain an isomorphism \[L_{div}\mathcal{F}(X)\simeq L_{div}\mathcal{F}(Y).\] This means that \(L_{div}\mathcal{F}\) is already a dividing sheaf, i.e., \(L_{div}\mathcal{F}\simeq a_{div}\mathcal{F}\). 
For a fan \(\Sigma\), the category of subdivisions of \(\Sigma\) is a filtered category. Hence the colimit in (2.2) is filtered. **Proposition 2.14**.: _Let \(\mathcal{F}\) be a Zariski sheaf on \(\mathbf{lFan}/B\). For every \(X\in\mathbf{lFan}/B\) admitting a fan chart \(\Sigma\), there exists an isomorphism_ \[a_{dZar}\mathcal{F}(X)\simeq\operatorname*{colim}_{\Sigma^{\prime}\to\Sigma} \mathcal{F}(X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}), \tag{2.3}\] _where the colimit runs over the category of the subdivisions of \(\Sigma\)._ Proof.: Suppose that \(X^{\prime}\to X\) is a Zariski cover. For any subdivision \(\Sigma^{\prime}\) of \(\Sigma\), we set \(X_{\Sigma^{\prime}}:=X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\) and \(X^{\prime}_{\Sigma^{\prime}}:=X^{\prime}\times_{X}X_{\Sigma^{\prime}}\). We obtain an isomorphism \[\mathcal{F}(X_{\Sigma^{\prime}})\xrightarrow{\simeq}\operatorname{Eq}( \mathcal{F}(X^{\prime}_{\Sigma^{\prime}})\rightrightarrows\mathcal{F}(X^{ \prime}_{\Sigma^{\prime}}\times_{X_{\Sigma^{\prime}}}X^{\prime}_{\Sigma^{ \prime}})).\] Since any filtered colimits commute with finite limits, we obtain \[\operatorname*{colim}_{\Sigma^{\prime}\to\Sigma}\mathcal{F}(X_{\Sigma^{\prime }})\xrightarrow{\simeq}\operatorname{Eq}(\operatorname*{colim}_{\Sigma^{ \prime}\to\Sigma}\mathcal{F}(X^{\prime}_{\Sigma^{\prime}})\rightrightarrows \operatorname*{colim}_{\Sigma^{\prime}\to\Sigma}\mathcal{F}(X^{\prime}_{ \Sigma^{\prime}}\times_{X_{\Sigma^{\prime}}}X^{\prime}_{\Sigma^{\prime}})).\] This shows that the right-hand side of (2.3) is a Zariski sheaf. Together with Proposition 2.12, we finish the proof. The following result enables us to work with \(\mathbf{lFan}/B\) instead of \(\mathbf{lSch}/B\) in many situations. **Proposition 2.15**.: _Let \(t\) be a topology on \(\mathbf{lSch}/B\) that is finer than the Zariski topology. Then the inclusion functor \(\mathbf{lFan}/B\to\mathbf{lSch}/B\) induces an equivalence_ \[\mathbf{Shv}_{t}(\mathbf{lFan}/B)\simeq\mathbf{Shv}_{t}(\mathbf{lSch}/B).\] Proof.: For every \(X\in\mathbf{lSch}/B\), there exists a Zariski cover \(Y\to X\) such that \(Y\in\mathbf{lFan}/B\). The implication (i)\(\Rightarrow\)(ii) in [1, Theoreme 4.1] finishes the proof. **Definition 2.16**.: For \(X\in\mathbf{lSch}/B\), let \(h_{X}\) be the dividing Zariski sheafification of the presheaf on \(\mathbf{lFan}/B\) represented by \(X\). If \(f\colon Y\to X\) is a dividing cover in \(\mathbf{lFan}/B\), then the induced morphism \(h_{f}\colon h_{Y}\to h_{X}\) is an isomorphism. Hence the Yoneda functor \[h\colon\mathbf{lSch}/B\to\mathbf{Shv}_{dZar}(\mathbf{lSch}/B)\simeq\mathbf{ Shv}_{dZar}(\mathbf{lFan}/B) \tag{2.4}\] is _not_ conservative if \(B\) is nonempty. In particular, \(h\) is not fully faithful, and the dividing Zariski topology is not subcanonical. **Proposition 2.17**.: _Suppose \(X\in\mathbf{lSch}/B\) and \(Y\in\mathbf{lFan}/B\). If \(Y\) admits a fan chart \(\Sigma\), then there exists a canonical isomorphism_ \[h_{X}(Y)\simeq\operatorname*{colim}_{\Sigma^{\prime}\to\Sigma}\operatorname*{ Hom}_{\mathbf{lSch}/B}(Y\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}},X),\] _where the colimit runs over the category of subdivisions of \(\Sigma\)._ Proof.: Since \(h_{X}\) is a Zariski sheaf, we can use Proposition 2.14 for \(h_{X}\). **Proposition 2.18**.: _The Yoneda functor \(h\colon\mathbf{lSch}/B\to\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\) preserves finite limits._ Proof.: Follows from Proposition 2.17 since filtered colimits commute with finite limits.

## 3. Representable morphisms of sheaves
The purpose of this section is to deal with properties of morphisms of sheaves, which are needed later to define divided log spaces. **Definition 3.1**.: We say that \(\mathcal{F}\in\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\) is _represented by \(X\in\mathbf{lFan}/B\)_ if \(\mathcal{F}\simeq h_{X}\). We say that a morphism \(\mathcal{S}\to\mathcal{F}\) in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\) is _represented by a morphism \(f\colon Y\to X\) in \(\mathbf{lFan}/B\)_ if there exists a commutative square \[\begin{array}{ccc}h_{Y}&\xrightarrow{\;h_{f}\;}&h_{X}\\ \downarrow&&\downarrow\\ \mathcal{S}&\longrightarrow&\mathcal{F}\end{array}\] with vertical isomorphisms. **Definition 3.2**.: Let \(u\colon\mathcal{S}\to\mathcal{F}\) be a morphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\). We say that \(u\) is \(\mathbf{lFan}/B\)_-representable_ (or simply _representable_) if for every morphism \(h_{X}\to\mathcal{F}\) with \(X\in\mathbf{lFan}/B\), there exists \(Y\in\mathbf{lFan}/B\) such that \(h_{X}\times_{\mathcal{F}}\mathcal{S}\simeq h_{Y}\). Suppose that \(\mathcal{P}\) is a class of morphisms in \(\mathbf{lFan}/B\). We say that \(u\) is a _representable \(\mathcal{P}\)-morphism_ if for every morphism \(h_{X}\to\mathcal{F}\) with \(X\in\mathbf{lFan}/B\), there exists a commutative square \[\begin{array}{ccc}h_{Y}&\xrightarrow{\;h_{g}\;}&h_{X}\\ \downarrow&&\downarrow\\ h_{X}\times_{\mathcal{F}}\mathcal{S}&\xrightarrow{\;p\;}&h_{X}\end{array} \tag{3.1}\] with vertical isomorphisms such that \(p\) is the projection and \(g\) is a \(\mathcal{P}\)-morphism. Observe that every representable \(\mathcal{P}\)-morphism is representable. If \(\mathcal{P}\) is closed under pullbacks, then the class of representable \(\mathcal{P}\)-morphisms in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\) is closed under pullbacks too. **Proposition 3.3**.: _Suppose that \(\mathcal{P}\) is the class of isomorphisms in \(\mathbf{lFan}/B\). If \(f\colon\mathcal{S}\to\mathcal{F}\) is a representable \(\mathcal{P}\)-morphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\), then \(f\) is an isomorphism._ Proof.: For every morphism \(h_{X}\to\mathcal{F}\) with \(X\in\mathbf{lFan}/B\), we have a cartesian square \[\begin{array}{ccc}h_{X}\times_{\mathcal{F}}\mathcal{S}&\longrightarrow& \mathcal{S}\\ \downarrow&&\downarrow{\scriptstyle f}\\ h_{X}&\longrightarrow&\mathcal{F}\end{array}\] whose left vertical morphism is an isomorphism. This means \(\mathcal{S}(X)\simeq\mathcal{F}(X)\). The next five lemmas deal with the structure of representable morphisms. **Lemma 3.4**.: _Let \(f\colon h_{Y}\to h_{X}\) be a morphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\), where \(X,Y\in\mathbf{lFan}/B\). Then there exists a dividing cover \(v\colon V\to Y\) and a morphism \(g\colon V\to X\) in \(\mathbf{lFan}/B\) such that \(fh_{v}=h_{g}\). Furthermore, \(f\) is representable._ Proof.: The first claim is a consequence of Proposition 2.17. To show that \(f\) is representable, using the first claim, we only need to show that \(h_{V}\times_{h_{X}}h_{X^{\prime}}\) is representable for every morphism \(X^{\prime}\to X\) in \(\mathbf{lFan}/B\). This holds since \(h_{V}\times_{h_{X}}h_{X^{\prime}}\simeq h_{V\times_{X}X^{\prime}}\) by Proposition 2.18. **Lemma 3.5**.: _Suppose that \(\mathcal{P}\) is a class of morphisms in \(\mathbf{lFan}/B\) closed under pullbacks. Let \(f\colon h_{Y}\to h_{X}\) be a representable \(\mathcal{P}\)-morphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\), where \(X,Y\in\mathbf{lFan}/B\). Then there exists a commutative square_ \[\begin{array}{ccc}h_{V}&\xrightarrow{\;h_{g}\;}&h_{U}\\ \downarrow&&\downarrow{\scriptstyle h_{u}}\\ h_{Y}&\xrightarrow{\;f\;}&h_{X}\end{array} \tag{3.2}\] _with vertical isomorphisms such that \(g\) is a \(\mathcal{P}\)-morphism and \(u\) is a dividing cover in \(\mathbf{lFan}/B\)._ Proof.: From (3.1), we have a commutative square \[\begin{array}{ccc}h_{Y^{\prime}}&\xrightarrow{\;h_{f^{\prime}}\;}&h_{X^{ \prime}}\\ \downarrow&&\downarrow\\ h_{Y}&\xrightarrow{\;f\;}&h_{X}\end{array}\] with vertical isomorphisms such that \(f^{\prime}\) is a \(\mathcal{P}\)-morphism in \(\mathbf{lFan}/B\). Apply Lemma 3.4 to \(h_{X}\xrightarrow{\simeq}h_{X^{\prime}}\) to obtain a dividing cover \(u\colon U\to X\) and a morphism \(u^{\prime}\colon U\to X^{\prime}\) in \(\mathbf{lFan}/B\) such that the composition \(h_{U}\xrightarrow{h_{u}}h_{X}\xrightarrow{\simeq}h_{X^{\prime}}\) is equal to \(h_{u^{\prime}}\). 
Take \(V:=Y^{\prime}\times_{X^{\prime}}U\) to conclude. **Lemma 3.6**.: _Let \(f\colon Y\to X\) be a morphism in \(\mathbf{lFan}/B\). If \(h_{f}\) is an isomorphism, then there exist dividing covers \(u\colon V\to X\) and \(v\colon V\to Y\) in \(\mathbf{lFan}/B\) such that \(fv=u\)._ Proof.: Apply Proposition 2.17 to \(h_{f}^{-1}\in h_{Y}(X)\) to obtain a morphism \(p\colon X^{\prime}\to Y\) such that \(fp\) is a dividing cover. Since \(h_{p}\) is an isomorphism, we can also apply Proposition 2.17 to \(h_{p}^{-1}\in h_{X^{\prime}}(Y)\) to obtain a morphism \(q\colon Y^{\prime}\to X^{\prime}\) such that \(pq\) is a dividing cover. We set \(V:=X^{\prime}\times_{Y}Y^{\prime}\), which is a dividing cover over \(X\). Use \(Y^{\prime}\times_{Y}Y^{\prime}\simeq Y^{\prime}\) and \(V\times_{X}V\simeq V\) to obtain an induced commutative diagram whose small squares are cartesian and vertical morphisms are dividing covers. Let \(a\colon V\to Y^{\prime}\times_{X}V\) be the graph morphism, and let \(b\colon Y^{\prime}\times_{X}V\to V\) be the projection. From the upper row of the diagram, we see that \(a\) is an inverse of \(b\). Hence \(b\) is an isomorphism, so we obtain the desired dividing covers \(V\to X\) and \(V\to Y\). **Lemma 3.7**.: _Let \(f\colon h_{Y}\to h_{X}\) be an isomorphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\), where \(X,Y\in\mathbf{lFan}/B\). Then there exist dividing covers \(u\colon V\to X\) and \(v\colon V\to Y\) in \(\mathbf{lFan}/B\) such that \(fh_{v}=h_{u}\)._ Proof.: Lemma 3.4 yields a dividing cover \(q\colon Y^{\prime}\to Y\) and a morphism \(p\colon Y^{\prime}\to X\) such that \(fh_{q}=h_{p}\). Since \(h_{p}\) is an isomorphism, Lemma 3.6 yields a dividing cover \(V\to Y^{\prime}\) such that the composition \(V\to Y^{\prime}\xrightarrow{p}X\) is a dividing cover. The composition \(V\to Y^{\prime}\xrightarrow{q}Y\) is a dividing cover too. **Lemma 3.8**.: _Let \(f\colon Y\to X\) be a \(\mathcal{P}\)-morphism in \(\mathbf{lFan}/B\), where \(\mathcal{P}\) is a class of morphisms in \(\mathbf{lFan}/B\) closed under pullbacks. Then \(h_{f}\) is a representable \(\mathcal{P}\)-morphism._ Proof.: Let \(h_{V}\to h_{X}\) be a morphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\) with \(V\in\mathbf{lFan}/B\). By Lemma 3.4, we can replace \(V\) with its suitable dividing cover to assume that \(h_{V}\to h_{X}\) is equal to \(h_{g}\) for some morphism \(g\colon V\to X\) in \(\mathbf{lFan}/B\). To conclude, observe that we have an isomorphism \(h_{Y}\times_{h_{X}}h_{V}\simeq h_{Y\times_{X}V}\). For a class of morphisms \(\mathcal{P}\) in \(\mathbf{lFan}/B\), we will frequently assume the following condition: (Div) If \(Y\to X\) is a \(\mathcal{P}\)-morphism and \(Y^{\prime}\to Y\) is a dividing cover in \(\mathbf{lFan}/B\), then there exists a dividing cover \(X^{\prime}\to X\) in \(\mathbf{lFan}/B\) such that the projection \(Y^{\prime}\times_{X}X^{\prime}\to X^{\prime}\) is a \(\mathcal{P}\)-morphism. **Example 3.9**.: Suppose that \(\mathcal{P}\) is a class of morphisms in \(\mathbf{lFan}/B\) closed under pullbacks. Let \(Y\to X\) be a \(\mathcal{P}\)-morphism, and let \(Y^{\prime}\to Y\) be a dividing cover in \(\mathbf{lFan}/B\). (1) Assume that every morphism in \(\mathcal{P}\) is strict. If \(X\) has a fan chart \(\Sigma\), then \(Y\) has a fan chart \(\Sigma\). 
Proposition 2.9 yields a subdivision \(\Sigma^{\prime}\) of \(\Sigma\) such that the pullback \(Y^{\prime}\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\to Y\times_ {\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\) is an isomorphism. This means that the pullback \(Y^{\prime}\times_{X}X_{\Sigma^{\prime}}\to Y\times_{X}X_{\Sigma^{\prime}}\) is an isomorphism, where \(X_{\Sigma^{\prime}}:=X\times_{\mathbb{T}_{\Sigma}}\mathbb{T}_{\Sigma^{\prime}}\). Hence \(\mathcal{P}\) satisfies (Div). (2) Let exact \(\mathcal{P}\) be the subclass of morphisms in \(\mathcal{P}\) that are exact. Suppose that exact \(\mathcal{P}\) satisfies (Div). By [13, Proposition 4.2.3], there exists a dividing cover \(X^{\prime\prime}\to X\) such that the projection \(Y\times_{X}X^{\prime\prime}\to X^{\prime\prime}\) is an exact \(\mathcal{P}\)-morphism. Our assumption on exact \(\mathcal{P}\) yields a dividing cover \(X^{\prime}\to X^{\prime\prime}\) such that the projection \(Y^{\prime}\times_{X}X^{\prime}\to X^{\prime}\) is an exact \(\mathcal{P}\)-morphism. Hence \(\mathcal{P}\) satisfies (Div). (3) Suppose that \(\mathcal{P}\) contains all dividing covers and is closed under compositions. Observe that \(\mathcal{P}\) satisfies (Div). If \(V\to U\) is an exact \(\mathcal{P}\)-morphism and \(V^{\prime}\to V\) is a dividing cover, then there exists a dividing cover \(U^{\prime}\to U\) such that the projection \(p\colon V^{\prime}\times_{U}U^{\prime}\to U^{\prime}\) is exact by [13, Proposition 4.2.3]. It follows that \(p\) is an exact \(\mathcal{P}\)-morphism, so exact \(\mathcal{P}\) satisfies (Div). According to [13, Proposition 4.2.3], \(f\) is a representable \(\mathcal{P}\)-morphism if and only if \(f\) is a representable exact \(\mathcal{P}\)-morphism. In the following cases of \(\mathcal{P}\), all exact \(\mathcal{P}\)-morphisms are strict by Proposition 2.8 so that we can use (1) and (2):

| \(\mathcal{P}\) | exact \(\mathcal{P}\) |
| --- | --- |
| dividing cover | isomorphism |
| dividing Zariski cover | Zariski cover |
| proper monomorphism | strict closed immersion |
| log etale monomorphism | open immersion |
| strict immersion | strict immersion |

If \(\mathcal{P}\) is log smooth or log etale, then we can use (3). **Example 3.10**.: A morphism of fs log schemes \(i\colon Z\to X\) is called a _closed immersion_ if the underlying morphism of schemes \(\underline{i}\) is a closed immersion and the induced morphism of structure sheaves of monoids \(i^{*}_{\log}\mathcal{M}_{X}\to\mathcal{M}_{Z}\) is surjective, see [12, Definition III.2.3.1]. In this case, the construction of the fiber products in the proof of [12, Proposition III.2.1.2] shows that the diagonal morphism \(Z\to Z\times_{X}Z\) is an isomorphism, i.e., \(i\) is a monomorphism. Hence we have the following relations: (strict closed immersion) \(\;\subset\;\) (closed immersion) \(\;\subset\;\) (proper monomorphism). We deduce that a morphism \(f\) in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\) is a representable closed immersion if and only if \(f\) is a representable strict closed immersion. With the condition (Div), we have a more structured version of Lemma 3.5 as follows. **Lemma 3.11**.: _Suppose that \(\mathcal{P}\) is a class of morphisms in \(\mathbf{lFan}/B\) closed under pullbacks and satisfying (Div). Let \(f\colon h_{Y}\to h_{X}\) be a representable \(\mathcal{P}\)-morphism in \(\mathbf{Shv}_{dZar}(\mathbf{lFan}/B)\), where \(X,Y\in\mathbf{lFan}/B\). 
Then there exists a commutative square_ (3.3) _with vertical isomorphisms such that \(g\) is a \(\mathcal{P}\)-morphism and \(u\) and \(v\) are dividing covers in \(\mathbf{IFan}/B\)._ Proof.: By Lemmas 3.5 and 3.7, there exists a commutative diagram with vertical isomorphisms such that \(f^{\prime}\) is a \(\mathcal{P}\)-morphism and \(p\), \(q\), and \(q^{\prime}\) are dividing covers in \(\mathbf{IFan}/B\). Use (Div) to obtain a cartesian square such that \(p^{\prime}\) is a dividing cover and \(f^{\prime\prime}\) is a \(\mathcal{P}\)-morphism. Take \(U:=X^{\prime\prime}\) and \(V:=Y^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\) to obtain the desired commutative diagram. **Lemma 3.12**.: _Suppose that \(\mathcal{P}\) and \(\mathcal{Q}\) are classes of morphisms in \(\mathbf{IFan}/B\) closed under pullbacks. Let \(h_{Y}\to h_{X}\) be a representable \(\mathcal{P}\)-morphism, and let \(h_{Z}\to h_{Y}\) be a representable \(\mathcal{Q}\)-morphism with \(X,Y,Z\in\mathbf{IFan}/B\). If \(\mathcal{P}\) satisfies (Div), then there exists a commutative diagram_ _with vertical isomorphisms such that \(f\) is a \(\mathcal{P}\)-morphism, \(g\) is a \(\mathcal{Q}\)-morphism, and \(u\) is a dividing cover in \(\mathbf{IFan}/B\)._ Proof.: Use Lemma 3.5 twice to obtain a commutative diagram with vertical isomorphisms such that \(a\) is a \(\mathcal{P}\)-morphism, \(b\) is a \(\mathcal{Q}\)-morphism, and \(p\) and \(q\) are dividing covers in \(\mathbf{IFan}/B\). By (Div), there exists a dividing cover \(X^{\prime\prime}\to X^{\prime}\) such that the projection \(Y^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\to X^{\prime\prime}\) is a \(\mathcal{P}\)-morphism. Take \(U:=X^{\prime\prime}\), \(V:=Y^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\), and \(W:=Z^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\) to conclude. **Proposition 3.13**.: _Suppose that \(\mathcal{P}\) is a class of morphisms in \(\mathbf{IFan}/B\) closed under compositions and pullbacks. If \(\mathcal{P}\) satisfies (Div), then the class of representable \(\mathcal{P}\)-morphisms in \(\mathbf{Shv}_{dZar}(\mathbf{IFan}/B)\) is closed under compositions._ Proof.: An immediate consequence of Lemma 3.12 when \(\mathcal{Q}:=\mathcal{P}\). **Proposition 3.14**.: _Suppose that \(\mathcal{P}\) is the class of all morphisms in \(\mathbf{IFan}/B\). Then a morphism \(f\colon\mathcal{G}\to\mathcal{F}\) in \(\mathbf{Shv}_{dZar}(\mathbf{IFan}/B)\) is representable if and only if \(f\) is a representable \(\mathcal{P}\)-morphism._ Proof.: The if direction is trivial. For the only if direction, assume that \(f\) is representable. Let \(h_{X}\to\mathcal{F}\) be a morphism with \(X\in\mathbf{IFan}/B\). Then there exists a cartesian square with \(Y\in\mathbf{IFan}/B\). To obtain (3.1), apply Lemma 3.4 to \(h_{Y}\to h_{X}\). **Proposition 3.15**.: _Let \(f\colon\mathcal{G}\to\mathcal{F}\) be a morphism in \(\mathbf{Shv}_{dZar}(\mathbf{IFan}/B)\). If \(f\) is a representable Zariski cover, then \(f\) is an epimorphism of sheaves._ Proof.: Suppose \(a\in\mathcal{F}(X)\) with \(X\in\mathbf{IFan}/B\), which can be expressed as a morphism of sheaves \(a\colon h_{X}\to\mathcal{F}\). Lemma 3.5 yields a commutative square with vertical isomorphisms such that \(g\) is a Zariski cover and \(u\) is a dividing cover in \(\mathbf{IFan}/B\). Hence \((ug)^{*}a\in\mathcal{F}(V)\) is in the image of \(f(V)\colon\mathcal{G}(V)\to\mathcal{F}(V)\). To conclude, observe that \(V\to X\) is a dividing Zariski cover. 
**Lemma 3.16**.: _Suppose that \(\mathcal{P}\) is one of the following classes:_ 1. _isomorphisms,_ 2. _strict immersions,_ 3. _open immersions,_ 4. _strict closed immersions,_ 5. _Zariski covers._ _Let_ (3.4) _be a cartesian square in \(\mathbf{Shv}_{dZar}(\mathbf{IFan}/B)\) such that \(u\) is a representable Zariski cover and \(v^{\prime}\) is a representable \(\mathcal{P}\)-morphism. Then \(v\) is a representable \(\mathcal{P}\)-morphism._ Proof.: We only need to consider the case when \(\mathcal{F}=h_{X}\) with \(X\in\mathbf{IFan}/B\). By Lemma 3.12, we can replace \(X\) with its suitable dividing cover to assume that (3.4) is isomorphic to a cartesian square (3.5) for some Zariski cover \(f\) and \(\mathcal{P}\)-morphism \(g^{\prime}\) in \(\mathbf{IFan}/B\). We have isomorphisms \(h_{Y^{\prime}\times_{X}Y}\simeq h_{Y^{\prime}}\times_{\mathcal{F}^{\prime}}h_ {Y^{\prime}}\simeq h_{Y\times_{X}Y^{\prime}}\). Let \(c\colon h_{Y^{\prime}\times_{X}Y}\to h_{Y\times_{X}Y^{\prime}}\) be the composition. Lemma 3.7 yields dividing covers \(a\colon V\to Y^{\prime}\times_{X}Y\) and \(b\colon V\to Y\times_{X}Y^{\prime}\) such that \(ch_{a}=h_{b}\). The induced square commutes. By Proposition 2.17, after replacing \(V\) by its suitable dividing cover, we may assume that the induced square commutes. Let \(g^{\prime\prime}\colon V\to Y\times_{X}Y\) be the composition obtained from this. We also have the induced diagram where \(p_{1}\) and \(q_{1}\) are the first projections and \(p_{2}\) and \(q_{2}\) are the second projections. The two squares formed with the left vertical or right vertical morphisms commute. By Proposition 2.17 again, after replacing \(V\) by its suitable dividing cover, we may assume that the two squares in the diagram (3.6) commute. Using [13, Proposition 4.2.3], we can replace \(X\) with its suitable dividing cover and \(Y\), \(Y^{\prime}\), and \(V\) by their corresponding pullbacks to assume that the composition \(V\to X\) is exact. In this case, \(a\) and \(b\) are exact since \(Y^{\prime}\times_{X}Y\) and \(Y\times_{X}Y^{\prime}\) are strict over \(X\). Proposition 2.8(3) shows that \(a\) and \(b\) are isomorphisms. It follows that the two squares in (3.6) are cartesian. All the morphisms in (3.6) are strict. By gluing, we obtain \(X^{\prime}\in\mathbf{lSch}/B\) with a cartesian square such that \(g\) is a \(\mathcal{P}\)-morphism. Since \(g\) is strict, we have \(X^{\prime}\in\mathbf{lFan}/B\). Proposition 3.15 shows that \(h_{f}\) is an epimorphism of sheaves. Hence \(h_{f}\) is a universal effective epimorphism of sheaves. Its pullbacks \(h_{f^{\prime}}\) and \(u^{\prime}\) are universal effective epimorphisms of sheaves too, so we have isomorphisms \[h_{X^{\prime}}\simeq\operatorname{Coeq}(h_{V}\rightrightarrows h_{Y^{\prime}})\simeq \mathcal{F}^{\prime}.\] By Lemma 3.8 for \(g\), we deduce that \(v\) is a representable \(\mathcal{P}\)-morphism. ## 4. Divided log spaces We adapt the definition of algebraic spaces [9, Definition II.1.1] to the dividing Zariski topology as follows. **Definition 4.1**.: We say that \(\mathcal{X}\in\mathbf{Shv}_{dZar}(\mathbf{IFan}/B)\) is a _(noetheiran) divided log space over \(B\)_ if the following two conditions are satisfied. 1. The diagonal morphism \(\Delta\colon\mathcal{X}\to\mathcal{X}\times\mathcal{X}\) is a representable strict immersion. 2. There exists a representable Zariski cover \(h_{X}\to\mathcal{X}\) with \(X\in\mathbf{IFan}/B\). A _morphism of divided log spaces over \(B\)_ is a morphism of sheaves. 
The category of divided log spaces over \(B\) is denoted by \(\mathbf{ISpc}/B\). **Remark 4.2**.: We chose the terminology "divided" because a dividing sheaf has no more nontrivial dividing cover, i.e., dividing is finished. **Proposition 4.3**.: _Let \(h_{Y}\to\mathcal{X}\) and \(h_{Z}\to\mathcal{X}\) be morphisms in \(\mathbf{ISpc}/B\) with \(Y,Z\in\mathbf{IFan}/B\). Then the fiber product \(h_{Y}\times_{\mathcal{X}}h_{Z}\) is representable._ Proof.: Consider the induced cartesian square The condition that \(\Delta\) is a representable strict immersion implies the claim. **Proposition 4.4**.: _Let \(\mathcal{Z}\xrightarrow{g}\mathcal{Y}\xrightarrow{f}\mathcal{X}\) be morphisms in \(\mathbf{ISpc}/B\). If \(f\) is a representable strict (resp. strict separated) morphism and \(fg\) is a representable strict (resp. strict closed) immersion, then \(g\) is a representable strict (resp. strict closed) immersion._ Proof.: Choose a representable Zariski cover \(h_{U}\to\mathcal{X}\) with \(U\in\mathbf{IFan}/B\). Since \(f\) and \(fg\) are representable, there exists a commutative diagram with cartesian squares. By Lemma 3.12 and Proposition 3.14, we may assume that \(h_{V}\to h_{U}\) is equal to \(h_{u}\) for some strict (resp. strict separated) morphism \(u\colon V\to U\) in \(\mathbf{IFan}/B\) and \(h_{W}\to h_{V}\) is equal to \(h_{v}\) for some morphism \(v\colon W\to V\) in \(\mathbf{IFan}/B\). Lemma 3.11 yields a commutative square with vertical isomorphisms such that \(a\) is a strict (resp. strict closed) immersion and \(p\) and \(q\) are dividing covers in \(\mathbf{IFan}/B\). By Proposition 2.17, there exists a dividing cover \(r\colon W^{\prime\prime}\to W^{\prime}\) such that the square commutes. Use [13, Proposition 4.2.3] to find a dividing cover \(U^{\prime\prime}\to U^{\prime}\) such that the projection \(W^{\prime\prime}\times_{U^{\prime}}U^{\prime\prime}\to U^{\prime\prime}\) is exact. The projection \(W^{\prime}\times_{U^{\prime}}U^{\prime\prime}\to U^{\prime\prime}\) is a strict (resp. strict closed) immersion, so the pullback \(W^{\prime\prime}\times_{U^{\prime}}U^{\prime\prime}\to W^{\prime}\times_{U^{ \prime}}U^{\prime\prime}\) is an exact dividing cover, i.e., an isomorphism by Proposition 2.8(3). It follows that the projection \(W^{\prime\prime}\times_{U^{\prime}}U^{\prime\prime}\to U^{\prime\prime}\) is a strict (resp. strict closed) immersion. We have the commutative diagram with cartesian squares. Since \(u\) is a strict (resp. strict separated) morphism, the projection \(V\times_{U}U^{\prime\prime}\to U^{\prime\prime}\) is a strict (resp. strict separated) morphism. It follows that the induced morphism \(W^{\prime\prime}\times_{U^{\prime}}U^{\prime\prime}\to V\times_{U}U^{\prime\prime}\) is a strict (resp. strict closed) immersion. Since the composition \(h_{V\times_{U}U^{\prime\prime}}\to\mathcal{Y}\) is a representable Zariski cover, Lemma 3.16 shows that \(g\) is a representable strict (resp. strict closed) immersion. **Proposition 4.5**.: _Let \(\mathcal{Y}\to\mathcal{X}\) be a morphism in \(\mathbf{ISpc}/B\). Then the diagonal morphism \(\mathcal{Y}\to\mathcal{Y}\times_{\mathcal{X}}\mathcal{Y}\) is a representable strict immersion._ Proof.: Consider the induced commutative diagram where the horizontal morphisms are the diagonal morphisms, and the square is cartesian. The diagonal morphisms \(\mathcal{X}\to\mathcal{X}\times\mathcal{X}\) and \(\mathcal{Y}\to\mathcal{Y}\times\mathcal{Y}\) are representable strict immersions. 
To finish the proof, apply Proposition 4.4 to the upper row. **Proposition 4.6**.: _Let \(\mathcal{Y}\to\mathcal{X}\) and \(\mathcal{Z}\to\mathcal{X}\) be morphisms in \(\mathbf{ISpc}/B\). Then the fiber product \(\mathcal{Y}\times_{\mathcal{X}}\mathcal{Z}\) in \(\mathbf{Shv}_{dZar}(\mathbf{IFan}/B)\) is a divided log space over \(B\)._ Proof.: We have the induced cartesian square Since the diagonal morphism \(\mathcal{Y}\to\mathcal{Y}\times\mathcal{Y}\) is a representable strict immersion, \(d\) is a representable strict immersion. Compose \(d\) with the pullback \(\mathcal{Y}\times_{\mathcal{X}}\mathcal{Z}\to\mathcal{Y}\times_{\mathcal{X}} \mathcal{Z}\times_{\mathcal{X}}\mathcal{Z}\) of the diagonal morphism \(\mathcal{Z}\to\mathcal{Z}\times_{\mathcal{X}}\mathcal{Z}\), which is a representable strict immersion by Proposition 4.5, to deduce the condition (i) in Definition 4.1 for \(\mathcal{Y}\times_{\mathcal{X}}\mathcal{Z}\). There exist representable Zariski covers \(h_{Y}\to\mathcal{Y}\) and \(h_{Z}\to\mathcal{Z}\) with \(Y,Z\in\mathbf{IFan}/B\). The pullbacks \(h_{Y}\times_{\mathcal{X}}h_{Z}\to\mathcal{Y}\times_{\mathcal{X}}h_{Z}\) and \(\mathcal{Y}\times_{\mathcal{X}}h_{Z}\to\mathcal{Y}\times_{\mathcal{X}} \mathcal{Z}\) are representable Zariski covers. Hence the composition \(h_{Y}\times_{\mathcal{X}}h_{Z}\to\mathcal{Y}\times_{\mathcal{X}}\mathcal{Z}\) is a representable Zariski cover by Proposition 3.13. This shows the condition (ii) in Definition 4.1 for \(\mathcal{Y}\times_{\mathcal{X}}\mathcal{Z}\). **Proposition 4.7**.: _Let \(f\colon\mathcal{Y}\to\mathcal{X}\) be a representable Zariski cover in \(\mathbf{ISpc}/B\). If \(f\) is a monomorphism in \(\mathbf{ISpc}/B\), then \(f\) is an isomorphism._ Proof.: The diagonal morphism \(\mathcal{Y}\to\mathcal{Y}\times_{\mathcal{X}}\mathcal{Y}\) is an isomorphism, where \(\mathcal{Y}\times_{\mathcal{X}}\mathcal{Y}\) is the fiber product in \(\mathbf{ISpc}/B\), which is the fiber product in \(\mathbf{Shv}_{dZar}(\mathbf{I}\mathbf{Fan}/B)\) by Proposition 4.6. Hence \(f\) is a monomorphism of sheaves. By Proposition 3.15, \(f\) is an epimorphism of sheaves. It follows that \(f\) is a stalkwise isomorphism of sheaves since the dividing Zariski topology has enough points by Remark 2.11, i.e., \(f\) is an isomorphism. **Proposition 4.8**.: _Suppose that \(\mathcal{P}\) is the class of monomorphisms in \(\mathbf{I}\mathbf{Fan}/B\). If \(f\colon\mathcal{Y}\to\mathcal{X}\) be a representable \(\mathcal{P}\)-morphism in \(\mathbf{ISpc}/B\), then \(f\) is a monomorphism in \(\mathbf{ISpc}/B\)._ Proof.: Choose a representable Zariski cover \(h_{U}\to\mathcal{X}\) with \(U\in\mathbf{I}\mathbf{Fan}/B\). We may assume that there exists a cartesian square such that \(g\) is a monomorphism in \(\mathbf{I}\mathbf{Fan}/B\). Then we obtain a cartesian square where \(d\) and \(\Delta\) are the diagonal morphisms. Since \(g\) is a monomorphism, \(d\) is an isomorphism. Use Lemma 3.16 to show that \(\Delta\) is an isomorphism, i.e., \(f\) is a monomorphism. Hence any representable strict morphism in \(\mathbf{ISpc}/B\) is a monomorphism. **Proposition 4.9**.: _If \(X\in\mathbf{Isch}/B\), then \(h_{X}\) is a divided log space over \(B\)._ Proof.: Let \(h_{V}\to h_{X}\times h_{X}\) be a morphism in \(\mathbf{ISpc}/B\) with \(V\in\mathbf{I}\mathbf{Fan}/B\). By Proposition 2.17, after replacing \(V\) by its suitable dividing cover, we may assume that this is isomorphic to \(h_{f}\) for some morphism \(f\colon V\to X\times_{B}X\) in \(\mathbf{Isch}/B\). 
The diagonal morphism \(X\to X\times_{B}X\) is a proper monomorphism, so the projection \(W:=V\times_{X\times_{B}X}X\to V\) is a proper monomorphism too. Proposition 2.9 yields a dividing cover \(V^{\prime}\to V\) such that the projection \(W^{\prime}:=V^{\prime}\times_{V}W\to V^{\prime}\) is a strict closed immersion. This shows the condition (i) in Definition 4.1 for \(h_{X}\). Choose a Zariski cover \(Y\to X\) with \(Y\in\mathbf{I}\mathbf{Fan}/B\). Let \(h_{V}\to h_{X}\) be a morphism with \(V\in\mathbf{I}\mathbf{Fan}/B\). By Proposition 2.17, after replacing \(V\) by its suitable dividing cover, we may assume that this is isomorphic to \(h_{f}\) for some morphism \(f\colon V\to X\) in \(\mathbf{Isch}/B\). The projection \(h_{V}\times_{h_{X}}h_{Y}\to h_{V}\) is isomorphic to \(h_{p}\), where \(p\) is the projection \(V\times_{X}Y\to V\). Since \(p\) is a Zariski cover, \(h_{X}\) satisfies the condition (ii) in Definition 4.1. Hence the essential image of the Yoneda functor (2.4) lies in \(\mathbf{ISpc}/B\). **Proposition 4.10**.: _Let \(h_{X}\to\mathcal{X}\) be a representable Zariski cover in \(\mathbf{ISpc}/B\) with \(X\in\mathbf{I}\mathbf{Fan}/B\). Then there exists a dividing Zariski covering family \(\{U_{i}\to X\}_{i\in I}\) with finite \(I\) such that each \(h_{U_{i}}\to\mathcal{X}\) is a representable open immersion._ Proof.: Use Lemmas 3.5 and 3.11 to obtain a commutative diagram where \(p_{1}\) (resp. \(p_{2}\)) is the first (resp. second) projection, \(f\), \(f^{\prime}\), and \(f^{\prime\prime}\) are dividing covers, \(g\) and \(g^{\prime}\) are Zariski covers, and \(h_{Y^{\prime}}\simeq h_{Y}\times_{h_{X}}h_{X^{\prime\prime}}\). We can decompose \(Y\) as \(\amalg_{i\in I}Y_{i}\) such that each \(Y_{i}\to X^{\prime}\) is an open immersion. We set \(U_{i}:=g^{\prime}(Y_{i}\times_{Y}Y^{\prime})\) and \(V_{i}:=g^{\prime-1}(U_{i})\), and we regard them as open subschemes of \(X^{\prime\prime}\) and \(Y^{\prime}\) respectively. There is a cartesian square Since \(V_{i}\to U_{i}\) is a Zariski cover and \(V_{i}\to X^{\prime}\) is a log etale monomorphism, Lemma 3.16 shows that \(h_{U_{i}}\to\mathscr{X}\) is a representable open immersion. To conclude, observe that \(\amalg_{i\in I}U_{i}\to X^{\prime\prime}\) is a Zariski cover. ## 5. Zariski equivalence relations Etale equivalence relations in the theory of algebraic spaces are helpful for constructing examples. The purpose of this section is to develop an analogous notion in the category of divided log spaces. As an application, we explain how to glue divided log spaces. **Definition 5.1**.: Suppose \(\mathscr{X}\in\mathbf{ISpc}/B\). A _Zariski equivalence relation on \(\mathscr{X}\)_ is a morphism \[i\colon\mathscr{R}\to\mathscr{X}\times\mathscr{X}\] in \(\mathbf{ISpc}/B\) satisfying the following conditions. * \(i\) is a representable strict immersion. * If \(p_{1},p_{2}\colon\mathscr{X}\times\mathscr{X}\rightrightarrows\mathscr{X}\) denote two projections, then \(p_{1}i\) and \(p_{2}i\) are representable Zariski covers. * For all \(T\in\mathbf{IPan}/B\), \(\mathscr{R}(T)\to\mathscr{X}(T)\times\mathscr{X}(T)\) is an equivalence relation. By Proposition 4.8, the condition (i) implies that \(i\) is a monomorphism, i.e., \(\mathscr{R}(T)\to\mathscr{X}(T)\times\mathscr{X}(T)\) is injective for all \(T\in\mathbf{IPan}/B\). 
Let \(\mathscr{X}/\mathscr{R}\) denote the dividing Zariski sheaf associated with the presheaf \[(T\in\mathbf{IPan}/B)\mapsto\mathscr{X}(T)/\mathscr{R}(T).\] There is an induced cartesian square (5.1) Suppose that \(\mathcal{T}\) is a Zariski equivalence relation on \(\mathcal{Y}\in\mathbf{ISpc}/B\). If there is a morphism \(f\colon\mathcal{Y}\to\mathcal{X}\) and a commutative square then there is an induced morphism \(\mathcal{Y}/\mathcal{T}\to\mathcal{X}/\mathcal{R}\). **Proposition 5.2**.: _Let \(\mathcal{R}\) be a Zariski equivalence relation on \(\mathcal{X}\in\mathbf{ISpc}/B\). Then \(\mathcal{X}/\mathcal{R}\in\mathbf{ISpc}/B\)._ Proof.: Let \(h_{V}\to\mathcal{X}/\mathcal{R}\times\mathcal{X}/\mathcal{R}\) be a morphism with \(V\in\mathbf{IFan}/B\). The morphism \(\mathcal{X}\to\mathcal{X}/\mathcal{R}\) is an epimorphism, so there exists a dividing Zariski cover \(V^{\prime}\to V\) in \(\mathbf{IFan}/B\) such that the composition \(h_{V^{\prime}}\to h_{V}\to\mathcal{X}/\mathcal{R}\times\mathcal{X}/\mathcal{R}\) factors through \(\mathcal{X}\times\mathcal{X}\). From the cartesian square (5.1), we have an isomorphism \[h_{V^{\prime}}\times_{\mathcal{X}/\mathcal{R}\times\mathcal{X}/\mathcal{R}} \mathcal{X}/\mathcal{R}\simeq h_{V^{\prime}}\times_{\mathcal{X}\times\mathcal{ X}}\mathcal{R}.\] Since \(\mathcal{R}\) is a Zariski equivalence relation on \(\mathcal{X}\), the projection \(h_{V^{\prime}}\times_{\mathcal{X}\times\mathcal{X}}\mathcal{R}\to h_{V^{\prime}}\) is a representable strict immersion. We set \(\mathcal{F}:=h_{V}\times_{\mathcal{X}/\mathcal{R}\times\mathcal{X}/\mathcal{R}} \mathcal{X}/\mathcal{R}\) to have a cartesian square Lemma 3.16 shows that \(\mathcal{F}\to h_{V}\) is a representable strict immersion. This shows that the diagonal morphism \(\mathcal{X}/\mathcal{R}\to\mathcal{X}/\mathcal{R}\times\mathcal{X}/\mathcal{R}\) is a representable strict immersion, which verifies the axiom (i) of divided log spaces for \(\mathcal{X}/\mathcal{R}\). Let \(h_{V}\to\mathcal{X}/\mathcal{R}\) be a morphism with \(V\in\mathbf{IFan}/B\). There exists a dividing Zariski cover \(V^{\prime}\to V\) in \(\mathbf{IFan}/B\) such that the composition \(h_{V^{\prime}}\to h_{V}\to\mathcal{X}/\mathcal{R}\) factors through \(\mathcal{X}\). Since (5.1) is cartesian, we have an isomorphism \[h_{V^{\prime}}\times_{\mathcal{X}/\mathcal{R}}\mathcal{X}\simeq h_{V^{\prime }}\times_{\mathcal{X}}\mathcal{R},\] where the morphism \(\mathcal{R}\to\mathcal{X}\) in this formulation is \(p_{1}i\). Hence we have a cartesian square Since \(p_{1}i\) is a representable Zariski cover, Lemma 3.16 shows that the projection \(h_{V}\times_{\mathcal{X}/\mathcal{R}}\mathcal{X}\to h_{V}\) is a representable Zariski cover. It follows that \(\mathcal{X}\to\mathcal{X}/\mathcal{R}\) is a representable Zariski cover. Hence \(\mathcal{X}/\mathcal{R}\) satisfies the axiom (ii) of divided log spaces. **Proposition 5.3**.: _Let \(f\colon\mathcal{Y}\to\mathcal{X}\) be a representable Zariski cover in \(\mathbf{ISpc}/B\). Then the induced morphism \(i\colon\mathcal{R}:=\mathcal{Y}\times_{\mathcal{X}}\mathcal{Y}\to\mathcal{Y} \times\mathcal{Y}\) is a Zariski equivalence relation on \(\mathcal{Y}\). Furthermore, \(\mathcal{Y}/\mathcal{R}\simeq\mathcal{X}\)._ Proof.: For \(T\in\mathbf{I}\mathbf{Fan}/B\), a section \((a,b)\in\mathcal{Y}(T)\times\mathcal{Y}(T)\) is in \(\mathcal{R}(T)\) if and only if \(f(a)=f(b)\) in \(\mathcal{X}(T)\). 
This explicit description shows that \(\mathcal{R}(T)\to\mathcal{X}(T)\times\mathcal{X}(T)\) is an equivalence relation and \(\mathcal{Y}/\mathcal{R}\simeq\mathcal{X}\). Since the diagonal morphism \(\mathcal{X}\to\mathcal{X}\times\mathcal{X}\) is a representable strict immersion, so is \(i\). The assumption that \(f\) is a representable Zariski cover implies that the two projections \(\mathcal{R}\rightrightarrows\mathcal{Y}\) are representable Zariski covers. Hence \(\mathcal{R}\) is a Zariski equivalence relation on \(\mathcal{Y}\). **Construction 5.4**.: Let \(I\) be a finite set. Assume that we have given the gluing data 1. \(\mathcal{X}_{i}\in\mathbf{I}\mathbf{Spc}/B\) for all \(i\in I\), 2. representable open immersions \(\mathcal{U}_{ij}\to\mathcal{X}_{i}\) for all \(i,j\in I\), 3. isomorphisms \(\varphi_{ij}\colon\mathcal{U}_{ij}\to\mathcal{U}_{ji}\) for all \(i,j\in I\), satisfying the following conditions for \(i,j,k\in I\): 1. \(\mathcal{U}_{ii}=\mathcal{X}_{i}\) and \(\varphi_{ii}=\mathrm{id}\), 2. there exists an isomorphism \(\psi_{ijk}\colon\mathcal{U}_{ij}\times_{\mathcal{X}_{i}}\mathcal{U}_{ik}\to \mathcal{U}_{ji}\times_{\mathcal{X}_{j}}\mathcal{U}_{jk}\) such that the square \[\begin{CD}\mathcal{U}_{ij}\times_{\mathcal{X}_{i}}\mathcal{U}_{ik}@>{\psi_{ ijk}}>{}>\mathcal{U}_{ji}\times_{\mathcal{X}_{j}}\mathcal{U}_{jk}\\ @V{}V{\mathcal{U}_{ij}}V@V{}V{\varphi_{ij}}V\\ \end{CD}\] commutes, 3. the square \[\begin{CD}\mathcal{U}_{ij}\times_{\mathcal{X}_{i}}\mathcal{U}_{ik}@>{\psi_{ ijk}}>{}>\mathcal{U}_{ji}\times_{\mathcal{X}_{j}}\mathcal{U}_{jk}\\ @V{\simeq}V{\psi_{ijk}}V\\ \mathcal{U}_{ik}\times_{\mathcal{X}_{i}}\mathcal{U}_{ij}@>{\psi_{ikj}}>{}> \mathcal{U}_{ki}\times_{\mathcal{X}_{k}}\mathcal{U}_{kj}\end{CD}\] commutes. In this setting, let us explain the gluing construction. We set \(\mathcal{X}:=\amalg_{i\in I}\mathcal{X}_{i}\) and \(\mathcal{R}:=\amalg_{i,j\in I}\mathcal{U}_{ij}\). The composition \[\mathcal{U}_{ij}@>{\Gamma_{\varphi_{ij}}}>{}>\mathcal{U}_{ij}\times\mathcal{U }_{ji}\hookrightarrow\mathcal{X}_{i}\times\mathcal{X}_{j}\] induces a morphism \(\mathcal{R}\to\mathcal{X}\times\mathcal{X}\), where \(\Gamma_{\varphi_{ij}}\) is the graph morphism. Since the diagonal morphism \(\mathcal{U}_{ij}\to\mathcal{U}_{ij}\times\mathcal{U}_{ij}\) is a representable strict immersion, \(\Gamma_{\varphi_{ij}}\) is a representable strict immersion. Hence \(\mathcal{R}\to\mathcal{X}\times\mathcal{X}\) is a representable strict immersion. The two compositions \(\mathcal{R}\to\mathcal{X}\times\mathcal{X}\rightrightarrows\mathcal{X}\) are representable Zariski covers since the induced morphism \(\amalg_{j\in I}\mathcal{U}_{ij}\to\mathcal{X}_{i}\) is a representable Zariski cover for all \(i\in I\). Together with the above conditions (i)-(iii), we see that \(\mathcal{R}\) is a Zariski equivalence relation on \(\mathcal{X}\). The _gluing of \(\{\mathcal{X}_{i}\}_{i\in I}\) along \(\{\mathcal{U}_{ij}\}_{i,j\in I}\)_ is defined to be \(\mathcal{X}/\mathcal{R}\). Apply Lemma 3.16 to the induced cartesian squares to see that the induced morphism \(\mathcal{X}_{i}\to\mathcal{X}/\mathcal{R}\) is a representable open immersion for all \(i\in I\) and the induced morphism \(\amalg_{i\in I}\mathcal{X}_{i}\to\mathcal{X}/\mathcal{R}\) is a representable Zariski cover. 
For the functoriality of the gluing construction, assume that another gluing data \[\mathcal{Y}_{i},\,\mathcal{U}_{ij}\to\mathcal{Y}_{i},\,\varphi_{ij},\,\text{and} \,\,\psi_{ijk}\] for \(i,j,k\in J\) are given, where \(J\) is a finite set. Furthermore, assume that a map \(\eta\colon I\to J\), morphisms \(\mathcal{X}_{i}\to\mathcal{Y}_{\eta(i)}\) for all \(i\in I\), and morphisms \(\mathcal{U}_{ij}\to\mathcal{V}_{\eta(i)\eta(j)}\) are given too such that the squares commutes. We set \(\mathcal{Y}:=\amalg_{i\in J}\mathcal{Y}_{i}\) and \(\mathcal{T}:=\amalg_{i,j\in J}\mathcal{V}_{ij}\). There is an induced commutative square This induces a functorial morphism \(\mathcal{X}/\mathcal{R}\to\mathcal{Y}/\mathcal{T}\). **Definition 5.5**.: Let \(\{\mathcal{U}_{i}\to\mathcal{X}\}_{i\in I}\) be a family of representable open immersions in \(\mathbf{lSpc}/B\) with finite \(I\). The _union \(\cup_{i\in I}\mathcal{U}_{i}\) of \(\{\mathcal{U}_{i}\}_{i\in I}\)_ is defined to be the gluing of \(\{\mathcal{U}_{i}\}_{i\in I}\) along \(\{\mathcal{U}_{i}\times_{\mathcal{X}}\mathcal{U}_{j}\}_{i,j\in I}\). Observe that the induced morphism \(\mathcal{U}_{a}\to\cup_{i\in I}\mathcal{U}_{i}\) is a representable open immersion for all \(a\in I\) and the induced morphism \(\amalg_{i\in I}\mathcal{U}_{i}\to\cup_{i\in I}\mathcal{U}_{i}\) is a representable Zariski cover. **Proposition 5.6**.: _Let \(\{\mathcal{U}_{i}\to\mathcal{X}\}_{i\in I}\) be a family of representable open immersions in \(\mathbf{lSpc}/B\) with finite \(I\). Then the induced morphism \(\cup_{i\in I}\mathcal{U}_{i}\to\mathcal{X}\) is a representable open immersion._ Proof.: There exists a representable Zariski cover \(h_{X}\to\mathcal{X}\) with \(X\in\mathbf{lFan}/B\). Apply Lemma 3.16 to the induced cartesian square to reduce to the case when \(\mathcal{X}=h_{X}\) with \(X\in\mathbf{lFan}/B\). Then by Lemma 3.5, there exists a commutative square with vertical isomorphisms such that \(f_{i}\) is a dividing cover and \(u_{i}\) is an open immersion in \(\mathbf{lFan}/B\). Let \(Y\) be the fiber product of all \(X_{i}\) over \(X\), and we set \(V_{i}:=U_{i}\times_{X_{i}}Y\). The induced morphism \(V_{i}\to Y\) is an open immersion for all \(i\). Since \(\cup_{i\in I}\mathcal{U}_{i}\simeq h_{\cup_{i\in I}V_{i}}\) and \(h_{X}\simeq h_{Y}\), we deduce that \(\cup_{i\in I}\mathcal{U}_{i}\to\mathcal{X}\) is a representable open immersion. ## 6. Properties of morphisms of divided log spaces When a morphism \(f\colon\mathcal{Y}\to\mathcal{X}\) and a representable Zariski cover \(g\colon\mathcal{Z}\to\mathcal{Y}\) such that \(fg\) is a representable smooth morphism are given, it is natural to regard \(f\) as a smooth morphism. However, it is unclear whether \(f\) is a representable smooth morphism or not. This is the reason why we introduce a class of morphisms that can include non-representable morphisms as follows. **Definition 6.1**.: Let \(\mathcal{P}\) be a class of morphisms in \(\mathbf{lFan}/B\) closed under pullbacks and compositions and satisfying (Div) and the following condition: * If \(f\colon Y\to X\) is a morphism and \(u\colon U\to X\) is a Zariski cover in \(\mathbf{lFan}/B\), then \(fu\in\mathcal{P}\) implies \(f\in\mathcal{P}\). We say that a morphism \(f\colon\mathcal{Y}\to\mathcal{X}\) in \(\mathbf{lSpc}/B\) is a \(\mathcal{P}\)_-morphism_ if there exists a representable Zariski cover \(u\colon\mathcal{U}\to\mathcal{Y}\) such that \(fu\) is a representable \(\mathcal{P}\)-morphism. 
**Example 6.2**.: By [6, Theorem 0.2], the classes of log smooth and log etale morphisms in \(\mathbf{lFan}/B\) satisfy (Zarloc). This implies that the classes of exact log smooth and Kummer etale morphisms in \(\mathbf{lFan}/B\) satisfy (Zarloc). The classes of Zariski covers, strict Nisnevich covers, and strict etale covers also satisfy (Zarloc). If \(f\colon Y\to X\) is a morphism and \(u\colon U\to X\) is a Zariski cover in \(\mathbf{lFan}/B\) such that \(fu\) is a monomorphism, then \(u\) is a monomorphism. This implies that \(u\) is an isomorphism. Hence the classes of open immerions and strict closed immersions in \(\mathbf{lFan}/B\) satisfy (Zarloc). **Proposition 6.3**.: _Let \(\mathcal{P}\) be a class of morphisms in \(\mathbf{lFan}/B\) closed under pullbacks and compositions and satisfying (Div) and (Zarloc). Then a morphism in \(\mathbf{lSpc}/B\) is a representable \(\mathcal{P}\)-morphism if and only if it is representable and a \(\mathcal{P}\)-morphism._ Proof.: Any representable \(\mathcal{P}\)-morphism in \(\mathbf{lSpc}/B\) is obviously representable and a \(\mathcal{P}\)-morphism. For the converse, assume that \(f\colon Y\to X\) is a morphism in \(\mathbf{lFan}/B\) such that \(h_{f}\) is a \(\mathcal{P}\)-morphism. We need to show that \(h_{f}\) is a representable \(\mathcal{P}\)-morphism. There exists a representable Zariski cover \(h_{U}\to h_{Y}\) with \(U\in\mathbf{lFan}/B\) such that the composition \(h_{U}\to h_{X}\) is a representable \(\mathcal{P}\)-morphism. By Lemma 3.11, after replacing \(U\) by its suitable dividing cover, we may assume that \(h_{U}\to h_{Y}\) is equal to \(h_{g}\) for some dividing Zariski cover \(g\colon U\to Y\). Then apply Lemma 3.11 to \(h_{fg}\) to obtain a commutative square with vertical isomorphisms such that \(g^{\prime}\) is a \(\mathcal{P}\)-morphism and \(w\) and \(w^{\prime}\) are dividing covers. By Proposition 2.17, there exists a dividing cover \(v\colon V^{\prime}\to V\) such that the square commutes. Proposition 2.9 yields a dividing cover \(Y^{\prime}\to Y\) such that the projection \(V^{\prime\prime}:=V^{\prime}\times_{Y}Y^{\prime}\to Y^{\prime}\) is a Zariski cover. Since \(\mathcal{P}\) satisfies (Div), there exists a dividing cover \(X^{\prime\prime}\to X^{\prime}\) such that the projection \(V^{\prime\prime}\times_{X}X^{\prime\prime}\simeq V^{\prime\prime}\times_{X^{ \prime}}X^{\prime\prime}\to X^{\prime\prime}\) is a \(\mathcal{P}\)-morphism. The first arrow in \[V^{\prime\prime}\times_{X}X^{\prime\prime}\to Y^{\prime}\times_{X}X^{\prime \prime}\to X^{\prime\prime}\] is a Zariski cover. Use (Zarloc) to see that the projection \(Y^{\prime}\times_{X}X^{\prime\prime}\to X^{\prime\prime}\) is a \(\mathcal{P}\)-morphism. This implies that \(h_{f}\colon h_{Y}\to h_{X}\) is a representable \(\mathcal{P}\)-morphism. **Proposition 6.4**.: _Let \(\mathcal{P}\) be a class of morphisms in \(\mathbf{I}\mathbf{F}\mathbf{a}\mathbf{n}/B\) closed under compositions and pullbacks and satisfying (Div) and (Zarloc). For all \(\mathcal{P}\)-morphism \(\mathcal{Y}\to\mathcal{X}\) in \(\mathbf{I}\mathbf{S}\mathbf{p}\mathbf{c}/B\), there exists a commutative square_ _such that \(h_{U}\to\mathcal{X}\) and \(h_{V}\to\mathcal{Y}\times_{\mathcal{X}}h_{U}\) are representable Zariski covers and \(g\) is a \(\mathcal{P}\)-morphism in \(\mathbf{I}\mathbf{F}\mathbf{a}\mathbf{n}/B\)._ Proof.: Choose a representable Zariski cover \(h_{X}\to\mathcal{X}\) with \(X\in\mathbf{I}\mathbf{F}\mathbf{a}\mathbf{n}/B\). 
There exists a representable Zariski cover \(\mathcal{U}\to\mathcal{Y}\) such that the composition \(\mathcal{U}\to\mathcal{X}\) is a representable \(\mathcal{P}\)-morphism. By Lemma 3.5, there exists a commutative square such that \(g\) is a \(\mathcal{P}\)-morphism and \(u\) is a dividing cover. Since \(\mathcal{U}\to\mathcal{Y}\) is a representable Zariski cover, \(\mathcal{U}\times_{\mathcal{X}}h_{X}\to\mathcal{Y}\times_{\mathcal{X}}h_{X}\) is a representable Zariski cover. This means that \(h_{V}\to\mathcal{Y}\times_{\mathcal{X}}h_{U}\) is a represnetable Zariski cover. **Proposition 6.5**.: _Let \(\mathcal{P}\) be a class of morphisms in \(\mathbf{I}\mathbf{F}\mathbf{a}\mathbf{n}/B\) closed under compositions and pullbacks and satisfying (Div) and (Zarloc). Then the class of \(\mathcal{P}\)-morphisms in \(\mathbf{I}\mathbf{S}\mathbf{p}\mathbf{c}/B\) is closed under pullbacks and compositions._ Proof.: Let \(\mathcal{Y}\to\mathcal{X}\) be a \(\mathcal{P}\)-morphism, and let \(\mathcal{X}^{\prime}\to\mathcal{X}\) be a morphism in \(\mathbf{I}\mathbf{S}\mathbf{p}\mathbf{c}/B\). There exists a representable Zariski cover \(\mathcal{U}\to\mathcal{Y}\) such that the composition \(\mathcal{U}\to\mathcal{X}\) is a representable \(\mathcal{P}\)-morphism. The pullback \(\mathcal{U}\times_{\mathcal{X}}\mathcal{X}^{\prime}\to\mathcal{Y}\times_{ \mathcal{X}}\mathcal{X}^{\prime}\) is a representable Zariski cover, and the projection \(\mathcal{U}\times_{\mathcal{X}}\mathcal{X}^{\prime}\to\mathcal{X}^{\prime}\) is a representable \(\mathcal{P}\)-morphism. Hence the projection \(\mathcal{Y}\times_{\mathcal{X}}\mathcal{X}^{\prime}\to\mathcal{X}^{\prime}\) is a \(\mathcal{P}\)-morphism. Let \(\mathcal{Z}\to\mathcal{Y}\to\mathcal{X}\) be \(\mathcal{P}\)-morphisms in \(\mathbf{I}\mathbf{S}\mathbf{p}\mathbf{c}/B\). There exists a representable Zariski cover \(\mathcal{U}\to\mathcal{Y}\) such that the composition \(\mathcal{U}\to\mathcal{X}\) is a representable \(\mathcal{P}\)-morphism. By the above paragraph, the projection \(\mathcal{Z}\times_{\mathcal{Y}}\mathcal{U}\to\mathcal{U}\) is a \(\mathcal{P}\)-morphism. There exists a representable Zariski cover \(\mathcal{V}\to\mathcal{Z}\times_{\mathcal{Y}}\mathcal{U}\) such that the composition \(\mathcal{V}\to\mathcal{U}\) is a representable \(\mathcal{P}\)-morphism. By Proposition 3.13, the composition \(\mathcal{V}\to\mathcal{X}\) is a representable \(\mathcal{P}\)-morphism, and the composition \(\mathcal{V}\to\mathcal{Z}\) is a representable Zariski cover. Hence \(\mathcal{Z}\to\mathcal{X}\) is a \(\mathcal{P}\)-morphism. **Proposition 6.6**.: _Let \(\mathcal{Z}\overset{g}{\to}\mathcal{Y}\overset{f}{\to}\mathcal{X}\) be morphisms in \(\mathbf{I}\mathbf{S}\mathbf{p}\mathbf{c}/B\). If \(f\) is log etale and \(fg\) is log etale (resp. log smooth), then \(g\) is log etale (resp. log smooth)._ Proof.: There exists a representable Zariski cover \(\mathcal{U}\to\mathcal{Y}\) such that the composition \(\mathcal{U}\to\mathcal{X}\) is a representable log etale morphism in \(\mathbf{ISpc}/B\). The composition \(\mathcal{Z}\times_{\mathcal{Y}}\mathcal{U}\to\mathcal{X}\) is log etale (resp. log smooth), so there exists a representable Zariski cover \(\mathcal{V}\to\mathcal{Z}\times_{\mathcal{Y}}\mathcal{U}\) such that the composition \(\mathcal{V}\to\mathcal{X}\) is representable log smooth. 
By Lemma 3.12, there exists a commutative square with cartesian squares such that the vertical morphisms are representable Zariski covers, \(u\) is a log etale morphism, and \(v\) is a morphism in \(\mathbf{IFan}/B\). The composition \(h_{V}\to h_{X}\) is a representable log smooth morphism, so Lemma 3.11 yields a dividing cover \(v^{\prime}\colon V^{\prime}\to V\) and a log smooth morphism \(p\colon V^{\prime}\to X\) such that \(h_{uvv^{\prime}}=h_{p}\). By Proposition 2.17, we can replace \(V^{\prime}\) with its suitable dividing cover to assume that the diagram commutes. Owing to [12, Remark IV.3.1.2], the composition \(V^{\prime}\to U\) is log etale (resp. log smooth). Hence the composition \(h_{V^{\prime}}\to\mathcal{Y}\) is a representable log etale (resp. log smooth) morphism too. To conclude, observe that the composition \(h_{V^{\prime}}\to\mathcal{Z}\) is a representable Zariski cover. **Proposition 6.7**.: _Let \(f\colon\mathcal{Y}\to\mathcal{X}\) be a morphism in \(\mathbf{ISpc}/B\). If \(f\) is an open immersion (resp. strict closed immersion), then \(f\) is a representable open immersion (resp. representable strict closed immersion)._ Proof.: There exists a representable Zariski cover \(g\colon\mathcal{U}\to\mathcal{Y}\) such that \(fg\) is a representable open immersion (resp. representable strict closed immersion). Proposition 4.8 shows that \(fg\) is a monomorphism in \(\mathbf{ISpc}/B\), so \(g\) is a monomorphism in \(\mathbf{ISpc}/B\) too. By Proposition 4.7, \(g\) is an isomorphism. Hence \(f\) is a representable open immersion (resp. representable strict closed immersion). ## 7. Topologies on divided log spaces In this section, we begin with introducing several topologies on \(\mathbf{ISpc}/B\). Then we compare the categories of sheaves on \(\mathbf{IFan}/B\) and \(\mathbf{ISpc}/B\). **Definition 7.1**.: Consider the following classes of morphisms in \(\mathbf{IFan}/B\): \begin{tabular}{|c|c|} \hline \(\mathcal{P}\) & exact \(\mathcal{P}\) \\ \hline dividing Zariski covers & Zariski covers \\ dividing Nisnevich covers & strict Nisnevich covers \\ dividing etale covers & strict etale covers \\ log etale covers & Kummer etale covers \\ \hline \end{tabular} The smallest topologies \(t_{\mathcal{P}}\) containing all exact \(\mathcal{P}\)-morphism as a covering for the above cases are called the _Zariski_, _strict Nisnevich_, _strict etale_, and _Kummer etale topologies_. We also call them as the _dividing Zariski_, _dividing Nisnevich_, _dividing etale_, and _log etale topologies_. Let \(\mathbf{lSmSpc}/B\) be the full subcategory of \(\mathbf{lSpc}/B\) consisting of \(\mathscr{X}\) that is log smooth over \(B\). **Proposition 7.2**.: _For the above four cases of \(\mathscr{P}\), the topology \(t_{\mathscr{P}}\) on \(\mathbf{lSpc}/B\) is the smallest topology such that all representable \(\mathscr{P}\)-morphisms are covers._ Proof.: Immediate from the fact that every \(\mathscr{P}\)-morphism admits a refinement that is a representable \(\mathscr{P}\)-morphism. Let \(\varphi\colon\mathscr{C}\to\mathscr{C}^{\prime}\) be a functor of sites. There is a functor \[\varphi^{*}\colon\mathbf{Shv}(\mathscr{C}^{\prime})\to\mathbf{Shv}(\mathscr{ C})\] such that \(\varphi^{*}\mathscr{F}(X):=\mathscr{F}(\varphi(X))\) for \(X\in\mathscr{C}\) and \(\mathscr{F}\in\mathbf{Shv}_{t}(\mathscr{C})\). 
If \(\varphi\) is a continuous functor of sites, then according to [1, Proposition III.1.2], \(\varphi^{*}\) admits a left adjoint \[\varphi_{!}\colon\mathbf{Shv}(\mathscr{C})\to\mathbf{Shv}(\mathscr{C}^{\prime}).\] Consider the induced commutative diagram (7.1) Let \(\mathscr{P}\) be one of the four class of morphisms in Definition 7.1. These functors are continuous functors of sites for the \(t_{\mathscr{P}}\)-topology, and hence we have a commutative diagram (7.2) Due to the implication (i)\(\Rightarrow\)(ii) in [1, Theoreme III.4.1], \(\beta_{!}\) and \(\beta_{!}^{\prime}\) are equivalences. Since \(\alpha\), \(\alpha^{\prime}\), and \(\alpha^{\prime\prime}\) are cocontinuous and fully faithful, [1, Proposition III.2.6] shows that \(\alpha_{!}\), \(\alpha_{!}^{\prime}\), and \(\alpha_{!}^{\prime\prime}\) are fully faithful. If \(X\in\mathbf{lFan}/B\) and \(h_{Y}\to h_{X}\) is a representable \(t_{\mathscr{P}}\)-cover with \(Y\in\mathbf{lFan}/B\), then Lemma 3.5 yields a commutative square with vertical isomorphisms such that \(g\) is a dividing cover and \(f^{\prime}\) is a \(t_{\mathscr{P}}\)-cover. The composition \(Y^{\prime}\to X\) is a \(t_{\mathscr{P}}\)-cover and \(h_{Y^{\prime}}\to h_{X}\) refines \(h_{Y}\to h_{X}\). This shows that \(\gamma^{\prime}\beta^{\prime}\) is cocontinuous. We can similarly show that \(\gamma\beta\) is cocontinuous. **Proposition 7.3**.: _Let \(\mathscr{P}\) be as above. The functors_ \[\gamma_{!}\colon\mathbf{Shv}_{t_{\mathscr{P}}}(\mathbf{lSm}/B)\to\mathbf{ Shv}_{t_{\mathscr{P}}}(\mathbf{lSmSpc}/B),\] \[\gamma_{!}^{\prime}\colon\mathbf{Shv}_{t_{\mathscr{P}}}(\mathbf{ lSch}/B)\to\mathbf{Shv}_{t_{\mathscr{P}}}(\mathbf{lSpc}/B)\] _are equivalences._ Proof.: By the above observation, we only need to show that \(\gamma_{!}\beta_{!}\) and \(\gamma^{\prime}_{!}\beta^{\prime}_{!}\) are equivalences. We focus on \(\gamma^{\prime}_{!}\beta^{\prime}_{!}\) since the proofs are similar. Let us check the conditions (1)-(5) in [16, Tag 03A0] for \(\gamma^{\prime}\beta^{\prime}\). We have checked the conditions (1) and (2) above. The conditions (3) and (4) are consequences of Proposition 2.17. To show the condition (5), consider a representable Zariski cover \(h_{X}\to\mathcal{X}\) with \(X\in\mathbf{I}\mathbf{Fan}/B\). Hence we have checked the conditions (1)-(5), and we deduce that \(\gamma^{\prime}_{!}\beta^{\prime}_{!}\) is an equivalence. **Definition 7.4**.: A _Zariski distinguished square in \(\mathbf{I}\mathbf{Spc}/B\)_ is a cartesian square in \(\mathbf{I}\mathbf{Spc}/B\) of the form (7.3) such that \(f\) and \(g\) are representable open immersions and the induced morphism \(\mathcal{U}\amalg\mathcal{V}\to\mathcal{X}\) is a representable Zariski cover. The _Zariski cd-structure on \(\mathbf{I}\mathbf{Spc}/B\)_ is the collection of Zariski distinguished squares. By Proposition 4.8, \(f\) and \(g\) are monomorphisms. Observe that the square whose vertical morphisms are the diagonal morphisms is a Zariski distinguished square since the vertical morphisms are isomorphisms. The Zariski cd-structure on \(\mathbf{I}\mathbf{Spc}/B\) is complete and regular in the sense of [18, Definitions 2.3, 2.10]. The topology associated associated with the Zariski cd-structure is defined to be the smallest Grothendieck topology containing \(\mathcal{U}\amalg\mathcal{V}\to\mathcal{X}\) as a covering for all distinguished square of the form (7.3). 
**Proposition 7.5**.: _The Zariski topology on \(\mathbf{I}\mathbf{Spc}/B\) is the topology associated with the Zariski cd-structure on \(\mathbf{I}\mathbf{Spc}/B\)._ Proof.: Let \(\mathcal{Y}\to\mathcal{X}\) be a Zariski cover in \(\mathbf{I}\mathbf{Spc}/B\). There exists a representable Zariski cover \(\mathcal{U}\to\mathcal{Y}\) such that the composition \(\mathcal{U}\to\mathcal{X}\) is a representable Zariski cover. By Proposition 4.10, we may assume \(\mathcal{U}\simeq\amalg_{i\in I}h_{U_{i}}\) with finite \(I\) and each morphism \(h_{U_{i}}\to\mathcal{X}\) is a representable open immersion, where \(U_{i}\in\mathbf{I}\mathbf{Fan}/B\) for all \(i\in I\). Let \(t\) be the topology associated with the Zariski cd-structure. The sieve generated by \(\{h_{U_{a}},\cup_{i\in I-\{a\}}h_{U_{i}}\to\mathcal{X}\}\) is a \(t\)-covering sieve for all \(a\in I\). By induction on the number of elements of \(I\), we see that the sieve generated by \(\{h_{U_{i}}\to\mathcal{X}\}_{i\in I}\) is a \(t\)-covering sieve. It follows that the sieve generated by \(\mathcal{Y}\to\mathcal{X}\) is a \(t\)-covering sieve. ## 8. Open complements In this section, we define the open complement of a closed immersion of divided log spaces as a universal property. We also show that the open complements always exist and are compatible with pullbacks. **Definition 8.1**.: For a closed immersion \(\mathcal{Z}\to\mathcal{X}\) in \(\mathbf{lSpc}/B\), the _open complement of \(\mathcal{Z}\) in \(\mathcal{X}\)_, denoted \(\mathcal{X}-\mathcal{Z}\), is defined to be a final object (if exists) of the full subcategory of \(\mathbf{lSpc}/\mathcal{X}\) consisting of morphisms \(\mathcal{Y}\to\mathcal{X}\) such that \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}=\emptyset\). **Lemma 8.2**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSpc}/B\), and let \(\mathcal{X}^{\prime}\to\mathcal{X}\) be a morphism in \(\mathbf{lSpc}/B\). We set \(\mathcal{Z}^{\prime}:=\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}^{\prime}\). If \(\mathcal{X}-\mathcal{Z}\) exists, then \(\mathcal{X}^{\prime}-\mathcal{Z}^{\prime}\) exists, and there is an isomorphism_ \[\mathcal{X}^{\prime}-\mathcal{Z}^{\prime}\simeq(\mathcal{X}-\mathcal{Z}) \times_{\mathcal{X}}\mathcal{X}^{\prime}.\] Proof.: Suppose that \(\mathcal{Y}\to\mathcal{X}^{\prime}\) is a morphism in \(\mathbf{lSpc}/B\) such that \(\mathcal{Z}^{\prime}\times_{\mathcal{X}^{\prime}}\mathcal{Y}=\emptyset\). Then \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}=\emptyset\), so there exists a unique morphism \(\mathcal{Y}\to\mathcal{X}-\mathcal{Z}\) over \(\mathcal{X}\). This means that there exists a unique morphism \(\mathcal{Y}\to(\mathcal{X}-\mathcal{Z})\times_{\mathcal{X}}\mathcal{X}^{\prime}\) over \(\mathcal{X}^{\prime}\), which completes the proof. **Lemma 8.3**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSpc}/B\), and let \(\amalg_{i\in I}\mathcal{X}_{i}\to\mathcal{X}\) be a representable Zariski cover with finite \(I\) such that each \(\mathcal{X}_{i}\to\mathcal{X}\) is a representable open immersion. We set \(\mathcal{X}_{ij}:=\mathcal{X}_{i}\times_{\mathcal{X}}\mathcal{X}_{j}\), \(\mathcal{Z}_{i}:=\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}_{i}\), and \(\mathcal{Z}_{ij}:=\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}_{ij}\) for \(i,j\in I\). 
If \(\mathcal{X}_{i}-\mathcal{Z}_{i}\) and \(\mathcal{X}_{ij}-\mathcal{Z}_{ij}\) exist for all \(i,j\in I\), then \(\mathcal{X}-\mathcal{Z}\) exists._ Proof.: By Lemma 8.2, we can glue \(\{\mathcal{X}_{i}-\mathcal{Z}_{i}\}_{i\in I}\) using Construction 5.4, and let \(\mathcal{V}\) be the resulting divided log space over \(\mathcal{X}\). For every \(\mathcal{Y}\in\mathbf{lSpc}/\mathcal{X}\), \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}=\emptyset\) if and only if \(\mathcal{Z}_{i}\times_{\mathcal{X}_{i}}\mathcal{Y}=\emptyset\) for all \(i\in I\). Together with the isomorphism \[\operatorname{Hom}_{\mathcal{X}}(\mathcal{Y},\mathcal{V})\simeq\operatorname{ Eq}(\operatorname{Hom}_{\mathcal{X}_{i}}(\mathcal{Y}\times_{\mathcal{X}}\mathcal{X}_{i}, \mathcal{V}\times_{\mathcal{X}}\mathcal{X}_{i})\rightrightarrows\operatorname{ Hom}_{\mathcal{X}_{ij}}(\mathcal{Y}\times_{\mathcal{X}}\mathcal{X}_{ij},\mathcal{V} \times_{\mathcal{X}}\mathcal{X}_{ij})),\] we deduce \(\operatorname{Hom}_{\mathcal{X}}(\mathcal{Y},\mathcal{V})\simeq*\) whenever \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}=\emptyset\). **Lemma 8.4**.: _Let \(Z\to X\) be a strict closed immersion in \(\mathbf{lFan}/B\). Then \(h_{X}-h_{Z}\) exists, and there is an isomorphism_ \[h_{X}-h_{Z}\simeq h_{X-Z}.\] Proof.: Suppose \(\mathcal{Y}\in\mathbf{lSpc}/h_{X}\) and \(h_{Z}\times_{h_{X}}\mathcal{Y}=\emptyset\). Then there is a dividing Zariski cover \(h_{Y}\to\mathcal{Y}\) such that the composition \(h_{Y}\to h_{X}\) is equal to \(h_{f}\) for some morphism \(f\) in \(\mathbf{lFan}/B\). We have \(Z\times_{X}Y=\emptyset\). Hence there exists a unique morphism \(u\colon Y\to X-Z\) over \(X\). Suppose that \(v\colon h_{Y}\to h_{X-Z}\) is a morphism over \(h_{X}\). Then there exists a dividing cover \(p\colon Y^{\prime}\to Y\) such that the composite morphism \(h_{Y^{\prime}}\xrightarrow{vh_{p}}h_{X-Z}\) is equal to \(h_{w}\) for some morphism \(w\colon Y^{\prime}\to X-Z\) over \(X\). By the universal property of open complements, we have \(w=up\). Hence we have \(v=h_{u}\), i.e., \(\operatorname{Hom}_{h_{X}}(h_{Y},h_{X-Z})\simeq*\). Proposition 4.3 shows that \(h_{Y}\times_{y}h_{Y}\) is representable. Using this, we can similarly show \(\operatorname{Hom}_{h_{X}}(h_{Y}\times_{y}h_{Y},h_{X-Z})\simeq*\). Together with the isomorphism \[\operatorname{Hom}_{h_{X}}(\mathcal{Y},h_{X-Z})\simeq\operatorname{Eq}( \operatorname{Hom}_{h_{X}}(h_{Y},h_{X-Z})\rightrightarrows\operatorname{Hom}_{h_{ X}}(h_{Y}\times_{y}h_{Y},h_{X-Z})),\] we obtain \(\operatorname{Hom}_{h_{X}}(\mathcal{Y},h_{X-Z})\simeq*\). To conclude, observe \(h_{Z}\times_{h_{X}}h_{X-Z}=\emptyset\). **Lemma 8.5**.: _Let \(h_{Z}\to h_{X}\) be a closed immersion in \(\mathbf{lSpc}/B\), where \(X,Z\in\mathbf{lFan}/B\). Then \(h_{X}-h_{Z}\) exists._ Proof.: There exists a commutative square with vertical isomorphisms such that \(f^{\prime}\) is a strict closed immersion in \(\mathbf{I}\mathbf{F}\mathbf{an}/B\). Apply Lemma 8.4 to \(f^{\prime}\) to conclude. **Theorem 8.6**.: _Let \(i\colon\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{I}\mathbf{S}\mathbf{pc}/B\). Then \(\mathcal{X}-\mathcal{Z}\) exists. Furthermore, the induced morphism \(\mathcal{X}-\mathcal{Z}\to\mathcal{X}\) is an open immersion._ Proof.: By Proposition 4.10, there exists a representable Zariski cover \(\mathrm{II}_{i\in I}h_{X_{i}}\to\mathcal{X}\) with finite \(I\) and \(x_{i}\in\mathbf{I}\mathbf{F}\mathbf{an}/B\) such that each \(h_{X_{i}}\to\mathcal{X}\) is a representable open immersion. 
We set \(\mathcal{X}_{i}:=h_{X_{i}}\). Proposition 4.3 shows that \(\mathcal{X}_{ij}:=\mathcal{X}_{i}\times_{\mathcal{X}}\mathcal{X}_{j}\) is representable for all \(i,j\in I\). By Lemma 8.5, \(\mathcal{X}_{i}-\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}_{i}\) and \(\mathcal{X}_{ij}-\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}_{ij}\) exist for all \(i,j\in I\). Together with Lemma 8.3, we deduce that \(\mathcal{X}-\mathcal{Z}\) exists. Apply Lemma 3.16 to the induced cartesian square to show that \(\mathcal{X}-\mathcal{Z}\to\mathcal{X}\) is a representable open immersion, i.e., an open immersion. ## 9. Blow-ups along closed immersions As in the previous section, we define blow-ups by a universal property. We also show that blow-ups exist in the log smooth case and are compatible with log smooth pullbacks. **Definition 9.1**.: For \(X\in\mathbf{I}\mathbf{F}\mathbf{an}/B\), a strict closed subscheme \(Z\) of \(X\) is called an _effective log Cartier divisor on \(X\)_ if \(\underline{Z}\times_{\underline{X}}X^{\prime}\) is an effective Cartier divisor on \(\underline{X}^{\prime}\) for all log smooth morphism \(X^{\prime}\to X\). We refer to Definition D.1 for the notion of the blow-up \(\mathrm{Bl}_{Z}X\) for all strict closed immersion \(Z\to X\) in \(\mathbf{I}\mathbf{S}\mathbf{ch}/B\). **Lemma 9.2**.: _Let \(i\colon Z\to X\) be a strict closed immersion in \(\mathbf{I}\mathbf{S}\mathbf{m}/S\), where \(S\) is an fs log scheme. If \(\underline{Z}\) is an effective Cartier divisor on \(\underline{X}\), then \(Z\) is an effective log Cartier divisor on \(X\)._ Proof.: The projection \(\mathrm{Bl}_{Z}X\to X\) is an isomorphism. Hence by Lemma D.3, the projection \[\mathrm{Bl}_{Z^{\prime}}X^{\prime}\to X^{\prime}\] is an isomorphism for all log smooth morphism \(X^{\prime}\to X\) of fs log schemes, where \(Z^{\prime}:=Z\times_{X}X^{\prime}\). It follows that \(\underline{Z^{\prime}}\) is an effective Cartier divisor on \(\underline{X^{\prime}}\). **Definition 9.3**.: Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{I}\mathbf{S}\mathbf{pc}/B\). We say that \(\mathcal{Z}\) is an _effective log Cartier divisor on \(\mathcal{X}\)_ if there exists a cartesian square (9.1) such that the vertical morphisms are representable Zariski covers and the morphism \(i\colon Z\to X\) in \(\mathbf{I}\mathbf{F}\mathbf{an}/B\) exhibits \(Z\) as an effective log Cartier divisor on \(X\). **Lemma 9.4**.: _Let \(\mathcal{Z}\) be an effective log Cartier divisor on \(\mathcal{X}\). Then for every log smooth morphism \(\mathcal{X}^{\prime}\to\mathcal{X}\) in \(\mathbf{lSpc}/B\), \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}^{\prime}\) is an effective log Cartier divisor on \(\mathcal{X}^{\prime}\)._ Proof.: We have a cartesian square of the form (9.1). Proposition 6.4 yields a commutative square such that \(g\) is a log smooth morphism in \(\mathbf{lFan}/B\) and the vertical morphisms are representable Zariski covers. By Lemma 3.11, we can replace \(U\) with its suitable dividing cover and \(V\) by the corresponding pullback to assume that \(h_{U}\to h_{X}\) is equal to \(h_{u}\) for some dividing Zariski cover. We have a commutative square such that the vertical morphisms are representable Zariski covers, where \(i^{\prime}\) is the projection. To conclude, observe that \(Z\times_{X}V\) is an effective log Cartier divisor on \(V\) since the composition \(V\to X\) is log smooth. **Lemma 9.5**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSpc}/B\). 
If there exists a representable Zariski cover \(\mathcal{X}^{\prime}\to\mathcal{X}\) such that \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}^{\prime}\) is an effective log Cartier divisor on \(\mathcal{X}^{\prime}\), then \(\mathcal{Z}\) is an effective log Cartier divisor on \(\mathcal{X}\)._ Proof.: There exists a cartesian square such that the vertical morphisms are representable Zariski covers and \(i^{\prime}\) exhibits \(Z^{\prime}\) as an effective log Cartier divisor on \(X^{\prime}\). Hence we obtain a cartesian square such that the vertical morphisms are representable Zariski covers. **Definition 9.6**.: For a closed immersion \(\mathcal{Z}\to\mathcal{X}\) in \(\mathbf{lSpc}/B\), the _blow-up of \(\mathcal{X}\) along \(\mathcal{Z}\)_, denoted \(\mathrm{Bl}_{\mathcal{Z}}\mathcal{X}\), is defined to be a final object (if exists) of the full subcategory of \(\mathbf{lSpc}/\mathcal{X}\) consisting of \(\mathcal{Y}\to\mathcal{X}\) such that \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}\) is an effective log Cartier divisor on \(\mathcal{Y}\). Suppose that \(\mathcal{X}^{\prime}\to\mathcal{X}\) is a morphism in \(\mathbf{lSpc}/B\) with \(\mathcal{Z}^{\prime}:=\mathcal{X}^{\prime}\times_{\mathcal{X}}\mathcal{Z}\). Then \(\mathrm{Bl}_{\mathcal{Z}^{\prime}}\mathcal{X}^{\prime}\times_{\mathcal{X}^{ \prime}}\mathcal{Z}^{\prime}\simeq\mathrm{Bl}_{\mathcal{Z}^{\prime}}\mathcal{ X}^{\prime}\times_{\mathcal{X}}\mathcal{Z}\) is an effective log Cartier divisor on \(\mathrm{Bl}_{\mathcal{Z}^{\prime}}\mathcal{X}^{\prime}\), so there is a canonical morphism \[\mathrm{Bl}_{\mathcal{Z}^{\prime}}\mathcal{X}^{\prime}\to\mathrm{Bl}_{ \mathcal{Z}}\mathcal{X} \tag{9.2}\] whenever the two blow-ups exist. **Lemma 9.7**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSpc}/B\), and let \(\mathcal{X}^{\prime}\to\mathcal{X}\) be a log smooth morphism in \(\mathbf{lSpc}/B\). We set \(\mathcal{Z}^{\prime}:=\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}^{\prime}\). If \(\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\) exists, then \(\operatorname{Bl}_{\mathcal{Z}^{\prime}}\mathcal{X}^{\prime}\) exists, and there is an isomorphism_ \[\operatorname{Bl}_{\mathcal{Z}^{\prime}}\mathcal{X}^{\prime}\simeq \operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{X}}\mathcal{X}^{\prime}.\] Proof.: Apply Lemma 9.4 to \(\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{X}}\mathcal{X}^{ \prime}\to\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\) to show that \(\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{X}}\mathcal{Z}^{\prime}\) is an effective log Cartier divisor on \(\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{X}}\mathcal{X}^{\prime}\). For every \(\mathcal{Y}\in\mathbf{lSpc}/\mathcal{X}^{\prime}\), there is an isomorphism \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}\simeq\mathcal{Z}^{\prime}\times_{ \mathcal{X}^{\prime}}\mathcal{Y}\). Use these to show that \(\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{X}}\mathcal{X}^{\prime}\) is a final object of the full subcategory of \(\mathbf{lSpc}/\mathcal{X}^{\prime}\) consisting of \(\mathcal{Y}\to\mathcal{X}^{\prime}\) such that \(\mathcal{Z}^{\prime}\times_{\mathcal{X}^{\prime}}\mathcal{Y}\) is an effective log Cartier divisor on \(\mathcal{Y}\). 
**Lemma 9.8**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSpc}/B\), and let \(\operatorname{II}_{i\in I}\mathcal{X}_{i}\to\mathcal{X}\) is a representable Zariski cover with finite \(I\) such that each \(\mathcal{X}_{i}\to\mathcal{X}\) is a representable open immersion. We set \(\mathcal{X}_{ij}:=\mathcal{X}_{i}\times_{\mathcal{X}}\mathcal{X}_{j}\), \(\mathcal{Z}_{i}:=\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}_{i}\), and \(\mathcal{Z}_{ij}:=\mathcal{Z}\times_{\mathcal{X}}\mathcal{X}_{ij}\) for all \(i,j\in I\). If \(\operatorname{Bl}_{\mathcal{Z}_{i}}\mathcal{X}_{i}\) and \(\operatorname{Bl}_{\mathcal{Z}_{ij}}\mathcal{X}_{ij}\) exist for all \(i,j\in I\), then \(\operatorname{Bl}_{\mathcal{Z}}\mathcal{X}\) exists._ Proof.: By Lemma 9.7, we can glue \(\{\operatorname{Bl}_{\mathcal{Z}_{i}}\mathcal{X}_{i}\}\) using Construction 5.4, and let \(\mathcal{V}\) be the resulting divided log space over \(\mathcal{X}\). For every \(\mathcal{Y}\in\mathbf{lSpc}/\mathcal{X}\), \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}\) is an effective Cartier divisor on \(\mathcal{Y}\) if and only if \(\mathcal{Z}_{i}\times_{\mathcal{X}}\mathcal{Y}\) is an effective Cartier divisor on \(\mathcal{X}_{i}\times_{\mathcal{X}}\mathcal{Y}\) for all \(i\in I\) by Lemma 9.5. Together with an isomorphism \[\operatorname{Hom}_{\mathcal{X}}(\mathcal{Y},\mathcal{V})\simeq\operatorname{ Eq}(\operatorname{Hom}_{\mathcal{X}_{i}}(\mathcal{Y}\times_{\mathcal{X}}\mathcal{X}_{i}, \mathcal{V}\times_{\mathcal{X}}\mathcal{X}_{i})\rightrightarrows\operatorname{ Hom}_{\mathcal{X}_{ij}}(\mathcal{Y}\times_{\mathcal{X}}\mathcal{X}_{ij},\mathcal{V}\times_{ \mathcal{X}}\mathcal{X}_{ij})),\] we deduce \(\operatorname{Hom}_{\mathcal{X}}(\mathcal{Y},\mathcal{V})=*\) whenever \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{Y}\) is an effective log Cartier divisor on \(\mathcal{Y}\). To conclude, observe that \(\mathcal{Z}\times_{\mathcal{X}}\mathcal{V}\) is an effective log Cartier divisor on \(\mathcal{V}\) by Lemma 9.5. **Lemma 9.9**.: _Let \(i\colon Z\to X\) be a strict closed immersion in \(\mathbf{lFan}/B\). If \(h_{Z}\) is an effective log Cartier divisor on \(h_{X}\), then there exists a dividing Zariski cover \(Y\to X\) such that \(Z\times_{X}Y\) is an effective log Cartier divisor on \(Y\)._ Proof.: There exists a cartesian square such that the vertical morphisms are representable Zariski covers and \(i^{\prime}\) exhibits \(Z^{\prime}\) as an effective log Cartier divisor on \(X^{\prime}\). By Lemma 3.4, we can replace \(X^{\prime}\) with its suitable dividing cover and \(Z^{\prime}\) by the corresponding pullback to assume that \(h_{X^{\prime}}\to h_{X}\) is equal to \(h_{u}\) for some morphism \(u\colon X^{\prime}\to X\) in \(\mathbf{lFan}/B\). There is an isomorphism \(q\colon h_{Z^{\prime}}\xrightarrow{\simeq}h_{Z\times_{X}X^{\prime}}\). Apply Lemma 3.7 to \(q\) to obtain a dividing cover \(v\colon V\to Z^{\prime}\) and a morphism \(r\colon V\to Z\times_{X}X^{\prime}\) in \(\mathbf{lFan}/B\) such that \(qh_{v}=h_{r}\). By Lemma 3.4, we obtain a dividing cover \(v^{\prime}\colon V^{\prime}\to V\) and a morphism \(s\colon V^{\prime}\to Z\) such that the composition \[h_{V^{\prime}}\xrightarrow{h_{v^{\prime}}}h_{V}\xrightarrow{h_{v}}h_{Z^{\prime} }\to h_{Z}\] is equal to \(h_{s}\). The square commutes. Use Proposition 2.17 to see that the square commutes after replacing \(V^{\prime}\) by a suitable dividing cover. By Proposition 2.9, there exists a dividing cover \(X^{\prime}_{1}\to X^{\prime}\) (resp. 
\(X^{\prime}_{2}\to X^{\prime}\)) such that the pullback \(V^{\prime}\times_{X^{\prime}}X^{\prime}_{1}\to Z^{\prime}\times_{X^{\prime}}X^{\prime}_{1}\) (resp. \(V^{\prime}\times_{X^{\prime}}X^{\prime}_{2}\to(Z\times_{X}X^{\prime})\times_{X^{\prime}}X^{\prime}_{2}\)) is an isomorphism. Take \(Y:=X^{\prime}_{1}\times_{X^{\prime}}X^{\prime}_{2}\). Then \(Z\times_{X}Y\) is an effective log Cartier divisor on \(Y\) since the closed immersion \(Z\times_{X}Y\to Y\) is a pullback of \(i^{\prime}\colon Z^{\prime}\to X^{\prime}\) along the log smooth morphism \(Y\to X^{\prime}\).

**Lemma 9.10**.: _Let \(i\colon Z\to X\) be a strict closed immersion in \(\mathbf{lSm}/S\), where \(S\in\mathbf{lFan}/B\). Then \(\operatorname{Bl}_{h_{Z}}h_{X}\) exists, and there is an isomorphism_

\[\operatorname{Bl}_{h_{Z}}h_{X}\simeq h_{\operatorname{Bl}_{Z}X}.\]

Proof.: Suppose that \(\mathcal{Y}\in\mathbf{lSpc}/h_{X}\) and \(h_{Z}\times_{h_{X}}\mathcal{Y}\) is an effective log Cartier divisor on \(\mathcal{Y}\). Choose a dividing Zariski cover \(h_{Y}\to\mathcal{Y}\) with \(Y\in\mathbf{lFan}/B\). By Lemma 3.4, after replacing \(Y\) by a suitable dividing cover, the composition \(h_{Y}\to\mathcal{Y}\to h_{X}\) is equal to \(h_{f}\) for some morphism \(f\colon Y\to X\) in \(\mathbf{lFan}/B\). Since \(h_{Y}\to\mathcal{Y}\) is log smooth, Lemma 9.4 shows that \(h_{Z\times_{X}Y}\) is an effective log Cartier divisor on \(h_{Y}\). Hence by Lemma 9.9, after replacing \(Y\) by a suitable dividing Zariski cover, \(Z\times_{X}Y\) is an effective log Cartier divisor on \(Y\). Since \(Z\times_{X}Y\) is in particular an effective Cartier divisor on \(Y\), there exists a unique morphism \(\underline{Y}\to\operatorname{Bl}_{Z}\underline{X}\) over \(\underline{X}\) by the universal property of blow-ups. Together with \(\operatorname{Bl}_{Z}X\simeq\operatorname{Bl}_{Z}\underline{X}\times_{\underline{X}}X\), we deduce that there exists a unique morphism \(u\colon Y\to\operatorname{Bl}_{Z}X\) over \(X\). Suppose that \(v\colon h_{Y}\to h_{\operatorname{Bl}_{Z}X}\) is a morphism over \(h_{X}\). By Lemma 3.4, there exists a dividing cover \(p\colon Y^{\prime}\to Y\) such that the composite morphism \(h_{Y^{\prime}}\xrightarrow{vh_{p}}h_{\operatorname{Bl}_{Z}X}\) is equal to \(h_{w}\) for some morphism \(w\colon Y^{\prime}\to\operatorname{Bl}_{Z}X\) in \(\mathbf{lFan}/B\). The universal property of blow-ups shows \(w=up\). Hence we have \(v=h_{u}\), i.e., \(\operatorname{Hom}_{h_{X}}(h_{Y},h_{\operatorname{Bl}_{Z}X})\simeq*\). By Proposition 4.3, \(h_{Y}\times_{\mathcal{Y}}h_{Y}\) is representable. Using this, we can similarly show \(\operatorname{Hom}_{h_{X}}(h_{Y}\times_{\mathcal{Y}}h_{Y},h_{\operatorname{Bl}_{Z}X})\simeq*\). Together with the isomorphism

\[\operatorname{Hom}_{h_{X}}(\mathcal{Y},h_{\operatorname{Bl}_{Z}X})\simeq\operatorname{Eq}(\operatorname{Hom}_{h_{X}}(h_{Y},h_{\operatorname{Bl}_{Z}X})\rightrightarrows\operatorname{Hom}_{h_{X}}(h_{Y}\times_{\mathcal{Y}}h_{Y},h_{\operatorname{Bl}_{Z}X})),\]

we obtain \(\operatorname{Hom}_{h_{X}}(\mathcal{Y},h_{\operatorname{Bl}_{Z}X})\simeq*\). To conclude, observe that \(\operatorname{Bl}_{Z}X\times_{X}Z\) is an effective log Cartier divisor on \(\operatorname{Bl}_{Z}X\) by Lemmas D.2 and 9.2.
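Concretely, the scheme-theoretic blow-up appearing on the right-hand side of Lemma 9.10 (via Definition D.1 below) has the familiar Rees-algebra description, recalled here for orientation: for \(\underline{X}=\operatorname{Spec}A\) and \(\underline{Z}=\operatorname{Spec}(A/I)\),
\[\operatorname{Bl}_{Z}\underline{X}=\operatorname{Proj}\Big(\bigoplus_{n\geq 0}I^{n}\Big).\]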
**Lemma 9.11**.: _Let \(f\colon X\to S\) be a log smooth morphism in \(\mathbf{lFan}/B\). If \(\mathcal{Z}\to h_{X}\) is a closed immersion such that the composition \(\mathcal{Z}\to h_{S}\) is a log smooth morphism in \(\mathbf{lSpc}/B\), then there exists a cartesian square such that \(a\) is a strict closed immersion, \(fva\) is log smooth, and \(v\) is a dividing cover._

Proof.: By Lemma 3.5, there exists a commutative square with vertical isomorphisms such that \(i\) is a strict closed immersion and \(u\) is a dividing cover. Apply Lemma 3.11 and Proposition 6.3 to the log smooth morphism \(h_{Z}\to h_{S}\) to obtain a dividing cover \(z\colon Z^{\prime}\to Z\) such that the composition \(h_{Z^{\prime}}\to h_{S}\) is equal to \(h_{g}\) for some log smooth morphism \(g\colon Z^{\prime}\to S\) in \(\mathbf{lFan}/B\). Use Proposition 2.17 to obtain a dividing cover \(z^{\prime}\colon Z^{\prime\prime}\to Z^{\prime}\) such that the two compositions

\[h_{Z^{\prime\prime}}\xrightarrow{h_{z^{\prime}}}h_{Z^{\prime}}\xrightarrow{h_{z}}h_{Z}\xrightarrow{h_{fui}}h_{S}\text{ and }h_{Z^{\prime\prime}}\xrightarrow{h_{z^{\prime}}}h_{Z^{\prime}}\xrightarrow{h_{g}}h_{S}\]

are equal. The composition \(Z^{\prime\prime}\to X^{\prime}\) is a monomorphism, so Proposition 2.9 yields a dividing cover \(X^{\prime\prime}\to X^{\prime}\) such that the projection \(Z^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\to X^{\prime\prime}\) is strict. Since the projection \(Z\times_{X^{\prime}}X^{\prime\prime}\to X^{\prime\prime}\) is strict, the induced morphism \(Z^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\to Z\times_{X^{\prime}}X^{\prime\prime}\) is a strict dividing cover, i.e., an isomorphism by Proposition 2.8(3). The composition \(Z^{\prime\prime}\times_{X^{\prime}}X^{\prime\prime}\to S\) is log smooth, so the composition \(Z\times_{X^{\prime}}X^{\prime\prime}\to S\) is log smooth too. Take \(V:=X^{\prime\prime}\) and \(W:=Z\times_{X^{\prime}}X^{\prime\prime}\) to conclude.

**Theorem 9.12**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSmSpc}/\mathcal{S}\), where \(\mathcal{S}\in\mathbf{lSpc}/B\). Then \(\mathrm{Bl}_{\mathcal{Z}}\mathcal{X}\) exists, and \(\mathrm{Bl}_{\mathcal{Z}}\mathcal{X}\in\mathbf{lSmSpc}/\mathcal{S}\)._

Proof.: By Proposition 6.4, there exists a commutative square such that \(f\) is a log smooth morphism in \(\mathbf{lFan}/B\) and the vertical morphisms are representable Zariski covers. After replacing \(X\) by a suitable dividing cover, by Lemma 9.11, we may assume that there exists a cartesian square such that \(i\) is a strict closed immersion in \(\mathbf{lFan}/B\) and \(fi\) is log smooth. By Proposition 4.10, we may further assume that \(X\simeq\amalg_{i\in I}X_{i}\) with finite \(I\) such that each \(h_{X_{i}}\to\mathcal{X}\) is a representable open immersion. For all \(i,j\in I\), there exists a commutative square with vertical isomorphisms such that \(u_{ij}\) is a representable open immersion. Since \(X_{i},Z\times_{X}X_{i},U_{ij},Z\times_{X}U_{ij}\in\mathbf{lSm}/S\), Lemma 9.10 shows that \(\operatorname{Bl}_{\mathcal{Z}_{i}}\mathcal{X}_{i}\) and \(\operatorname{Bl}_{\mathcal{Z}_{ij}}\mathcal{X}_{ij}\) exist. Lemma 9.8 finishes the proof.

Let \(\square\) be the fs log scheme whose underlying scheme is \(\mathbb{P}^{1}\) and whose log structure is the compactifying log structure associated with the open immersion \(\mathbb{A}^{1}\to\mathbb{P}^{1}\) away from \(\infty\).
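Unwinding this definition, recorded here for orientation: the compactifying log structure on \(\square\) is
\[\mathcal{M}_{\square}(V)=\{f\in\mathcal{O}_{\mathbb{P}^{1}}(V):f|_{V\cap\mathbb{A}^{1}}\in\mathcal{O}^{*}(V\cap\mathbb{A}^{1})\},\]
i.e., \(\square\) is \(\mathbb{P}^{1}\) equipped with the divisorial log structure along \(\infty\), and \(\square-\{\infty\}\simeq\mathbb{A}^{1}\) carries the trivial log structure.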
**Definition 9.13**.: Let \(i\colon\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSmSpc}/\mathcal{S}\), where \(\mathcal{S}\in\mathbf{lSpc}/B\). The _deformation space associated with \(i\)_ is defined to be

\[\mathrm{D}_{\mathcal{Z}}\mathcal{X}:=\mathrm{Bl}_{\mathcal{Z}\times\{0\}}(\mathcal{X}\times\square)-\mathrm{Bl}_{\mathcal{Z}\times\{0\}}(\mathcal{X}\times\{0\}).\]

The _normal bundle of \(\mathcal{Z}\) in \(\mathcal{X}\)_ is defined to be

\[\mathrm{N}_{\mathcal{Z}}\mathcal{X}:=\mathrm{D}_{\mathcal{Z}}\mathcal{X}\times_{\square}\{0\}.\]

These exist by Theorems 8.6 and 9.12. Suppose that \(Z\to X\) is a strict closed immersion in \(\mathbf{sSm}/S\) with \(S\in\mathbf{lSch}/B\), where \(\mathbf{sSm}\) denotes the class of strict smooth morphisms. Let \(\mathrm{N}_{Z}X:=\mathrm{N}_{\underline{Z}}\underline{X}\times_{\underline{X}}X\) denote the normal bundle of \(Z\) in \(X\). As explained in [2, Definition 7.5.1], there is an isomorphism

\[\mathrm{D}_{Z}X\times_{\square}\{0\}\simeq\mathrm{N}_{Z}X, \tag{9.3}\]

where \(\mathrm{D}_{Z}X:=\mathrm{Bl}_{Z\times\{0\}}(X\times\square)-\mathrm{Bl}_{Z\times\{0\}}(X\times\{0\})\). Hence we have an isomorphism

\[\mathrm{N}_{h_{Z}}h_{X}\simeq h_{\mathrm{N}_{Z}X} \tag{9.4}\]

by Lemmas 8.4 and 9.10 since showing (9.4) is Zariski local on \(X\) and \(Z\).

**Remark 9.14**.: For a closed immersion of schemes \(Z\to X\), Verdier [17] used \(\mathbb{A}^{1}\) to define a deformation space, while Fulton [5] used \(\mathbb{P}^{1}\). We use \(\square\) since this choice is suitable for log motivic homotopy theory, see e.g. [2, Theorem 7.5.4].

**Proposition 9.15**.: _Let \(i\colon Z\to X\) be a closed immersion in \(\mathbf{lSm}/S\) with \(S\in\mathbf{lSch}/B\). Then the induced morphism_

\[i^{*}\Omega^{1}_{X/S}\to\Omega^{1}_{Z/S} \tag{9.5}\]

_is surjective, and its kernel is a locally free \(\mathcal{O}_{Z}\)-module. Furthermore, if \(V\) is the vector bundle over \(Z\) associated with the dual of the kernel of (9.5), then there is an isomorphism_

\[\mathrm{N}_{h_{Z}}h_{X}\simeq h_{V}. \tag{9.6}\]

Proof.: The question is strict etale local on \(X\) and \(Z\). Hence by [12, Proposition III.2.3.5], we may assume that \(i\) admits a factorization \(Z\xrightarrow{i^{\prime}}X^{\prime}\xrightarrow{u}X\) such that \(i^{\prime}\) is a strict closed immersion and \(u\) is a log etale monomorphism. Then (9.5) is isomorphic to the induced morphism

\[i^{\prime*}\Omega^{1}_{X^{\prime}/S}\to\Omega^{1}_{Z/S}.\]

Together with [12, Theorem IV.3.2.2], we see that (9.5) is surjective and its kernel is a locally free \(\mathcal{O}_{Z}\)-module. To show (9.6), it suffices to show

\[\operatorname{N}_{h_{Z}}h_{X^{\prime}}\simeq h_{V}\]

since \(h_{u}\colon h_{X^{\prime}}\to h_{X}\) is an open immersion. Hence we can replace \(Z\xrightarrow{i}X\) by \(Z\xrightarrow{i^{\prime}}X^{\prime}\), so we may assume that \(i\) is a strict closed immersion. Recall that the question is strict etale local on \(X\). By Proposition C.1, we may assume that there exists a strict smooth morphism \(X\to Y\) in \(\mathbf{lSm}/S\) such that the composition \(Z\to Y\) is strict smooth too. We finish the proof by applying (9.4) to the strict closed immersion \(Z\to X\) in \(\mathbf{sSm}/S\).

**Lemma 9.16**.: _Let \(W\to Z\to X\) be closed immersions of schemes.
Then there is a cartesian square_

(9.7)

Proof.: The question is Zariski local on \(X\), so we reduce to the case when \(X=\operatorname{Spec}(A)\), \(Z=\operatorname{Spec}(A/I)\), and \(W=\operatorname{Spec}(A/J)\), where \(I\subset J\) are ideals of \(A\). According to [5, Section 5.1], we have explicit descriptions

\[\operatorname{Bl}_{Z\times\{0\}}(X\times\mathbb{A}^{1})-\operatorname{Bl}_{Z\times\{0\}}(X\times\{0\})\simeq\operatorname{Spec}\big{(}\bigoplus_{n\in\mathbb{Z}}I^{-n}t^{n}\big{)},\]
\[\operatorname{Bl}_{W\times\{0\}}(X\times\mathbb{A}^{1})-\operatorname{Bl}_{W\times\{0\}}(X\times\{0\})\simeq\operatorname{Spec}\big{(}\bigoplus_{n\in\mathbb{Z}}J^{-n}t^{n}\big{)},\]

where \(t\) is an indeterminate and \(I^{n},J^{n}:=A\) for all integers \(n\leq 0\). The closed subscheme \(Z\times\mathbb{A}^{1}\) of \(\operatorname{Bl}_{Z\times\{0\}}(X\times\mathbb{A}^{1})-\operatorname{Bl}_{Z\times\{0\}}(X\times\{0\})\) is given by the ideal generated by \(It^{-1}\). Hence we see that

\[(Z\times\mathbb{A}^{1})\times_{\operatorname{Bl}_{Z\times\{0\}}(X\times\mathbb{A}^{1})-\operatorname{Bl}_{Z\times\{0\}}(X\times\{0\})}(\operatorname{Bl}_{W\times\{0\}}(X\times\mathbb{A}^{1})-\operatorname{Bl}_{W\times\{0\}}(X\times\{0\}))\]

is isomorphic to

\[\operatorname{Bl}_{W\times\{0\}}(Z\times\mathbb{A}^{1})-\operatorname{Bl}_{W\times\{0\}}(Z\times\{0\})\simeq\operatorname{Spec}\big{(}\bigoplus_{n\in\mathbb{Z}}(J/I)^{-n}t^{n}\big{)},\]

where \((J/I)^{n}:=A/I\) for all integers \(n\leq 0\). It follows that (9.7) is cartesian.

**Proposition 9.17**.: _Let \(\mathcal{W}\to\mathcal{Z}\to\mathcal{X}\) be closed immersions in \(\mathbf{lSmSpc}/\mathcal{S}\), where \(\mathcal{S}\in\mathbf{lSpc}/B\). Then there is a cartesian square_

(9.8)

Proof.: As in the proof of Theorem 9.12, there exists a commutative diagram such that \(i\) is a strict closed immersion, \(f\) and \(fi\) are log smooth, the vertical morphisms are representable Zariski covers, and the left small square is cartesian. By Lemma 9.11, there exists a cartesian square with vertical isomorphisms such that \(u\) is a dividing cover, \(a\) is a strict closed immersion, and the composition \(W^{\prime}\to S\) is log smooth. The composition \(Z^{\prime}\to X\) is a proper monomorphism, so Proposition 2.9 yields a dividing cover \(X^{\prime\prime}\to X\) such that the projection \(Z^{\prime\prime}:=Z^{\prime}\times_{X}X^{\prime\prime}\to X^{\prime\prime}\) is a strict closed immersion. Hence we obtain a commutative diagram with cartesian squares such that \(W^{\prime\prime}:=W\times_{X}X^{\prime\prime}\), the vertical morphisms are representable Zariski covers, \(i^{\prime\prime}\) and \(a^{\prime\prime}\) are strict closed immersions, and the compositions \(X^{\prime\prime},Z^{\prime\prime},W^{\prime\prime}\to S\) are log smooth. The proof is done if we prove the following steps:

1. Show that \((\mathcal{Z}\times\{0\})\times_{\mathcal{X}\times\square}\mathrm{D}_{\mathcal{W}}\mathcal{X}\) is an effective log Cartier divisor on \(\mathrm{D}_{\mathcal{W}}\mathcal{X}\).
2. Show \(\mathrm{Bl}_{\mathcal{Z}\times\{0\}}(\mathcal{X}\times\{0\})\times_{\mathrm{Bl}_{\mathcal{Z}\times\{0\}}(\mathcal{X}\times\square),p}\mathrm{D}_{\mathcal{W}}\mathcal{X}=\emptyset\), where the morphism \(p\) is obtained by (1).
3. Show that (9.8) is cartesian, where its right vertical morphism is obtained by (2).

The steps (1)-(3) are Zariski local on \(\mathcal{X}\) by Lemmas 8.2, 9.4, 9.5, and 9.7.
Hence we reduce to showing the similar steps for \(h_{W^{\prime\prime}}\to h_{Z^{\prime\prime}}\to h_{X^{\prime\prime}}\to h_{S}\). Lemma 9.16 proves the steps (1)-(3) at once.

**Corollary 9.18**.: _Let \(\mathcal{W}\to\mathcal{Z}\to\mathcal{X}\) be closed immersions in \(\mathbf{lSmSpc}/\mathcal{S}\), where \(\mathcal{S}\in\mathbf{lSpc}/B\). Then there is a cartesian square_

(9.9)

Proof.: The square (9.9) is obtained by a pullback of (9.8).

Normal bundles of divided log spaces can be regarded as affine bundles in the following sense.

**Proposition 9.19**.: _Let \(\mathcal{Z}\to\mathcal{X}\) be a closed immersion in \(\mathbf{lSmSpc}/\mathcal{S}\), where \(\mathcal{S}\in\mathbf{lSpc}/B\). Then there exists a cartesian square_

_with \(Z\in\mathbf{lFan}/B\) and \(n\in\mathbb{N}\) such that \(h_{Z}\to\mathcal{Z}\) is a representable Zariski cover and \(p\) is the projection._

Proof.: As in the proof of Theorem 9.12, there exists a commutative diagram with \(S,X,Z\in\mathbf{lFan}/B\) such that the left square is cartesian, the vertical morphisms are representable Zariski covers, \(f\) and \(fi\) are log smooth, and \(i\) is a strict closed immersion. Lemmas 9.7 and 9.10 yield isomorphisms

\[\mathrm{N}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{X}}h_{X}\simeq\mathrm{N}_{h_{Z}}h_{X}\simeq h_{\mathrm{N}_{Z}X}.\]

Together with \(h_{Z}\simeq\mathcal{Z}\times_{\mathcal{X}}h_{X}\), we obtain an isomorphism

\[\mathrm{N}_{\mathcal{Z}}\mathcal{X}\times_{\mathcal{Z}}h_{Z}\simeq h_{\mathrm{N}_{Z}X}.\]

Since \(\mathrm{N}_{Z}X\) is a vector bundle over \(\underline{Z}\), we obtain the desired cartesian square after further Zariski localization on \(Z\).

## Appendix A Charts for log smooth morphisms

The chart theorem for log smooth morphisms [12, Theorem IV.3.3.1] is crucial for the development of the theory of log motives since it allows us to understand the structure of log smooth morphisms more concretely. However, the theorem is etale local on the source. In this section, we explain how the theorem can be made Zariski local on the source under a stronger assumption.

**Definition A.1**.: Let \(X\) be an fs log scheme, and let \(x\) be a point of \(X\). A chart \(P\) of \(X\) is called _neat at \(x\)_ if \(P\) is sharp and \(P\to\overline{\mathcal{M}}_{X,x}\) is an isomorphism. See [12, Definition II.2.3.1] for other equivalent conditions.

**Definition A.2**.: Let \(f\colon X\to S\) be a morphism of fs log schemes. We set

\[\mathcal{M}_{X/S}:=\operatorname{coker}(\mathcal{M}_{\underline{X}\times_{\underline{S}}S}\to\mathcal{M}_{X}),\]

where the cokernel is taken in the category of sheaves of monoids on \(X\). By [12, Proposition I.1.3.3], there is an isomorphism

\[\mathcal{M}_{X/S}^{\operatorname{gp}}\simeq\operatorname{coker}(\mathcal{M}_{\underline{X}\times_{\underline{S}}S}^{\operatorname{gp}}\to\mathcal{M}_{X}^{\operatorname{gp}}).\]

Let \(x\) be a point of \(X\). A chart \(\theta\colon P\to Q\) for \(f\) is called _neat at \(x\)_ if the induced sequence

\[0\to P^{\operatorname{gp}}\to Q^{\operatorname{gp}}\to\mathcal{M}_{X/S,x}^{\operatorname{gp}}\to 0\]

is exact. This rephrases the conditions in [12, Theorem II.2.4.4].

**Proposition A.3**.: _Let \(f\colon X\to S\) be an exact morphism of fs log schemes.
If \(\theta\colon P\to Q\) is a neat chart for \(f\) at \(x\in X\) such that \(P\) is a neat chart at \(f(x)\), then \(Q\) is a neat chart at \(x\)._

Proof.: This is proved in [12, Remark II.2.4.5] with the assumption that the induced homomorphism \(\overline{\mathcal{M}}_{S,f(x)}\to\overline{\mathcal{M}}_{X,x}\) is injective. The exactness of \(f\) implies this assumption by [12, Proposition I.4.2.1(5)].

**Proposition A.4**.: _Let \(f\colon X\to S\) be a log smooth (resp. log etale) morphism of fs log schemes, and let \(x\) be a point of \(X\). Assume that \(\mathcal{M}_{X/S,x}^{\operatorname{gp}}\) is torsion free and \(S\) has a chart \(P\). Then in a Zariski neighborhood of \(x\), \(f\) admits a neat chart \(\theta\colon P\to Q\), and the induced morphism \(X\to S\times_{\mathbb{A}_{P}}\mathbb{A}_{Q}\) is strict smooth (resp. strict etale)._

Proof.: By [12, Theorem III.1.2.7(4)], \(f\) admits a neat chart in a Zariski neighborhood of \(x\). After further Zariski localization, we may assume that \(f\) is neat at \(x\) using [12, Proposition II.2.3.7]. Then argue as in the proof of [12, Theorem IV.3.3.1(3)] to conclude.

## Appendix B Exact monomorphisms

**Definition B.1**.: For an fs monoid \(P\) and a ring \(R\), we set \(\mathbb{A}_{P,R}:=\operatorname{Spec}(P\to R[P])\), see [12, Definition III.1.2.3]. If \(I\) is an ideal of \(P\), we set

\[\mathbb{A}_{(P,I),R}:=\mathbb{A}_{P,R}\times_{\operatorname{Spec}(R[P])}\operatorname{Spec}(R[P]/I).\]

If \(P\) is sharp, we set

\[\operatorname{pt}_{P,R}:=\mathbb{A}_{(P,P^{+}),R},\]

where \(P^{+}\) denotes the ideal of non-units of \(P\). We often omit \(R\) in this notation when it is clear from the context.

**Lemma B.2**.: _Let \(\theta\colon P\to Q\) be a homomorphism of sharp fs monoids, and let \(k\) be a field. If the induced morphism \(f\colon\operatorname{pt}_{Q,k}\to\operatorname{pt}_{P,k}\) is an exact monomorphism of fs log schemes, then \(\theta\) is an isomorphism._

Proof.: Let us omit \(k\) for simplicity of notation. Consider the cartesian square

\[\begin{CD}Z@>{}>{}>\operatorname{pt}_{Q}\times\operatorname{pt}_{Q}\\ @V{}V{}V@V{}V{}V\\ \mathbb{A}_{Q\oplus_{P}Q}@>{}>{}>\mathbb{A}_{Q}\times\mathbb{A}_{Q},\end{CD}\]

where \(Z:=\operatorname{pt}_{Q}\times_{\operatorname{pt}_{P}}\operatorname{pt}_{Q}\). If \((q_{1},q_{2})\in Q\oplus_{P}Q\) is equal to \(0\), then there exists \(p\in P^{\operatorname{gp}}\) such that \(q_{1}=p\) and \(q_{2}=-p\) in \(Q^{\operatorname{gp}}\). This implies \(q_{1},q_{2}\in Q^{*}=0\). Hence the two inclusions \(Q\rightrightarrows Q\oplus_{P}Q\) send \(Q^{+}\) into \((Q\oplus_{P}Q)^{+}\), so \(Z\) contains \(\mathbb{A}_{(Q\oplus_{P}Q,(Q\oplus_{P}Q)^{+})}\) as a strict closed subscheme. It follows that \(Z\) contains a point \(z\) such that \(\overline{\mathcal{M}}_{Z,z}\simeq\overline{Q\oplus_{P}Q}\). Since \(f\) is a monomorphism, the diagonal morphism \(\operatorname{pt}_{Q}\to Z\) is an isomorphism. This gives an isomorphism

\[\overline{Q\oplus_{P}Q}\simeq Q.\]

Due to [12, Proposition I.1.4.7(2), Corollaries I.2.3.8, I.4.2.16], we have an equality

\[2\operatorname{rank}(Q^{\operatorname{gp}})-\operatorname{rank}(P^{\operatorname{gp}})=\operatorname{rank}((\overline{Q\oplus_{P}Q})^{\operatorname{gp}}).\]

We deduce that \(P^{\operatorname{gp}}\) and \(Q^{\operatorname{gp}}\) have the same rank. By [12, Proposition I.4.2.1(5)], \(\theta\) is injective. Together with [12, Proposition I.4.3.5], we see that \(\theta\) is Kummer. Assume that \(q\in Q\) is not in \(P\). Since \(\theta\) is Kummer, \(nq\in P\) for some integer \(n>1\).
The element \((q,-q)\in Q^{\operatorname{gp}}\oplus_{P^{\operatorname{gp}}}Q^{\operatorname{gp}}\) satisfies \(n(q,-q)=0\), which implies \((q,-q)\in(Q\oplus_{P}Q)^{*}\) since \(Q\oplus_{P}Q\) is saturated. It follows that \((Q\oplus_{P}Q)^{*}\) is nontrivial. The underlying scheme of \(\mathbb{A}_{(Q\oplus_{P}Q,(Q\oplus_{P}Q)^{+})}\) is \(\mathbb{A}_{(Q\oplus_{P}Q)^{*}}\), which is not the spectrum of \(k\). This contradicts to the fact that \(\operatorname{pt}_{Q}\to Z\) is an isomorphism. Hence \(\theta\) is an isomorphism. **Proposition B.3**.: _Let \(f\colon Y\to X\) be an exact monomorphism of fs log schemes. Then \(f\) is strict._ Proof.: Suppose that \(f\) is not strict at a point \(y\in Y\). We set \(x:=f(y)\), \(P:=\overline{\mathcal{M}}_{X,x}\), and \(Q:=\overline{\mathcal{M}}_{Y,y}\). Let \(\theta\colon P\to Q\) be the induced homomorphism. The restriction of \(f\) at \(x\) and \(y\) is a morphism \(g\colon\operatorname{pt}_{Q,k^{\prime}}\to\operatorname{pt}_{P,k}\) for some fields \(k\) and \(k^{\prime}\). The morphism \(g\) is an exact monomorphism too. Consider the canonical factorization \[\operatorname{pt}_{Q,k^{\prime}}\xrightarrow{g^{\prime}}\operatorname{pt}_{P,k ^{\prime}}\to\operatorname{pt}_{P,k}.\] Since \(g\) is an exact monomorphism, \(g^{\prime}\) is an exact monomorphism. By Lemma B.2, \(\theta\) is an isomorphism. It follows that \(g^{\prime}\) is an isomorphism. Hence \(g\) is strict, which is a contradiction. ## Appendix C Strict closed immersions of log smooth schemes **Proposition C.1**.: _Let \(i\colon Z\to X\) be a strict closed immersion in \(\mathbf{Ism}/S\), where \(S\) is an fs log scheme. Then strict etale locally on \(X\), there exists a cartesian square_ (C.1) _with \(Y\in\mathbf{Ism}/S\) such that \(i_{0}\) is the zero section and \(u\) is strict etale. If \(\mathcal{M}_{X/S}^{\operatorname{gp}}\) is torsion free, then (C.1) exists Zariski locally on \(X\)._ Proof.: Let \(x\) be a point of \(Z\), and let \(\mathcal{I}\) be the sheaf of ideals on \(X\) defining \(Z\). By [12, Lemma IV.1.2.10, Theorem IV.3.2.2], we can choose local sections \(m_{1},\dots,m_{r}\) of \(\mathcal{M}_{X}\) and \(m_{r+1},\dots,m_{r+s}\) of \(\mathcal{I}\) such that \(\{dm_{1},\dots,dm_{r+s}\}\) (resp. \(\{dm_{1},\dots,dm_{r}\}\)) gives rise a basis of \(\Omega^{1}_{X/S,x}\) (resp. \(\Omega^{1}_{Z/S,x}\)). Zariski locally on \(X\), the local sections \(m_{1},\ldots,m_{r+s}\) are global sections. Hence Zariski locally on \(X\), we obtain a cartesian square such that the right vertical morphism is the zero section. According to the proof of [12, Theorem IV.3.2.6], \(v\) is log etale. We may assume that \(S\times\mathbb{A}_{\mathbb{N}^{r}}\) admits a chart \(P\). By [12, Theorem IV.3.3.1], strict etale locally on \(X\), \(v\) admits a chart \(\theta\colon P\to Q\) such that the induced morphism \(X\to(S\times\mathbb{A}_{\mathbb{N}^{r}})\times_{\mathbb{A}_{P}}\mathbb{A}_{Q} \times\mathbb{A}^{s}\) is strict etale. By setting \(Y:=(S\times\mathbb{A}_{\mathbb{N}^{r}})\times_{\mathbb{A}_{P}}\mathbb{A}_{Q}\), we obtain (C.1). If \(\mathcal{M}^{\text{gp}}_{Y/X}\) is torsion free, use Proposition A.4 instead. ## Appendix D Blow-ups along strict closed subschemes **Definition D.1**.: Suppose that \(i\colon Z\to X\) is a strict closed immersion in \(\mathbf{lSch}/S\), where \(S\) is an fs log scheme. 
The _blow-up of \(X\) along \(Z\)_ is defined to be

\[\operatorname{Bl}_{Z}X:=\operatorname{Bl}_{Z}\underline{X}\times_{\underline{X}}X,\]

where \(\operatorname{Bl}_{Z}\underline{X}\) denotes the usual blow-up.

**Lemma D.2**.: _Let \(i\colon Z\to X\) be a strict closed immersion in \(\mathbf{lSm}/S\), where \(S\) is an fs log scheme. Then \(\operatorname{Bl}_{Z}X\times_{X}Z\) is an effective Cartier divisor on \(\operatorname{Bl}_{Z}X\), and we have \(\operatorname{Bl}_{Z}X,\operatorname{Bl}_{Z}X\times_{X}Z\in\mathbf{lSm}/S\)._

Proof.: The question is strict etale local on \(X\) by [6, Theorem 0.2], so we may assume the existence of the diagram (C.1). Since the morphism \(u\) in this diagram is strict flat, there is a canonical isomorphism

(D.1) \[\operatorname{Bl}_{Z}X\simeq\operatorname{Bl}_{Y}(Y\times\mathbb{A}^{s})\times_{Y\times\mathbb{A}^{s}}X.\]

To conclude, observe that the claim for the strict closed immersion \(Y\to Y\times\mathbb{A}^{s}\) is clear.

**Lemma D.3**.: _Let \(i\colon Z\to X\) be a strict closed immersion in \(\mathbf{lSm}/S\), where \(S\) is an fs log scheme. Then for every log smooth morphism \(X^{\prime}\to X\), there is a canonical isomorphism_

\[\operatorname{Bl}_{Z^{\prime}}X^{\prime}\simeq\operatorname{Bl}_{Z}X\times_{X}X^{\prime},\]

_where \(Z^{\prime}:=Z\times_{X}X^{\prime}\)._

Proof.: The question is strict etale local on \(X\) and \(X^{\prime}\), so we may assume that (C.1) exists and \(X\) admits a chart \(P\). By [12, Theorem IV.3.3.1], we may also assume that there exists a chart \(P\to Q\) of \(X^{\prime}\to X\) such that the induced morphism

\[X^{\prime}\to X\times_{\mathbb{A}_{P}}\mathbb{A}_{Q}\]

is strict etale. We set \(Y^{\prime}:=Y\times_{\mathbb{A}_{P}}\mathbb{A}_{Q}\). There are canonical isomorphisms

\[\operatorname{Bl}_{Z^{\prime}}X^{\prime}\simeq\operatorname{Bl}_{Y^{\prime}}(Y^{\prime}\times\mathbb{A}^{s})\times_{Y^{\prime}\times\mathbb{A}^{s}}X^{\prime}\simeq\operatorname{Bl}_{\{0\}}(\mathbb{A}^{s})\times_{\mathbb{A}^{s}}X^{\prime}.\]

Together with (D.1), we obtain the desired isomorphism.

**Example D.4**.: The conclusion of Lemma D.3 is wrong if we do not assume \(Z\in\mathbf{lSm}/S\). For example, suppose

\[X:=(\mathbb{A}^{2},H_{1}+H_{2}),\;X^{\prime}:=(\mathrm{Bl}_{\{0\}}\mathbb{A}^{2},\widetilde{H}_{1}+\widetilde{H}_{2}+E),\text{ and }Z:=X\times_{\mathbb{A}^{2}}\{0\},\]

where \(H_{1}\) and \(H_{2}\) are the axes, \(\widetilde{H}_{1}\) and \(\widetilde{H}_{2}\) are their strict transforms, and \(E\) is the exceptional divisor. While \(\mathrm{Bl}_{Z}X\times_{X}X^{\prime}\) is not irreducible, \(\mathrm{Bl}_{Z^{\prime}}X^{\prime}\simeq X^{\prime}\) is irreducible.
2303.08248
An Intrusion Detection Mechanism for MANETs Based on Deep Learning Artificial Neural Networks (ANNs)
Mobile Ad-hoc Network (MANET) is a distributed, decentralized network of wireless portable nodes connecting directly without any fixed communication base station or centralized administration. Nodes in MANET move continuously in random directions in an arbitrary manner, which presents numerous challenges to these networks and makes them more susceptible to different security threats. Due to the decentralized nature of their overall architecture, combined with the limitation of hardware resources, these infrastructure-less networks are more susceptible to different security attacks such as black hole attack, network partition, node selfishness, and Denial of Service (DoS) attacks. This work aims to present, investigate, and design an intrusion detection predictive technique for Mobile Ad hoc networks using deep learning artificial neural networks (ANNs). A simulation-based evaluation and a deep ANNs modelling for detecting and isolating a Denial of Service (DoS) attack are presented to improve the overall security level of Mobile ad hoc networks.
Mohamad T Sultan, Hesham El Sayed, Manzoor Ahmed Khan
2023-03-14T21:45:12Z
http://arxiv.org/abs/2303.08248v1
# An Intrusion Detection Mechanism for MANETs Based on Deep Learning Artificial Neural Networks (ANNs)

###### Abstract

Mobile Ad-hoc Network (MANET) is a distributed, decentralized network of wireless portable nodes connecting directly without any fixed communication base station or centralized administration. Nodes in MANET move continuously in random directions in an arbitrary manner, which presents numerous challenges to these networks and makes them more susceptible to different security threats. Due to the decentralized nature of their overall architecture, combined with the limitation of hardware resources, these infrastructure-less networks are more susceptible to different security attacks such as black hole attack, network partition, node selfishness, and Denial of Service (DoS) attacks. This work aims to present, investigate, and design an intrusion detection predictive technique for Mobile Ad hoc networks using deep learning artificial neural networks (ANNs). A simulation-based evaluation and a deep ANNs modelling for detecting and isolating a Denial of Service (DoS) attack are presented to improve the overall security level of Mobile ad hoc networks.

Network Protocols, deep learning, ANN, intrusion detection

## 1 Introduction

Significant advances in wireless networking systems have recently made them among the most innovative topics in computer technologies. Users can access a wide range of information and services through mobile wireless networks. The latest technology developments in wireless data communication devices have led to cheaper prices and larger data rates. Compared to conventional wired networking, wireless networking provides a great deal of flexibility, efficiency and cost effectiveness, which makes it a good alternative for providing efficient network connectivity. The development of Mobile Ad hoc networks (MANETs) [1][2] introduced reliable, cost-effective and efficient techniques that exploit the availability of mobile hosts in the absence of a fixed communication infrastructure. In MANET, the mobile nodes are independent and can effortlessly initiate a direct communication channel with each other as they move freely around the infrastructure-less network in different directions and at different speeds. The ad hoc network functions in a very specific way, and node cooperation is its main element for forwarding communication-related information from source nodes to the intended destination nodes. Nodes in MANET rely entirely on batteries as their energy source and can move arbitrarily with no restrictions. Mobile nodes can leave or join the dynamic network at any time and can take independent decisions without relying on any centralized authority. Because the network abandons the availability of any fixed infrastructure as a necessary factor for communication, the transmission and communication range of the entire network is determined by the transmission range of the individual mobile nodes, and it is usually smaller than the range of cellular networks. Nevertheless, in cellular networks, to avoid interference and provide guaranteed bandwidth, each communication cell depends on various communication frequencies available from the one-hop adjacent neighbouring cells.
This expands the communication range of the cellular network, especially when different communication cells are joined together to offer radio and communication coverage over a wide geographical area. However, in MANET each mobile node has a wireless interface and interconnects with other nodes over a wireless channel. Mobile nodes in MANET range from portable laptops to smartphones or any other digital devices with a wireless communication antenna. Among the numerous advantages the infrastructure-less ad hoc network offers are robustness, efficiency, and inherent support for dynamic random mobility. Fig. 1 illustrates the architecture of MANET.

Figure 1: MANET architecture

The special characteristics of MANETs have made their deployment a preferable choice for many fields such as military battlefield operations, natural disasters and remote areas. However, due to their openness and decentralized structure, MANETs have become vulnerable to different kinds of malicious threats and attacks. Their flexibility brings new threats to their security. The threats that affect mobile ad hoc networks can be categorized in many ways, based on the behaviour, level and position of the specific attack, flaws in the used security algorithms, and weaknesses in the structure of the developed routing protocols. Attacks such as the black hole attack, network partition, node selfishness, malicious nodes, and denial of service (DoS) are among the many popular threats that MANETs face [3]. The shared goal of those threats is to degrade the overall network performance. Researchers have become more focused on how to enhance and provide a secure and reliable mobile ad hoc network. Several techniques have been developed, such as signature-based, statistical anomaly-based, and protocol analysis. This research focuses on deep learning intrusion detection techniques in MANET based on artificial neural networks. The paper concentrates on a very specific attack, the denial-of-service (DoS) attack, which can easily disrupt MANET operations.

## 2 Related Work

Identifying malicious and misbehaving mobile nodes is necessary to protect the MANET network. Researchers have studied the security threats of mobile ad hoc networks to make MANET more secure and reliable. In describing the security threats, many researchers make their own categorization. MANET threats are classified into two levels. The first level is attacks on the basic mechanism, resulting from nodes being captured or compromised or from the misbehaviour of nodes that do not follow the rules of cooperative algorithms. The second level is attacks on the security mechanism, which exploit the vulnerabilities of the security mechanism employed in MANET. In [4] and [5] researchers have classified security attacks in correspondence with the communication layers, which means that each layer has its own threats and vulnerabilities. Table 1 shows security threats at the communication layers. In [6] the authors studied the effect of misbehaving nodes on the MANET network. In this research a new method was used to efficiently detect and separate malicious mobile nodes from the network, so that the network performance remains balanced and stable regardless of the presence of colluding nodes. The malicious behaviour is represented by suspicious behaviour of unauthorized mobile nodes that can inflict damage on other nodes in the network intentionally or unintentionally.
An example of this could include the scenario where the aim of the mobile node is not the attack itself but to gain unauthorized benefits over other nodes. The researchers in [7] proposed a black hole attack identification mechanism in MANET using fuzzy-based intrusion detection techniques. Their main target was to detect the black hole attack in the mobile network, which is considered a very popular type of malicious attack that disrupts the operations of MANETs. The researchers developed an adaptive neuro-fuzzy inference system based on the popular particle swarm optimization (PSO) technique. Similarly, using fuzzy logic techniques, the authors in [8] used a new intrusion detection technique called the node blocking mechanism to differentiate two popular attacks that target the network, the grey hole and the black hole attacks. The authors in [9] proposed a system that uses malicious behaviour-detection ratios to enhance security in mobile networks using modified zone-based intrusion detection techniques. In [10] another intrusion detection system was proposed using a smart approach for intrusion identification and isolation. This system detects an attack on the ad hoc network by employing a deep learning neural network with a bootstrapped optimistic algorithm. In this system each mobile node submits a finger-vein biometric, a user id, and its latitude and longitude; intrusion detection is then executed to verify these entities and detect any suspicious behaviour in the network.

\begin{table} \begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Layers** & **Attacks** \\ \hline Application layer & Selfish nodes attacks, Malicious attacks like viruses, worms and spyware. \\ \hline Transport layer & Session hijacking, Session control, flooding attack and ACK-storm attack on TCP \\ \hline Network layer & Cache poisoning attacks, Routing protocols attacks (e.g., AODV, DSR, TORA), Packet dropping attacks, blackhole attack, node impersonation, denial-of-service DoS attack. \\ \hline Data link layer & Man in the middle attack, MAC interruption (802.11), WEP vulnerability. \\ \hline Physical layer & Eavesdropping, jamming, traffic interceptions. \\ \hline \end{tabular} \end{table} Table 1: Communication Layers Security Threats

## 3 MANET Routing Protocols

The random, arbitrary movement of mobile nodes in MANET, due to the absence of any fixed communication infrastructure, keeps the network's topology in constant change. This rapid and dynamic change in topology makes routing in MANET a challenging task. Thus, an effective routing strategy is required to smoothly accomplish the forwarding of packets from the source to the destination. The routing information changes frequently to reflect the dynamic changes in network topology. There are numerous potential paths from source to destination, and the routing protocol algorithms discover a route and transport the data packets to the appropriate destination. Numerous routing algorithms for MANET have been developed [11]. The performance of the ad hoc network is highly associated with the efficiency of the routing protocols used. The proposed routing algorithms for MANET can be divided into three categories based on their behaviour and functionality: proactive, reactive and hybrid routing algorithms [11][12]. The basic concept of these routing algorithms is to discover the shortest route between source and destination.
### AODV and Targeted Attacks

In MANETs, one of the widely used routing protocols that follows the reactive routing mechanism is the Ad hoc On-demand Distance Vector (AODV) [13]. The AODV protocol sets up routes using a query cycle consisting of Route Request (RREQ) and Route Reply (RREP) messages. When a node needs to deliver data packets to a destination and lacks a route with a sufficiently recent sequence number, it broadcasts an RREQ message to its neighbours, and this message is retransmitted until the requested route information becomes available in some form. After receiving the RREQ message, every node builds a path back to its original sender. After receiving an RREQ message, the destination will respond with an RREP message that includes the destination's current sequence number and the number of hops taken to get there [14][17]. Keep in mind that if a given intermediate node has a newly discovered route to the final destination, it will not relay the RREQ to its neighbours but will instead send an RREP back in the direction of the source. Each node that gets the RREP message sets up a new forward route to the final destination. Thus, rather than storing the whole path, each node simply stores the information necessary for the next hop. When a node detects that it has received a duplicate RREQ, it discards the packet. As an added measure, AODV verifies the freshness of routes by using sequence numbers: the destination routes are only altered if a new path to a given destination has a higher sequence number than the previous path, or has the same sequence number but fewer hops. Moreover, when a link failure or routing problem happens in the network, another technique is executed in AODV, the route error (RERR) message [13][14], which sends warning error packets to the source and destination nodes in the ad hoc network. This section discusses examples of attacks on the AODV routing protocol that are used in this research.

* Packet dropping attack: In this type of attack, malicious mobile nodes may drop all the legitimate incoming data packets that are mainly employed in the route discovery and route maintenance stages (RREQ, RREP and RERR). This usually happens with the aim of disrupting network services.
* Denial-of-Service (DoS): One of the most popular known attacks. This type of network attack renders a resource or a service inaccessible in the network [15]. The main aim of this malicious attack is not to gain unauthorized access for the perpetrator; it is rather an act of vandalism to shut down a machine, resource or network. This attack usually results in legitimate users being unable to access the available resources. In the AODV routing algorithm, an attacking malicious node that wants to disrupt MANET resources begins to frequently broadcast route request (RREQ) messages while the route discovery process is taking place (a simple rate-based illustration of this behaviour is sketched below). The main goal of the DoS attack is to consume the battery power of nodes and disrupt or deny access of legitimate nodes to specific network services.

## 4 Methodology

The mobile ad hoc networks are highly vulnerable to different kinds of security threats due to their dynamic randomness, decentralization, and lack of central authority. The aim of this work is to propose an effective intrusion detection mechanism for MANETs using deep learning techniques such as artificial neural networks (ANNs).
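To make the RREQ flooding behaviour described above concrete, the following is a minimal Python sketch of ours, not the paper's detector: it flags nodes whose RREQ broadcast rate exceeds a fixed threshold within a sliding time window, where the window length and threshold are hypothetical values chosen only for illustration.

```python
from collections import defaultdict, deque

WINDOW_S = 1.0   # sliding-window length in seconds (assumed value)
MAX_RREQ = 10    # RREQs tolerated per window before a node looks suspicious (assumed)

class RreqRateMonitor:
    def __init__(self):
        self.times = defaultdict(deque)  # node id -> timestamps of recent RREQs

    def observe(self, node_id, t):
        """Record one RREQ broadcast; return True if node_id exceeds the rate."""
        q = self.times[node_id]
        q.append(t)
        while q and t - q[0] > WINDOW_S:  # drop events outside the window
            q.popleft()
        return len(q) > MAX_RREQ

monitor = RreqRateMonitor()
# Node 3 floods 50 RREQs in half a second; node 1 sends a single request.
events = [(1, 0.10)] + [(3, 0.10 + i * 0.01) for i in range(50)]
flagged = {n for n, t in events if monitor.observe(n, t)}
print("suspected flooders:", flagged)  # -> {3}
```

A fixed threshold like this is brittle when legitimate traffic varies, which is part of the motivation for learning the decision boundary from simulation data with an ANN instead, as done in the remainder of this section.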
The denial-of-service attack considered in this research is implemented in a way where a malicious intruder node injects its malicious data packets in large volumes into the mobile ad hoc network, which leads to a disruption and denial of services at the destination node. The main routing protocol used in this study to perform the simulation experiments is AODV, chosen due to its popularity in MANET. In this study and in our experimental setup, all the factors and issues that have an impact on link stability in the network are considered and analysed. The main attack considered in this research is the Denial of Service (DoS) attack, which aims to render MANET resources and services inaccessible by overloading them with junk packets in a two-way network communication setting. This type of attack can happen over both wired and wireless networks. However, wireless networks are more susceptible due to their radio nature and more loosely specified restrictions, as is the case for MANETs. Fig. 2 shows an example of a DoS attack where intruder D floods the host node C with extra malicious packets. The deep learning ANNs are used to detect intrusions based on abnormal network activity, and the attributes, labels and features are selected from the packets generated during the network simulation. Given the learning and generalization attributes of artificial neural networks (ANNs), and their ability to obtain knowledge from data and infer new information, they are well suited to manage such tasks. The performance of the proposed intrusion detection system is illustrated by means of simulation using the AODV routing protocol in MANETs, together with ANN modelling for attack detection in a simulated MANET environment.

Figure 2: Example of DoS attack, the host node C flooded by Intruder D

### Implementation

The implementation of the proposed research is illustrated by means of simulations using the NS-2 network simulator on a Linux Ubuntu 10.04 platform to evaluate the performance of a MANET with 15 mobile nodes forming a network. Attack detection is performed using a simulated MANET environment and ANN modelling. Once a simulation process is completed, NS2 reports the simulation details by generating a large trace file holding all the events sequentially, line by line. For this reason, the event-driven technique is used in NS2, as it keeps all occurring events as records, and all those records can be traced and analysed for evaluation purposes. In NS2 there are typically two kinds of output data records that can help in further investigation of a specific simulation scenario. The first one is a trace file, which records the event traces that assist in studying the performance of the network by processing and analysing them using numerous methods. The second one is a network animator (NAM) file, which assists in observing the interactions and movements of the mobile nodes visually. Fig. 3 illustrates the complete procedure of how a specific simulation is conducted using NS2.

#### 4.1.1 Mobility Model

The mobility model plays a very important role in MANET simulations. The considered model should attempt to simulate the movement, behaviour, and actions of real nodes in MANET. However, the mobile nodes in MANET move in a very dynamic, arbitrary and decentralized manner. It is a dynamic network of autonomous decentralized mobile nodes.
A node in the network could join or leave at any specific time, which leads to high rates of link and topology changes. Moreover, the mobile nodes make decisions independently and behave as routers, where they can send, receive, or route information simultaneously. Thus, to model this kind of unpredictability and randomness that exists in mobile ad hoc networks, researchers have proposed different probability distribution models of MANET nodes. The most popular one, which closely represents the distribution of MANET nodes, is called the Random Waypoint Mobility Model [16]. For this model, the spatial distribution of mobile node movements is in general non-uniform. The mobility model that represents the movements of the mobile nodes is an important aspect of any simulation process, because the way these mobile nodes move and behave affects in different ways the performance of the routing protocol that these nodes utilize. The random waypoint mobility model is simple, reliable and widely used to assess the behaviour of the MANET [16]. This mobility model can closely represent the actions and movements of real mobile nodes in real conditions. There is a specific pause time in this model that applies whenever a node changes direction or velocity. When a specific wireless node starts to travel across the network, it remains in one location for a particular period of time, the pause time, before it moves to another location.

Figure 3: NS2 simulation process

The node chooses the subsequent destination randomly in the simulation region once that specified pause time has expired. These mobile nodes also select a speed that is generally specified between the minimum and maximum speed (0, Maxspeed) during the simulation process. Each node then travels to the newly chosen point at that selected speed. When the mobile node arrives at the targeted place, it waits again for the pause time before selecting another new waypoint and another speed, and then it initiates the same procedure all over again. Numerous researchers have adopted and implemented this mobility model in their studies. The movement of individuals in a cafeteria or shopping mall, and the movement of nodes in a conference, are some of its practical examples. Fig. 4 presents an illustration of the movement pattern of a mobile node which begins at a randomly selected location (133, 180) and chooses a speed between 0 and 10 m/s using the random waypoint mobility model.

### Simulation Setup and Parameter Selection

A scenario file that defines the exact motion of every node in the network, along with the exact number of packets generated by each node, is taken as an input for every simulation run. This is accompanied by the exact time at which each change in motion or packet origination is to occur. The simulation is done using the NS2 simulator as shown in Table 2 below.

Figure 4: Random waypoint mobility for node movement pattern

In this simulation execution, 15 nodes are deployed for MANET within a terrain of 500m X 500m using random waypoint mobility for the purpose of realizing a real-time simulation, and the simulation runs for a maximum experiment duration of 200s, with a maximum speed of 20m/s. The "Maximum Speed" entry in the simulation parameters table implies that a node's speed varies from "0 m/s", a stationary paused node with no movement, to a maximum speed of "20 m/s".
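For concreteness, the following is a minimal Python sketch of how one node's random waypoint schedule can be generated under these parameters. It is our illustration only (NS-2 scenario files are typically produced with tools such as its setdest utility), and the 2 s pause time is an assumed value, since the pause used in the experiments is not stated.

```python
import random

# Terrain, maximum speed, and duration follow Table 2; PAUSE is assumed.
AREA, MAX_SPEED, PAUSE, SIM_TIME = 500.0, 20.0, 2.0, 200.0

def random_waypoint(seed=1):
    """Return one node's (time, x, y) waypoint schedule."""
    rng = random.Random(seed)
    t, x, y = 0.0, rng.uniform(0, AREA), rng.uniform(0, AREA)
    schedule = [(t, x, y)]
    while t < SIM_TIME:
        t += PAUSE                           # wait before choosing a new target
        nx, ny = rng.uniform(0, AREA), rng.uniform(0, AREA)
        speed = rng.uniform(0.1, MAX_SPEED)  # avoid a degenerate 0 m/s leg
        dist = ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5
        t += dist / speed                    # arrival time at the waypoint
        x, y = nx, ny
        schedule.append((t, x, y))           # last entry may exceed SIM_TIME
    return schedule

for t, x, y in random_waypoint()[:4]:
    print(f"t = {t:7.2f} s  position = ({x:6.1f}, {y:6.1f})")
```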
Since we use the Random Waypoint Mobility Model, which specifies how users' or mobile nodes' movement, location, velocity, and acceleration change over time, the mobile node speed in our simulation environment can change at any random time within (0 - 20 m/s). The MAC layer used is IEEE 802.11b. Once the simulation is finished, the generated output files, such as trace files, should be analysed to extract useful data and statistics. As stated earlier, a pair of files is produced once the simulation process ends. The first one is an event trace file, which records all simulation events, while the second one is a network visualization file, which records the data that can be used in network animation. These event trace files are in raw format, and analysis and assessment should be performed in order to extract the required information. Producing both files is CPU intensive during the simulation process, and they occupy a considerable amount of memory. The example excerpt in Fig. 5 shows what a generated trace file looks like after a simulation run. The excerpt indicates that a data packet was sent (s) at time (t) 2.556838879 sec, from the main source node (Hs) 1 to target mobile node (Hd) 2. The source node id (Ni) is 1, the source node X axis coordinate (Nx) is 342.47, while the Y axis coordinate (Ny) is 4.35.

\begin{table} \begin{tabular}{|l|l|} \hline \multicolumn{2}{|c|}{**Simulation parameters**} \\ \hline **Parameter** & **Selected Value** \\ \hline Routing Protocol & Ad hoc on demand distance vector (AODV) \\ \hline Platform & Linux distribution ubuntu version 10.04 \\ \hline Number of Nodes & 15 \\ \hline Simulation Software & NS-2 \\ \hline MAC Layer Protocol & IEEE 802.11b \\ \hline Simulation Area & 500m X 500m \\ \hline Traffic Generation Model & CBR (Constant Bit Rate) \\ \hline Size of packet & 512 bytes \\ \hline Mobility Model & Random Waypoint \\ \hline Maximum Speed & 0-20 m/s \\ \hline No. of Connections & 2 to 10 \\ \hline Duration of experiment & 200 sec \\ \hline Type of Antenna & Antenna/OmniAntenna \\ \hline \end{tabular} \end{table} Table 2: Simulation Parameters

Figure 5: Excerpt of trace file
Neural Network model is trained by applying the simulation data as inputs to the ANN. Feed Forward Back Propagation (FFBP) in the Neural network toolbox is used and the artificial neural network is implemented with four inputs, one output layer including two middle hidden layers. The network training in this setup is conducted using back propagation (BP) learning process, The (TRAINLM) training function of Levenberg-Marquardt backpropagation is used in addition to LEARNGDM as an adaptive learning function. Different transfer functions are available like Purelin, Log-Sigmoid, and Tan-Sigmoid. The main aim of the transfer function is to be used for estimating the output of a specific network layer from its initial net input. LogSigmoid, and Tan-Sigmoid are used in this study. Fig. 6 below shows an example of different transfer functions. A screenshot of how the artificial neural network setup and design is presented in Fig 7 and Fig. 8 respectively. All the setup parameters must be specified before running the artificial neural network. ### Modelling Artificial Neural Networks for DOS Attack Detection Given to the learning and generalizable attributes of feedforward neural networks with back propagation training algorithm, those deep learning networks are used for the purpose of DoS intrusion detection and to identify and predict any unusual activity and the features are selected from the packets generated in the simulation process. The number of input nodes will be determined from the input data set. The number of nodes in the hidden layers in the neural network are varied frequently during the experiments to achieve a highly accurate and stable neural network model and to avoid any overfitting. The structural design of the proposed deep learning neural network consists of two types of different network setups. The first one has 4 inputs and 15 neurons in the first hidden layer and 10 neurons in the second hidden layer and one output. While the second network has 4 inputs and 20 neurons in the first hidden layer and 10 neurons in the second hidden layer and one output. Training using feed forward back propagation (FFBP) in ANN is presented in Fig. 9 and the process is indicated as follows: Figure 8: Neural network design Figure 7: Neural network setup * The model selects training epoch from the training set and initialize weights and biases. * The model the calculates the output of the network. * Then, the error between the network output and the desired output is calculated. * The model modifies the weights of the network in a way that minimizes the error. * The model repeats the steps for each input in the training set until the error for the entire set is acceptable low. ## 5 Performance Results The deep learning technique which is used to design the ANN uses the backpropagation training algorithm to predicts a specific output. Then, this output is compared with actual known class label to measure the difference in error between the predicted and actual outputs. The obtained error is sent back to the neurons for adjustments. FFBP measures the variance of the residuals in a repeated process. The root mean squared error is just one way to calculate this error. The method of squaring the sum of the error is used to prevent the cancel out the positives and negatives values during the sum of the error of all the nodes. 
We used the root mean squared error instead of the mean absolute error (MAE) to measure the standard deviation of errors as the gradient descent requires the derivative of that loss function to be calculated to minimize the loss function and generate better outputs. The results are presented in table 3 below. The performance results of the designed deep learning model are shown in the table based on the training data. The selection process of the best performing model is based on results obtained. Figure 9: Training process We executed the neural network to detect unusual malicious activities in MANET. As it can be noticed in the performance results that two different transfer functions are used in this research Log-Sigmoid and Tan-Sigmoid. A well-trained ANN should have a very low RMSE at the end of the training phase The best result in ANNs for FFBP network with Tan-Sigmoid function is related to 4-15-10-1 network that produce RMSE\(=\)0.0452, for 14 epochs. The indication of MSE being quite small or almost close to zero is that the neural network model output and the desired output have become very close to each other for the training dataset. The rest of results are given in table 4 below. The table shows the performance results of the neural network model based on testing data. It can be noticed that the best result for neural network model using FFBP with Tan-Sigmoid function is related to 4-15-10-1 design that produce RMSE\(=\)0.0512. Both of Fig. 10 and Fig. 11 show a summary of how the designed artificial neural networks (ANN 4-15-10-1) and (ANN 4-20-10-1) performed for training and testing phases. In this research, after we selected the best model with best RMSE value, we used this model to evaluate the performance of proposed system. The goal is to distinguish a normal connection form a malicious attack connection in MANET. Thus, we used a performance measure which is the Detection Rate (DR). This measure is calculated as the number of attack connections which classified correctly as an attack over the total number of connections in the network. Using this measure, we were able to detect the attack in the network with high accuracy as shown in the table below. It can be noticed that as the number of connections increases the detection rate decreases due to higher false positive rates. 
Table 3: Artificial neural network results based on training data

| Network | Training / Learning functions | Layers | Transfer function | RMSE | Epoch |
| --- | --- | --- | --- | --- | --- |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | LogSigmoid | 0.1924 | 10 |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | LogSigmoid | 0.1901 | 12 |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | TanSigmoid | 0.0452 | 14 |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | TanSigmoid | 0.0618 | 16 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | LogSigmoid | 0.1927 | 10 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | LogSigmoid | 0.1801 | 12 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | TanSigmoid | 0.0492 | 14 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | TanSigmoid | 0.0835 | 16 |

Table 4: Artificial neural network results based on testing data

| Network | Training / Learning functions | Layers | Transfer function | RMSE |
| --- | --- | --- | --- | --- |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | LogSigmoid | 0.1998 |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | LogSigmoid | 0.1982 |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | TanSigmoid | 0.0512 |
| FFBP | TrainLM / LearnGDM | 4-15-10-1 | TanSigmoid | 0.0781 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | LogSigmoid | 0.2337 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | LogSigmoid | 0.1891 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | TanSigmoid | 0.0821 |
| FFBP | TrainLM / LearnGDM | 4-20-10-1 | TanSigmoid | 0.0935 |

Figure 10: RMSE for Training and Testing (ANN 4-15-10-1)

Figure 11: RMSE for Training and Testing (ANN 4-20-10-1)

## 6 Conclusions

This research paper focused on modelling and investigating the use of artificial neural networks (ANNs) as a means of intrusion detection in mobile ad hoc networks (MANETs). The main objective of this work was to analyse, simulate, and evaluate the use of feed-forward neural networks with back propagation (FFBP) in MANETs. An extracted dataset, generated by means of simulations of mobile ad hoc networks, is used to compute the input parameters of this approach, and the RMSE is employed as the metric to evaluate the performance of the proposed deep learning artificial neural network model. The proposed model can be utilized for detecting DoS attacks in MANETs. The best result for the FFBP network with the Tan-Sigmoid function is obtained by the 4-15-10-1 network, which produces RMSE = 0.0452 after 14 epochs on the training data and RMSE = 0.0512 on the testing data. We also used the Detection Rate (DR) as a performance measure to evaluate the selected neural network model. In future work, different types of network attacks will be considered for the purpose of intrusion detection. Another measure that could be used in the analysis is the coefficient of determination, or R-squared, which is the percentage of the variation in Y explained by the model; the higher this percentage, the better. However, the value of R-squared will always be less than one, irrespective of whether the values in the dataset are small or large.

## Conflicts of Interest

The authors declare no conflict of interest.

## Acknowledgements

This research was funded by the Emirates Center for Mobility Research (ECMR) of the United Arab Emirates University (grant number 31R271).
2304.05913
Thermal quenching of classical and semiclassical scrambling
Quantum scrambling often gives rise to short-time exponential growth in out-of-time-ordered correlators (OTOCs). The scrambling rate over an isolated saddle point at finite temperature is shown here to be reduced by a hierarchy of quenching processes. Two of these appear in the classical limit, where escape from the neighbourhood of the saddle reduces the rate by a factor of two, and thermal fluctuations around the saddle reduce it further; a third process can be explained semiclassically as arising from quantum thermal fluctuations around the saddle, which are also responsible for imposing the Maldacena-Shenker-Stanford bound.
Vijay Ganesh Sadhasivam, Andrew C. Hunt, Lars Meuser, Yair Litman, Stuart C. Althorpe
2023-04-12T15:32:14Z
http://arxiv.org/abs/2304.05913v2
# Thermal quenching of classical and semiclassical chaos ###### Abstract The growth rate of out-of-time-ordered correlators (OTOCs) provides for the notion of a quantum Lyapunov exponent, which quantifies chaos and information scrambling in quantum systems. In thermal ensembles, this growth rate is reduced from its corresponding value in the microcanonical distribution, and a temperature-dependent bound has been conjectured to exist for the quantum Lyapunov exponent. We detail a two-fold mechanism which is responsible for the reduction of the Lyapunov exponents in thermal quantum systems. ## I Introduction In the study of classical chaos in non-linear dynamical systems, Lyapunov exponents form an important class of dynamical quantifiers that capture phase-space instability. They measure the relative spread of phase-space trajectories that are initially close by, and their definition is hence inherently classical. The question of what constitutes 'quantum chaos' in systems that are classically chaotic has prompted intense interest for decades. Recently, a quantitative tool called the 'out-of-time-ordered correlator' (OTOC)[1; 2] has been introduced as a diagnostic measure of 'information scrambling'[3], which, in many systems, represents the delocalisation of information due to chaotic dynamics. OTOCs have the following structural form: \[C(t)=\langle\left[\hat{W}(t),\hat{V}(0)\right]^{\dagger}\left[\hat{W}(t),\hat{V}(0)\right]\rangle \tag{1}\] where \(\hat{W}\) and \(\hat{V}\) are hermitian operators representing local quantum information. The definition of the OTOC is due to Larkin and Ovchinnikov[4] and was inspired by a quasiclassical theory. Due to this semiclassical connection, the exponential growth rate of OTOCs provides for the notion of a 'quantum' Lyapunov exponent in systems which exhibit exponentially fast scrambling. The relation between the growth rate of this OTOC and the classical Lyapunov exponent of the underlying classical system has been a subject of discussion [5; 6; 7]. In thermal quantum systems, it has been conjectured that the quantum Lyapunov exponent satisfies a temperature-dependent bound[1]: \[\lambda_{q}(T)\leq\frac{2\pi k_{B}T}{\hbar} \tag{2}\] While there is no such constraint on the classical Lyapunov exponent, it has been pointed out[5] that, in the semiclassical limit, the growth rate of the quantum OTOC is related to the 'classical' growth rate, which is, in general, different from the classical Lyapunov exponent. In almost all contexts discussed in previous works, the relevant OTOCs were computed as a microcanonical ensemble average. For a certain class of systems described in the microcanonical ensemble, it has also been pointed out that there is a _lower bound_ condition[6] on the growth rate of the OTOC in the semiclassical limit. The upper bound condition due to Maldacena et al. [1] pertains specifically to the _canonical_ ensemble. Previous works [8; 9; 10; 11] have shown that this quantum bound has a (quantum-Boltzmann) statistical origin. In this letter, we call particular attention to the effects of thermal averaging on the growth rate of thermal OTOCs and their semiclassical approximation. The OTOCs given by eq. (1) form a specific class of non-linear response functions [12] which serve as a reliable indicator of information scrambling. It is to be noted that the double-commutator structure of eq. (1) enables it to probe operator growth, especially in thermal ensembles.
In fact, it is well-known[13; 14] from linear response theory that the thermal average of a single commutator of the form \[C_{T}(t)=\text{Tr}\left[e^{-\beta H(\hat{q},\hat{p})}\left[\hat{q}(t),\hat{q}(0)\right]\right] \tag{3}\] (where \(\hat{q}\) is the position operator) yields, in the semiclassical limit, \[\begin{split} C_{T}^{\mathrm{cl}}(t)&=-i\hbar\int dq\,dp\,e^{-\beta H(q,p)}\frac{\partial q_{t}}{\partial p}\\ &=-\frac{i\hbar\beta}{m}\int dq\,dp\,e^{-\beta H(q,p)}\,q(t)\,p(0)\end{split} \tag{4}\] a two-point correlation function (agnostic to time-ordering). By the Cauchy-Schwarz inequality for weighted inner products, this correlation function can be shown to be bounded at all temperatures and not exponentially growing: \[\left(\langle q(t)\,p(0)\rangle_{T}^{\mathrm{cl}}\right)^{2}\leq\langle q(t)^{2}\rangle_{T}^{\mathrm{cl}}\,\langle p(0)^{2}\rangle_{T}^{\mathrm{cl}} \tag{5}\] In fact, it has been shown[14] that even higher-order response functions computed at a single time can be re-expressed in terms of two-point time-correlation functions, which implies that they cannot grow exponentially. However, it is well-known from various numerical studies[15; 16] that thermal OTOCs can indeed be exponential and are hence distinct from other non-linear response functions. Thus, it is the double-commutator structure of the OTOC definition, which yields two stability-matrix elements in the semiclassical limit, that results in exponential growth. We discuss the exponential behaviour of thermal OTOCs as a function of temperature. ## II Growth rate of thermal OTOCs We start with a thermal OTOC of the form: \[C_{T}^{\mathrm{qs}}(t)=-\langle\left[\hat{q}(t),\hat{q}(0)\right]^{2}\rangle_{T} \tag{6}\] which under the classical approximation yields: \[C_{T}^{\mathrm{cl}}(t)=\hbar^{2}\int dq\,dp\,e^{-\beta H(q,p)}\left|\frac{\partial q_{t}}{\partial p}\right|^{2} \tag{7}\] Here, we consider a first-quantized one-dimensional Hamiltonian \(H(q,p)=T(p)+V(q)\) separable in the position and momentum variables. If the classical phase-space dynamics has a maximal Lyapunov exponent (say \(\lambda_{\mathrm{cl}}=\lambda\)), a 'naive' semiclassical approximation[15] of the thermal OTOC, which neglects the effects of (classical) thermal averaging, will predict its growth at very high temperature as: \[C_{T}^{\mathrm{cl}}(t)\sim\hbar^{2}e^{2\lambda t} \tag{8}\] However, it has been established [11; 15] that at high temperatures we indeed have: \[C_{T}^{\mathrm{cl}}(t)\sim e^{\lambda t} \tag{9}\] An explanation for this halving of the growth rate can be provided along the same lines as eq. (4). The time evolution of the stability-matrix element \(M_{qp}\) in eq. (7) (for simplicity, in one dimension) is given by the following initial value problem for a second-order ODE: \[\begin{split}&\frac{d^{2}}{dt^{2}}M_{qp}=-\frac{1}{m}\frac{\partial^{2}V(q)}{\partial q^{2}}M_{qp}\\ & M_{qp}(0)=0\qquad\dot{M}_{qp}(0)=\frac{1}{m}\end{split} \tag{10}\] If the Hessian is assumed to be constant and negative, the solutions of the above ODE are exponential; if it is assumed to be constant and positive, the solutions are trigonometric.
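As a concrete illustration of Eq. (10), the following is a minimal Python sketch (a toy under stated assumptions, not the calculation performed in the works cited here) that integrates the stability-matrix element \(M_{qp}(t)=\partial q_t/\partial p\) along a single classical trajectory of a quartic-stabilized inverted oscillator, \(V(q)=-\tfrac{1}{2}m\lambda^2q^2+gq^4\), of the kind introduced later in Eq. (20); all parameter values are illustrative.

```python
import numpy as np

m, lam, g = 1.0, 1.0, 0.01        # illustrative parameters (assumptions)

def dV(q):                         # V'(q) for V(q) = -m lam^2 q^2/2 + g q^4
    return -m * lam**2 * q + 4.0 * g * q**3

def d2V(q):                        # V''(q), the Hessian that drives Eq. (10)
    return -m * lam**2 + 12.0 * g * q**2

def M_qp(q, p, t_max=8.0, dt=1e-3):
    """Integrate Hamilton's equations (leapfrog) together with Eq. (10):
    M'' = -(V''(q(t))/m) M,  M(0) = 0,  M'(0) = 1/m, along one trajectory."""
    steps = int(t_max / dt)
    M, Mdot = 0.0, 1.0 / m
    history = np.empty(steps)
    for i in range(steps):
        p -= 0.5 * dt * dV(q)
        q += dt * p / m
        p -= 0.5 * dt * dV(q)
        Mdot += dt * (-d2V(q) / m) * M   # semi-implicit Euler for Eq. (10)
        M += dt * Mdot
        history[i] = M
    return history

# A trajectory launched near the saddle grows roughly like e^{lam t} until it
# escapes into a well, where V'' > 0 and the growth turns oscillatory.
traj = M_qp(q=0.05, p=0.0)
```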
Hence, we can expect the following factorization to hold for the classical stability-matrix elements in general: \[\frac{\partial q_{t}}{\partial p}=\mathcal{S}(t;p,q)\,\exp(\Lambda(t;p,q)) \tag{11}\] where \(q,p\) are the initial position and momentum respectively, \(\mathcal{S}(t;p,q)\) is a trigonometric function of \(t\), and \(\Lambda(t;p,q)\) is a non-negative function of \(t\), i.e. \(|\mathcal{S}(t;p,q)|\leq 1\) and \(\Lambda(t;p,q)\geq 0\). In the presence of a global maximal Lyapunov exponent \(\lambda\) for the classical phase space, we then have: \[C_{T}^{\mathrm{cl}}(t)=\frac{1}{\mathcal{Z}_{\mathrm{cl}}}\int dp\;dq\,e^{-\beta H(p,q)}\left|\frac{\partial q_{t}}{\partial p}\right|^{2} \tag{12}\] \[\leq e^{\lambda t}\int dp\;dq\,e^{-\beta H(p,q)}\,\mathcal{S}(t;p,q)\,\frac{\partial q_{t}}{\partial p} \tag{13}\] Integrating the RHS above by parts, we obtain: \[C_{T}^{\mathrm{cl}}(t)\leq e^{\lambda t}\int dp\;dq\,e^{-\beta H(p,q)}\;q(t)\,\frac{\partial\mathcal{S}(t;p,q)}{\partial p} \tag{14}\] Since the function \(e^{-\beta H}\) is positive, by the Cauchy-Schwarz inequality for weighted inner products, we get: \[C_{T}^{\mathrm{cl}}(t)\leq e^{\lambda t}\,\langle q(t)^{2}\rangle_{\beta}^{1/2}\;\left\langle\left(\frac{\partial\mathcal{S}}{\partial p}\right)^{2}\right\rangle_{\beta}^{1/2} \tag{15}\] where \(\langle A\rangle_{\beta}=\int\int\;dp\;dq\;e^{-\beta H(q,p)}A\). Since the thermal expectation values of these quantities are bounded, \[C_{T}^{\mathrm{cl}}(t)\leq O(e^{\lambda t}) \tag{16}\] provided \(\partial\mathcal{S}(t)/\partial p\) has, at most, a polynomial growth rate. Since \(\mathcal{S}(t)\) is trigonometric by definition, we can expect this indeed to be the case. The same argument also holds for other matrix elements (for instance, the \(M_{qq}\) element considered in [11]); the choice of \(M_{qp}\) for the derivation above is purely for convenience. So if the classical approximation to the OTOC in eq. (6) is exponential, the resulting classical growth rate 'trivially' satisfies the bound \[\lambda^{\mathrm{cl}}(T)\leq\frac{2\pi k_{B}T}{\hbar} \tag{17}\] as long as the temperature satisfies \[T\geq\frac{\hbar\lambda}{2\pi k_{B}} \tag{18}\] the RHS of which was identified as the instanton 'crossover' temperature in [11]. However, it is to be noted that, mathematically, this derivation does not guarantee that thermal OTOCs are exponential for the parameter values and temperatures considered. ## III Lower bound on classical growth rate The bound above on the classical growth rate in thermal ensembles holds only in the presence of a maximal Lyapunov exponent. One important class of quantum systems that have such a maximal exponent are those whose corresponding classical counterparts have an isolated saddle point in the phase space. Examples of such systems include the Dicke model[7], the Bose-Hubbard dimer[17], and many potential energy surfaces (PESs) of interest in chemistry.
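The factor-two reduction derived in the previous section can be checked numerically. The following Monte Carlo sketch Boltzmann-averages \(|\partial q_t/\partial p|^2\) over initial conditions for the same toy saddle-plus-quartic potential used in the previous sketch and fits the growth rate of the resulting thermal OTOC; the parameter values, sampling scheme, and fit window are all illustrative assumptions, and the estimate should come out near \(\lambda\) rather than \(2\lambda\) in the high-temperature regime of Eq. (26).

```python
import numpy as np
rng = np.random.default_rng(0)

m, lam, g, beta = 1.0, 1.0, 0.01, 0.1     # high temperature: beta*V_b << 1

V   = lambda q: -0.5*m*lam**2*q**2 + g*q**4
dV  = lambda q: -m*lam**2*q + 4*g*q**3
d2V = lambda q: -m*lam**2 + 12*g*q**2

def sample(n):
    """Boltzmann sampling of initial conditions: rejection for q, Gaussian for p."""
    q0, k = np.empty(n), 0
    wmax = np.exp(-beta * V(np.sqrt(m*lam**2 / (4*g))))   # weight at the well minima
    while k < n:
        q = rng.uniform(-10, 10)
        if rng.uniform(0, wmax) < np.exp(-beta * V(q)):
            q0[k] = q; k += 1
    return q0, rng.normal(0.0, np.sqrt(m / beta), n)

def thermal_otoc(q, p, t_max=6.0, dt=1e-3):
    M, Md = np.zeros_like(q), np.full_like(q, 1.0 / m)
    C = np.empty(int(t_max / dt))
    for i in range(len(C)):
        p -= 0.5*dt*dV(q); q += dt*p/m; p -= 0.5*dt*dV(q)  # leapfrog trajectory
        Md += dt * (-d2V(q) / m) * M; M += dt * Md          # Eq. (10)
        C[i] = np.mean(M**2)                                # Eq. (7), up to hbar^2
    return C

C = thermal_otoc(*sample(20000))
t = np.arange(len(C)) * 1e-3
i0, i1 = int(0.6 * len(t)), int(0.9 * len(t))               # rough late-time fit window
rate = np.polyfit(t[i0:i1], np.log(C[i0:i1]), 1)[0]
print(f"thermal OTOC growth rate ~= {rate:.2f} (lambda = {lam}, 2*lambda = {2*lam})")
```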
It has been shown in [6] that the semiclassical approximation to the growth rate of microcanonical OTOCs in such systems obeys a lower bound condition as follows: \[\lambda_{\text{OTOC}}\geq\lambda_{\text{saddle}} \tag{19}\] We consider the dynamics around such a generic one-dimensional saddle point described by an inverted harmonic oscillator, which has been the prototype for saddle-point dynamics in several previous works [11; 15; 18]: \[\begin{split} V(q)&=-\frac{1}{2}m\lambda^{2}q^{2}+P(q)\\ H(p,q)&=\frac{p^{2}}{2m}+V(q)\end{split} \tag{20}\] Here, the parameter \(\lambda\) determines \(\lambda_{\text{saddle}}\). The term \(P(q)\) in the equation above 'stabilises' the system and defines the system size, and is required for thermal equilibrium in a canonical ensemble; this term hence needs to be at least quartic in \(q\). Making the change of variables \(a_{+}=p+m\lambda q\) and \(a_{-}=p-m\lambda q\) as in [6] to denote the unstable and stable directions of the saddle, we get: \[H(a_{+},a_{-})=\frac{a_{+}a_{-}}{2m}+P(a_{+},a_{-}) \tag{21}\] The dynamics around this saddle point is given by: \[\frac{da_{\pm}}{dt}=\pm\lambda a_{\pm}+\widetilde{P}(a_{+},a_{-}) \tag{22}\] Following the variable transformation above, it is straightforward to show that, for a small perturbation \(P(q)\), \[|M_{qp}(t)|^{2}\gtrsim\frac{|M_{++}(t)|^{2}}{4m^{2}\lambda^{2}} \tag{23}\] where \(M_{++}\) corresponds to the stability matrix evaluated along the \(a_{+}\) coordinate. This gives: \[\begin{split} C_{T}^{\mathrm{cl}}(t)&=\int e^{-\beta H(p,q)}|M_{qp}|^{2}\,dq\;dp\\ &\gtrsim\int e^{-\beta H(a_{+},a_{-})}\,\frac{|M_{++}|^{2}}{2m\lambda}\;da_{+}\;da_{-}\end{split} \tag{24}\] Considering a narrow strip \(S_{t}=\{|a_{+}|\leq\delta e^{-\lambda t},|a_{-}|\leq\delta\}\) around the saddle point at the origin as in [6], we get: \[C_{T}^{\mathrm{cl}}(t)\gtrsim\int_{S_{t}}e^{-\beta H(a_{+},a_{-})}\,\frac{|M_{++}|^{2}}{2m\lambda}\;da_{+}\;da_{-} \tag{25}\] As long as the temperature is high enough that \(\beta\) satisfies \[\beta P(a_{+},a_{-})_{\text{typical}}\ll 1 \tag{26}\] we get: \[C_{T}^{\mathrm{cl}}(t)\gtrsim\frac{e^{2\lambda t}}{2m\lambda}\int_{S_{t}}e^{-\beta a_{+}a_{-}/2m}\;da_{+}\;da_{-} \tag{27}\] Here, \(P(a_{+},a_{-})_{\text{typical}}\) is the value of the perturbation term evaluated at typical values of \(a_{+}\) and \(a_{-}\) in the _metastable_ regions of the classical phase space far from the saddle point, at thermal equilibrium. As mentioned earlier, this value quantifies the size of the system, and the condition ensures that the relative energy separation between the saddle point and the metastable regions is much smaller than the thermal fluctuations at this temperature. For instance, for \(P(q)=gq^{4}\) as in [15], this corresponds to the condition that the saddle-point energy \(V_{b}\) satisfies \(V_{b}\ll k_{B}T\). Eq. (27) can be simplified further to yield: \[\begin{split} C_{T}^{\mathrm{cl}}(t)&\gtrsim\frac{e^{2\lambda t}}{\lambda}\frac{4\,\text{Shi}(\beta\delta^{2}e^{-\lambda t}/2m)}{\beta}\\ &\gtrsim\frac{2\delta^{2}e^{\lambda t}}{m\lambda}+\frac{\beta^{2}\delta^{6}e^{-\lambda t}}{36\,\lambda m^{3}}=O(e^{\lambda t})\end{split} \tag{28}\] where \(\text{Shi}(x)\) is the hyperbolic sine integral. Combining this with eq.
(16) yields \[C_{T}^{\mathrm{cl}}(t)\sim\Theta(e^{\lambda t}) \tag{29}\] Thus, in the classical limit, the quantum thermal OTOC can be expected to grow exponentially, provided the temperature condition in eq. (26) holds. This is borne out by the fact that thermal OTOCs have been shown to be exponential for a range of temperatures in [11; 15]. While thermal averaging 'quenches' the exponential dependence on time (and hence the ability to probe chaos) in linear response functions and some non-linear response functions, the OTOC survives this quenching, albeit with a factor-two reduction from the microcanonical growth rate. Note that, while the derivations above have been worked out for the one-dimensional case, the same holds for a multidimensional system as well, provided the index of the saddle remains one, as was discussed in [6]. ## IV Quantum-Boltzmann statistics and ring-polymer Lyapunov exponents In [11], we showed that at lower temperatures (far lower than those characterised by eq. (26)), the quantumness of the Boltzmann density operator starts to affect the scrambling rate, and the quantum Lyapunov exponent obtained from the thermal quantum OTOC in eq. (6) is smaller than the classical growth rate \(\lambda\). It is erroneous to approximate the growth rate of the quantum OTOC by the classical Lyapunov exponent (whose maximal value is \(\lambda\)) below this temperature. By considering the classical dynamics of fictitious ring polymers (which correspond to the imaginary-time paths that describe quantum-Boltzmann statistics) in an extended phase space, which forms the basis of the ring-polymer molecular dynamics (RPMD) [19] procedure, we showed that the quantum bound to chaos in [1] is brought about by the emergence of delocalised instantonic structures at low temperatures. These structures necessarily emerge at temperatures below the regimes of 'classicality' described above, as they describe thermally assisted tunneling pathways. It was also shown that all chaotic trajectories are dynamically coupled to these bounce instantons at low temperatures, which suggests that the maximal Lyapunov exponent in the ring-polymer phase space is smaller than the corresponding classical value. This reduced maximal value of the ring-polymer Lyapunov exponents can be found by adapting an existing symplectic integration procedure[20], used for computing classical Lyapunov exponents, to the extended phase space of ring polymers. In figure 1, we provide the results for the 'ring-polymer' Lyapunov exponents computed at each temperature for the chaotic two-dimensional double-well potential considered in [11], at the saddle point of the ring-polymer PES. We also plot the growth rate of the quantum thermal OTOC and its RPMD and classical approximations as a function of temperature for reference. As can be seen, the ring-polymer exponents necessarily obey the quantum bound, and they differ from the classical Lyapunov exponent below the 'cross-over' temperature \[T_{c}=\frac{\hbar\lambda}{2\pi k_{B}}\] mentioned in (18), at which instantons start to form. These instantons become the dominant saddle point in the ring-polymer PES below this temperature.
The growth rate of the RPMD (approximation to the) OTOC is given as: \[C_{N}^{\mathrm{RP}}(t)=\frac{\hbar^{2}}{\mathcal{Z}}\int d\mathbf{p}^{N}\int d\mathbf{q}^{N}\,e^{-\beta_{N}H_{N}(\mathbf{p},\mathbf{q})}\left|\frac{\partial Q_{t}}{\partial P_{0}}\right|^{2} \tag{30}\] in which \(\partial Q_{t}/\partial P_{0}\) is the stability-matrix element (corresponding to \(M_{qp}\) in the classical case) of the ring-polymer centroid coordinate. By replacing the classical maximal Lyapunov exponent \(\lambda\) with the (temperature-dependent) maximal ring-polymer exponents in fig. 1 and repeating the derivations above, it is straightforward to see that: \[C_{T}^{\mathrm{RPMD}}(t)\leq O\left(\exp(\lambda_{\mathrm{max}}^{\mathrm{RP}}(T)\,t)\right) \tag{31}\] Fig. 1 confirms that the growth rate of the RPMD OTOC below \(T_{c}\) is indeed less than the maximal ring-polymer exponent. We have hence established that the mechanism responsible for the reduction of the quantum Lyapunov exponent computed from OTOCs in thermal quantum systems is two-fold: classical thermal averaging brings about a factor-of-two reduction from the microcanonical rate, and the quantumness of the Boltzmann distribution further quenches the quantum Lyapunov exponent below this reduced value. We believe that finding ring-polymer Lyapunov exponents in the extended phase space is a viable strategy for quantifying the latter. The majority of the derivation outlined in this letter hinges on the existence of a global maximal Lyapunov exponent independent of the system's energy. While first-quantized Hamiltonians with isolated saddle points form an important class of such systems, we expect a similar mechanism to bring about the quantum bound on chaos in general Hamiltonian systems. Since it is expected that the Sachdev-Ye-Kitaev (SYK) model that saturates this bound is 'mono-fractal' [21] and has a single (and hence maximal) Lyapunov exponent for all phase-space trajectories, we believe that the discussion in this letter pertains to a large class of systems. We would like to thank **xxxx** for stimulating discussions and for commenting on an earlier version of this manuscript. VGS acknowledges the support of a Manmohan Singh PhD scholarship from St John's College, Cambridge.
2305.13280
Effective Electromagnetic Wave Properties of Disordered Stealthy Hyperuniform Layered Media Beyond the Quasistatic Regime
Disordered stealthy hyperuniform dielectric composites exhibit novel electromagnetic wave transport properties in two and three dimensions. Here, we carry out the first study of the electromagnetic properties of one-dimensional (1D) disordered stealthy hyperuniform layered media. From an exact nonlocal theory, we derive an approximation formula for the effective dynamic dielectric constant tensor ${\boldsymbol \varepsilon}_e({\bf k}_q,\omega)$ of general 1D media that is valid well beyond the quasistatic regime and apply it to 1D stealthy hyperuniform systems. We consider incident waves of transverse polarization, frequency $\omega$, and wavenumber $k_q$. Our formula for ${\boldsymbol \varepsilon}_e({k}_q,\omega)$, which is given in terms of the spectral density, leads to a closed-form relation for the transmittance $T$. Our theoretical predictions are in excellent agreement with finite-difference time-domain (FDTD) simulations. Stealthy hyperuniform layered media have perfect transparency intervals up to a finite wavenumber, implying no Anderson localization, but non-stealthy hyperuniform media are not perfectly transparent. Our predictive theory provides a new path for the inverse design of the wave characteristics of disordered layered media, which are readily fabricated, by engineering their spectral densities.
Jaeuk Kim, Salvatore Torquato
2023-05-22T17:41:11Z
http://arxiv.org/abs/2305.13280v2
Effective electromagnetic wave properties of disordered stealthy hyperuniform layered media beyond the quasistatic regime ###### Abstract Disordered stealthy hyperuniform dielectric composites exhibit novel electromagnetic wave transport properties in two and three dimensions. Here, we carry out the first study of the electromagnetic properties of one-dimensional (1D) disordered stealthy hyperuniform layered media. From an exact nonlocal theory, we derive an approximation formula for the effective dynamic dielectric constant tensor \(\varepsilon_{e}(\mathbf{k}_{q},\omega)\) of general 1D media that is valid well beyond the quasistatic regime and apply it to 1D stealthy hyperuniform systems. We consider incident waves of transverse polarization, frequency \(\omega\), and wavenumber \(k_{q}\). Our formula for \(\varepsilon_{e}(k_{q},\omega)\), which is given in terms of the _spectral density_, leads to a closed-form relation for the transmittance \(T\). Our theoretical predictions are in excellent agreement with finite-difference time-domain (FDTD) simulations. Stealthy hyperuniform layered media have perfect transparency intervals up to a finite wavenumber, implying no Anderson localization, but non-stealthy hyperuniform media are not perfectly transparent. Our predictive theory provides a new path for the inverse design of the wave characteristics of disordered layered media, which are readily fabricated, by engineering their spectral densities. ## 1 Introduction _Disordered hyperuniform_ many-body systems [1; 2; 3] are an emerging class of amorphous states of matter that are endowed with novel wave and other transport properties with advantages over their periodic counterparts [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Such hyperuniform two-phase composites are characterized by an anomalous suppression of volume-fraction fluctuations in the infinite-wavelength limit [2; 3], i.e., the local volume-fraction variance \(\sigma_{V}^{2}(R)\) inside a spherical observation window of radius \(R\) decays faster than \(R^{-d}\) in \(d\) dimensions in the large-\(R\) limit, \(\lim_{R\to\infty}R^{d}\sigma_{V}^{2}(R)=0\). Equivalently, its associated _spectral density_ \(\tilde{\chi}_{V}(\mathbf{k})\) vanishes as the wavenumber \(|\mathbf{k}|\) tends to zero, i.e., \(\lim_{|\mathbf{k}|\to 0}\tilde{\chi}_{V}(\mathbf{k})=0\). One important class of such hyperuniform media are the disordered _stealthy_ varieties, in which \(\tilde{\chi}_{V}(\mathbf{k})=0\) for \(0<|\mathbf{k}|<K\) [24; 25; 26; 27], meaning that they completely suppress single scattering of incident radiation for these wave vectors [3; 25]. The degree of stealthiness \(\chi\) is the ratio of the number of constrained wave vectors in reciprocal space to the total number of degrees of freedom. Recent studies showed that such exotic disordered media exhibit novel electromagnetic wave transport properties, including high transparency in the optically dense regime, maximized absorption, and complete photonic band-gap formation [4; 5; 7; 8; 10; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], in two and three spatial dimensions. For example, previous work on light transparency of stealthy hyperuniform systems considered 2D point scatterers [7] and 2D and 3D two-phase dielectric composites [31; 10]. Here, we undertake the first study of the electromagnetic properties of 1D disordered stealthy hyperuniform layered media.
The problem of wave propagation in a two-phase (or multiphase) layered medium has been extensively studied, because of its simplicity and its ease of fabrication [32; 33], including surface plasmons [34; 35], Anderson localization [36; 37; 38; 39; 40; 41], and deep-subwavelength disorder [42], as well as practical applications [43; 44; 18; 45]. An important wave characteristic is the effective dynamic dielectric constant tensor \(\varepsilon_{e}(\mathbf{k}_{q},\omega)\) for incident radiation of frequency \(\omega\) and wave vector \(\mathbf{k}_{q}\). This complex-valued quantity determines the effective wavenumber \(k_{e}\) and the extinction mean free path \(\ell_{e}\). While there have been many theoretical/numerical treatments to estimate \(\varepsilon_{e}(\mathbf{k}_{q},\omega)\) of layered media [46; 47; 48; 49; 50; 51; 52; 53], previous approximations are applicable only to disordered media in the _quasistatic_ or long-wavelength regime, i.e., \(|\mathbf{k}_{q}|\xi\ll 1\), where \(\xi\) is a characteristic inhomogeneity length scale. Torquato and Kim [31] recently derived the general nonlocal strong-contrast expansion for the effective dielectric constant tensor \(\varepsilon_{e}(\mathbf{k}_{q},\omega)\) that can be applied to two-phase media with various symmetries. This expansion exactly treats multiple scattering to all orders beyond the quasistatic regime (i.e., \(0\leq|\mathbf{k}_{q}|\xi\lesssim 1\)) as a series involving functionals of the \(n\)-point correlation functions \(S_{n}^{(i)}(\mathbf{x}_{1},\dots,\mathbf{x}_{n})\) for all \(n\) (Section 2). Here, the quantity \(S_{n}^{(i)}(\mathbf{x}_{1},\dots,\mathbf{x}_{n})\) gives the probability of finding \(n\) points at positions \(\mathbf{x}_{1},\dots,\mathbf{x}_{n}\) all in phase \(i\,(=1,2)\). Because of the fast-convergence property of this series (i.e., the linear fractional form of this series), truncating it at the \(n\)-point level yields _multiple-scattering_ approximations that still accurately capture multiple scattering to all orders for a wide class of microstructures, including statistically _anisotropic_ media. The second-order truncations already provide accurate approximations for 2D and 3D statistically isotropic two-phase media [31]. However, and importantly, analogous approximations for statistically anisotropic media have yet to be extracted and applied. Here, we theoretically and numerically investigate the tensor \(\varepsilon_{e}(\mathbf{k}_{q},\omega)\) of 3D anisotropic layered media consisting of infinite parallel dielectric slabs of phases 1 and 2 whose thicknesses are derived from 1D disordered stealthy hyperuniform packings at various \(\chi\) values. For simplicity, we focus on normally incident waves of transverse polarization, where \(\mathbf{k}_{q}=k_{q}\mathbf{\hat{z}}\) (see Fig. 1), and thus the wavenumber \(k_{q}\) is the independent variable of the effective dielectric constant. From the exact strong-contrast expansion, we derive, for the first time, formulas for \(\varepsilon_{e}(k_{q},\omega)\) for 3D anisotropic layered media that accurately account for multiple scattering in terms of the spectral density \(\tilde{\chi}_{V}(\mathbf{k})\), enabling us to probe a wide range of wavenumbers. The quantity \(\tilde{\chi}_{V}(\mathbf{k})\) is the Fourier transform of the autocovariance function \(\chi_{V}(\mathbf{r})\equiv S_{2}^{(i)}(\mathbf{r})-\phi_{i}^{2}\) [54], where \(\mathbf{r}\equiv\mathbf{x}_{2}-\mathbf{x}_{1}\), and can be measured from scattering experiments [55].
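Since the spectral density is the central microstructural input of this theory, the following is a minimal sketch of estimating \(\tilde{\chi}_{V}(k)\) directly from its definition as the Fourier transform of the autocovariance of the phase indicator function. The jittered-lattice rod packing below is a hypothetical toy configuration (not one of the stealthy systems generated in Section 3), and the grid sizes are illustrative.

```python
import numpy as np

# Toy 1D two-phase medium: rods of radius a centered on a jittered lattice.
rng = np.random.default_rng(7)
N, rho, a = 512, 1.0, 0.1
L = N / rho                                      # periodic box length
centers = (np.arange(N) + 0.25 * rng.standard_normal(N)) / rho

nx = 2**16                                       # grid resolution
x = np.arange(nx) * (L / nx)
indicator = np.zeros(nx)
for c in centers:                                # phase-2 indicator function
    d = np.abs((x - c + L / 2) % L - L / 2)      # periodic distance to rod center
    indicator[d < a] = 1.0

phi2 = indicator.mean()
dev = indicator - phi2
# Spectral density: chi_V(k) = |FT of (I - phi2)|^2 / L at the box wavenumbers.
ft = np.fft.rfft(dev) * (L / nx)
k = 2 * np.pi * np.fft.rfftfreq(nx, d=L / nx)
chiV = np.abs(ft) ** 2 / L
```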
To our knowledge, this expression is the first closed-form formula for \(\varepsilon_{e}(k_{q},\omega)\) of general 1D media that applies well beyond the quasistatic regime. We numerically verify that our derived formula can accurately capture multiple-scattering effects due to correlated disorder beyond the quasistatic regime by using finite-difference time-domain (FDTD) simulations. For this purpose, we numerically generate stealthy hyperuniform stratified dielectric two-phase media via a modified collective-coordinate procedure described in Refs. [16; 31] (Section 3). For dimensionless wavenumbers up to \(k_{1}/\rho\lesssim 1.5\), our predictions indeed show excellent agreement with the real and imaginary parts of the effective dielectric constant as well as the transmittance found from FDTD simulations (Secs. 4-5 of Supplement 1). Notably, our formula predicts that stealthy hyperuniform layered media are perfectly transparent (defined as \(\text{Im}[\varepsilon_{e}(k_{q},\omega)]=0\)) up to a finite wavenumber \(K_{T}=K/(2\sqrt{\phi_{1}\varepsilon_{1}+\phi_{2}\varepsilon_{2}})\) (i.e., no Anderson localization) in the infinite-volume limit; see Eq. (12). This result is especially remarkable because extended states in 1D disordered systems are more difficult to achieve than in higher dimensions [38; 39; 40; 17; 41]. We also show that a perfect transparency interval cannot exist in disordered 1D _non-stealthy_ media, _hyperuniform_ or not, and thus Anderson localization can be present at all wavenumbers. Our results, combined with the methods to generate media with a prescribed spectral density [24; 25; 13; 26], provide a new inverse-design approach [56] to engineer and fabricate multilayered dielectric media with novel wave properties. ## 2 Theory ### Exact Strong-Contrast Expansion Here, we briefly summarize the general nonlocal strong-contrast-expansion formalism of the effective dynamic dielectric constant tensor \(\varepsilon_{e}(\mathbf{k}_{q},\omega)\) for 3D two-phase media with arbitrary symmetries [31]. (The strong-property-fluctuation theory [57; 58] corresponds to a special case of our strong-contrast formalism, as detailed in Section 1 of Supplement 1.) We consider a macroscopically large two-phase composite specimen in three dimensions embedded inside an infinitely large reference phase \(q\) [31; 59]. For simplicity, we take phase \(q\) to be the matrix phase (i.e., \(q=1\) or \(2\)) and assume that phases 1 and 2 are nonmagnetic and dielectrically isotropic with real-valued and frequency-independent dielectric constants. These assumptions imply the linear dispersion relation in the reference phase [i.e., \(k_{q}(\omega)\equiv|\mathbf{k}_{q}(\omega)|=\sqrt{\varepsilon_{q}}\,\omega/c\)], where \(c\) is the speed of light in vacuum, and thus we henceforth do not explicitly indicate the \(\omega\) dependence.
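Before developing the formalism, a minimal sketch of the transparency cutoff quoted in the introduction above, \(K_T=K/(2\sqrt{\phi_1\varepsilon_1+\phi_2\varepsilon_2})\) (derived below as Eq. (12)), may help fix the scales involved; it is a direct transcription of that formula, with parameter values mirroring the systems studied in this paper.

```python
import numpy as np

# K_T = K / (2 sqrt(<eps>)) = rho*pi*chi / sqrt(<eps>), with K = 2*pi*rho*chi
# (Section 3) and <eps> = phi1*eps1 + phi2*eps2 [Eq. (10)].
rho, phi2, eps1, eps2 = 1.0, 0.2, 1.0, 4.0
phi1 = 1.0 - phi2
eps_mean = phi1 * eps1 + phi2 * eps2
for chi in (0.1, 0.2, 0.3):
    K_T = rho * np.pi * chi / np.sqrt(eps_mean)
    print(f"chi = {chi:.1f}: transparent for k_1 < K_T = {K_T:.3f} (in units of rho)")
```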
The general nonlocal strong-contrast expansion is a series expansion of the linear fractional form of the tensor \(\varepsilon_{e}(\mathbf{k}_{q})\), given as \[\phi_{p}\,\mathbf{L}_{p}^{(q)}\cdot\left(\left\{\mathbf{I}+\mathbf{D}^{(q)}\cdot\left[\varepsilon_{e}(\mathbf{k}_{q})-\varepsilon_{q}\mathbf{I}\right]\right\}\cdot\left[\varepsilon_{e}(\mathbf{k}_{q})-\varepsilon_{q}\mathbf{I}\right]^{-1}\right)\cdot\phi_{p}\,\mathbf{L}_{p}^{(q)}=\phi_{p}\,\mathbf{L}_{p}^{(q)}-\sum_{n=2}^{\infty}\mathbf{\mathcal{A}}_{n}^{(p)}(\mathbf{k}_{q})\,, \tag{1}\] where \(p\,(\neq q)\) indicates the polarized phase, \(\mathbf{L}_{p}^{(q)}\) is the expansion parameter defined as \[\mathbf{L}_{p}^{(q)}\equiv\big{(}\varepsilon_{p}-\varepsilon_{q}\big{)}\,\Big{[}\mathbf{I}+\mathbf{D}^{(q)}\left(\varepsilon_{p}-\varepsilon_{q}\right)\Big{]}^{-1}, \tag{2}\] and \(\mathbf{\mathcal{A}}_{n}^{(p)}(\mathbf{k}_{q})\) is a wave-vector-dependent second-rank tensor that is a functional involving the set of correlation functions \(S_{1}^{(p)},S_{2}^{(p)},\dots,S_{n}^{(p)}\) and products of the principal part of the dyadic Green's function \(\mathbf{H}^{(q)}(\mathbf{r})\); see Section 1 of Supplement 1. The series expansion [Eq. (1)] has four salient features. First, Eq. (1) is derived from a spatially nonlocal averaged constitutive relation [60; 50; 61], resulting in an expansion that is valid from the long- to intermediate-wavelength regimes. Second, Eq. (1) exactly treats multiple scattering to all orders at a given incident wave vector \(\mathbf{k}_{q}\) when the nonlocal homogenization theory is valid, because the terms \(\mathbf{\mathcal{A}}_{n}^{(p)}(\mathbf{k}_{q})\) for \(n=2,\dots\) in Eq. (1) explicitly account for complete microstructural information (the infinite set of \(S_{2},S_{3},\dots\)) to infinite order [31]; see Section 1 of Supplement 1 for details. Third, the choice of the shape of the infinitesimal exclusion region (i.e., \(\mathbf{D}^{(q)}\)) leads to a different expansion parameter \(\mathbf{L}_{p}^{(q)}\) that determines the convergence properties. Thus, unlike standard multiple-scattering theories [38; 62; 63], here one can naturally 'tune' the general series expansion to obtain distinctly different approximations suited to certain classes of microstructures. Fourth, the left side of Eq. (1) is a linear fractional transformation of \(\varepsilon_{e}(\mathbf{k}_{q})\) rather than \(\varepsilon_{e}(\mathbf{k}_{q})\) itself, which leads to the rapid convergence of the strong-contrast expansions, so that its lower-order truncations approximate well the higher-order functionals (i.e., multiple scattering) of the exact series to all orders in terms of lower-order diagrams [31].

Figure 1: Schematic of three-dimensional disordered anisotropic stratified media consisting of infinite parallel slabs of phases 1 (cyan) and 2 (dark blue). A plane electromagnetic wave of transverse polarization is normally incident into the medium, and its wave vector is shown as a red arrow.

### Multiple-Scattering Approximations for Layered Media

We can now extract from the exact strong-contrast expansion [Eq. (1)] accurate multiple-scattering approximations for layered media by truncating the expansion at the \(n\)-point level. We focus here on such a formula at the two-point level that depends on the spectral density \(\tilde{\chi}_{V}(\mathbf{k})\), because it is still accurate and easy to compute. For simplicity, we restrict ourselves to normally incident waves (Fig.
1), and thus the effective dielectric constant now depends on the wavenumber \(k_{q}\). We outline the derivation here and provide details in Section 2 of Supplement 1. Since layered media have rotational symmetry about the \(z\)-axis and translational symmetry in the \(x\)-\(y\) plane, the spectral density can be expressed as \[\tilde{\chi}_{V}(\mathbf{k})=(2\pi)^{2}\,\delta(k_{x})\,\delta(k_{y})\,\tilde{\chi}_{V}(k_{z})\,, \tag{3}\] where \(\delta(k)\) is the one-dimensional Dirac delta function, and \(\tilde{\chi}_{V}(k_{z})\) is the spectral density of 1D two-phase media. For 1D packings of identical hard rods of radius \(a\) and packing fraction \(\phi_{2}\), \(\tilde{\chi}_{V}(k_{z})=\phi_{2}[2\sin^{2}(k_{z}a)]/(k_{z}^{2}a)\,S(k_{z})\) [54; 64], where \(S(k_{z})\) is the structure factor of the rod centers. Due to these symmetries, when applying the series [Eq. (1)] to layered media, we utilize feature 3 discussed in Section 2.A by choosing a disk-like exclusion region normal to the \(z\)-axis [31], leading to \[\mathbf{D}^{(q)}=\varepsilon_{q}^{-1}\,\mathbf{\hat{z}}\mathbf{\hat{z}},\quad\mathbf{L}_{p}^{(q)}=\beta_{pq}\Big{[}\varepsilon_{p}\left(\mathbf{I}-\mathbf{\hat{z}}\mathbf{\hat{z}}\right)+\varepsilon_{q}\,\mathbf{\hat{z}}\mathbf{\hat{z}}\Big{]}, \tag{4}\] where \(\mathbf{\hat{z}}\) is a unit vector along the \(z\)-direction and \(\beta_{pq}\) is the one-dimensional counterpart of the _dielectric polarizability_, defined as \(\beta_{pq}\equiv 1-\varepsilon_{q}/\varepsilon_{p}\). Here, \(\mathbf{L}_{p}^{(q)}\) is obtained by substituting \(\mathbf{D}^{(q)}\) from Eq. (4) into Eq. (2). Using Eq. (4) and the assumption of normal incidence, the general expression for \(\mathbf{\mathcal{A}}_{2}^{(p)}(k_{q})\) simplifies to \[\mathbf{\mathcal{A}}_{2}^{(p)}(k_{q})=(\varepsilon_{p}\beta_{pq})^{2}\,\frac{F^{(\mathrm{1D})}(k_{q})}{\varepsilon_{q}}\,(\mathbf{I}-\mathbf{\hat{z}}\mathbf{\hat{z}}), \tag{5}\] \[F^{(\mathrm{1D})}(k)=\frac{k^{2}}{\pi}\,\text{p.v.}\int_{0}^{\infty}\text{d}q_{z}\,\frac{\tilde{\chi}_{V}(q_{z})}{q_{z}^{2}-(2k)^{2}}+\frac{ik}{4}\,[\tilde{\chi}_{V}(0)+\tilde{\chi}_{V}(2k)], \tag{6}\] where p.v. stands for the Cauchy principal value. Note that \(F^{(\mathrm{1D})}(k_{q})\) is the nonlocal attenuation function for 1D two-phase media, whose 2D and 3D counterparts were derived in Ref. [31]. Applying Eq. (4) and Eq. (5) to the second-order truncation of the series [Eq.
(1)] yields \[\frac{(\phi_{p}\varepsilon_{p}\beta_{pq})^{2}}{\varepsilon_{q}\left(\varepsilon_{e}^{\perp}(k_{q})/\varepsilon_{q}-1\right)}(\mathbf{I}-\mathbf{\hat{z}}\mathbf{\hat{z}})+\frac{(\phi_{p}\varepsilon_{q}\beta_{pq})^{2}}{\varepsilon_{q}\left(1-\varepsilon_{q}/\varepsilon_{e}^{z}(k_{q})\right)}\mathbf{\hat{z}}\mathbf{\hat{z}}=\varepsilon_{p}\beta_{pq}\left[\phi_{p}-(\varepsilon_{p}\beta_{pq})\,F^{(\mathrm{1D})}(k_{q})/\varepsilon_{q}\right](\mathbf{I}-\mathbf{\hat{z}}\mathbf{\hat{z}})+\phi_{p}\varepsilon_{q}\beta_{pq}\,\mathbf{\hat{z}}\mathbf{\hat{z}}, \tag{7}\] where we have decomposed the effective dielectric constant tensor into two orthogonal components, \(\varepsilon_{e}^{\perp}(k_{q})\) and \(\varepsilon_{e}^{z}(k_{q})\), for the transverse and longitudinal polarizations, respectively, as follows: \(\varepsilon_{e}(\mathbf{k}_{q})=\varepsilon_{e}^{\perp}(k_{q})(\mathbf{I}-\mathbf{\hat{z}}\mathbf{\hat{z}})+\varepsilon_{e}^{z}(k_{q})\,\mathbf{\hat{z}}\mathbf{\hat{z}}\). Now Eq. (7) provides two independent approximations: \[\varepsilon_{e}^{\perp}(k_{q})=\varepsilon_{q}\left[1+\frac{\phi_{p}^{2}(\varepsilon_{p}/\varepsilon_{q})\beta_{pq}}{\phi_{p}-(\varepsilon_{p}\beta_{pq})\,F^{(\mathrm{1D})}(k_{q})/\varepsilon_{q}}\right], \tag{8}\] \[\varepsilon_{e}^{z}(k_{q})=\varepsilon_{q}(1-\phi_{p}\beta_{pq})^{-1}. \tag{9}\] Note that \(\varepsilon_{e}^{\perp}(k_{q})\) is dependent on the incident wavenumber \(k_{q}\) and is complex-valued if \(\tilde{\chi}_{V}(0)+\tilde{\chi}_{V}(2k_{q})>0\), implying that the media can be lossy due to forward scattering and backscattering from inhomogeneities in the local dielectric constant. By contrast, \(\varepsilon_{e}^{z}(k_{q})\) is independent of \(k_{q}\), reflecting the fact that a traveling longitudinal wave cannot exist under our assumptions. Hence, we focus on \(\varepsilon_{e}^{\perp}(k_{q})\) in the rest of this work. In the static limit, Eqs. (8) and (9) reduce to the arithmetic and harmonic means of the dielectric constants, respectively: \[\varepsilon_{e}^{\perp}(0)=\langle\varepsilon\rangle\equiv\phi_{p}\varepsilon_{p}+\phi_{q}\varepsilon_{q},\quad\varepsilon_{e}^{z}(0)=(\phi_{p}/\varepsilon_{p}+\phi_{q}/\varepsilon_{q})^{-1}. \tag{10}\] Interestingly, these static results are exact for any microstructure [54]. Renormalization of the reference phase for optimal convergence (Section 2 of Supplement 1), equivalent to using the effective Green's function in Ref. [31], yields a _scaled_ strong-contrast approximation for disordered layered media: \[\varepsilon_{e}^{\perp}(k_{q})=\varepsilon_{q}\left[1+\frac{\phi_{p}^{2}(\varepsilon_{p}/\varepsilon_{q})\beta_{pq}}{\phi_{p}-(\varepsilon_{p}\beta_{pq})\,F^{(\mathrm{1D})}\!\left(k_{q}\sqrt{\langle\varepsilon\rangle/\varepsilon_{q}}\right)/\langle\varepsilon\rangle}\right], \tag{11}\] where \(\langle\varepsilon\rangle\) is given in Eq. (10). We henceforth focus on this scaled approximation because it is more accurate than Eq. (8), as shown in Fig. S6 of Supplement 1. As shown in Ref. [31], Eq.
(11) satisfies the Kramers-Kronig relations [65], so that its predictions properly exhibit both _normal dispersion_ [i.e., an increase in \(\text{Re}[\varepsilon_{e}^{\perp}]\) with \(k_{q}\)] and _anomalous dispersion_ [i.e., a decrease in \(\text{Re}[\varepsilon_{e}^{\perp}]\) with \(k_{q}\)]. Furthermore, satisfying the Kramers-Kronig relations also implies that Eq. (11) yields qualitatively accurate predictions even beyond the validity regime (i.e., \(k_{q}/\rho\lesssim 1.5\)); see Sections 5 and 6 of Supplement 1 for details. In the scaled approximation [Eq. (11)], the quantity \(F^{(\mathrm{1D})}(k_{q})\) is generally complex-valued at a given incident wavenumber \(k_{q}\), producing a corresponding \(\varepsilon_{e}^{\perp}(k_{q})\) with a nonnegative imaginary part. Following conventional usage, a composite attenuates waves at a given wavenumber if the imaginary part of the effective dielectric constant is positive. Such attenuation occurs here only because of multiple-scattering effects (not absorption). Importantly, for stealthy hyperuniform layered media, \(\mathrm{Im}[F^{(\mathrm{1D})}(k_{1})]=0\) at sufficiently small wavenumbers, and substituting this condition into the scaled formula [Eq. (11)] yields a perfect transparency interval: \[0\leq k_{1}<K_{T}\equiv\frac{K}{2\sqrt{\langle\varepsilon\rangle}}=\frac{\rho\pi\chi}{\sqrt{\langle\varepsilon\rangle}}, \tag{12}\] where \(\langle\varepsilon\rangle\) is the arithmetic mean of the local dielectric constant; see Eq. (10). Perfect transparency implies an infinite localization length within this spectral range, which is consistent with an estimate that the localization length is inversely proportional to what we call the spectral density [40]. We stress that the prediction [Eq. (12)] is purely theoretical and hence does not rely on simulations or measurements of the spectral density. ## 3 Model Microstructures Stealthy hyperuniform particle systems are defined by a spectral density that vanishes in a finite range of wavenumbers that includes the origin [\(\tilde{\chi}_{V}(\mathbf{k})=0\) for \(0<|\mathbf{k}|\leq K\)] [24; 25; 26; 27]. The degree of stealthiness \(\chi\) is measured by the ratio of the number of constrained wave vectors in reciprocal space to the total number of degrees of freedom, i.e., in one dimension, \(\chi=K/(2\pi\rho)\), where \(\rho\) is the number density of particles. For \(\chi<0.5\) in two and three dimensions, or \(\chi<1/3\) in one dimension, these stealthy hyperuniform systems are highly degenerate and disordered [26]. Thus, we consider 1D cases in which \(\chi\) takes the following values: \(\chi=0.1,0.2,0.3\). Henceforth, we take the characteristic inhomogeneity length scale \(\xi\) to be the mean particle separation \(1/\rho\), which is of the order of the mean nearest-neighbor distance \(\ell_{p}\) (i.e., \(\ell_{p}\sim 1/\rho\)). This choice means that the range of validity of our nonlocal theory is \(k_{1}/\rho\lesssim 1.5\) for the current models, as shown later in Section 5. We numerically generate 1D packings of packing fraction \(\phi_{2}=0.2\) in the following two-step procedure.
First, we generate point configurations of \(N\) particles in a periodic fundamental cell \(\mathfrak{F}\) via the _collective-coordinate optimization_ technique [24; 25; 26], which numerically generates, with very high precision, ground states of the following potential energy: \[\Phi\left(\mathbf{r}^{N}\right)=\frac{1}{V_{\mathfrak{F}}}\sum_{|\mathbf{k}|<K}\tilde{v}(\mathbf{k})\,S(\mathbf{k})+\sum_{i<j}u\left(r_{ij}\right), \tag{13}\] where \(\tilde{v}(\mathbf{k})=1\) for \(|\mathbf{k}|<K\), \(V_{\mathfrak{F}}\) is the volume of \(\mathfrak{F}\), and the soft-core repulsion term [66] is \[u(r)=\left\{\begin{array}{ll}(1-r/\sigma)^{2},&r<\sigma,\\ 0,&\text{otherwise}.\end{array}\right. \tag{14}\] The soft repulsion [Eq. (14)] ensures that the resulting stealthy hyperuniform ground states have nearest-neighbor distances larger than the length scale \(\sigma\) [16; 31]. Finally, we create packings of dielectric constant \(\varepsilon_{2}\) by circumscribing the points by identical rods of radius \(a<\sigma/2\) without overlaps [8]; see Fig. 2(a). The parameters used to generate these systems are listed in Section 3 of Supplement 1. We compute the spectral density \(\tilde{\chi}_{V}(k)\) from the generated packings, as shown in Fig. 2(b). From the long- to intermediate-wavelength regimes (\(k/\rho\lesssim 10\)), we clearly see that stealthy hyperuniform packings exhibit a higher degree of correlation as \(\chi\) increases. In the small-wavelength regime (\(k/\rho\gg 10\) or, equivalently, \(ka\gg 1\)), however, the curves tend to collapse onto a single curve, reflecting the fact that these three systems consist of identical hard rods. ## 4 Simulations We validate the accuracy of our predictions of the effective dielectric constant \(\varepsilon_{e}^{\perp}(k_{1})\) and the transmittance \(T\) by comparing them primarily to full-waveform simulations [67] via the open-source FDTD package MEEP [68], although we use the transfer-matrix method to compute \(T\), as explained in Supplement 1. We take the matrix to be the reference phase (i.e., \(q=1\)) and the particles to be the polarized phase (i.e., \(p=2\)), and set the phase-contrast ratio to \(\varepsilon_{2}/\varepsilon_{1}=4\). We measure the transmittance spectra \(T\) through the disordered stealthy hyperuniform layered media, which are then compared to the predictions from Eq. (11). We also directly extract \(\varepsilon_{e}^{\perp}(k_{1})\) from the nonlocal constitutive relation \(\varepsilon_{e}^{\perp}(k_{1})=\left\langle\tilde{D}(k_{e},\omega)\right\rangle/\left\langle\tilde{E}(k_{e},\omega)\right\rangle\) at a given frequency \(\omega\), as was done in Ref. [31]; see Fig. S6 of Supplement 1. Here, \(\left\langle\tilde{D}(k_{e},\omega)\right\rangle\) and \(\left\langle\tilde{E}(k_{e},\omega)\right\rangle\) are the spatial Fourier transforms of the ensemble averages of the dielectric displacement field \(\left\langle D(x,\omega)\right\rangle\) and the electric field \(\left\langle E(x,\omega)\right\rangle\) at the complex-valued _effective wavenumber_ \(k_{e}\), respectively; see details in Section 3 of Supplement 1. ## 5 Results We now show how our multiple-scattering approximation [Eq.
(11)] enables us to accurately predict the real and imaginary parts of the effective dielectric constant tensor \(\varepsilon_{e}(k_{1})\) for disordered stealthy hyperuniform layered media for \(\chi=0.1\), \(0.2\), and \(0.3\).

Figure 2: Disordered stealthy hyperuniform layered media of packing fraction \(\phi_{2}=0.2\) and unit number density \(\rho=1\) at three values of \(\chi=0.1,0.2,0.3\). (a) Representative images at (Top) \(\chi=0.1\), (Middle) \(\chi=0.2\), and (Bottom) \(\chi=0.3\). The particle phases are shown in different colors. (b) The spectral densities \(\tilde{\chi}_{V}(k)\) as functions of the dimensionless wavenumber \(k/\rho\).

We begin by computing the nonlocal attenuation function \(F^{(\mathrm{1D})}(k)\) given in Eq. (6) from \(\tilde{\chi}_{V}(k)\); see Fig. 3. Stealthy hyperuniform layered media can achieve \(\mathrm{Im}[F^{(\mathrm{1D})}(k)]=0\) up to a finite wavenumber \(k\), leading to the prediction of a perfect transparency interval; see Eq. (12).

Figure 3: Real (upper) and imaginary (lower) parts of the nonlocal attenuation functions \(F^{(\mathrm{1D})}(k)\) as functions of the dimensionless wavenumber \(k/\rho\) for 1D disordered stealthy hyperuniform layered media of packing fraction \(\phi_{2}=0.2\) at three values of \(\chi=0.1,0.2,0.3\). The functions are computed from the spectral densities in Fig. 2 and Eq. (6).

Using the values of \(F^{(\mathrm{1D})}(k)\) in Fig. 3, we then compute the scaled approximation [Eq. (11)] for \(\varepsilon_{e}^{\perp}(k_{1})\); see Fig. 4. Both the real and imaginary parts of these predictions show excellent agreement with the results from FDTD simulations up to \(k_{1}/\rho\lesssim 1.5\) (see Fig. S6 of Supplement 1). In Fig. 4, the real part of our formula increases with \(k_{1}\) (normal dispersion) within the perfect transparency interval and then decreases with \(k_{1}\) (anomalous dispersion) outside of those intervals. Such a spectral dependence of \(\mathrm{Re}[\varepsilon_{e}^{\perp}(k_{1})]\) comes from the fact that the strong-contrast formula satisfies the Kramers-Kronig relations [65] (see Section 6 of Supplement 1), as does the corresponding approximation for 3D statistically isotropic media [31]. Equation (11) also shows qualitatively accurate dielectric responses even beyond the intermediate-wavelength regime (i.e., \(k_{1}/\rho\gtrsim 1.5\)) because the Kramers-Kronig relations closely relate the high-frequency predictions of \(\mathrm{Re}[\varepsilon_{e}^{\perp}]\) to the accurate predictions of \(\mathrm{Im}[\varepsilon_{e}^{\perp}]\) in a finite spectral range, and vice versa [69].

Figure 4: Predictions of the scaled strong-contrast approximation [Eq. (11)] of the effective dynamic dielectric constant \(\varepsilon_{e}^{\perp}(k_{1})\) as a function of the dimensionless incident wavenumber \(k_{1}/\rho\) for disordered stealthy hyperuniform layered media of \(\phi_{2}=0.2\) and \(\varepsilon_{2}/\varepsilon_{1}=4\) at \(\chi=0.1,0.2,0.3\). The lower panel is a semilog plot of the imaginary part \(\mathrm{Im}[\varepsilon_{e}^{\perp}(k_{1})]\). For the effective dielectric constants, our theory is accurate up to \(k_{1}/\rho\lesssim 1.5\); see Fig. S6 of Supplement 1.

As shown in Fig. 4, these composites are perfectly transparent (i.e., \(\mathrm{Im}[\varepsilon_{e}^{\perp}]=0\)) for a wide range of frequencies, as predicted by Eq. (12). At the edges of these transparency intervals, a discontinuous change occurs in \(\mathrm{Im}[\varepsilon_{e}^{\perp}]\) with \(k_{1}\) because the imaginary part is directly proportional to \(\tilde{\chi}_{V}(2k_{1})\); see \(\mathrm{Im}[F^{(\mathrm{1D})}(k)]\) given in Eq. (6). In higher dimensions, however, such abrupt transitions become increasingly more difficult to achieve, as observed in Ref. [31]; see Section 2 of Supplement 1 for details. Figure 5 depicts how the predicted perfect transparency interval from Eq. (12) varies with the phase-contrast ratio \(\varepsilon_{2}/\varepsilon_{1}\) for given values of \(\chi\), \(\phi_{2}\), and \(\rho\). We numerically demonstrate that these predicted intervals are valid for \(1<\varepsilon_{2}/\varepsilon_{1}<10\); see Fig. S3 of Supplement 1.
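To make the chain from the spectral density to the effective dielectric constant concrete, here is a minimal numerical sketch of Eqs. (6) and (11) under stated assumptions: the model spectral density below is a caricature of a stealthy form (zero below \(K\), a smooth decaying tail beyond), not one measured from the generated packings, and the quadrature parameters are illustrative.

```python
import numpy as np

eps1, eps2, phi2 = 1.0, 4.0, 0.2            # phase contrast 4, as in this study
phi1, rho, chi = 1.0 - phi2, 1.0, 0.2
K = 2.0 * np.pi * rho * chi                 # stealthy cutoff (Section 3)
eps_mean = phi1 * eps1 + phi2 * eps2        # <eps>, Eq. (10)

def chiV(k):
    # Caricature of a stealthy spectral density: zero for k < K, decaying tail beyond.
    k = np.abs(k)
    return np.where(k < K, 0.0, 0.05 * np.exp(-k / 20.0))

def F1D(k, Q=400.0, n=200_000):
    """Nonlocal attenuation function, Eq. (6); the principal value is handled
    by subtracting the singular part and integrating it analytically."""
    q = (np.arange(n) + 0.5) * (Q / n)
    pv = ((chiV(q) - chiV(2 * k)) / (q**2 - 4 * k**2)).sum() * (Q / n)
    pv += chiV(2 * k) / (4 * k) * np.log((Q - 2 * k) / (Q + 2 * k))
    return k**2 / np.pi * pv + 1j * k / 4.0 * (chiV(0.0) + chiV(2 * k))

def eps_perp(k1):
    """Scaled strong-contrast approximation, Eq. (11), with reference phase q = 1."""
    beta = 1.0 - eps1 / eps2                # beta_pq = 1 - eps_q/eps_p
    F = F1D(k1 * np.sqrt(eps_mean / eps1))
    return eps1 * (1.0 + phi2**2 * (eps2 / eps1) * beta
                   / (phi2 - eps2 * beta * F / eps_mean))

K_T = rho * np.pi * chi / np.sqrt(eps_mean) # transparency cutoff, Eq. (12)
for k1 in (0.2, 0.6, 1.0):
    print(f"k1 = {k1}: eps_perp = {eps_perp(k1):.4f} (K_T = {K_T:.3f})")
```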
From our scaled approximation [Eq. (11)], we also predict the _normal_ transmittance \(T\) through a layered medium at \(k_{1}\) by assuming that the system is a homogeneous slab of thickness \(L\) with effective dielectric constant \(\varepsilon_{e}^{\perp}(k_{1})\) and is optically thin, so that waves inside it can interfere coherently. To estimate \(T\), we use an Airy formula [46] for the transmittance of a lossy homogeneous slab with absorption: \[T=\left|\frac{-\sqrt{\varepsilon_{e}^{\perp}}\,t^{2}\exp\left(i\sqrt{\varepsilon_{e}^{\perp}}k_{1}L\right)}{1-r^{2}\exp\left(2i\sqrt{\varepsilon_{e}^{\perp}}k_{1}L\right)}\right|^{2}, \tag{15}\] where \(r\equiv(1-\sqrt{\varepsilon_{e}^{\perp}})/(1+\sqrt{\varepsilon_{e}^{\perp}})\), \(t\equiv 2/(1+\sqrt{\varepsilon_{e}^{\perp}})\), and \(\varepsilon_{e}^{\perp}\) is given by the approximation [Eq. (11)]; see Section 4 of Supplement 1 for results from Eq. (8). The electric field inside the dielectric layered media attenuates solely due to multiple scattering, but its effective behavior is identical to that of an exponentially damped wave caused by absorption in a lossy homogeneous medium (see Fig. S2 of Supplement 1). Hence, we expect Eq. (15) to provide a good approximation of \(T\). We now compare our theoretical predictions for transmission to the corresponding results obtained from FDTD simulations; see Fig. 6. Remarkably, our theory very accurately predicts the perfect transparency intervals [Eq. (12)], i.e., no Anderson localization (green regions in Fig. 6), because it correctly incorporates multiple scattering at finite wavelengths via the spectral density. This observation is noteworthy because extended states in 1D disordered systems are much more difficult to achieve than in higher dimensions [38; 39; 41; 17]. Within these transparency intervals (\(k_{1}<K_{T}\)), our theory accurately predicts the small-amplitude periodic oscillations in \(T\) around unity, which come from coherent interference of the multiply reflected waves due to the finite system thickness \(L\) and thus reduce to a constant close to unity when \(L\) is much larger than the coherence length of the light [46; 70].

Figure 5: Prediction [Eq. (12)] of the upper bound \(K_{T}\) of the perfect transparency intervals of 1D stealthy hyperuniform layered media with \(\phi_{2}=0.2\) and \(\rho=1\) as a function of the phase-contrast ratio \(\varepsilon_{2}/\varepsilon_{1}\). We consider three \(\chi\) values: \(\chi=0.1,0.2\), and \(0.3\).

Figure 6: Transmittance spectra \(T\) as a function of the dimensionless wavenumber \(k_{1}/\rho\) for disordered stealthy hyperuniform layered media of packing fraction \(\phi_{2}\,(\equiv 2\rho a)=0.2\) and phase-contrast ratio \(\varepsilon_{2}/\varepsilon_{1}=4\) at three values of (a) \(\chi=0.1\), (b) \(\chi=0.2\), and (c) \(\chi=0.3\). The predictions are computed from Eq. (15) using the scaled approximation [Eq. (11)]. The green-shaded area indicates the predicted transparency intervals [Eq. (12)]. Within these intervals, our predictions show excellent agreement with simulations, since there is no absorption.

Outside of the perfect transparency intervals, where scattering attenuation is strong, \(T\) from the Airy formula [Eq. (15)] provides a lower bound on the simulation results for the reason explained in Section 4 of Supplement 1.
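As a small worked example of the Airy formula [Eq. (15)], the sketch below evaluates \(T\) for an assumed complex \(\varepsilon_{e}^{\perp}\); the input values are placeholders, not values computed in this study.

```python
import numpy as np

def airy_transmittance(eps_perp, k1, L):
    """Airy formula, Eq. (15): transmittance of a homogeneous slab of thickness L
    with (possibly complex) effective dielectric constant eps_perp at wavenumber k1."""
    n = np.sqrt(eps_perp)                  # complex effective index
    r = (1 - n) / (1 + n)                  # reflection coefficient at normal incidence
    t = 2 / (1 + n)                        # transmission coefficient at normal incidence
    phase = np.exp(1j * n * k1 * L)
    return abs(-n * t**2 * phase / (1 - r**2 * phase**2)) ** 2

# Placeholder inputs: a transparent value (real eps) and a lossy one (Im > 0);
# the latter yields an exponentially suppressed transmittance for large L.
for eps in (1.6 + 0.0j, 1.6 + 0.05j):
    print(eps, airy_transmittance(eps, k1=1.0, L=100.0))
```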
However, we confirm that our theory still accurately predicts, qualitatively, the spectral dependence of \(T\) because Eq. (11) yields a physically feasible dielectric response due to the Kramers-Kronig relations. Within these strong-attenuation intervals, \(T\) is increasingly suppressed as \(L\) increases and becomes virtually zero for sufficiently large \(L\) since \(\mathrm{Im}[\varepsilon_{e}^{\perp}(k_{1})]>0\) if \(k_{1}>K_{T}\). Stealthy hyperuniformity is required for disordered layered media to possess perfect transparency for a finite range of wavenumbers. Indeed, we show that such a perfect transparency interval cannot exist for 'non-stealthy' hyperuniform 1D media, even though these correlated disordered systems anomalously suppress large-scale volume-fraction fluctuations; see the example in Section 7 of Supplement 1.

## 6 Conclusions and Discussion

In summary, we have theoretically and numerically investigated the effective wave properties, including the effective dynamic dielectric constant tensor \(\varepsilon_{e}(k_{q})=\varepsilon_{e}^{\perp}(k_{q})(\mathbf{I}-\hat{\mathbf{z}}\hat{\mathbf{z}})+\varepsilon_{e}^{z}(k_{q})\hat{\mathbf{z}}\hat{\mathbf{z}}\) and transmittance \(T\), of 3D statistically anisotropic two-phase layered media made of 1D disordered stealthy hyperuniform packings. To predict \(\varepsilon_{e}(k_{q})\) of such exotic disordered models, we derived for the first time a multiple-scattering approximation [Eq. (11)] for statistically anisotropic media from the strong-contrast-expansion formalism. Predictions of both the effective dielectric constant \(\varepsilon_{e}^{\perp}(k_{q})\) and transmittance \(T\) are in excellent agreement with corresponding results obtained from the FDTD simulations up to \(k_{1}/\rho\lesssim 1.5\). Remarkably, our predictions of \(T\) are virtually identical to the simulation results within the perfect transparency ranges. Our multiple-scattering approximation is the first closed-form formula that provides a simple but accurate relation between the effective wave properties of 3D layered media and their spectral density that applies well beyond the quasistatic regime. Beyond the valid range \((k_{1}/\rho\gtrsim 1.5)\), Eq. (11) can still provide qualitatively accurate and physically realistic predictions of dielectric responses, since it satisfies the Kramers-Kronig relations. We applied this newly derived formula [Eq. (11)] to disordered stealthy hyperuniform layered media at \(\chi=0.1,0.2\), and \(0.3\).

Figure 5: Prediction [Eq. (12)] of the upper bound \(K_{T}\) of the perfect transparency intervals of 1D stealthy hyperuniform layered media with \(\phi_{2}=0.2\) and \(\rho=1\) as a function of the phase contrast ratio \(\varepsilon_{2}/\varepsilon_{1}\). We consider three \(\chi\) values: \(\chi=0.1,0.2\), and \(0.3\).

Figure 6: Transmittance spectra \(T\) as a function of the dimensionless wavenumber \(k_{1}/\rho\) for disordered stealthy hyperuniform layered media of packing fraction \(\phi_{2}\;(\equiv 2\rho a)=0.2\) and phase-contrast ratio \(\varepsilon_{2}/\varepsilon_{1}=4\) at three values of (a) \(\chi=0.1\), (b) \(\chi=0.2\), and (c) \(\chi=0.3\). The predictions are computed from Eq. (15) using the scaled approximation [Eq. (11)]. The green-shaded area indicates the predicted transparency intervals [Eq. (12)]. Within these intervals, our predictions show excellent agreement with simulations, since there is no absorption.
It is noteworthy that our multiple-scattering formula [Eq. (11)] predicts that these disordered systems are perfectly transparent in the infinite-volume limit for a finite ratio \(\varepsilon_{2}/\varepsilon_{1}\), implying no Anderson localization up to a finite wavenumber proportional to \(\chi\), as given by Eq. (12). This observation is remarkable in that such extended states in 1D disordered systems are more difficult to achieve than in higher dimensions [38, 39, 40, 41, 17]. If the localization length is actually finite, we expect that it will be extremely large for sufficiently small \(\varepsilon_{2}/\varepsilon_{1}\) compared to any practically large sample size in the transparency interval in such stratified media, as will be reported elsewhere [71]. In contrast, for disordered non-stealthy layered media, hyperuniform or not, our theory shows that there is no spectral range of perfect transparency, implying that localization emerges as the system size grows. Our findings also have important practical implications. For example, we clearly demonstrate that disordered stealthy hyperuniform layered media can be employed as low-pass filters that transmit waves up to a selected wavenumber. Furthermore, combining our theory with the capabilities to generate media with a prescribed spectral density [24, 25, 26, 13] enables an inverse-design approach [56] to engineer and fabricate layered dielectric materials with novel wave properties. One possible design is a layered medium satisfying \(\tilde{\chi}_{V}(k)=0\) for \(k<\epsilon\) and \(2k_{L}<k<2k_{U}\), which transmits waves within a narrow spectrum \(k_{L}<k_{1}<k_{U}\). Notably, such computationally designed layered media as well as other 1D disordered models can be readily fabricated via vacuum deposition [72], spin-coating [73], and 3D printing techniques [74]. Thus, our results offer promising prospects for engineering novel optoelectronic devices. We note that our formalism can be extended to cases of obliquely incident wavevectors \(\mathbf{k}_{q}\), in which both effective dielectric constants \(\varepsilon_{e}^{\perp}(\mathbf{k}_{q})\) and \(\varepsilon_{e}^{z}(\mathbf{k}_{q})\) become important. Moreover, it also will be interesting to extend our theory to lossy dielectric or metallic phases whose dielectric constants are frequency-dependent and complex-valued.

Funding. Army Research Office (W911NF-22-2-0103).

Acknowledgments. The authors thank Michael Klatt for helpful discussions. Simulations were performed on computational resources managed and supported by the Princeton Institute for Computational Science and Engineering (PICSciE).

Disclosures. The authors declare no conflicts of interest.

Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document. See Supplement 1 for supporting content.
2301.01373
Covariate-guided Bayesian mixture model for multivariate time series
With rapid development of techniques to measure brain activity and structure, statistical methods for analyzing modern brain-imaging play an important role in the advancement of science. Imaging data that measure brain function are usually multivariate time series and are heterogeneous across both imaging sources and subjects, which lead to various statistical and computational challenges. In this paper, we propose a group-based method to cluster a collection of multivariate time series via a Bayesian mixture of smoothing splines. Our method assumes each multivariate time series is a mixture of multiple components with different mixing weights. Time-independent covariates are assumed to be associated with the mixture components and are incorporated via logistic weights of a mixture-of-experts model. We formulate this approach under a fully Bayesian framework using Gibbs sampling where the number of components is selected based on a deviance information criterion. The proposed method is compared to existing methods via simulation studies and is applied to a study on functional near-infrared spectroscopy (fNIRS), which aims to understand infant emotional reactivity and recovery from stress. The results reveal distinct patterns of brain activity, as well as associations between these patterns and selected covariates.
Haoyi Fu, Lu Tang, Ori Rosen, Alison E. Hipwell, Theodore J. Huppert, Robert T. Krafty
2023-01-03T22:04:39Z
http://arxiv.org/abs/2301.01373v1
# Covariate-Guided Bayesian Mixture of Spline Experts for the Analysis of Multivariate Time Series

###### Abstract

With rapid development of techniques to measure brain activity and structure, statistical methods for analyzing modern brain-imaging play an important role in the advancement of science. Imaging data that measure brain function are usually multivariate time series and are heterogeneous across both imaging sources and subjects, which lead to various statistical and computational challenges. In this paper, we propose a group-based method to cluster a collection of multivariate time series via a Bayesian mixture of smoothing splines. Our method assumes each multivariate time series is a mixture of multiple components with different mixing weights. Time-independent covariates are assumed to be associated with the mixture components and are incorporated via logistic weights of a mixture-of-experts model. We formulate this approach under a fully Bayesian framework using Gibbs sampling where the number of components is selected based on a deviance information criterion. The proposed method is compared to existing methods via simulation studies and is applied to a study on functional near-infrared spectroscopy (fNIRS), which aims to understand infant emotional reactivity and recovery from stress. The results reveal distinct patterns of brain activity, as well as associations between these patterns and selected covariates.

_Keywords:_ Bayesian mixture model; Brain-imaging; Functional near-infrared spectroscopy; Model-based clustering; Multivariate time series; Smoothing splines; Face-to-face still-face

## 1 Introduction

Time series are realizations of random processes. Obtaining estimated time series trajectories may provide insights into many practical problems. Functional near-infrared spectroscopy (fNIRS) is a noninvasive brain imaging technique that measures changes in both oxy- and deoxy-hemoglobin using near-infrared light (Jobsis, 1977). In fNIRS, processed data are nonstationary multivariate time series with a non-constant mean and high variability across time, which pose many statistical challenges in inference and estimation. In the case of fNIRS, different subjects could have distinct patterns of multivariate time series trajectories, which could be associated with certain clinical or demographic characteristics. The analysis of fNIRS data requires an appropriate method for the analysis of a collection of multivariate time series observed from different subjects, which is often referred to as a replicated multivariate time series setting.

Cluster analysis is often used to address the issue of heterogeneity and identify subgroups from collections of time series observed from different subjects. Time series clustering has been used in diverse scientific areas to discover trajectory patterns, which can uncover valuable information from complex and massive datasets (Liao, 2005). Time series clustering partitions the entire collection of data into different groups such that homogeneous time series are grouped together based on a certain similarity measure. Challenges in time-series clustering include computational issues due to high-dimensionality and the selection of proper similarity measures (Lin _and others_, 2003; Keogh and Pazzani, 2000). Several authors have proposed clustering algorithms for multivariate time series. Kakizawa _and others_ (1998) used Kullback-Leibler discrimination information as the minimum discrimination criterion for clustering multivariate Gaussian time series.
Wang _and others_ (2007) used a modified \(K\)-means clustering algorithm for clustering multivariate time series based on univariate structures. A variety of papers have established different model-based clustering methods for clustering multivariate time series, such as multivariate autoregressive models (Maharaj, 1999; He _and others_, 2022), a hidden Markov model (Li _and others_, 2001) and smoothing splines (Krafty _and others_, 2017; Li and Krafty, 2019). Comprehensive reviews of methods for time series clustering can be found in Liao (2005) and in Maharaj _and others_ (2019).

Covariate-dependent structures can often be associated with the mixture components from a clustering of time series. Bertolacci _and others_ (2022) presented an analysis of multiple nonstationary time series by using a covariate-dependent infinite mixture with logistic stick-breaking weights, where mixing weights are computed based on covariates. The mixture-of-experts model (Jacobs _and others_, 1991) assigns weights to each expert via covariate-dependent multinomial logits. Huerta _and others_ (2003) addressed the issue of time series model mixing based on covariates using the hierarchical mixture-of-experts model (Jordan and Jacobs, 1994).

Smoothing splines, which are nonparametric methods that utilize roughness-based penalties, have been widely used in the analysis of time series (Wang, 2011; Gu, 2013). Bayesian interpretations of smoothing splines were first discussed by Kimeldorf and Wahba (1970). Wahba (1978) showed that the solution to the smoothing splines objective function is equivalent to Bayesian estimation with a partially diffuse prior. Speckman and Sun (2003) adopted a fully Bayesian approach for implementing smoothing splines with a noninformative prior on the variance component, as well as derived necessary and sufficient conditions for the propriety of the posterior. Smoothing splines require estimation of a large number of coefficients, which might be impractical in high-dimensional settings. Gu and Kim (2002) used a subset of reproducing kernel functions to achieve a low-dimensional approximation. Wood _and others_ (2002) obtained a subset of basis functions using the eigen-decomposition of the Gaussian kernel. Krafty _and others_ (2017) proposed a tensor-product model for the analysis of replicated multivariate time series which decomposes the power spectrum into products of univariate outcomes and frequencies.

Our goal in this paper is to perform a covariate-guided clustering of multivariate time series that can capture trajectory patterns of mixture components and evaluate the relationship between covariates and trajectory patterns. To this end, each mixture component is modeled via smoothing splines, and time-independent covariates are incorporated into the mixture model via the mixing weights. The method is formulated in a fully Bayesian framework. The rest of this paper is organized as follows. In Section 2 we introduce the motivating study. Sections 3 and 4 present the proposed model and priors. Section 5 introduces the sampling scheme. In Section 6 we report simulation results under different settings and Section 7 illustrates our proposed method with application to the motivating study. Section 8 concludes the paper with a discussion.

## 2 Motivating Study

Our motivating study aims to understand patterns of infants' brain activity before, during and after an emotionally stressful probe called face-to-face still-face (FFSF) (Tronick _and others_, 1978).
Participant mothers in this study were recruited from the longitudinal Pittsburgh Girls Study (PGS), a population-based study of 2,450 girls who were recruited in the city of Pittsburgh between the ages of 5 and 8 (Keenan _and others_, 2010). In 2016, a large-scale sub-study of the PGS was initiated to investigate how environmental factors, such as psychological stressors experienced during childhood and adolescence, affect later maternal pregnancy and child health. The study is part of the National Institutes of Health Environmental influences on Child Health Outcomes (ECHO) program, which examines different impacts of prenatal environmental exposures across biological, chemical, physical and social domains on offspring health and development (Gillman and Blaisdell, 2018). The PGS-ECHO study enrolls PGS participants as they become pregnant or recently deliver a live birth. Participants complete multiple prenatal lab visits and the children are followed from ages 6 to 36 months. The lab protocol includes interviews and interaction tasks to assess contextual stressors, health, mood, lifestyle behaviors and offspring behavioral and emotional development.

Face-to-face interactions between mothers and infants are essential to the development of infants with respect to communication and social skills, as well as the regulation of emotion and temperament (Hipwell _and others_, 2019). The FFSF paradigm is a widely used stress task (a violation of the expectation of social interaction) that allows for biobehavioral measurement of individual differences in infant response and recovery. The FFSF comprises three phases: interact (or baseline), still-face and recovery (Adamson and Frick, 2003). In phase 1, mothers perform normal interactions with infants without the use of toys; this phase serves as the baseline. In phase 2, mothers adopt a neutral facial expression (still-face with no facial or oral communication) toward their infants, followed by phase 3, where mothers resume normal interactions with their infants. Prior to the start of the FFSF, an fNIRS cap is fitted on the infant's head to measure the level of and change in brain activation across the three phases. PGS-ECHO fNIRS still-face data are recorded using a continuous NIRS imaging system (NIRScout; NIRx Medical Technologies, Berlin, Germany) at the sampling rate of 7.8125 Hz and using the NIRStart acquisition software. The data are measured simultaneously at two wavelengths (760 nm and 850 nm). As shown in Figure 1(a), this fNIRS probe consists of 12 channels from 8 sources and 4 detectors. In the current study, we measured infant brain activity using the above fNIRS probe (roughly \(120\) seconds of measurements for each phase). At the end of 2021, recorded fNIRS still-face data had been collected from 155 infant subjects. Demographic variables of infants and mothers such as gestational age, infant age, sex, birth weight, head circumference, along with parent reports on the Infant Behavior Questionnaire-Revised (IBQ-R) (Gartstein and Rothbart, 2003) were also collected. By removing infants who did not complete the three phases of the still-face paradigm, who had large outliers based on leverage and who had a very short period of measurements in any of the three still-face phases, there were a total of 82 subjects with complete fNIRS still-face data available for future analysis. The above quality control steps were performed by the NIRS brain AnalyzIR toolbox in MATLAB (Santosa _and others_, 2018).
Moreover, additional data pre-processing steps were performed in R software, including data interpolation and rescaling. Finally, processed fNIRS data had a total of 1,500 measurement points for each subject and each channel, where each phase consisted of 500 points. All measurements and sampling times were rescaled to be between 0 and 1, with the interact phase occurring between time 0 to 1/3, still-face between 1/3 to 2/3, and recovery between 2/3 to 1. An example of processed fNIRS time series from two selected subjects and four selected channels is displayed in Figure 2. The goals of our analysis are to identify distinct patterns of brain activity trajectories from multiple fNIRS channels represented by the relative concentration of oxy-hemoglobin, and to assess the association between trajectory patterns and relevant covariates.

## 3 Model

In this section, we provide a detailed description of our proposed covariate-guided Bayesian mixture of spline experts model. The proposed model consists of spline components whose mixing weights depend on covariates.

### 3.1 Mixture of splines model

We propose a tensor-product mixture of splines model for multivariate time series. For each subject \(i=1,\ldots,N\), let \(\mathbf{y}_{i}=(\mathbf{y}^{\prime}_{i1},\ldots,\mathbf{y}^{\prime}_{ik},\ldots,\mathbf{y}^{\prime}_{iK})^{\prime}\) be the \(nK\)-vector corresponding to the \(K\)-dimensional time series for \(k=1,\ldots,K\), where \(\mathbf{y}_{ik}=\left[y_{ik}(t_{1}),\ldots,y_{ik}(t_{j}),\ldots,y_{ik}(t_{n})\right]^{\prime}\) contains the trajectory of measurements on the \(k\)th entry of the time series evaluated over a grid of \(n\) time points for \(j=1,\ldots,n\), and \(\mathbf{\epsilon}_{i}=(\mathbf{\epsilon}^{\prime}_{i1},\ldots,\mathbf{\epsilon}^{\prime}_{iK})^{\prime}\) is the \(nK\)-vector of errors. Following the model representation of Krafty _and others_ (2017), the tensor-product model for the \(K\)-dimensional multivariate time series, conditional on component \(g\), \(g=1,\ldots,G\), can be written as: \[\{\mathbf{y}_{i}\mid z_{ig}=1\}=(\mathbf{I}_{K}\otimes\mathbf{X})\mathbf{\alpha}_{g}+(\mathbf{I}_{K}\otimes\mathbf{W})\mathbf{\beta}_{g}+\mathbf{\epsilon}_{i}, \tag{1}\] where \(\{z_{ig}\}_{g=1}^{G}\) are latent indicators as described in Section 3.3, \(\mathbf{\alpha}_{g}=(\mathbf{\alpha}^{\prime}_{g1},\ldots,\mathbf{\alpha}^{\prime}_{gK})^{\prime}\) is a \(2K\)-vector of intercepts and slopes, \(\mathbf{\beta}_{g}=(\mathbf{\beta}^{\prime}_{g1},\ldots,\mathbf{\beta}^{\prime}_{gK})^{\prime}\) is an \(mK\)-vector of basis function coefficients as described in Section 4.1, \(\mathbf{I}_{K}\) is a \(K\times K\) identity matrix and \(\otimes\) denotes a tensor product. The matrix \(\mathbf{X}\) is given by \(\mathbf{X}=\begin{pmatrix}1&1&\ldots&1\\ t_{1}&t_{2}&\ldots&t_{n}\end{pmatrix}^{\prime}\) and the \(m\) columns of the matrix \(\mathbf{W}\) are smoothing spline basis functions as described in Section 4.1. We assume the error vector \(\mathbf{\epsilon}_{i}\) follows an MVN\((\mathbf{0},\mathbf{\Psi}_{g}\otimes\mathbf{U})\) distribution, where \(\mathbf{U}=\mathbf{I}_{n}\) is the \(n\times n\) identity matrix, and \(\mathbf{\Psi}_{g}=\text{diag}(\mathbf{\sigma}^{2}_{g})\) is a \(K\times K\) diagonal matrix with the error variances \(\mathbf{\sigma}^{2}_{g}=(\sigma^{2}_{g1},\ldots,\sigma^{2}_{gK})^{\prime}\).
We assume each subject has a common grid of time points across all \(K\) entries, such that \(\mathbf{X}\) and \(\mathbf{W}\) are common to all subjects, although our proposed method can be generalized to the case where subjects are observed at different grids of time points. In addition, we assume \(\mathrm{Cov}(\mathbf{y}_{ik},\mathbf{y}_{ih})=\mathbf{0}_{n\times n}\) for \(k\neq h\). To simplify notation, we let \(\mathbf{S}=[\mathbf{X}\ \mathbf{W}]\) and \(\mathbf{\theta}_{g}=(\mathbf{\alpha}^{\prime}_{g1},\mathbf{\beta}^{\prime}_{g1},\dots,\mathbf{\alpha}^{\prime}_{gK},\mathbf{\beta}^{\prime}_{gK})^{\prime}\). Equation (1) can then be rewritten as: \[\{\mathbf{y}_{i}\mid z_{ig}=1\}=(\mathbf{I}_{K}\otimes\mathbf{S})\mathbf{\theta}_{g}+\mathbf{\epsilon}_{i}. \tag{2}\]

### 3.2 Model for the mixing weights

The mixture-of-experts model (Jacobs _and others_, 1991) is applied to form a covariate-guided structure for our proposed model, where the mixing weights are multinomial logits that are functions of selected covariates. As in Sun _and others_ (2007), the mixing weights are expressed as \[\pi_{ig}(\mathbf{V}_{i})=\frac{\exp(\mathbf{V}^{\prime}_{i}\mathbf{\delta}_{g}+\zeta_{ig})}{\sum_{h=1}^{G}\exp(\mathbf{V}^{\prime}_{i}\mathbf{\delta}_{h}+\zeta_{ih})}, \tag{3}\] where \(\mathbf{V}_{i}=(1,V_{i1},\cdots,V_{iP})^{\prime}\) is a vector of length \((P+1)\) containing values of \(P\) covariates for subject \(i\), and \(\mathbf{\delta}_{g}=(\delta_{g0},\delta_{g1},\cdots,\delta_{gP})^{\prime}\) is the corresponding coefficient vector. For identifiability, we set \(\mathbf{\delta}_{G}=\mathbf{0}\). Equation (3) differs slightly from the weights in the traditional mixture-of-experts model in that it includes a random term \(\zeta_{ig}\) for each subject. This term accounts for unmeasured factors beyond the observed covariates, and enhances model performance and inference of the mixing weights.

### 3.3 Augmented likelihood

To account for heterogeneity across subjects, we assume that the \(k\)th entry of the multivariate time series, \(\mathbf{y}_{ik}\), comes from a mixture model with \(G\) components, i.e., \[\mathbf{y}_{ik}\sim\sum_{g=1}^{G}\pi_{ig}f_{gk}(\mathbf{y}_{ik}\mid\mathbf{\mu}_{gk},\sigma_{gk}^{2}\mathbf{I}_{n}), \tag{4}\] where \(f_{gk}(\mathbf{y}_{ik}\mid\mathbf{\mu}_{gk},\sigma_{gk}^{2}\mathbf{I}_{n})\) is the probability density function of the multivariate normal distribution with mean vector \(\mathbf{\mu}_{gk}=\mathbf{X}\mathbf{\alpha}_{gk}+\mathbf{W}\mathbf{\beta}_{gk}\) and covariance matrix \(\sigma_{gk}^{2}\mathbf{I}_{n}\) for the \(g\)th component and the \(k\)th entry. The \(\pi_{ig}\) are mixing weights that depend on covariates as described in Section 3.2. As is common in mixture models, augmenting the likelihood with latent variables indicating the component from which a time series originates simplifies the computation greatly (Dempster _and others_, 1977). In particular, let \(z_{ig}=1\) if the \(i\)th multivariate time series belongs to the \(g\)th component and \(z_{ig}=0\), otherwise. Let \(\mathbf{y}=(\mathbf{y}_{1},\dots,\mathbf{y}_{N})^{\prime}\) be all observed multivariate time series and \(\mathbf{\Theta}_{gk}\) be the aggregation of all parameters for component \(g\) and entry \(k\). The parameter vector for all components and all entries is then denoted by \(\mathbf{\Theta}=(\mathbf{\Theta}^{\prime}_{11},\dots,\mathbf{\Theta}^{\prime}_{GK})^{\prime}\).
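Before writing down the augmented likelihood, a small numerical illustration of the covariate-guided weights in Equation (3) may be helpful. The Python sketch below computes \(\pi_{ig}\) for all subjects and components under the identifiability constraint \(\mathbf{\delta}_{G}=\mathbf{0}\); the dimensions, coefficient values and random-intercept scale are placeholders of our own choosing, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, G = 5, 2, 3                     # subjects, covariates, components

V = np.column_stack([np.ones(N), rng.normal(size=(N, P))])  # (N, P+1), with intercept
delta = rng.normal(size=(G, P + 1))   # logistic coefficients delta_g
delta[G - 1] = 0.0                    # identifiability: delta_G = 0
zeta = rng.normal(scale=0.5, size=(N, G))                   # random terms zeta_ig

logits = V @ delta.T + zeta           # (N, G): V_i' delta_g + zeta_ig
logits -= logits.max(axis=1, keepdims=True)                 # stabilize the softmax
pi = np.exp(logits)
pi /= pi.sum(axis=1, keepdims=True)   # mixing weights pi_ig; rows sum to one

print(pi.round(3))
```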
The augmented likelihood of all \(N\) multivariate time series is given by \[L(\mathbf{\Theta}\mid\mathbf{y},Z)=\prod_{i=1}^{N}\prod_{g=1}^{G}\Big[\pi_{ig}\prod_{k=1}^{K}f_{gk}(\mathbf{y}_{ik}\mid\mathbf{\Theta}_{gk})\Big]^{z_{ig}}, \tag{5}\] where \(f_{gk}(\mathbf{y}_{ik}\mid\mathbf{\Theta}_{gk})\) is the probability density function appearing in Equation (4). From Bayes' rule, the distribution of the latent indicators \(z_{ig}\) is given by \[p(z_{ig}=1\mid\mathbf{y},\mathbf{S},\mathbf{\Theta},\pi_{ig})=\frac{\pi_{ig}\prod_{k=1}^{K}f_{gk}(\mathbf{y}_{ik}\mid\mathbf{\Theta}_{gk})}{\sum_{h=1}^{G}\pi_{ih}\prod_{k=1}^{K}f_{hk}(\mathbf{y}_{ik}\mid\mathbf{\Theta}_{hk})}. \tag{6}\]

## 4 Priors

In this section, the priors on the model parameters are introduced.

### 4.1 Smoothing splines prior

The conditional expectation of a mixture component in model (4) is given by \(E(\mathbf{y}_{ik}\mid z_{ig}=1)=\mathbf{X}\mathbf{\alpha}_{gk}+\mathbf{W}\mathbf{\beta}_{gk}\). We place a smoothing spline prior on \(\mathbf{\beta}_{gk}\) and let \(\mathbf{\mathcal{H}}_{gk}=\mathbf{W}\mathbf{\beta}_{gk}\), where \(\mathbf{\mathcal{H}}_{gk}=\big[\mathcal{H}_{gk}(t_{1}),\dots,\mathcal{H}_{gk}(t_{n})\big]^{\prime}\) is a zero-mean Gaussian process with variance-covariance matrix \(\tau_{gk}^{2}\mathbf{\Phi}\) (Wahba, 1980; Wood _and others_, 2002), such that \(\text{cov}\big[\mathcal{H}_{gk}(t_{r}),\mathcal{H}_{gk}(t_{h})\big]=\tau_{gk}^{2}\phi_{rh}\), \(\tau_{gk}^{2}\) is a smoothing parameter for component \(g\) and entry \(k\), and the \((r,h)\)th element of \(\mathbf{\Phi}\) is given by \(\phi_{rh}=\frac{1}{2}t_{r}^{2}(t_{h}-\frac{t_{r}}{3})\) for \(t_{r}\leq t_{h}\). The matrix \(\mathbf{\Phi}\) is common to all subjects since all entries of the multivariate time series are observed at common time points. As seen above, the matrix \(\mathbf{\Phi}\) is \(n\times n\), and to avoid the computational burden for large \(n\), a low-rank approximation is often adopted. To facilitate this approximation, we obtain basis functions via the spectral decomposition of \(\mathbf{\Phi}\), as has been proposed in Wood _and others_ (2002) and used in Rosen _and others_ (2009, 2012) and Krafty _and others_ (2011). In particular, the matrix \(\mathbf{W}\) consists of \(m\) basis functions evaluated at times \(t_{1},\ldots,t_{n}\), and \(\mathbf{\beta}_{gk}\) is an \(m\)-dimensional vector of basis function coefficients. These basis functions are obtained by applying the spectral decomposition to \(\mathbf{\Phi}\) such that \(\mathbf{\Phi}=\mathbf{Q}\mathbf{\Gamma}\mathbf{Q}^{T}\), where \(\mathbf{Q}\) is the matrix of eigenvectors of \(\mathbf{\Phi}\), and \(\mathbf{\Gamma}\) is a diagonal matrix containing the eigenvalues of \(\mathbf{\Phi}\). We then let the design matrix \(\mathbf{W}=\mathbf{Q}\mathbf{\Gamma}^{1/2}\) and place a normal prior \(N(0,\tau_{gk}^{2}\mathbf{I}_{n})\) on \(\mathbf{\beta}_{gk}\), which leads to \(\mathbf{\mathcal{H}}_{gk}\) or \(\mathbf{W}\mathbf{\beta}_{gk}\sim N(\mathbf{0},\tau_{gk}^{2}\mathbf{\Phi})\) as mentioned above. By using the low-rank approximation, the number of columns of \(\mathbf{W}\) is reduced from \(n\) to \(m\) (\(m<n\)), which greatly reduces the computational burden without sacrificing the model fit (Wahba, 1980; Wood, 2006). Eubank (1999) indicated that the eigenvalues in the diagonal matrix \(\mathbf{\Gamma}\) decay rapidly as \(m\) increases. Thus, we can achieve a good approximation by selecting a relatively small number \(m\) of basis functions.
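This low-rank basis construction is easy to reproduce numerically. Below is a minimal Python sketch of forming \(\mathbf{\Phi}\), taking its spectral decomposition, and building the truncated design matrices \(\mathbf{W}=\mathbf{Q}\mathbf{\Gamma}^{1/2}\) and \(\mathbf{S}=[\mathbf{X}\ \mathbf{W}]\); the grid size and \(m\) are illustrative choices, not settings from the paper.

```python
import numpy as np

n, m = 50, 10
t = np.linspace(1.0 / n, 1.0, n)      # common grid of time points in (0, 1]

# Smoothing-spline kernel: phi_rh = 0.5 * t_r^2 * (t_h - t_r / 3) for t_r <= t_h
tr, th = np.minimum.outer(t, t), np.maximum.outer(t, t)
Phi = 0.5 * tr**2 * (th - tr / 3.0)   # symmetric n x n matrix

# Spectral decomposition Phi = Q Gamma Q'; keep the m leading eigenpairs
eigvals, Q = np.linalg.eigh(Phi)
order = np.argsort(eigvals)[::-1]
eigvals, Q = eigvals[order][:m], Q[:, order][:, :m]

W = Q * np.sqrt(eigvals)              # truncated design matrix, n x m
X = np.column_stack([np.ones(n), t])  # intercept-and-slope (null-space) part
S = np.hstack([X, W])                 # S = [X  W], the per-entry design

print(W.shape, S.shape)               # (50, 10) (50, 12)
```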
The number of basis functions \(m\) is set to \(10\) in simulation studies as described in Section 6, which has been shown (Krafty _and others_, 2011) to explain more than \(98\%\) of the total variability. The prior on \(\mathbf{\theta}_{g}\) is thus \(\mathbf{\theta}_{g}\sim N(\mathbf{0},\mathbf{D}_{g})\), where \(\mathbf{D}_{g}=\text{diag}(\sigma_{\alpha 1}^{2}\mathbf{1}_{2},\ \tau_{g1}^{2}\mathbf{1}_{m},\ \ldots,\sigma_{\alpha K}^{2}\mathbf{1}_{2},\ \tau_{gK}^{2}\mathbf{1}_{m})\) is the covariance matrix of \(\mathbf{\theta}_{g}\). The vector \((\sigma_{\alpha 1}^{2},\ldots,\sigma_{\alpha K}^{2})^{\prime}\) contains fixed prior variances for the regression coefficients \(\mathbf{\alpha}_{gk}\), common to all components and entries. In particular, we fix the common prior variance \(\sigma_{\alpha}^{2}=100\). The vector \(\mathbf{\tau}_{g}^{2}=(\tau_{g1}^{2},\ldots,\tau_{gK}^{2})^{\prime}\) contains the smoothing parameters for the \(g\)th mixture component and \(\mathbf{1}_{m}\) is an \(m\)-vector of ones. We assume independence between the regression coefficients \(\mathbf{\alpha}_{gk}\) and the basis function coefficients \(\mathbf{\beta}_{gk}\).

### 4.2 Priors on the smoothing parameters

We assume the smoothing parameters \(\mathbf{\tau}_{g}^{2}=(\tau_{g1}^{2},\ldots,\tau_{gK}^{2})^{\prime}\) vary across components \(g\) and entries \(k\). Although the most common choice for the prior on a variance parameter is the inverse gamma distribution, Gelman (2006) and Wand _and others_ (2011) suggested that a half-\(t\) prior on the standard deviation can reflect lack of information on a scale parameter. The half-\(t\) is a family of heavy-tailed distributions and has a good shrinkage performance. It can be expressed as a scale mixture of inverse gamma random variables using a latent variable which follows an inverse gamma distribution (Wand _and others_, 2011). Thus, we assume a half-\(t\) distribution such that \(\tau_{gk}\sim t_{\nu_{\tau}}^{+}(0,A_{\tau})\), where \(\nu_{\tau}\) is a degrees of freedom parameter, and \(A_{\tau}\) is a scale parameter. We set \(\nu_{\tau}=3\) and \(A_{\tau}=10\) for all components and entries.

### 4.3 Priors on the error variances

We assume \(\sigma_{gk}\stackrel{{\text{i.i.d.}}}{{\sim}}t_{\nu_{\sigma}}^{+}(0,A_{\sigma})\) and set \(\nu_{\sigma}=3\) and \(A_{\sigma}=10\) for all components and entries.

### 4.4 Priors on the logistic parameters and the variances of random intercepts

This section provides details on the prior distributions placed on the parameters of the logistic weights in Equation (3). For ease of notation, we denote \(\mathbf{\delta}_{g}^{*}=(\mathbf{\delta}_{g}^{T},\mathbf{\zeta}_{g}^{T})^{T}\), where \(\mathbf{\zeta}_{g}=(\zeta_{1g},\cdots,\zeta_{Ng})^{T}\), \(g=1,\ldots,G\). We let \(\mathbf{V}_{i}^{*}=(\mathbf{V}_{i}^{\prime},\mathbf{e}_{i}^{\prime})^{\prime}\) where \(\mathbf{e}_{i}\) is a vector of all zeros except for a single \(1\) in the \(i\)th position, and \(\mathbf{V}^{*}\) is a matrix consisting of the rows \(\mathbf{V}_{i}^{*T}\), \(i=1,\ldots,N\). Gaussian priors are placed on the logistic parameters, i.e., \(\mathbf{\delta}_{g}^{*}\sim N(\mathbf{0},\mathbf{B}_{g})\), where \(\mathbf{B}_{g}=\text{diag}(\sigma_{\delta g}^{2}\mathbf{1}_{P+1},\ \kappa_{\zeta g}^{2}\mathbf{1}_{N})\), and the priors on the random intercepts satisfy \(\mathbf{\zeta}_{g}\sim N(\mathbf{0},\kappa_{\zeta g}^{2}\mathbf{I}_{N})\).
As for the hyperparameters, we assume \(\sigma_{\delta g}^{2}=10\) for all components and covariates, and \(\kappa_{\zeta g}\sim t_{\nu_{\kappa}}^{+}(0,A_{\kappa})\), where \(\nu_{\kappa}=3\) and \(A_{\kappa}=10\) for all components. To sample the logistic parameters, Polson _and others_ (2013) proposed a data augmentation scheme incorporating Polya-Gamma latent variables, which facilitates Gibbs steps. Details on sampling the logistic parameters are provided in the Supplementary Material.

## 5 Sampling scheme

This section outlines the Gibbs steps for sampling from the conditional posterior distributions of all the model parameters. More details are given in the Supplementary Material.

### 5.1 Gibbs sampling steps

Letting \(\ell\) denote the current Gibbs sampling iteration, parameter values at the \((\ell+1)\)th iteration are drawn according to the following steps.

1. Draw \(\mathbf{\theta}_{gk}^{(\ell+1)}\) from \((\mathbf{\theta}_{gk}^{(\ell+1)}\mid\mathbf{y},\mathbf{S},\tau_{gk}^{2(\ell)},\sigma_{gk}^{2(\ell)})\sim N(\mathbf{u}_{gk},\sigma_{gk}^{2}\mathbf{\Lambda}_{gk})\), where \(\mathbf{u}_{gk}\) and \(\mathbf{\Lambda}_{gk}\) are mean vectors and covariance matrices.
2. Draw \(\sigma_{gk}^{2(\ell+1)}\) from \((\sigma_{gk}^{2(\ell+1)}\mid\mathbf{\epsilon}_{igk}^{(\ell+1)},a_{\sigma_{gk}}^{(\ell+1)})\sim IG\Big((nN_{g}^{(\ell)}+\nu_{\sigma})/2,\ \sum_{i=1}^{N}z_{ig}\mathbf{\epsilon}_{igk}^{\prime}\mathbf{\epsilon}_{igk}/2+\nu_{\sigma}/a_{\sigma_{gk}}\Big)\), where \(N_{g}^{(\ell)}\) is the current number of subjects in the \(g\)th component, \(\mathbf{\epsilon}_{igk}\) is the error vector for the \(g\)th component, the \(i\)th subject and the \(k\)th entry, and \(a_{\sigma_{gk}}\) is a latent variable in the \(IG\) scale mixture underlying the half-\(t\) distribution.
3. Draw \(\tau_{gk}^{2(\ell+1)}\) from \((\tau_{gk}^{2(\ell+1)}\mid\mathbf{\beta}_{gk}^{(\ell+1)},a_{\tau_{gk}}^{(\ell+1)})\sim IG\Big((\nu_{\tau}+m)/2,\ \mathbf{\beta}_{gk}^{\prime}\mathbf{\beta}_{gk}/2+\nu_{\tau}/a_{\tau_{gk}}\Big)\), where \(a_{\tau_{gk}}\) is a latent variable as in step 2.
4. Draw \(\mathbf{\delta}_{g}^{\star(\ell+1)}\) from \((\mathbf{\delta}_{g}^{\star(\ell+1)}\mid\mathbf{V}^{\star},z_{ig}^{(\ell)},\omega_{ig}^{(\ell+1)},\kappa_{\zeta g}^{2(\ell)})\sim N(\mathbf{M}_{g},\mathbf{\Sigma}_{g})\), where \(\omega_{ig}^{(\ell+1)}\) is a Polya-Gamma latent variable in the augmentation described in Section 4.4.
5. Draw \(\kappa_{\zeta g}^{2(\ell+1)}\) from \((\kappa_{\zeta g}^{2(\ell+1)}\mid\mathbf{\zeta}_{g}^{(\ell+1)},a_{\kappa_{g}}^{(\ell+1)})\sim IG\Big((\nu_{\kappa}+N)/2,\ \mathbf{\zeta}_{g}^{\prime}\mathbf{\zeta}_{g}/2+\nu_{\kappa}/a_{\kappa_{g}}\Big)\), where \(a_{\kappa_{g}}\) is a latent variable as in steps 2 and 3.
6. The mixing weights \(\pi_{ig}^{(\ell+1)}\) are obtained by computing \(p(\pi_{ig}^{(\ell+1)}\mid\mathbf{V}^{\star},\mathbf{\delta}_{g}^{\star(\ell+1)},z_{ig}^{(\ell)})\) from Equation (3).
7. Draw \(z_{ig}^{(\ell+1)}\sim p(z_{ig}^{(\ell+1)}=1\mid\mathbf{y},\mathbf{S},\mathbf{\theta}_{gk}^{(\ell+1)},\sigma_{gk}^{2(\ell+1)},\pi_{ig}^{(\ell+1)})\) according to Equation (6).
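To make the recurring ingredients of steps 2, 3 and 7 concrete, the Python sketch below shows an inverse-gamma draw of the kind used in the half-\(t\) scale mixture and a numerically stable draw of the latent indicators from Equation (6). It is a schematic of the update pattern only, not the full sampler (which interleaves all seven steps), and all shapes and inputs are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_inv_gamma(shape, rate):
    """One draw from IG(shape, rate) via the reciprocal of a gamma draw."""
    return 1.0 / rng.gamma(shape, 1.0 / rate)

def draw_tau2(beta_gk, a_tau, nu_tau=3):
    """Step 3: smoothing parameter tau_gk^2 given spline coefficients."""
    m = beta_gk.size
    return draw_inv_gamma((nu_tau + m) / 2.0,
                          beta_gk @ beta_gk / 2.0 + nu_tau / a_tau)

def draw_z(log_pi_i, loglik_i):
    """Step 7: latent indicator z_i from Eq. (6), computed on the log scale.

    log_pi_i : log mixing weights for subject i, length G
    loglik_i : sum over entries k of log f_gk(y_ik | Theta_gk), length G
    """
    logp = log_pi_i + loglik_i
    logp -= logp.max()                      # log-sum-exp stabilization
    prob = np.exp(logp) / np.exp(logp).sum()
    return rng.choice(prob.size, p=prob)

# Illustrative calls with placeholder inputs
print(draw_tau2(rng.normal(size=10), a_tau=1.0))
print(draw_z(np.log([0.5, 0.3, 0.2]), np.array([-100.0, -98.0, -105.0])))
```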
### 5.2 Selecting the number of components

Spiegelhalter _and others_ (2002) suggested the use of the deviance information criterion (DIC) for model selection based on the effective number of parameters. Gelman _and others_ (2003) introduced an alternative measure of the effective number of parameters based on the variance of the log predictive density across MCMC iterations. This measure is robust and more accurate than the original one. Moreover, it has the advantages of always being positive and invariant to reparameterizations (Gelman _and others_, 2003). In this paper, we use DIC to select the number of components for our proposed mixture model.

## 6 Simulation studies

To demonstrate the performance of the proposed method, we conduct simulation studies by generating data sets from the proposed model under two scenarios: a two-component mixture (\(G=2\)) of trivariate time series (\(K=3\)) and a four-component mixture (\(G=4\)) of bivariate time series (\(K=2\)). We simulate \(100\) replicates in each simulation setting with \(N=150\) time series of length \(n=50\). A total of \(20,000\) Gibbs sampling iterations are run with a burn-in of \(4,000\). In all simulation settings, the hyperparameters are assigned the same values, given in Section 4.

### 6.1 Two-component trivariate model

In this scenario, we consider the two-component trivariate model. From Equation (1), the \(g\)th component of the proposed mixture model is given by \[\{\mathbf{y}(t_{j})\mid z_{ig}=1\}=\mathbf{\alpha}_{0g}+\mathbf{\alpha}_{1g}t_{j}+\sum_{q=1}^{m}w_{q}(t_{j})\mathbf{\beta}_{gq}+\mathbf{\epsilon}_{gt_{j}},\quad j=1,\ldots,n,\ \ g=1,\ldots,G, \tag{7}\] where \(\mathbf{y}(t_{j})\) is the trivariate time series evaluated at time \(t_{j}\), \(\mathbf{\alpha}_{01}=(1,-3,-2)^{\prime}\), \(\mathbf{\alpha}_{02}=(5,4,3)^{\prime}\) and \(\mathbf{\alpha}_{11}=(-2,2,0.5)^{\prime}\), \(\mathbf{\alpha}_{12}=(1,-1,-0.5)^{\prime}\) are independent intercepts and slopes for each component, respectively. The vector \(\mathbf{\beta}_{gq}\) consists of the \(q\)th spline coefficients of all variates for component \(g\), and \(w_{q}(t_{j})\) is the \(q\)th spline basis function evaluated at time \(t_{j}\). The \(\mathbf{\epsilon}_{gt_{j}}\) are independent zero-mean error terms, distributed as \(\mathbf{\epsilon}_{gt_{j}}\sim\text{MVN}\Big(\mathbf{0},\text{diag}(\sigma_{g1}^{2},\sigma_{g2}^{2},\sigma_{g3}^{2})\Big)\), where \(\sigma_{1}^{2}=(\sigma_{11}^{2},\sigma_{12}^{2},\sigma_{13}^{2})^{\prime}=(3,5,4.5)^{\prime}\) and \(\sigma_{2}^{2}=(\sigma_{21}^{2},\sigma_{22}^{2},\sigma_{23}^{2})^{\prime}=(4,3.5,4)^{\prime}\). The smoothing parameters are set to \(\tau_{1}^{2}=(\tau_{11}^{2},\tau_{12}^{2},\tau_{13}^{2})^{\prime}=(3.5,5,8.5)^{\prime}\) and \(\tau_{2}^{2}=(\tau_{21}^{2},\tau_{22}^{2},\tau_{23}^{2})^{\prime}=(6,2.5,1.5)^{\prime}\). We investigate the performance of the trajectory and logistic parameter (see Equation (3)) estimates. For the former, we calculate the averaged root square error (ARSE) of each mixture component \(g\) \[\text{ARSE}_{g}=\sqrt{\frac{1}{nK}\sum_{j=1}^{n}\sum_{k=1}^{K}\Big[\mu_{gk}(t_{j})-\hat{\mu}_{gk}(t_{j})\Big]^{2}},\] where \(\mu_{gk}(t_{j})\) is the expectation of \(y_{k}(t_{j})\) according to the \(g\)th component, and \(y_{k}(t_{j})\) is the \(k\)th entry of the time series evaluated at time \(t_{j}\). The \(\hat{\mu}_{gk}(t_{j})\) are the estimated posterior means of \(\mu_{gk}(t_{j})\) for \(k=1,\ldots,K\) and \(j=1,\ldots,n\). To handle potential label switching across mixture components, we compute ARSE\({}_{g}\) as the minimum value across all components, by using the estimate of the \(g\)th component and the truth of each group, \(g=1,\ldots,G\).
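A Python sketch of this evaluation step is given below: it computes ARSE\({}_{g}\) with the label-switching safeguard implemented as a minimum over component permutations, which is one simple way to realize the matching described above (the toy linear trajectories merely stand in for the spline means of Equation (7), and all dimensions are illustrative).

```python
import itertools
import numpy as np

def arse_per_component(mu_true, mu_hat):
    """ARSE_g under the best matching of estimated to true components.

    mu_true, mu_hat : arrays of shape (G, n, K) of true / estimated means
    """
    G = mu_true.shape[0]
    best = None
    for perm in itertools.permutations(range(G)):
        arse = np.sqrt(((mu_true - mu_hat[list(perm)]) ** 2).mean(axis=(1, 2)))
        if best is None or arse.sum() < best.sum():
            best = arse
    return best                            # length-G vector of ARSE_g

# Toy truth and a slightly noisy, label-permuted "estimate"
rng = np.random.default_rng(2)
G, n, K = 2, 50, 3
t = np.linspace(0.0, 1.0, n)
mu_true = np.stack([np.stack([g + 0.5 * k * t for k in range(K)], axis=1)
                    for g in range(G)])    # (G, n, K) smooth trajectories
mu_hat = mu_true[::-1] + rng.normal(scale=0.01, size=mu_true.shape)

print(arse_per_component(mu_true, mu_hat).round(3))
```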
After obtaining correct component labels by evaluating ARSE, we also report the averaged bias (A-bias) and the variance of the bias (V-bias) of each mixture component \(g\), where \[\text{A-bias}_{g}=\frac{1}{nK}\sum_{j=1}^{n}\sum_{k=1}^{K}\Big[\hat{\mu}_{gk}(t_{j})-\mu_{gk}(t_{j})\Big],\] and V-bias\({}_{g}\) is computed by calculating the sample variance of the bias over entries and time points. For each replicate, time series trajectories are estimated by three methods: the proposed method, the R package gbmt (Magrini, 2022) and the TRAJ procedure in SAS (Nagin _and others_, 2018). Boxplots of ARSE, A-bias and V-bias of each component are given in Figure 3. Notably, TRAJ is able to fit a regression spline model by treating basis functions as time-varying covariates, while gbmt is only able to fit a cubic model. Our proposed method fits a penalized spline model under the Bayesian framework and is able to outperform both gbmt and TRAJ in terms of ARSE and V-bias for both components. A-biases are close to zero and comparable for all three methods. These findings demonstrate that all three methods are able to achieve a reasonable fit to group-based trajectories since bias over the entire time series is close to zero. Our proposed method is able to obtain more precise estimates of trajectories as is evident from the smaller V-biases. To evaluate the performances of the logistic parameters, we compute the root mean squared error (RMSE) for each logistic parameter using the proposed method and TRAJ. Notably, gbmt is not able to incorporate covariates into the computation of mixing weights. Results of RMSEs of each logistic parameter are given in Table 1. We also compare RMSEs between the proposed method and TRAJ under four settings of different combinations of \(N=150,250\) and \(n=50,70\). Our proposed method yields smaller RMSEs of the logistic parameters in all cases, especially for the intercept \(\delta_{0}\) and the first covariate \(\delta_{1}\). This is to be expected since TRAJ uses a multinomial logistic model, which may result in inflated parameter estimates in cases of unbalanced outcomes or perfect separation, while our proposed method is able to obtain a shrinkage result using the penalization method.

### 6.2 Four-component bivariate model

In this scenario, we consider the four-component bivariate model whose \(g\)th component is given in Equation (7), where the values of the intercepts and slopes are \(\mathbf{\alpha}_{01}=(1,-2)^{\prime}\), \(\mathbf{\alpha}_{02}=(5,3)^{\prime}\), \(\mathbf{\alpha}_{03}=(-3,5.5)^{\prime}\), \(\mathbf{\alpha}_{04}=(4,-1)^{\prime}\), \(\mathbf{\alpha}_{11}=(-3,0)^{\prime}\), \(\mathbf{\alpha}_{12}=(2,-3.5)^{\prime}\), \(\mathbf{\alpha}_{13}=(2.5,2)^{\prime}\) and \(\mathbf{\alpha}_{14}=(-3,1.5)^{\prime}\). By analogy to the two-component trivariate model, the errors \(\mathbf{\epsilon}_{gt_{j}}\) are independent zero-mean bivariate Gaussian random variables, distributed as \(\mathbf{\epsilon}_{gt_{j}}\sim\text{MVN}\Big(\mathbf{0},\text{diag}(\sigma_{g1}^{2},\sigma_{g2}^{2})\Big)\), where \(\sigma_{1}^{2}=(\sigma_{11}^{2},\sigma_{12}^{2})^{\prime}=(6,9)^{\prime}\), \(\sigma_{2}^{2}=(\sigma_{21}^{2},\sigma_{22}^{2})^{\prime}=(8,7.5)^{\prime}\), \(\sigma_{3}^{2}=(\sigma_{31}^{2},\sigma_{32}^{2})^{\prime}=(10,6.5)^{\prime}\) and \(\sigma_{4}^{2}=(\sigma_{41}^{2},\sigma_{42}^{2})^{\prime}=(7,8.5)^{\prime}\). The performances of the estimated trajectories and logistic parameters for this scenario are displayed in Figure 4 and Table 2.
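For completeness, the bias summaries defined at the start of this subsection are equally short to compute. The following self-contained Python sketch evaluates A-bias\({}_{g}\) and V-bias\({}_{g}\) on label-matched toy arrays whose dimensions mirror this four-component bivariate setting (all values are illustrative placeholders):

```python
import numpy as np

def bias_summaries(mu_true, mu_hat):
    """A-bias_g and V-bias_g for label-matched components of shape (G, n, K)."""
    diff = mu_hat - mu_true
    a_bias = diff.mean(axis=(1, 2))                    # averaged bias per component
    v_bias = diff.reshape(diff.shape[0], -1).var(axis=1, ddof=1)  # sample variance of bias
    return a_bias, v_bias

rng = np.random.default_rng(3)
mu_true = rng.normal(size=(4, 50, 2))                  # toy (G, n, K) = (4, 50, 2) means
mu_hat = mu_true + rng.normal(0.02, 0.05, size=mu_true.shape)     # small systematic bias

a_bias, v_bias = bias_summaries(mu_true, mu_hat)
print(a_bias.round(3), v_bias.round(4))
```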
As in the first scenario, our proposed method outperforms both gbmt and TRAJ in terms of ARSE and V-bias for all components. Notably, TRAJ fails to yield precise estimates in several replicates and thus results in larger mean ARSE and V-bias. In terms of the logistic parameters, the proposed method performs well with smaller RMSEs in almost all cases, especially for \(\delta_{0}\) and \(\delta_{1}\). More simulation results based on different values of \(N\) and \(n\) under the two scenarios considered above are presented in the Supplementary Material.

## 7 Application

Covariates in this analysis include infant age (in months), head circumference (in cm) and sex. All continuous covariates are centered and scaled. We set the number of basis functions at \(m=20\) and run a total of \(30,000\) Gibbs iterations with a burn-in period of \(6,000\). The values of the hyperparameters are the same as the ones used in the simulation studies. The IBQ-NE construct combines data from the following subscales: Sadness, Distress to Limitations, Fear, and Falling Reactivity/Rate of Recovery from Distress. IBQ-EC refers to the ability to inhibit a dominant response to perform a subdominant one and has been shown to be protective against a myriad of difficulties (Gartstein _and others_, 2013). Finally, the data consist of 79 subjects with complete fNIRS and covariate values. We present results based on analyzing one set of four channels. Additional results based on analyzing another set of four channels and all channels are given in the Supplementary Material. The four channels are S1D1, S2D2, S5D3 and S6D4. Channels S1D1 and S5D3 are in the central prefrontal region, while channels S2D2 and S6D4 are in the left and right prefrontal regions, respectively. We fit our proposed model with the number of components varying from 2 to 6. Based on values of the DIC introduced in Section 5.2, the two-component model is selected as the best model for this four-channel analysis. Figure 5 presents the estimated trajectories of the two-component model fitted to the four channels. We are interested in brain activation signals in the still-face period, while the interact period is used as the reference level. For component 1, a decreasing trajectory is observed for the still-face period in all four channels. In contrast, an increasing trend is observed for the still-face period in all four channels for component 2. After fitting the mixture model and finding the above trajectory patterns, we define component 1 as the no-response component and component 2 as the response component based on trajectory patterns in the still-face period. Figure 6 displays the logistic parameter estimates for all covariates in the two-component model, where component 2 is used as the reference. There is evidence that IBQ-NE scores differ between the two components, as its 95% credible interval does not include zero. A positive coefficient of IBQ-NE indicates that a higher IBQ-NE score is associated with component 1, which has decreased brain activation levels in the still-face period for all four channels. Though other logistic coefficients have 95% credible intervals that include zero, the negative posterior mean estimate of the IBQ-EC score could still indicate that a high IBQ-EC is associated with an increased brain activation as shown for component 2. These conclusions are consistent with findings in Gartstein _and others_ (2013) that IBQ-NE is negatively associated with IBQ-EC.
Enlow _and others_ (2016) reported a negative association between activity level and IBQ-NE among infants whose families encourage a high level of activities. Furthermore, a negative posterior mean of the logistic coefficient of infant age suggests that younger infants tend to have a decreasing brain activation level in the still-face period.

## 8 Discussion

The proposed covariate-guided Bayesian mixture of spline experts model aims to perform a model-based clustering of multivariate time series from multiple subjects. The mixture components in this model are penalized splines, and the mixing weights incorporate covariates. Our proposed method is compared to two commonly used methods through simulation studies which demonstrate a better performance of our method under different scenarios. We apply our proposed method to a fNIRS still-face study and find distinct patterns of components of time series trajectories, as well as an association between IBQ-NE score and a pattern of decreased brain activity in the still-face period. To the best of our knowledge, this is the first still-face study using fNIRS whose purpose is to identify trajectory components. Our proposed method has some limitations. First, as in any mixture model, label switching may occur, especially in real-data applications. We have adopted the Equivalence Classes Representatives (ECR) algorithm proposed by Papastamoulis and Iliopoulos (2010) to make the components interpretable, but other methods may be considered. Second, the proposed method assumes independence among the entries of the time series and does not allow for spatial dependence. Spatial correlations in fNIRS are correlations among fNIRS channels based on the placements and locations of each source and detector. An extension to a multilevel multivariate model would be possible by considering spatial correlations among time series entries. Lastly, our proposed method uses the DIC to select the number of components, which might be sub-optimal. Bayesian model averaging and reversible jump MCMC (RJMCMC) methods could be considered, but trans-dimensional sampling methods would pose challenges in providing interpretable components.

## 9 Software

Software in the form of R code, together with example data, is available at [https://github.com/HaoyiFu1993/CBMOSE](https://github.com/HaoyiFu1993/CBMOSE).
2307.06792
Planar Disjoint Paths, Treewidth, and Kernels
In the Planar Disjoint Paths problem, one is given an undirected planar graph with a set of $k$ vertex pairs $(s_i,t_i)$ and the task is to find $k$ pairwise vertex-disjoint paths such that the $i$-th path connects $s_i$ to $t_i$. We study the problem through the lens of kernelization, aiming at efficiently reducing the input size in terms of a parameter. We show that Planar Disjoint Paths does not admit a polynomial kernel when parameterized by $k$ unless coNP $\subseteq$ NP/poly, resolving an open problem by [Bodlaender, Thomass{\'e}, Yeo, ESA'09]. Moreover, we rule out the existence of a polynomial Turing kernel unless the WK-hierarchy collapses. Our reduction carries over to the setting of edge-disjoint paths, where the kernelization status remained open even in general graphs. On the positive side, we present a polynomial kernel for Planar Disjoint Paths parameterized by $k + tw$, where $tw$ denotes the treewidth of the input graph. As a consequence of both our results, we rule out the possibility of a polynomial-time (Turing) treewidth reduction to $tw= k^{O(1)}$ under the same assumptions. To the best of our knowledge, this is the first hardness result of this kind. Finally, combining our kernel with the known techniques [Adler, Kolliopoulos, Krause, Lokshtanov, Saurabh, Thilikos, JCTB'17; Schrijver, SICOMP'94] yields an alternative (and arguably simpler) proof that Planar Disjoint Paths can be solved in time $2^{O(k^2)}\cdot n^{O(1)}$, matching the result of [Lokshtanov, Misra, Pilipczuk, Saurabh, Zehavi, STOC'20].
Michał Włodarczyk, Meirav Zehavi
2023-07-13T15:02:49Z
http://arxiv.org/abs/2307.06792v1
# Planar Disjoint Paths, Treewidth, and Kernels

###### Abstract

In the Planar Disjoint Paths problem, one is given an undirected planar graph with a set of \(k\) vertex pairs \((s_{i},t_{i})\) and the task is to find \(k\) pairwise vertex-disjoint paths such that the \(i\)-th path connects \(s_{i}\) to \(t_{i}\). We study the problem through the lens of kernelization, aiming at efficiently reducing the input size in terms of a parameter. We show that Planar Disjoint Paths does not admit a polynomial kernel when parameterized by \(k\) unless \(\operatorname{coNP}\subseteq\operatorname{NP}/\operatorname{poly}\), resolving an open problem by [Bodlaender, Thomasse, Yeo, ESA'09]. Moreover, we rule out the existence of a polynomial Turing kernel unless the \(\operatorname{WK}\)-hierarchy collapses. Our reduction carries over to the setting of edge-disjoint paths, where the kernelization status remained open even in general graphs. On the positive side, we present a polynomial kernel for Planar Disjoint Paths parameterized by \(k+\mathsf{tw}\), where \(\mathsf{tw}\) denotes the treewidth of the input graph. As a consequence of both our results, we rule out the possibility of a polynomial-time (Turing) treewidth reduction to \(\mathsf{tw}=k^{\mathcal{O}(1)}\) under the same assumptions. To the best of our knowledge, this is the first hardness result of this kind. Finally, combining our kernel with the known techniques [Adler, Kolliopoulos, Krause, Lokshtanov, Saurabh, Thilikos, JCTB'17; Schrijver, SICOMP'94] yields an alternative (and arguably simpler) proof that Planar Disjoint Paths can be solved in time \(2^{\mathcal{O}(k^{2})}\cdot n^{\mathcal{O}(1)}\), matching the result of [Lokshtanov, Misra, Pilipczuk, Saurabh, Zehavi, STOC'20].

## 1 Introduction

Disjoint Paths is a fundamental routing problem: for several decades, it has been extensively studied in a wide variety of areas in computer science and graph theory. We focus on the area of algorithm design, specifically of parameterized algorithms. Phrased as a parameterized problem, given an \(n\)-vertex undirected graph \(G\) and a set of \(k\) pairwise disjoint vertex pairs, \(\{s_{i},t_{i}\}_{i=1}^{k}\), the objective is to decide whether there exist \(k\) pairwise vertex-disjoint paths connecting \(s_{i}\) to \(t_{i}\) for each \(i\in\{1,\ldots,k\}\). Here, the classic parameter choice is \(k\). The problem was shown to be NP-hard by Karp (attributing it to Knuth) in 1975 [59], in a follow-up paper to his classic list of 21 NP-complete problems [58]. Since then, the problem was shown to be NP-hard on various simple graph classes [49, 74, 80] including the class of grid graphs [64], a highly restricted subclass of planar graphs. Notably, Disjoint Paths is a cornerstone for the widely celebrated graph minors project of Robertson and Seymour, considered to be one of the greatest feats of modern mathematics (see Section 1.5). Moreover, Disjoint Paths finds applications in various practical fields such as VLSI layout and virtual circuit routing [42, 82, 92, 93]. Due to its computational hardness, Disjoint Paths was studied from the perspectives of parameterized complexity and approximation algorithms. In particular, Disjoint Paths was shown to be in FPT (that is, solvable in time \(f(k)\cdot n^{O(1)}\) for some computable function \(f\) of \(k\)) in 1995 as part of the graph minors project [88], being one of the first problems classified in FPT. Here, the polynomial is \(n^{3}\). In 2012, the polynomial was improved to \(n^{2}\) [60].
Unfortunately, the dependency on \(k\) in both algorithms is "galactic" [68, 57], being a tower of exponents. Concerning approximation algorithms, the state-of-the-art is grim as well: despite substantial efforts, the currently best-known approximation algorithm is still a simple greedy one that achieves a ratio of \(O(\sqrt{n})\) [63]. We focus on Planar Disjoint Paths, the most well-studied special case of Disjoint Paths, where the input graph is restricted to be planar. Understanding this special case is critical for algorithms for Disjoint Paths, Minor Testing and Topological Minor Testing on general graphs (see Section 1.5). Moreover, it finds most of the general case's applications. Fortunately, this special case is known to be more tractable than the general one. Already in the 90s, Disjoint Paths on planar [85, 86] and bounded genus graphs [85, 31, 62] were shown to admit algorithms with running times whose dependency on \(n\) is linear. Regarding the dependency on \(k\), the state-of-the-art for Planar Disjoint Paths is \(2^{O(k^{2})}\cdot n^{O(1)}\) [71], improving upon earlier works [2, 85]. Very recently, the dependency on \(n\) was improved to be linear without compromising this dependency on \(k\) [17]. It is also noteworthy that when extended to directed graphs, Planar Disjoint Paths is in FPT [24] (and, for three decades, already known to be in XP [91]), while Disjoint Paths is NP-hard already when \(k=2\) [40]. Planar Disjoint Paths has also been intensively studied from the perspective of approximation algorithms, with a burst of activity in recent years. Some of the highlights of this line of work include a polynomial-time approximation algorithm with a factor of \(n^{9/19}\log^{O(1)}n\) [19], and, under reasonable complexity-theoretic assumptions, the proof of hardness of polynomial-time approximation within a factor of \(2^{o(\sqrt{\log n})}\) [20].

### Our focus: Kernelization of planar disjoint paths.

From the perspective of parameterized complexity, the (arguably) biggest open question that remains regarding Planar Disjoint Paths is whether it admits a polynomial kernel. Kernelization is a mathematical paradigm for the analysis of preprocessing procedures [36]. Due to the profound impact of preprocessing, kernelization has been termed "the lost continent of polynomial time" [35]. Formally, a parameterized problem \(\Pi\) admits a _kernel_ if there is a polynomial-time algorithm (called a kernelization algorithm) that, given an instance \((I,k)\) of \(\Pi\), translates it into an equivalent instance \((I^{\prime},k^{\prime})\) of \(\Pi\) of size \(f(k)\) for some computable function \(f\) depending only on \(k\). (Equivalence means that \((I,k)\) is a yes-instance if and only if \((I^{\prime},k^{\prime})\) is a yes-instance.) A (decidable) problem admits a kernel if and only if it is in FPT [13]. So, the central question in kernelization is: Which problems admit kernels of size \(f(k)\), where \(f\) is polynomial in \(k\)? Such kernels are called _polynomial kernels_. Originally in 2009, Disjoint Paths was shown not to admit a polynomial kernel with respect to \(k\) unless coNP \(\subseteq\) NP/poly [10, 11], being one of the first problems for which such a result was proved. In the same paper, it was already asked whether Planar Disjoint Paths has a polynomial kernel. Still, up until this paper, it was not even known whether its extension to directed planar graphs admits a polynomial kernel.
Remarkably, the literature abounds with problems that do not admit polynomial kernels on general graphs unless coNP \(\subseteq\) NP/poly, but admit polynomial kernels on planar graphs [36]. What is more, many of them are W[1]-hard1 or even W[2]-hard on general graphs, while the sizes of their polynomial kernels on planar graphs are, in fact, linear; Dominating Set is a prime example of this phenomenon. Today, we have very general techniques to design such kernels on planar graphs [38, 36] and there exist only a few2 natural problems that are in FPT on planar graphs, but have non-trivial kernelization lower bounds. By non-trivial we mean that the proof for planar graphs is not essentially the same as for general graphs.

Footnote 1: A W[1]-hard problem is unlikely to be in FPT [23].

Footnote 2: We are aware of one example: for Steiner Tree on planar graphs parameterized by the number of terminals, the unlikely existence of a polynomial kernel is implied by the combination of the lower and upper bounds given in [78].

In this paper, we decipher the complexity of preprocessing procedures (kernels and treewidth reductions) for Planar Disjoint Paths. Below, we present our main theorems and their implications. Next, we discuss the role of our work in the efforts of making the graph minors theory efficient.

### 1.2 On the negative side: Our first main theorem

First, we resolve the almost decade-and-a-half open question of whether Planar Disjoint Paths admits a polynomial kernel with respect to \(k\): unless the polynomial hierarchy collapses, the answer is negative.

**Theorem 1.1** (Main Theorem I).: _Unless coNP \(\subseteq\) NP/poly, Planar Disjoint Paths does not admit a polynomial kernel with respect to \(k\)._

Our reduction also shows that Planar Disjoint Paths is WK[1]-hard3, which means (see [50]) that it is unlikely even to admit a weaker form of a preprocessing procedure called a _polynomial Turing kernel_. Formally, a parameterized problem \(\Pi\) admits a _Turing kernel_ if there exists a polynomial-time algorithm for \(\Pi\) using an oracle that solves instances of \(\Pi\) of size at most \(f(k)\) for some computable function \(f\). Similarly to standard kernelization, a polynomial Turing kernel refers to the case where \(f\) is polynomial in \(k\). Note that a kernel is a special case of a Turing kernel where the algorithm can perform exactly one call to the oracle.

To date, we know of many problems that admit a polynomial Turing kernel but are unlikely to admit a polynomial kernel. This is the case for the other most famous path problem in parameterized complexity, called \(k\)-Path (determine whether a given undirected graph contains a path on \(k\) vertices): while \(k\)-Path is unlikely to admit a polynomial kernel when restricted to planar graphs (which can be shown by a trivial OR-composition [36]), it does admit a polynomial Turing kernel on planar graphs [51] or even on topological-minor-free graphs [54]. In light of this result, we find Theorem 1.2 quite surprising.

**Theorem 1.2**.: Planar Disjoint Paths _is WK[1]-hard._

Additionally, our reduction carries over to Planar Edge-Disjoint Paths, where the solution paths are required to be edge-disjoint rather than vertex-disjoint. Specifically, we show that it is also unlikely to admit a polynomial kernel (or even a polynomial Turing kernel).
Remarkably, prior to our work, it was not even known whether the problem admits a polynomial kernel on general graphs, although it was already asked as an open question close to a decade ago [9, 49].

**Theorem 1.3**.: _Unless coNP \(\subseteq\) NP/poly, Planar Edge-Disjoint Paths does not admit a polynomial kernel with respect to \(k\). Moreover, Planar Edge-Disjoint Paths is WK[1]-hard._

The Edge-Disjoint Paths problem in general, and the Planar Edge-Disjoint Paths problem in particular, have been intensively studied in the literature (see, e.g., [5, 15, 16, 41, 83, 60, 81, 49]), although perhaps to a lesser extent than their vertex counterparts. We remark that the vertex and edge versions sometimes behave very differently--for example, while Disjoint Paths is in FPT with respect to treewidth [90], Edge-Disjoint Paths is NP-complete even on series-parallel graphs [81] and thus on graphs of treewidth at most 2. Still, the work of Robertson and Seymour implies that Edge-Disjoint Paths is solvable in time \(f(k)\cdot n^{3}\); later, the polynomial factor was reduced to \(n^{2}\) by Kawarabayashi et al. [60].

### 1.3 On the positive side: Our second main theorem

We prove that Planar Disjoint Paths admits a polynomial kernel with respect to \(k+\mathsf{tw}\), where \(\mathsf{tw}\) is the treewidth of the input graph.4 This theorem is (arguably) the broadest and the most involved positive result known to date regarding the kernelization complexity of Disjoint Paths; other results in the literature concern highly restricted graph classes: split graphs [49, 95] and well-partitioned chordal graphs [4].

Footnote 4: A trivial AND-composition [36] implies that parameterization by \(\mathsf{tw}\) alone is unlikely to yield a polynomial kernel.

**Theorem 1.4** (Main Theorem II).: Planar Disjoint Paths _admits a polynomial kernel with respect to \(k+\mathsf{tw}\), where \(\mathsf{tw}\) is the treewidth of the input graph._

The interest in the parameterization of Disjoint Paths and Planar Disjoint Paths by \(\mathsf{tw}\) stems, mainly, from the fact that all known algorithms for these problems as well as for (Topological) Minor Testing rely on _treewidth reduction_ (defined below), and, in particular, require the resolution of these problems when \(\mathsf{tw}\) is small as part of their execution (see Section 1.5). In fact, some of the running times are stated as a function of \(k+\mathsf{tw}\) rather than \(k\) alone (e.g., the algorithm of [71] is stated to run in time \(\mathsf{tw}^{O(k)}\cdot n^{O(1)}\)). Moreover, treewidth is the most well-studied structural parameter in parameterized complexity [23, 30]. It is known that Disjoint Paths parameterized by \(\mathsf{tw}\) is solvable in time \(2^{O(\mathsf{tw}\log\mathsf{tw})}\cdot n\) [90], while, under the Exponential Time Hypothesis (ETH), Disjoint Paths and Planar Disjoint Paths cannot be solved in times \(2^{o(\mathsf{tw}\log\mathsf{tw})}\cdot n^{O(1)}\) [70] and \(2^{o(\mathsf{tw})}\cdot n^{O(1)}\) [6], respectively.

A treewidth reduction for a parameterized graph problem \(\Pi\) is a polynomial-time algorithm that, given an instance \((I,k)\) of \(\Pi\), translates it into an equivalent instance of \(\Pi\) where the treewidth of the new graph is bounded by \(f(k)\) for some computable function \(f\) of \(k\). For Disjoint Paths, unfortunately, the best-known function is a tower of exponents [88, 60]. However, for Planar Disjoint Paths, \(f(k)=2^{O(k)}\) [2].
Thus, since the problem is solvable in time \(n^{O(k)}\) [91], Theorem 1.4 yields a \(2^{O(k^{2})}\cdot n^{O(1)}\)-time algorithm: Reduce the treewidth of the graph to \(2^{O(k)}\) in polynomial time [17], then run our kernelization algorithm in polynomial time, obtaining an equivalent instance with \(2^{O(k)}\) vertices, and lastly solve the new instance in time \(2^{O(k^{2})}\) using the \(n^{O(k)}\)-time algorithm. This provides an alternative (and much shorter) proof of the result of [71]. Unlike [17, 71], we use the algorithm of [91] as a black box, so any improvement upon it (i.e., an \(n^{o(k)}\)-time algorithm) would immediately entail an improvement also in the FPT running time.

**Theorem 1.5**.: _The algorithm of Schrijver [91] can be used in a black-box manner to solve Planar Disjoint Paths in time \(2^{O(k^{2})}\cdot n^{O(1)}\)._

### 1.4 Implication for treewidth reductions

A remarkable corollary of the combination of Theorems 1.1 and 1.4 rules out the existence of a polynomial treewidth reduction: if there existed a polynomial treewidth reduction for Planar Disjoint Paths with respect to \(k\), then, combined with Theorem 1.4, this would have yielded a polynomial kernel with respect to \(k\), contradicting Theorem 1.1. This result can be viewed as a significant strengthening of Theorem 1.1: not only can we not efficiently preprocess the graph so that its size will be polynomial in \(k\), but we cannot even preprocess it so that only its treewidth will be polynomial in \(k\).

**Theorem 1.6**.: _Unless coNP \(\subseteq\) NP/poly, Planar Disjoint Paths does not admit a polynomial treewidth reduction with respect to \(k\)._

To the best of our knowledge, this is the first non-trivial result of this form,5 although treewidth reduction is a common tool in parameterized complexity, particularly since it is tightly linked to the irrelevant vertex technique as well as to Bidimensionality theory [23]. We refer to [72, 76, 52, 77, 43, 46, 37, 75, 45, 25, 26, 28] for a few illustrative examples of treewidth reductions for problems other than Disjoint Paths and Minor Testing. Prior to our work, there was hope that Planar Disjoint Paths would admit a polynomial treewidth reduction with respect to \(k\)--notably, coupled with the \(\mathsf{tw}^{O(k)}\)-time algorithm of [71], this would have yielded a \(2^{O(k\log k)}\cdot n^{O(1)}\)-time algorithm.

Footnote 5: Here, by non-trivial, we mean that the result does not follow simply because the problem does not admit any treewidth reduction (e.g., since it is in FPT with respect to \(\mathsf{tw}\) but it is not in FPT with respect to the parameter under consideration).

A negative hint was given by Adler et al. [3], who constructed yes-instances of Planar Disjoint Paths where the treewidth of the graph is \(2^{\Omega(k)}\) and every vertex is _relevant_, that is, the removal of any vertex would turn the instance into a no-instance. This is indeed a negative hint since all known algorithms for (Planar) Disjoint Paths apply treewidth reduction by iteratively finding and removing irrelevant vertices until the treewidth of the graph becomes "small enough". However, the result of [3] does not imply that Planar Disjoint Paths does not admit a polynomial treewidth reduction--indeed, it _provably cannot_ even show that the removal of irrelevant edges rather than irrelevant vertices is futile.
To see this, consider any yes-instance and some solution of it, and remove from the graph all the edges that are not part of the solution (which are irrelevant edges)--then we are left with a collection of paths, having treewidth 1. Our result rules out not just the success of the removal of irrelevant edges, but the success of any method implementing a polynomial treewidth reduction for Planar Disjoint Paths. In fact, since we show that the problem is WK[1]-hard, our result even rules out a "Turing-version" of a polynomial treewidth reduction (defined in the natural way), strengthening Theorem 1.2.

**Theorem 1.7**.: _Unless the WK-hierarchy collapses, Planar Disjoint Paths does not admit a polynomial Turing treewidth reduction with respect to \(k\)._

### 1.5 Part of the development of an efficient graph minors theory

The concept of a _minor_ has been extensively studied already in the early 20th century, and it is defined as follows: a graph \(H\) is a _minor_ of a graph \(G\) if \(H\) can be obtained from \(G\) by deleting vertices and edges, and contracting edges. Kuratowski's famous theorem states that a graph is planar if and only if it does not contain the graphs \(K_{3,3}\) and \(K_{5}\) as topological minors [67], which holds also for minors [94]. Thus, the class of planar graphs is characterized by a set of two forbidden minors. The graph minors project of Robertson and Seymour is a series of 23 papers spanning more than two decades, dedicated to proving the generalization of Kuratowski's theorem called Wagner's conjecture [94]: the class of all graphs is well-quasi-ordered by the minor relation, or, equivalently, any minor-closed family of graphs can be characterized by a finite set of forbidden minors.

The graph minors project had a tremendous impact on various areas of theoretical computer science and on graph theory, particularly due to numerous concepts, structural results, and algorithms that it yielded. Notably, it is considered to be the origin of the field of parameterized complexity [29] and the source of a large number of its most central notions and techniques [73]. Unfortunately, the dependencies on the parameters of most algorithms based on the graph minors project are huge, being towers of exponents, and so they are called "galactic algorithms" [68, 57]. As written in [73]: "keeping in mind that the primary objective of the paradigm of parameterized complexity is to cope with computational intractability, we are facing a blatant discrepancy." Thus, the holy grail of parameterized complexity is to amend this discrepancy, making the graph minors theory efficient. While substantial efforts have been made in this direction (e.g., see [47, 60, 61, 14, 18, 21]), it will likely take a long time for the matter to be well understood. In particular, two central concrete goals posed for this purpose are to solve Minor Testing and Disjoint Paths efficiently [73].

All known algorithms for (Topological) Minor Testing and Disjoint Paths use the following case distinction. First, if the treewidth of the graph is "small", then they directly solve the problem using dynamic programming (classically) or different means [17, 71]. Else, if the graph contains a large clique as a minor, then they use rerouting arguments to find an irrelevant vertex within it. Lastly, we are left with the case where the treewidth is large and the graph does not have a large clique as a minor.
This reduces the problem to almost-embeddable graphs [89], where a so-called flat wall theorem ensures the existence of a large almost planar piece within the graph, which is afterwards analyzed. Hence, understanding the planar case is paramount to understanding the problem in general. Our contributions can be viewed as a piece of the ongoing efforts of many researchers to establish which parts of the graph minors theory can be made algorithmically efficient.

### 1.6 Organization

First, in Section 2, we outline the proofs of our results. Afterwards, in Section 3, we present the more basic preliminaries required for our work. Then, in Sections 4 and 5, we provide the full details of the proofs of our positive and negative results, respectively. Finally, in Section 6, we conclude the paper with several open questions. Whenever we want to emphasize the importance of a statement (e.g., when it is a building block of the main proof), we use the "proposition" environment instead of "lemma".

## 2 Outline

In this section, we give an informal overview of our technical contributions, beginning with the positive result, which requires fewer intermediate steps.

### 2.1 Polynomial kernel for parameter \(k+\mathsf{tw}\)

Our kernelization algorithm is based on several steps that reduce the size of certain subgraphs of \(G\) while treating their boundary vertices as temporary terminals. Since we have no control over which pairs of these terminals might be connected by paths in a solution, it is convenient to work in a slightly more general setting. We say that two graphs \(G_{1},G_{2}\) sharing a set of vertices \(X\) are _\(X\)-linkage-equivalent_ if for every set of pairs \(\mathcal{T}\subseteq X^{2}\), the instances \((G_{1},\mathcal{T})\) and \((G_{2},\mathcal{T})\) of Disjoint Paths are equivalent. In fact, we prove a theorem that is more general than Theorem 1.4, as we do not need to know in advance which pairs of terminals should be connected.

**Theorem 2.1**.: _Let \(G\) be a planar graph of treewidth \(\mathsf{tw}\) and \(X\subseteq V(G)\) be of size \(k\). Then we can construct, in polynomial time, a planar graph \(G^{\prime}\) with \(X\subseteq V(G^{\prime})\) such that \(|V(G^{\prime})|=\mathcal{O}(k^{12}\mathsf{tw}^{12})\) and \(G^{\prime}\) is \(X\)-linkage-equivalent to \(G\)._

**Single-face case.** The problem becomes simpler when we are equipped with an embedding of \(G\) with all the terminals from \(X\) lying on a single face (we can assume that this is the outer face by flipping the embedding). In fact, in this case Disjoint Paths is solvable in polynomial time [87], similarly to Steiner Tree [33]. To design a useful subroutine, we need to reduce the size of \(G\) to be polynomial in \(|X|\) while maintaining \(X\)-linkage-equivalency. To this end, we take advantage of the criterion by Robertson and Seymour [87], stating that when \(X\) lies on the outer face of \(G\) and \(\mathcal{T}\subseteq X^{2}\), then the instance \((G,\mathcal{T})\) is solvable if and only if (1) \(\mathcal{T}\) is cross-free6 with respect to the cyclic ordering of \(X\) and (2) for every partition of \(X\) into continuous segments \((X_{1},X_{2})\), the number of requested paths with one endpoint in \(X_{1}\) and the other one in \(X_{2}\) is not greater than the minimum vertex \((X_{1},X_{2})\)-cut (see Figure 1). So, to compress \(G\), we need a _mimicking network_ that preserves such minimum cuts.
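The criterion can be made concrete with a small sketch (ours, purely illustrative): it checks conditions (1) and (2) for terminals given in cyclic order on the outer face, where the helper `min_vertex_cut` is an assumed black box returning the minimum vertex \((X_{1},X_{2})\)-cut, i.e., \(\mu_{G}(X_{1},X_{2})\).

```python
from itertools import combinations

def is_cross_free(requests, cyclic_order):
    """requests: pairs of terminals; cyclic_order: the terminals of X along the outer face."""
    pos = {x: i for i, x in enumerate(cyclic_order)}
    def interleave(p, q):
        a, b = sorted((pos[p[0]], pos[p[1]]))
        c, d = sorted((pos[q[0]], pos[q[1]]))
        # p and q cross iff exactly one endpoint of q lies strictly between
        # the endpoints of p in the cyclic order.
        return (a < c < b) != (a < d < b)
    return not any(interleave(p, q) for p, q in combinations(requests, 2))

def cut_condition(requests, cyclic_order, min_vertex_cut):
    """min_vertex_cut(X1, X2) is an assumed black box returning mu_G(X1, X2)."""
    n = len(cyclic_order)
    for i in range(n):
        for j in range(i + 1, n + 1):
            if j - i == n:
                continue  # X1 must be a proper segment
            seg = set(cyclic_order[i:j])  # a continuous segment X1; X2 is the rest
            demand = sum(1 for a, b in requests if (a in seg) != (b in seg))
            if demand > min_vertex_cut(seg, set(cyclic_order) - seg):
                return False
    return True

def single_face_solvable(requests, cyclic_order, min_vertex_cut):
    return (is_cross_free(requests, cyclic_order)
            and cut_condition(requests, cyclic_order, min_vertex_cut))
```

Since the complement of a wrapping segment is a non-wrapping one and the condition is symmetric in \((X_{1},X_{2})\), it suffices to enumerate non-wrapping segments, as done above.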
There are known constructions of mimicking networks for planar [44, 66] and general [65] graphs, but they are designed to preserve edge-cuts. We give a self-contained construction of a mimicking network of size \(\mathcal{O}(|X|^{6})\) preserving the necessary vertex-cuts.

Footnote 6: When \(X\) lies on the outer face, then \(\mathcal{T}\subseteq X^{2}\) is called _cross-free_ if there are no pairs \((a,b),(c,d)\in\mathcal{T}\) such that \(a,c,b,d\) lie in this order on the outer face.

To reduce the general case to the single-face case, we follow the idea from the kernelization algorithm for Planar Steiner Tree parameterized by the solution size [84] (cf. [12]). To adapt it to our setting, we consider the _radial graph_ of the plane graph \(G\), obtained by inserting a vertex inside each face, connecting it to all vertices from \(G\) lying on this face, and removing the original edges from \(E(G)\). Let \(T\) be a tree in the radial graph that spans all the terminals from \(X\). Imagine "cutting the graph open" alongside \(T\) by widening the fissure marked by \(T\) on the plane and duplicating the vertices of \(G\) lying on this fissure (see Figure 1). This operation creates a new face, incident to all the copies of terminals. We could compress the obtained graph using the approach outlined above and afterwards stitch it back alongside \(T\). However, now we need to treat all the vertices lying on \(T\) (not only the ones from \(X\)) as terminals in order to keep track of paths that might traverse \(T\). This means that, prior to opening the graph, we need to ensure that the tree \(T\) is not too large, that is, that the vertices from \(X\) are close to each other in the radial graph. In other words, we need to reduce the _radial diameter_ of the graph, i.e., the maximum number of faces one must cross to reach a certain vertex from another one. Such an approach has been applied in the reductions to a single-face case for Vertex Multiway Cut [53] and Vertex Planarization [55].

**Radial diameter reduction.** The radial diameter of a plane graph \(G\) is proportional to the maximal number of concentric cycles in \(G\) (i.e., these cycles are vertex-disjoint and each one is located in the interior of the next one, resembling a well). In particular, when the radial diameter is as large as \(\Omega(k\cdot\mathsf{tw}^{2})\), then one can find a sequence \(C_{1},C_{2},\ldots,C_{m}\) of concentric cycles such that \(m=\Omega(\mathsf{tw}^{2})\) and each terminal from \(X\) is located in either the interior of \(C_{1}\) or the exterior of \(C_{m}\). We show that in this case, \(G\) must contain an _irrelevant edge_, that is, an edge \(e\) for which the graph \(G\setminus e\) is \(X\)-linkage-equivalent to \(G\). Our strategy is to iteratively remove irrelevant edges from \(G\) until its radial diameter becomes bounded by \(O(k\cdot\mathsf{tw}^{2})\). Afterwards, we will be able to find a Steiner tree \(T\) of \(X\) of size \(O(k^{2}\cdot\mathsf{tw}^{2})\) in the radial graph. This will allow us to reduce the problem to the single-face case with \(O(k^{2}\cdot\mathsf{tw}^{2})\) terminals.

Consider a sequence of concentric cycles \(C_{1},C_{2},\ldots,C_{t}\). It is known [48] that the existence of \(t\) vertex-disjoint paths between \(V(C_{1})\) and \(V(C_{t})\) yields a minor model of a \(t\times t\)-grid, thus implying that the treewidth of the graph is at least \(t\).
Conversely, if we know that the treewidth is less than \(t\), Menger's theorem implies that there is a vertex \((V(C_{1}),V(C_{t}))\)-separator of size less than \(t\). We can always find such a separator located within the well, that is, between \(C_{1}\) and \(C_{t}\) (inclusively). In our setting, this implies that we can find a \((V(C_{1}),V(C_{m}))\)-separator of size at most \(\mathsf{tw}\) located inside the cycle \(C_{\mathsf{tw}+1}\), and similarly, one located outside \(C_{m-\mathsf{tw}-1}\). Therefore, the search for an irrelevant edge can be reduced to the two-face case: we are given a plane graph \(G\) with a set of terminals \(V_{out}\) located on the outer face, another set of terminals \(V_{in}\) located on some internal face, such that \(|V_{in}|,|V_{out}|\leq\mathsf{tw}\), and a sequence of \(\Omega(\mathsf{tw}^{2})\) concentric cycles \(C_{1},C_{2},\ldots,C_{m}\) around \(V_{in}\) (see Figure 2). Now the task is to find an edge \(e\in E(G)\) such that \(G\setminus e\) and \(G\) are \((V_{in}\cup V_{out})\)-linkage-equivalent.

Figure 1: A visualization of cutting a graph open alongside a tree \(T\) in the radial graph. Left: The tree \(T\) is sketched with dotted lines; we have \(V(T)\cap V(G)=\{1,2,3,4\}\). The vertices \(1,3,4\) (black squares) belong to \(X\). A \((1,3)\)-path \(P\) is drawn with solid orange lines. Middle: After opening the graph, the vertex \(2\) is split into three copies \(2a,2b,2c\), while the path \(P\) is split into a \((1,2c)\)-path and a \((2a,3)\)-path. The white and black squares are the new terminals. Right: After flipping the embedding, we can assume that the face where the cut happened is the outer face. The dotted lines illustrate all the cuts that must be preserved in a mimicking network. As an example, the brown heavier line stands for the minimum cut between \(\{4,2c\}\) and \(\{1,2a,3,2b\}\).

We begin by computing the minimum \((V_{in},V_{out})\)-separator \(S_{in}\) that is _closest_ to \(V_{in}\). By "closest to \(V_{in}\)" we mean that for any other minimum separator \(S\), the set of vertices reachable from \(V_{in}\) in \(G-S\) is a superset of those reachable from \(V_{in}\) in \(G-S_{in}\). It is well-known [23, Thm. 8.4] that such a separator exists. Similarly, we compute the minimum \((V_{in},V_{out})\)-separator \(S_{out}\) closest to \(V_{out}\). Every inclusion-minimal vertex separator \(S\) in a plane graph \(G\) can be represented by a _noose_ in the plane that intersects the image of \(G\) exactly at the vertices of \(S\). Since \(|S_{in}|,|S_{out}|\leq\mathsf{tw}\), the corresponding nooses cannot cross more than \(\mathsf{tw}\) cycles from \(C_{1},C_{2},\ldots,C_{m}\). These nooses form a partition of \(G\) into three parts, one of which must contain \(\Omega(\mathsf{tw}^{2})\) cycles from \(C_{1},C_{2},\ldots,C_{m}\). Depending on which part is "deep", we apply different strategies for detecting an irrelevant edge.

**Case I: Deep well in the interior/exterior.** First consider the case that there are \(\Omega(\mathsf{tw}^{2})\) concentric cycles between \(V_{in}\) and \(S_{in}\). (The case with a deep subgraph between \(S_{out}\) and \(V_{out}\) is analogous.) Let \(\mathcal{P}\) be a family of vertex-disjoint paths (a linkage) with endpoints in \(V_{in}\cup V_{out}\) and let \(\mathcal{P}_{long}\) denote the subfamily of paths in \(\mathcal{P}\) that connect \(V_{in}\) to \(V_{out}\).
By a standard argument, the paths from \(\mathcal{P}\setminus\mathcal{P}_{long}\) can be assumed to cross only a few cycles from \(C_{1},C_{2},\ldots,C_{m}\), namely at most \(\max(|V_{in}|,|V_{out}|)\leq\mathsf{tw}\), so our main focus is on the paths from \(\mathcal{P}_{long}\). Each of these paths must traverse the separator \(S_{in}\); let \(p\) denote its size, so \(|\mathcal{P}_{long}|\leq p\). But since \(S_{in}\) is the minimum \((V_{in},V_{out})\)-separator closest to \(V_{in}\), for any \((V_{in},V_{out})\)-separator \(S\) located inclusively between \(V_{in}\) and \(S_{in}\), there exist at least \(p+1\) vertex-disjoint \((V_{in},S)\)-paths, i.e., \(\mu(V_{in},S)>p\).

Figure 2: Left: An overview of the two-face case. The well comprises a sequence of concentric cycles \(C_{1},\ldots,C_{m}\) such that each terminal from \(X\) (the red squares) is either inside \(C_{1}\) or outside \(C_{m}\). The nooses representing separators \(V_{in},V_{out}\) are drawn blue, while the nooses of \(S_{in},S_{out}\) are red; each of them can intersect at most \(\mathsf{tw}\) cycles. The cycles contained entirely in each of the three parts of the graph are highlighted in green. A family of vertex-disjoint paths with endpoints at \(X\) is sketched with black lines. We can assume that the \((V_{in},V_{in})\)-subpaths and the \((V_{out},V_{out})\)-subpaths, which do not traverse the well from inside to outside, intersect only a few cycles. Since the remaining \((V_{in},V_{out})\)-subpaths must cross the separator \(S_{in}\), which is the minimum \((V_{in},V_{out})\)-separator closest to \(V_{in}\), there is space for an augmenting path between \(V_{in}\) and \(S\) (the orange noose). This fact, crucial for the analysis of Case I, is illustrated with two dashed lines. Right: An illustration of the notion of the winding number from Case II. The blue, green, and orange paths have winding numbers 1, -1, -7, respectively.

This allows us to focus on the following variant of the two-face case: \(S\) replaces the set of terminals \(V_{out}\) lying on the outer face, \(\mu(V_{in},S)>p\), there are still \(\Omega(\mathsf{tw}^{2})\) concentric cycles between \(V_{in}\) and \(S\), and we want to detect an edge \(e\) that is not relevant for any family \(\mathcal{P}\) of at most \(p\) vertex-disjoint \((V_{in},S)\)-paths. Here, by "not relevant for \(\mathcal{P}\)" we mean that there exists a linkage \(\mathcal{P}^{\prime}\) in \(G\setminus e\) connecting the same pairs of vertices as \(\mathcal{P}\). Let \(V^{\prime}\subseteq V_{in}\) (resp. \(S^{\prime}\subseteq S\)) denote the endpoints of \(\mathcal{P}\) at \(V_{in}\) (resp. at \(S\)). We prove a criterion that, under the given assumptions, such a linkage exists if and only if (1) the cyclic ordering of \(V^{\prime}\) matches the cyclic ordering of \(S^{\prime}\) and (2) the cut-condition \(\mu(V^{\prime},S^{\prime})\geq|\mathcal{P}|\) holds. To prove this, we take advantage of the slack \(\mu(V_{in},S)>p\) to show that any family of at most \(p\) disjoint \((V^{\prime},S^{\prime})\)-paths can be "shifted" clockwise using \(\Omega(\mathsf{tw})\) concentric cycles. As we need at most \(\mathsf{tw}\) shifts to transform any \((V^{\prime},S^{\prime})\)-linkage into \(\mathcal{P}\), having \(\Omega(\mathsf{tw}^{2})\) concentric cycles suffices. With this criterion at hand, we show that there always exists an edge \(e\) whose removal does not affect the cut-condition for any pair \((V^{\prime},S^{\prime})\), implying that \(e\) is irrelevant.
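Both cases rely on minimum separators closest to one side. The following sketch (ours, using `networkx`, assuming the two terminal sets are disjoint) illustrates the standard node-splitting construction behind such separators (cf. Theorem 3.1 below): split every vertex into an arc of capacity 1, run one max-flow, and read off the closest minimum vertex separator from residual reachability.

```python
# A sketch (ours, not the paper's code) of computing the minimum
# (X, Y)-vertex-separator closest to X via node splitting and max-flow.
import networkx as nx

def closest_min_separator(G, X, Y):
    H = nx.DiGraph()
    INF = float("inf")
    for v in G.nodes:
        H.add_edge((v, "in"), (v, "out"), capacity=1)  # unit vertex capacity
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"), capacity=INF)
        H.add_edge((v, "out"), (u, "in"), capacity=INF)
    s, t = "source", "sink"
    for x in X:
        H.add_edge(s, (x, "in"), capacity=INF)
    for y in Y:
        H.add_edge((y, "out"), t, capacity=INF)
    _, flow = nx.maximum_flow(H, s, t)
    # Residual graph: forward edges with spare capacity, backward edges
    # carrying positive flow.
    R = nx.DiGraph()
    R.add_nodes_from(H.nodes)
    for u in flow:
        for v, f in flow[u].items():
            if f < H[u][v]["capacity"]:
                R.add_edge(u, v)
            if f > 0:
                R.add_edge(v, u)
    reach = nx.descendants(R, s) | {s}
    # A vertex belongs to the closest separator iff its split arc is
    # saturated and crosses the residual reachability boundary.
    return {v for v in G.nodes if (v, "in") in reach and (v, "out") not in reach}
```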
**Case II: Deep well in the middle.** Now consider the case that there are \(\Omega(\mathsf{tw}^{2})\) concentric cycles between \(S_{in}\) and \(S_{out}\). Recall that \(p=|S_{in}|=|S_{out}|=\mu(S_{in},S_{out})\). Let \(\mathcal{P}\) be a family of vertex-disjoint \((S_{in}\cup S_{out})\)-paths. If \(\mathcal{P}\) contains fewer than \(p\) paths that connect \(S_{in}\) to \(S_{out}\), then the analysis is the same as in the previous case. Therefore, the problem boils down to a very restricted case: every path in \(\mathcal{P}\) connects \(S_{in}\) to \(S_{out}\) and every vertex in \(S_{in}\cup S_{out}\) is an endpoint of a path from \(\mathcal{P}\). We call such \(\mathcal{P}\) a _cylindrical linkage_.

It is now convenient to fix a concrete plane embedding of the graph: assume that the vertices from \(S_{in}\) lie on the circle \(\{(x,y)\in\mathbb{R}^{2}\mid x^{2}+y^{2}=1\}\) and the vertices from \(S_{out}\) lie on \(\{(x,y)\in\mathbb{R}^{2}\mid x^{2}+y^{2}=4\}\). Furthermore, assume that the \(j\)-th element of \(S_{in}\), \(0\leq j<p\), has polar coordinates \((1,\frac{-2\pi}{p}j)\) and the \(j\)-th element of \(S_{out}\) has polar coordinates \((2,\frac{-2\pi}{p}j)\). For an \((S_{in},S_{out})\)-path \(P\), we define its _winding number_ \(\theta(P)\in\mathbb{Z}\) as \(\frac{p}{2\pi}\) times the total angle traversed by the curve corresponding to \(P\), measured clockwise (see Figure 2). It is easy to see that all paths in a cylindrical linkage \(\mathcal{P}\) share the same winding number, so we can also define a winding number \(\theta(\mathcal{P})\) of \(\mathcal{P}\). We say that \(\theta\in\mathbb{Z}\) is _feasible_ if there exists a cylindrical linkage \(\mathcal{P}\) with \(\theta(\mathcal{P})=\theta\). Note that if \(\theta(\mathcal{P}_{1})\equiv\theta(\mathcal{P}_{2})\mod p\) then the linkages \(\mathcal{P}_{1},\mathcal{P}_{2}\) connect the same pairs of terminals. Therefore, there are at most \(p\) different connection-patterns that we need to preserve.

The structure of cylindrical linkages has been studied by Robertson and Seymour [87], who showed that when \(\theta_{1}<\theta_{2}<\theta_{3}\) and \(\theta_{1},\theta_{3}\) are feasible, then so is \(\theta_{2}\). This means that it suffices to preserve just the minimal and maximal values \(\theta_{min},\theta_{max}\) that are feasible. They can be efficiently computed because the problem becomes polynomial-time solvable in this special case [87]. Moreover, the observation above allows us to assume that the values \(\theta_{min},\theta_{max}\) differ by at most \(p-1\). We prove that there exist cylindrical linkages \(\mathcal{P}_{1},\mathcal{P}_{2}\) with \(\theta(\mathcal{P}_{1})=\theta_{min}\), \(\theta(\mathcal{P}_{2})=\theta_{max}\), such that the intersection of any \(P_{1}\in\mathcal{P}_{1}\) and \(P_{2}\in\mathcal{P}_{2}\) has at most one connected component. Combined with the presence of many concentric cycles between \(S_{in}\) and \(S_{out}\), this implies that there exists an edge \(e\) used by neither \(\mathcal{P}_{1}\) nor \(\mathcal{P}_{2}\). Consequently, removing \(e\) from the graph preserves the set of feasible values of \(\theta\) (modulo \(p\)) and hence yields an \((S_{in}\cup S_{out})\)-linkage-equivalent instance. This concludes the description of the radial diameter reduction.

**Comparison to the previous approach.** The known \(2^{O(k^{2})}\cdot n^{O(1)}\)-time algorithms for Planar Disjoint Paths [17, 71] are based on bounding the number of relevant homotopy classes of a solution by \(2^{O(k^{2})}\).
Afterwards, for each fixed homotopy class, the problem can be solved in polynomial time [91]. An important case in the analysis of the homotopy classes resembles the two-face case described above. To bound the winding numbers of paths, these proofs rely on a highly technical argument originating from the FPT algorithm for the directed variant of the problem [24]. By dividing the analysis into two cases, one with non-maximal linkages and one with highly structured linkages, we avoid these technicalities and obtain a stronger result (i.e., a kernel) by simpler means.

### 2.2 Kernelization hardness for parameter \(k\)

To establish the kernelization hardness, we present a polynomial-time reduction from Set Cover with parameter (the universe size) \(k\) to Planar Disjoint Paths with parameter (the number of terminal pairs) \(k^{\prime}=k^{\mathcal{O}(1)}\). Under this parameterization, Set Cover is known not to admit a polynomial kernel unless \(\mathrm{coNP}\subseteq\mathrm{NP}/\mathrm{poly}\) [27] and to be \(\mathrm{WK}[1]\)-complete [50]. Hence, such a reduction entails Theorems 1.1 and 1.2.

Before we present the reduction, we discuss the intuition that guided its construction (and, in particular, that of the so-called vector-containment gadget described later). Recall that Planar Disjoint Paths is solvable in polynomial time once we fix the _homotopy class_ of a sought solution [91]. This suggests that a polynomial reduction from an NP-hard problem should map different NP-witnesses (each encoding a solution candidate that can be verified in polynomial time) into different homotopy classes of a solution. Notice that the solution candidates to Set Cover are tuples of sets (whose number can be huge), while the number of different homotopy classes is bounded by \(n^{\mathcal{O}(k^{\prime})}\) [91].7 Thus, _we must map the solution candidates to_ Set Cover _into homotopies in an economical fashion._ The main crux of the reduction is an intricate mechanism that allows us to encode a set family of size as large as \(2^{\Omega(k)}\) using a homotopy class of just \(k^{\mathcal{O}(1)}\) paths.

Footnote 7: The number of different homotopy classes is also known to be bounded by \(2^{\mathcal{O}(k^{2})}\) [71], if we restrict ourselves to only "relevant" ones. The precise definition of "relevant" in this context is immaterial for our work.

**Non-crossing multicommodity flow.** We present our reduction in the language of non-crossing edge-disjoint walks. Focusing on edge-disjoint walks allows us to utilize the convenient link between max-flows and shortest paths in the dual graph. What is more, this setting generalizes finding both vertex-disjoint and edge-disjoint paths in planar graphs [7]8, which will make Theorems 1.1, 1.2, and 1.3 simple corollaries of the main reduction.

Footnote 8: We remark that there is a flaw in [7, Proposition 12] because replacing a vertex with merely a cycle is not sufficient. We give a correct argument using a cylindrical wall instead of a single cycle.

For a multigraph \(G\) with a fixed plane embedding, two pairs of edges \((e_{1},f_{1})\) and \((e_{2},f_{2})\), with all edges incident to a vertex \(v\in V(G)\), _cross_ if \(e_{1},e_{2},f_{1},f_{2}\) appear in this order in the cyclic ordering of edges around \(v\). Next, two edge-disjoint walks \(W_{1},W_{2}\) in \(G\) are _non-crossing_ if there are no pairs of consecutive edges \((e_{1},f_{1})\) in \(W_{1}\) and \((e_{2},f_{2})\) in \(W_{2}\) that cross (see Figure 3).
In the Non-crossing Multicommodity Flow problem (cf. [7, 71]), we are given a plane multigraph \(G\) and a family \(\mathcal{T}\) of \(k\) tuples \((s_{i},t_{i},d_{i})\in V(G)\times V(G)\times\mathbb{N}\), called _requests_. A solution is a family \(\mathcal{P}\) of pairwise edge-disjoint non-crossing walks containing \(d_{i}\) walks connecting \(s_{i}\) to \(t_{i}\), for \(i\in[k]\), called a _non-crossing \(\mathcal{T}\)-flow_.9 We add one more technical requirement for a solution, which is irrelevant in this informal outline (see Definition 5.2). Since we do not impose any bounds on the _demands_ \(d_{i}\), the total size of the family \(\mathcal{P}\) may be exponential in the parameter \(k\). We address this issue later.

Footnote 9: In this paper, we consider only integral flows.

**The main gadgets.** We employ three types of gadgets sharing a common interface: each gadget is a plane multigraph \(G\) with a set of requests \(\mathcal{T}\subseteq V(G)\times V(G)\times\mathbb{N}\) and distinguished vertices \(s_{1},t_{1},\ldots,s_{m},t_{m}\) lying on the outer face in this clockwise order. For a subset \(F\subseteq[m]\), let \(\mathcal{T}_{F}=\{(s_{i},t_{i},1)\mid i\in F\}\). A gadget \((G,\mathcal{T})\) encodes some downward-closed family of sets \(\mathcal{F}\) as follows: the instance \((G,\mathcal{T}\cup\mathcal{T}_{F})\) should be solvable if and only if \(F\in\mathcal{F}\). In other words, routing all \((s_{i},t_{i})\)-walks for \(i\in F\) through the gadget should be possible only when \(F\) satisfies property \(\mathcal{F}\).

The first type of gadget is an \(\ell\)-Existential Gadget with \(\ell\) terminal pairs \((s_{i},t_{i})\). For this gadget, \(\mathcal{F}\) is defined as the family of all proper subsets of \([\ell]\). That is, \((G,\mathcal{T}\cup\mathcal{T}_{F})\) is solvable if and only if \(|F|<\ell\). Suppose we are given an instance \((k,\mathcal{S},\ell)\) of Set Cover, i.e., \(\mathcal{S}\) is a family of subsets of \([k]\) and we ask whether there are \(\ell\) sets in \(\mathcal{S}\) that cover \([k]\). We will make a single copy of an \(\ell\)-Existential Gadget for each \(i\in[k]\). For \(j\in[\ell]\), the intended meaning of \(j\not\in F\) in the \(i\)-th gadget is that the element \(i\) should be covered by the \(j\)-th set in a solution. The \(\ell\)-Existential Gadget ensures that for at least one \(j\in[\ell]\) this condition will hold.

Next, we introduce an \((r,k,\mathcal{S})\)-Subset Gadget. By padding the family \(\mathcal{S}\) with empty sets, we can assume that \(|\mathcal{S}|=2^{r}\) for some integer \(r\leq k\). The \((r,k,\mathcal{S})\)-Subset Gadget has \(k\) terminal pairs \((s_{i},t_{i})\) and we require \(F\in\mathcal{F}\) if and only if there exists \(S\in\mathcal{S}\) with \(F\subseteq S\). In other words, the set of additional terminals should encode a subset of some set from \(\mathcal{S}\).

Imagine the following construction: we make \(k\) copies of an \(\ell\)-Existential Gadget, \(\ell\) copies of an \((r,k,\mathcal{S})\)-Subset Gadget and, for each \(i\in[k]\), \(j\in[\ell]\), we add terminals \(u_{i,j}\), \(v_{i,j}\), connected to the \(j\)-th pair of terminals in the \(i\)-th existential gadget and the \(i\)-th pair of terminals in the \(j\)-th subset gadget. For each created pair \(u_{i,j}\), \(v_{i,j}\), we demand a single unit of flow between \(u_{i,j}\) and \(v_{i,j}\). By the property of an \(\ell\)-Existential Gadget, for each \(i\in[k]\) there should be at least one \(j\in[\ell]\) for which the \((u_{i,j},v_{i,j})\)-walk goes through the \(j\)-th subset gadget.
On the other hand, for each \(j\in[\ell]\), the set of such indices \(i\) forms a subset of some set from \(\mathcal{S}\). Therefore, satisfying all the requests is possible exactly when there are \(\ell\) sets in \(\mathcal{S}\) whose union is \([k]\), as intended.

Figure 3: Left: Three non-crossing walks traversing a vertex. In the reduction from Non-crossing Multicommodity Flow to Planar (Edge-)Disjoint Paths we replace each vertex with a cylindrical wall. This transformation makes the graph simple and subcubic, so the three notions of (a) edge-disjoint non-crossing walks, (b) edge-disjoint paths, and (c) vertex-disjoint paths become equivalent. Right: A system of gadgets for \(k=3\), \(\ell=2\), in a reduction from Set Cover. The existential gadgets are on the top and the subset gadgets are on the right. The terminal pairs in each gadget are numbered clockwise. The three squares in the middle are the junction gadgets. The highlighted stripe shows a way of communication between the first existential gadget and the first subset gadget. The red flow encodes a solution \(S_{1}=\{1\}\), \(S_{2}=\{2,3\}\).

The problem with this construction is that already for \(\ell=k=3\) such a graph contains \(K_{3,3}\) as a minor, so it cannot be planar. To circumvent this, we need yet another gadget to allow the links between each \(i\)-th existential gadget and each \(j\)-th subset gadget to cross. To this end, we utilize a \(\mathsf{Junction\,Gadget}\) \((G,\mathcal{T})\) with \(4\) terminal pairs \((s_{i},t_{i})\). We demand that \((G,\mathcal{T}\cup\mathcal{T}_{F})\) should be solvable if and only if \(\{1,3\}\not\subseteq F\) and \(\{2,4\}\not\subseteq F\). That is, when we allow a walk on the left then we cannot route a walk on the right, and when we allow a walk at the top then we cannot route a walk at the bottom, and vice versa. These two exclusion mechanisms are independent of each other, thus allowing two bits of information to "travel" in a crossing fashion (see Figure 3).

The existential and junction gadgets have been employed in the original NP-hardness proof of Planar Disjoint Paths [64] and we can easily adapt them for our purposes. The main challenge, though, is to construct the \((r,k,\mathcal{S})\)-\(\mathsf{Subset\,Gadget}\). In order to design a meaningful reduction we can produce only \((r+k)^{\mathcal{O}(1)}\) requests while we need to encode as many as \(2^{r}\) sets from \(\mathcal{S}\).

**Subset gadget: The first attempt.** We begin with a simplified construction, first presenting the pattern propagation mechanism alone. This also reflects how the full construction is presented in Sections 5.3.1 and 5.3.2. Additionally, here we do not delve into formulas regarding the numbers of parallel edges and the non-crucial demands \(d_{i}\), aiming at the simplest presentation of the main ideas.

By a slight abuse of notation, we treat \(\mathcal{S}\) as a function from \(\{0,1\}^{r}\) to subsets of \([k]\). We will build the gadget from \(k\) blocks, each of which could "choose" a pattern encoded by a vector \(\mathbf{b}\in\{0,1\}^{r}\). When \(i\in F\) and the \(i\)-th block chooses a vector \(\mathbf{b}^{i}\), this should imply \(i\in\mathcal{S}(\mathbf{b}^{i})\). The pattern propagation will ensure that each block chooses exactly the same vector \(\mathbf{b}\). In turn, this will imply that \(i\in F\Rightarrow i\in\mathcal{S}(\mathbf{b})\), matching the gadget specification.

Let the \(r\)-ladder be the \((r+1)\times 2\)-grid, with internal faces numbered bottom-up as \(f_{1},\ldots,f_{r}\).
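For intuition, an \(r\)-ladder is just a narrow grid and can be generated in one line (a sketch of ours using `networkx`); the coordinate convention below, placing face \(f_{j}\) between rows \(j-1\) and \(j\), is our own assumption for illustration.

```python
import networkx as nx

def r_ladder(r):
    # The (r+1) x 2 grid: vertices (row, col) with row in 0..r, col in {0, 1};
    # its r internal faces correspond to f_1, ..., f_r, numbered bottom-up.
    return nx.grid_2d_graph(r + 1, 2)

L = r_ladder(3)
print(L.number_of_nodes(), L.number_of_edges())  # 8 vertices, 10 edges
```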
We construct the \(i\)-th block using two \(r\)-ladders, an upper one \(L_{i}^{+}\) and a lower one \(L_{i}^{-}\), and connect the consecutive blocks as depicted in Figure 4. We also connect the first and the last block, thus creating a ring-like structure with the lower ladders in its interior. In each ladder \(L\) we attach two vertices \(L[u_{0}]\), \(L[u_{1}]\) to, respectively, the bottom and the top of \(L\), and add a request \((L[u_{0}],L[u_{1}],1)\) to \(\mathcal{T}\). We will enforce that the \((L[u_{0}],L[u_{1}])\)-walk \(W\) must be entirely contained within \(L\), and so it can be associated with a vector \(\mathbf{b}^{L}\in\{0,1\}^{r}\) encoding which faces \(f_{1},\ldots,f_{r}\) are to the left of \(W\) (then the corresponding bit in \(\mathbf{b}^{L}\) is set to \(0\)) and which are to the right (then the corresponding bit is \(1\)). We shall call \(\mathbf{b}^{L}\) the _pattern_ in \(L\).

Next, for each \(i\in[k]\) and \(j\in[r]\) we attach a vertex inside the face \(f_{j}\) of the ladder \(L_{i}^{+}\) (we refer to this vertex as \(L_{i}^{+}[x_{j}]\)) and a vertex inside the face \(f_{j}\) of the ladder \(L_{i}^{-}\) (denoted \(L_{i}^{-}[x_{j}]\)). We create a request \((L_{i}^{+}[x_{j}],L_{i}^{-}[x_{j}],2^{j-1})\). Because the \((L_{i}^{+}[x_{j}],L_{i}^{-}[x_{j}])\)-walks cannot cross the \((L_{i}^{+}[u_{0}],L_{i}^{+}[u_{1}])\)-walk, they need to use the passage on the left when the \(j\)-th bit of the pattern is \(0\), or the passage on the right when this bit is \(1\). This already implies that the patterns in the ladders \(L_{i}^{-}\), \(L_{i}^{+}\) must be the same, and we will refer to this common pattern as \(\mathbf{b}^{i}\).

We set the capacity of each passage going through the middle belt to be \(2^{r}-1\), i.e., we place this many parallel edges in each of the \(k\) passages. Note that the \(x\)-requests from the \(i\)-th block send \(\sum_{j=1}^{r}2^{j-1}\cdot 1_{[\mathbf{b}^{i}_{j}=0]}\) units of flow through the passage to the left of the \(i\)-th block and \(\sum_{j=1}^{r}2^{j-1}\cdot 1_{[\mathbf{b}^{i}_{j}=1]}\) units of flow through the right passage. If all the patterns are the same, then the total amount of flow going through each passage is \(\sum_{j=1}^{r}2^{j-1}=2^{r}-1\). Because we work on a ring structure, when some patterns differ then there is a passage through which one would need to push at least \(2^{r}\) units of flow. Consequently, all the vectors \(\mathbf{b}^{i}\) must coincide, and so pattern propagation works as intended.

So far we have established a mechanism that makes a solution choose a single vector \(\mathbf{b}\in\{0,1\}^{r}\) that appears as a pattern in each block of the \((r,k,\mathcal{S})\)-\(\mathsf{Subset\,Gadget}\). The next step is to enforce that \(i\in F\Rightarrow i\in\mathcal{S}(\mathbf{b})\). It is convenient to define \(Z_{i}^{\mathcal{S}}\) as the set of vectors \(\mathbf{b}\) for which \(i\in\mathcal{S}(\mathbf{b})\). Then the set \(Z_{i}^{\mathcal{S}}\) can be encoded in the graph, and we need to check whether the pattern \(\mathbf{b}\) chosen by a solution belongs to \(Z_{i}^{\mathcal{S}}\). To this end, we will need another kind of gadget, with a slightly different interface. For \(Z\subseteq\{0,1\}^{r}\), an \((r,Z)\)-\(\mathsf{Vector\,Containment\,Gadget}\) is a plane multigraph \(G\) with distinguished vertices \(z_{1},\ldots,z_{r}\) and \(w_{0},w_{1}\), with the last two lying on the outer face.
For \(\mathbf{b}\in\{0,1\}^{r}\) we define \(\mathcal{T}_{\mathbf{b},d}\) as the family of the following requests:

1. \((w_{0},z_{j},1)\) for each \(j\in[r]\) with \(\mathbf{b}_{j}=0\),
2. \((w_{1},z_{j},1)\) for each \(j\in[r]\) with \(\mathbf{b}_{j}=1\),
3. the request \((w_{0},w_{1},d)\).

(The vertex \(z_{j}\) needs to be connected by a walk to either \(w_{0}\) or \(w_{1}\), depending on the \(j\)-th bit in \(\mathbf{b}\).) We require that the instance \((G,\mathcal{T}_{\mathbf{b},0})\) is satisfiable for any \(\mathbf{b}\in\{0,1\}^{r}\) but the instance \((G,\mathcal{T}_{\mathbf{b},1})\) is satisfiable if and only if \(\mathbf{b}\in Z\). In other words, the choice of the vector \(\mathbf{b}\) governs whether we can insert a \((w_{0},w_{1})\)-walk on top of the \(r\) walks with endpoints at \(z_{1},\ldots,z_{r}\).

Figure 4: A simplified construction of a \((3,k,\mathcal{S})\)-\(\mathsf{Subset\,Gadget}\) and a sketch of a solution (the colorful lines). The vertices that need to be connected by walks share common colors and shapes. The \(i\)-th block is built from two 3-ladders \(L_{i}^{+},L_{i}^{-}\), and a \((3,Z_{i}^{\mathcal{S}})\)-\(\mathsf{Vector\,Containment\,Gadget}\). The blocks are combined into a ring-like structure (on the right). The area separating the upper and lower ladders is referred to as the _middle belt_. Due to pattern propagation, the choice of which colors are being routed through the left or right passage must be the same in all blocks. The common pattern \(\mathbf{b}=(010)\) chosen by the solution is sketched with solid lines. Accommodating an \((s_{i},t_{i})\)-walk (purple) through the \(i\)-th vector-containment gadget is possible if and only if \(\mathbf{b}\in Z_{i}^{\mathcal{S}}\).

Assuming that such a gadget exists, we could finish the construction of the subset gadget as follows. For each \(i\in[k]\) we insert an \((r,Z_{i}^{\mathcal{S}})\)-Vector Containment Gadget below the \(i\)-th block and connect it to the lower corners of the ladder \(L_{i}^{-}\). We refer to its distinguished vertices with \(i\) in the superscript, e.g., \(w_{0}^{i},z_{j}^{i}\). Next, for each \(i\in[k]\), \(j\in[r]\), we create another vertex inside the face \(f_{j}\) of the ladder \(L_{i}^{-}\) (denoted \(L_{i}^{-}[y_{j}]\)) and add a request \((L_{i}^{-}[y_{j}],z_{j}^{i},1)\). By the same argument as above, these walks must go through \(w_{0}^{i}\) when \(\mathbf{b}_{j}=0\) or through \(w_{1}^{i}\) when \(\mathbf{b}_{j}=1\). Therefore, they contain subwalks satisfying the requests from \(\mathcal{T}_{\mathbf{b},0}\). Next, we create vertices \(s_{i},t_{i}\) attached to the upper corners of the ladder \(L_{i}^{+}\); note that they end up in the exterior of the ring structure, i.e., on the outer face of the subset gadget. The only possible way from \(s_{i}\) to \(t_{i}\) leads through the \(i\)-th vector-containment gadget. Hence, when \(i\in F\) and we need to satisfy the request \((s_{i},t_{i},1)\), there must exist a non-crossing \(\mathcal{T}_{\mathbf{b},1}\)-flow in the \((r,Z_{i}^{\mathcal{S}})\)-Vector Containment Gadget. By its definition, this implies \(\mathbf{b}\in Z_{i}^{\mathcal{S}}\) and so \(i\in\mathcal{S}(\mathbf{b})\), as intended.
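Before moving on, a toy computation (ours, purely illustrative, with 0-indexed bits so that the \(x\)-request of bit \(j\) carries weight \(2^{j}\)) verifies the accounting behind pattern propagation: equal patterns load every passage with exactly \(2^{r}-1\) units, while any disagreement overloads some passage.

```python
# A toy check (ours) of the pattern-propagation accounting on the ring.
def passage_loads(patterns, r):
    """patterns: one 0/1 tuple per block, listed in ring order."""
    k = len(patterns)
    load = [0] * k  # load[i] = flow through the passage left of block i
    for i, b in enumerate(patterns):
        for j in range(r):
            if b[j] == 0:
                load[i] += 2 ** j            # routed through the left passage
            else:
                load[(i + 1) % k] += 2 ** j  # routed through the right passage
    return load

r, k = 3, 4
same = [(0, 1, 0)] * k
print(passage_loads(same, r))      # [7, 7, 7, 7]: each passage carries 2**r - 1
bad = [(0, 1, 0)] * (k - 1) + [(1, 1, 0)]
print(max(passage_loads(bad, r)))  # 8 >= 2**r: some passage overflows
```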
The last issue is that the passages through the middle belt are already saturated by the \(x\)-requests. This is not a big problem though, as we can multiply the demands in the \(x\)-requests by a constant and similarly multiply the number of parallel edges in the passages, creating a little slack sufficient for routing the \((s_{i},t_{i})\)-walks.

**Vector-containment gadget.** Unfortunately, the construction above does not work because we do not know how to construct an \((r,Z)\)-Vector Containment Gadget. Instead, we now present a gadget with a slightly more complicated specification. First, we explain how to construct a gadget that behaves almost like an \((r,Z)\)-Vector Containment Gadget and then augment the construction of the subset gadget with the additional elements necessary to plug in the proper vector-containment gadget.

It is convenient to analyze the gadget from the dual perspective. For each \(\mathbf{b}\in\{0,1\}^{r}\) we can draw a curve (the red dashed curve in Figure 4) that has to intersect all the walks in a \(\mathcal{T}_{\mathbf{b},d}\)-flow. When there is a path \(P\) in the dual graph whose homotopy aligns with this curve (that is, \(P\) traverses \(z_{j}\) from the side of \(w_{0}\) when \(\mathbf{b}_{j}=0\) and from the side of \(w_{1}\) when \(\mathbf{b}_{j}=1\)), then the length of \(P\) imposes an upper bound on the maximal number of walks in a \(\mathcal{T}_{\mathbf{b},d}\)-flow, which in turn entails an upper bound on \(d\). Given the set \(Z\), we would like to construct a plane graph, being a prototype of a dual graph of the gadget, with two vertices \(s,t\) on the outer face and \(r\) distinguished internal faces \(f_{1},\ldots,f_{r}\) with the following property: the length of a shortest \((s,t)\)-path with homotopy class encoding the vector \(\mathbf{b}\) (with respect to the faces \(f_{1},\ldots,f_{r}\)) depends on whether \(\mathbf{b}\in Z\).

We construct a plane graph \(H_{r}\) (see Figure 5) having a _unique_ shortest \((s,t)\)-path \(P_{\mathbf{b}}\) for each homotopy class given by \(\mathbf{b}\in\{0,1\}^{r}\). Moreover, the paths \(P_{\mathbf{b}}\) are pairwise edge-disjoint. The graph \(H_{r}\) has size \(2^{\mathcal{O}(r)}\), which is polynomial in the size of the family \(\mathcal{S}\), and its treewidth is \(2^{\Omega(r)}\) due to large grid subgraphs; this is inevitable in the light of Theorem 1.4. The faces \(f_{1},\ldots,f_{r}\) are the dual counterparts of the vertices \(z_{1},\ldots,z_{r}\), while the areas above and below \(H_{r}\) are the placeholders for \(w_{0},w_{1}\).

In order to achieve our goal, we would like to modify \(H_{r}\) to increase the length of the path \(P_{\mathbf{b}}\) exactly when \(\mathbf{b}\in Z\), thus allowing more slack for flows in the dual of \(H_{r}\). This can be obtained by subdividing the first edge (incident to \(s\)) on the path \(P_{\mathbf{b}}\) when \(\mathbf{b}\in Z\); due to the edge-disjointness of the paths \(P_{\mathbf{b}}\), this does not affect the remaining homotopy classes. This modification raises the upper bound on the size of a \(\mathcal{T}_{\mathbf{b},d}\)-flow by one; by applying some other adjustments to \(H_{r}\) we can make this upper bound tight for non-crossing flows. As a consequence, we can accommodate one more \((w_{0},w_{1})\)-walk in the dual exactly when the pattern \(\mathbf{b}\) belongs to \(Z\), as intended.

The main technical hurdle comes from the fact that the length of the path \(P_{\mathbf{b}}\) in \(H_{r}\) depends on \(\mathbf{b}\): it is very short for \(\mathbf{b}\) being a \(0\)-vector and very long for \(\mathbf{b}\) comprising alternating \(0\)'s and \(1\)'s.
So the bound on the size of a \(\mathcal{T}_{\mathbf{b},d}\)-flow depends not only on whether \(\mathbf{b}\in Z\) but also on some function \(\gamma(\mathbf{b})\), making it useless for the current construction of the subset gadget. To circumvent this, we first prove that the function \(\gamma\) enjoys a very special form, which will play a crucial role later:

\[\gamma(b_{1}b_{2}\ldots b_{r})=\sum_{1\leq p<q\leq r}1_{[b_{p}\neq b_{q}]}\cdot 2^{r-q+p-1}\]

We will now work with a generalization of an \((r,Z)\)-Vector Containment Gadget, namely an \((r,\gamma,Z)\)-Vector Containment Gadget. The difference is that a non-crossing \(\mathcal{T}_{\mathbf{b},d}\)-flow should be feasible exactly when \(d\leq\gamma(\mathbf{b})+1_{[\mathbf{b}\in Z]}\). In fact, the proper definition (5.5) requires more subtleties concerning factors depending on \(r\), but we omit them here.

**Subset gadget: The real deal.** The current task is to adapt the construction of an \((r,k,\mathcal{S})\)-Subset Gadget to be compatible with the more complicated \((r,\gamma,Z)\)-Vector Containment Gadget. The condition \(\mathbf{b}\in Z\) becomes meaningful for the vector-containment gadget only when it needs to additionally accommodate \(\gamma(\mathbf{b})\) units of \((w_{0},w_{1})\)-flow. We will extend the previous construction with "dynamic flow generators": new requests that could be satisfied either locally, within their ladder, or via walks passing through the vector-containment gadget.

We insert \(\binom{r}{2}\) new blocks (without vector-containment gadgets) between each pair of blocks in the ring structure. In each new block, labeled now with a triple \((i,p,q)\), \(i\in[k]\), \(1\leq p<q\leq r\), we request \(2^{r-q+p-1}\) walks from a vertex within the face \(f_{p}\) of the corresponding lower ladder \(L\) to a vertex within the face \(f_{q}\) of the same ladder (see Figure 6). Due to pattern propagation, these faces will end up on the same side of the \((L[u_{0}],L[u_{1}])\)-walk only when a solution chooses a pattern \(\mathbf{b}\) with \(\mathbf{b}_{p}=\mathbf{b}_{q}\). In this case the new requests can be satisfied by walks contained in \(L\). However, when \(\mathbf{b}_{p}\neq\mathbf{b}_{q}\), the faces \(f_{p},f_{q}\) in the ladder \(L\) become separated and the corresponding \(2^{r-q+p-1}\) walks must be routed elsewhere. Observe that the total amount of flow that cannot be satisfied locally, summing over all pairs \(1\leq p<q\leq r\), matches the formula for \(\gamma(\mathbf{b})\).

Figure 5: The graph \(H_{4}\) with \(2^{4}\) vertices in each vertical path. Four distinguished faces \(f_{1},f_{2},f_{3},f_{4}\) are highlighted in light blue. For \(\mathbf{b}=(1001)\) the path \(P_{\mathbf{b}}\) is drawn in orange and for \(\mathbf{v}=(1110)\) the path \(P_{\mathbf{v}}\) is drawn in green. These vectors encode whether a path passes above or below each of the faces \(f_{1},f_{2},f_{3},f_{4}\). The red dashed curves illustrate the directions of walks in a \(\mathcal{T}_{\mathbf{b},d}\)-flow in the dual of \(H_{4}\): note that each of them intersects \(P_{\mathbf{b}}\).
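For concreteness, the formula for \(\gamma\) can be transcribed directly (a small illustrative snippet of ours):

```python
# A direct transcription (ours) of the formula for gamma above: each pair
# p < q with differing bits contributes 2**(r - q + p - 1) units of flow
# that cannot be satisfied locally.
def gamma(b):
    r = len(b)  # b is a 0/1 sequence; the text indexes bits from 1
    return sum(2 ** (r - q + p - 1)
               for p in range(1, r + 1)
               for q in range(p + 1, r + 1)
               if b[p - 1] != b[q - 1])

print(gamma([0, 0, 0, 0]))  # 0: a constant pattern needs no extra flow
print(gamma([1, 0, 1, 0]))  # 13: alternating bits force many reroutes
```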
It comprises two groups of "pipes", one inside the ring structure, gathering the walks starting on the left-sides of the ladders and leading towards \(w_{0}^{i}\), and the other one outside the ring structure, gathering the walks starting on the right-sides of the ladders and leading towards \(w_{1}^{i}\). By designing these systems carefully, we can ensure that (a) the walks going through the pipes can traverse the vector-containment gadget without crossing each other or the remaining walks, (b) the only way from the upper pipe system to the lower pipe system leads through the vector-containment gadget, and (c) the terminals \(s_{i},t_{i}\) stay on the outer face of the subset gadget, as required by its specification. This concludes the description of the \((r,k,\mathcal{S})\)-\(\mathsf{Subset\,Gadget}\)\((G,\mathcal{T})\) with the total number of requests \(|\mathcal{T}|=\mathcal{O}(k\cdot r^{3})\). Implementing weights.The last issue is to deal with the fact in the Non-crossing Multicommodity Flow problem we allow requests of the form \((s_{i},t_{i},d_{i})\) where the demand \(d_{i}\) may be exponentially large in the parameter. As our final goal is to reduce the problem to Planar Disjoint Paths, we would like to set all the demands to \(d_{i}=1\) without increasing the number of requests too much. To this end, we take advantage of the construction by Adler and Krause [3] who Figure 6: A refined construction of a \((3,k,\mathcal{S})\)-\(\mathsf{Subset\,Gadget}\), showing new blocks inserted between the \(i\)-th block and the \((i+1)\)-th block from the previous construction. The faces with the endpoints of the new requests are highlighted. The “pipes” are drawn with dotted lines. The blue paths show other new requests whose purpose is to block the connections between the upper and lower pipes. The common pattern of the solution is depicted with solid lines while the red lines illustrate the workaround for the new requests traversing the \(i\)-th vector-containment gadget. Observe that one pair of highlighted faces is not being separated by the pattern so the corresponding request can be satisfied locally. presented an instance of Planar Disjoint Paths with parameter \(\ell\) and a roughly \(2^{\ell}\times 2^{\ell}\)-grid subgraph in which every vertex must be used in the unique solution (hence no vertex is irrelevant despite large treewidth). What is important for us, the unique solution must traverse the grid \(2^{\ell}-1\) times from left to right (see Figure 29 on page 77). We utilize this property to implement a request of the form \((s_{i},t_{i},2^{\ell}-1)\) using only \(\ell\) unitary requests. We replace the endpoint \(s_{i}\) with a gadget mimicking the left-side of the grid and \(t_{i}\) with a gadget for the right-side of the grid. By the arguments from [3] we obtain that a solution must contain \(2^{\ell}-1\) subwalks between these two gadgets, which can be translated back into \(2^{\ell}-1\) walks between \(s_{i}\) and \(t_{i}\). Finally, we can partition each request \((s_{i},t_{i},d_{i})\) into \(\mathcal{O}(\log d_{i})\) requests of the form as above, thus increasing the total size of \(\mathcal{T}\) by a factor of \(\mathcal{O}(k^{2})\) when the demands are bounded by \(2^{\mathcal{O}(k)}\). This concludes the outline of the reduction. ## 3 Preliminaries The set \(\{1,\ldots,p\}\) is denoted by \([p]\). A graph \(G\) has vertex set \(V(G)\) and edge set \(E(G)\) of distinct pairs of vertices. 
We also consider multigraphs that may have parallel edges but no loops, i.e., \(E(G)\) becomes a multiset of pairs of distinct vertices. For a vertex \(v\in V(G)\) we denote by \(E_{G}(v)\) the set of edges incident to \(v\). For \(A,B\subseteq V(G)\) we define \(E_{G}(A,B)=\{uv\mid u\in A,v\in B,uv\in E(G)\}\). The open neighborhood of \(v\in V(G)\) is \(N_{G}(v):=\{u\mid uv\in E(G)\}\). For a vertex set \(S\subseteq V(G)\) the open neighborhood of \(S\), denoted \(N_{G}(S)\), is defined as \(\bigcup_{v\in S}N_{G}(v)\setminus S\). For \(S\subseteq V(G)\), the graph induced by \(S\) is denoted by \(G[S]\). We use shorthand \(G-S\) for the graph \(G[V(G)\setminus S]\). For \(v\in V(G)\), we write \(G-v\) instead of \(G-\{v\}\). For \(A\subseteq E(G)\) we denote by \(G\setminus A\) the graph with vertex set \(V(G)\) and edge set \(E(G)\setminus A\). For \(e\in E(G)\) we write \(G\setminus e\) instead of \(G\setminus\{e\}\).

Paths, linkages, and separators. For \(X,Y\subseteq V(G)\) an \((X,Y)\)-walk in \(G\) is an alternating sequence of vertices and edges from \(G\) such that the first element is a vertex from \(X\), the last element is a vertex from \(Y\), and each edge is incident to the preceding and the succeeding vertex. An \((X,Y)\)-path is an \((X,Y)\)-walk without vertex repetitions; we often identify a path with a subgraph of \(G\). When \(X=\{x\}\), \(Y=\{y\}\), we simply refer to \((x,y)\)-walks and \((x,y)\)-paths. The length of a path equals the number of its edges. For \(x,y\in V(G)\) we define \(\mathsf{dist}_{G}(x,y)\) as the length of the shortest \((x,y)\)-path.

A _linkage_ in \(G\) is a family of vertex-disjoint paths in \(G\). For \(X,Y\subseteq V(G)\) we say that \(\mathcal{P}\) is an _\((X,Y)\)-linkage_ when \(\mathcal{P}\) comprises \((X,Y)\)-paths. We use shorthand _\(X\)-linkage_ for an \((X,X)\)-linkage. For \(\mathcal{T}\subseteq V(G)\times V(G)\) we say that \(\mathcal{P}\) is a _\(\mathcal{T}\)-linkage_ when \(|\mathcal{P}|=|\mathcal{T}|\) and \(\mathcal{P}\) contains an \((s_{i},t_{i})\)-path for each \((s_{i},t_{i})\in\mathcal{T}\). We say that \(\mathcal{T}\) is _realizable_ in \(G\) if there exists a \(\mathcal{T}\)-linkage in \(G\). Two linkages \(\mathcal{P}_{1},\mathcal{P}_{2}\) are _aligned_ if there is a bijection \(f\colon\mathcal{P}_{1}\to\mathcal{P}_{2}\) such that \(P\in\mathcal{P}_{1}\) has the same endpoints as \(f(P)\). For \(\mathcal{T}\subseteq V(G)\times V(G)\) we denote by \(V_{\mathcal{T}}\) the set of all vertices occurring in \(\mathcal{T}\).

For two sets \(X,Y\subseteq V(G)\), a set \(S\subseteq V(G)\) is an \((X,Y)\)-separator if no connected component of \(G-S\) contains a vertex from both \(X\setminus S\) and \(Y\setminus S\). Note that such a separator may intersect \(X\cup Y\). By Menger's theorem, the minimum cardinality of such a separator is equal to the maximum cardinality of an \((X,Y)\)-linkage. We denote this quantity by \(\mu_{G}(X,Y)\); a short computational sketch for it is given below. An \((X,Y)\)-separator \(S\) is _inclusion-minimal_ if no proper subset of \(S\) is an \((X,Y)\)-separator.

**Theorem 3.1** ([23, Thm. 8.4, 8.5]).: _Let \(G\) be a graph and \(X,Y\subseteq V(G)\) be disjoint sets of vertices. There exists a minimum-size \((X,Y)\)-separator \(S\) such that for any other minimum-size \((X,Y)\)-separator \(S^{\prime}\) the set of vertices reachable from \(X\) in \(G-S\) is a subset of the set of vertices reachable from \(X\) in \(G-S^{\prime}\). Furthermore, \(S\) can be constructed in polynomial time._
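As an aside, the quantity \(\mu_{G}(X,Y)\) can be computed with the textbook vertex-splitting reduction to maximum flow. The following networkx-based Python sketch is only an illustration of the definition under this encoding, not the routine used later in the paper:

```python
import networkx as nx

def mu(G, X, Y):
    # Maximum cardinality of an (X, Y)-linkage; by Menger's theorem this is
    # also the minimum size of an (X, Y)-separator (possibly meeting X or Y).
    # Split every vertex v into v_in -> v_out with capacity 1, then run
    # max-flow from a super-source to a super-sink.
    n = G.number_of_nodes()
    H = nx.DiGraph()
    for v in G.nodes:
        H.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"), capacity=n)  # effectively unbounded
        H.add_edge((v, "out"), (u, "in"), capacity=n)
    for x in X:
        H.add_edge("src", (x, "in"), capacity=1)
    for y in Y:
        H.add_edge((y, "out"), "snk", capacity=1)
    value, _ = nx.maximum_flow(H, "src", "snk")
    return value
```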
We say that the separator \(S\) from Theorem 3.1 is the minimum-size \((X,Y)\)-separator closest to \(X\). Next, we shall need the following fact, which follows from the analysis of the residual graph in the Ford-Fulkerson algorithm.

**Lemma 3.2** (Augmenting path, [39, 22] (Implicit)).: _Let \(X,Y\subseteq V(G)\) and \(X^{\prime}\subset X,\,Y^{\prime}\subset Y\) be such that \(|X^{\prime}|=|Y^{\prime}|=\mu_{G}(X^{\prime},Y^{\prime})<\mu_{G}(X,Y)\). Then there exist \(x\in X\setminus X^{\prime}\), \(y\in Y\setminus Y^{\prime}\), and an \((X^{\prime}\cup\{x\},Y^{\prime}\cup\{y\})\)-linkage of size \(\mu_{G}(X^{\prime},Y^{\prime})+1\)._

Contractions and minors. The operation of contracting an edge \(uv\in E(G)\) introduces a new vertex adjacent to all of \(N_{G}(\{u,v\})\), after which \(u\) and \(v\) are deleted. When working with multigraphs, we accumulate multiplicities of edges with a common endpoint. The result of contracting \(uv\in E(G)\) is denoted \(G/uv\). For \(A\subseteq V(G)\) such that \(G[A]\) is connected, we say we contract \(A\) if we simultaneously contract all edges in \(G[A]\) and introduce a single new vertex. We say that \(H\) is a minor of \(G\), if we can turn \(G\) into \(H\) by a (possibly empty) series of edge contractions, edge deletions, and vertex deletions. The result of such a process is given by a minor-model, i.e., a mapping \(\Pi\colon V(H)\to 2^{V(G)}\), such that the branch sets \((\Pi(h))_{h\in V(H)}\) are pairwise disjoint, induce connected subgraphs of \(G\), and \(h_{1}h_{2}\in E(H)\) implies that \(E_{G}(\Pi(h_{1}),\Pi(h_{2}))\neq\emptyset\).

Treewidth. A tree decomposition of graph \(G\) is a pair \((T,\chi)\) where \(T\) is a rooted tree, and \(\chi\colon V(T)\to 2^{V(G)}\) is a function, such that:

1. For each \(v\in V(G)\) the nodes \(\{t\mid v\in\chi(t)\}\) form a non-empty connected subtree of \(T\).
2. For each edge \(uv\in E(G)\) there is a node \(t\in V(T)\) with \(\{u,v\}\subseteq\chi(t)\).

The _width_ of a tree decomposition is defined as \(\max_{t\in V(T)}|\chi(t)|-1\). The _treewidth_ of a graph \(G\), denoted \(\mathbf{tw}(G)\), is the minimum width of a tree decomposition of \(G\).

Planar graphs and multigraphs. We provide only the necessary background here and refer to the textbook [79] for more details. A plane embedding of a multigraph \(G\) is given by a mapping from \(V(G)\) to \(\mathbb{R}^{2}\) and a mapping that associates with each edge \(uv\in E(G)\) a simple curve on the plane connecting the images of \(u\) and \(v\), such that the curves given by two distinct edges can intersect only in a common endpoint. A multigraph is called planar if it admits a plane embedding. A plane (multi)graph is a (multi)graph with a fixed planar embedding, in which we identify the set of vertices with the set of their images on the plane. For a vertex \(v\) in a plane multigraph \(G\) we denote by \(\pi_{G}(v)\) the clockwise ordering of the set \(E_{G}(v)\). The family of such orderings is called a _rotation system_. For a topological disc \(D\subseteq\mathbb{R}^{2}\) such that \(G[V(G)\cap D]\) is connected, the outcome of contracting \(V(G)\cap D\) is the unique (with respect to the rotation system) plane multigraph obtained by contracting \(G\cap D\) into a point. A _face_ in a plane embedding of a multigraph \(G\) is a maximal connected subset of the plane minus the image of \(G\). We say that a vertex or an edge lies on a face \(f\) if its image belongs to the closure of \(f\).
In every plane embedding there is exactly one face of infinite area, referred to as the outer face. For a plane multigraph \(G\) we define its dual multigraph \(G^{*}\) with \(V(G^{*})\) being the set of faces of \(G\) and edges given by pairs of distinct faces that are incident to an image of a common edge from \(E(G)\). Each (a) vertex \(v\), (b) edge \(e\), and (c) face \(f\) of \(G\) has a counterpart in \(G^{*}\), respectively: (a) face \(v^{*}\), (b) edge \(e^{*}\), and (c) vertex \(f^{*}\).

For a plane graph \(G\) with a set of faces \(F\), we define its _radial graph_ \(\widehat{G}\) as a bipartite graph with the set of vertices \(V(\widehat{G})=V(G)\cup F\) and edges given by pairs \((v,f)\) where \(v\in V(G)\), \(f\in F\), and \(v\) lies on the face \(f\). For two vertices \(u,v\in V(G)\) we define their _radial distance_ \(\mathsf{rdist}_{G}(u,v)\) to be one less than the minimum length of a sequence of vertices that starts at \(u\), ends in \(v\), and in which every two consecutive vertices lie on a common face. For two sets \(X,Y\subseteq V(G)\) we define \(\mathsf{rdist}_{G}(X,Y)=\min_{x\in X,y\in Y}\mathsf{rdist}_{G}(x,y)\). The radial diameter of a plane graph \(G\) equals \(\max_{u,v\in V(G)}\mathsf{rdist}_{G}(u,v)\).

**Lemma 3.3** ([52, Prop. 2.1]).: _Let \(G\) be a plane graph with non-empty disjoint vertex sets \(X\) and \(Y\), such that \(G[X]\) and \(G[Y]\) are connected and \(\mathsf{rdist}_{G}(X,Y)=d\geq 2\). For any \(r\) with \(0<r<d\) there is a cycle \(C\) in \(G-(X\cup Y)\) such that all vertices \(u\in V(C)\) satisfy \(\mathsf{rdist}_{G}(X,u)=r\), and such that \(V(C)\) is an \((X,Y)\)-separator in \(G\)._

A _noose_ is a subset of \(\mathbb{R}^{2}\) homeomorphic to the circle \(\mathbb{S}^{1}\). For a plane graph \(G\), a \(G\)-noose is a noose that intersects \(G\) only at vertices; the length of a \(G\)-noose is defined as the number of vertices it intersects. For a noose \(I\) we define \(\mathsf{Disc}(I)\) as the closure of the bounded region of \(\mathbb{R}^{2}\setminus I\). For a closed set \(D\subseteq\mathbb{R}^{2}\) we define its _interior_ \(\mathsf{int}(D)\) as \(D\) minus its boundary \(\partial D\). For two nooses \(I_{in},I_{out}\), such that \(I_{in}\) lies in the interior of \(\mathsf{Disc}(I_{out})\), we define \(\mathsf{Ring}(I_{in},I_{out})=\mathsf{Disc}(I_{out})\setminus\mathsf{int}(\mathsf{Disc}(I_{in}))\). A plane graph \(G\) is _properly embedded_ in a set \(D\subseteq\mathbb{R}^{2}\) if \(G\subseteq D\) and \(G\cap\partial D\subseteq V(G)\).

**Lemma 3.4** ([79, Prop. 8.2.3]).: _Let \(G\) be a plane graph, \(X,Y\subseteq V(G)\) be non-empty, disjoint, and inducing connected subgraphs of \(G\), and \(S\subseteq V(G)\setminus(X\cup Y)\) be an inclusion-minimal \((X,Y)\)-separator. Then there exists a \(G\)-noose \(\gamma\) such that \(S=\gamma\cap V(G)\) and \(\gamma\) separates the plane into two regions, one containing \(X\) and the second containing \(Y\)._

We remark that the original statement in [79] involves singleton sets \(X,Y\) and triangulated plane graphs, but it can be adapted to our setting by contractions and inserting new vertices inside faces.

**Lemma 3.5**.: _Let \(G\) be a plane graph, \(C_{1},C_{2}\) be vertex-disjoint cycles in \(G\) such that \(C_{1}\) lies in the interior of \(C_{2}\). Let \(S\) be an inclusion-minimal \((C_{1},C_{2})\)-separator, not necessarily disjoint from \(C_{1},C_{2}\).
Then there exists a \(G\)-noose \(\gamma\) such that \(S=\gamma\cap V(G)\) and \(\mathsf{Disc}(C_{1})\subseteq\mathsf{Disc}(\gamma)\subseteq\mathsf{Disc}(C_{2})\)._

Proof.: By the minimality of the separator, \(S\) does not contain vertices in the interior of \(\mathsf{Disc}(C_{1})\) and in the exterior of \(\mathsf{Disc}(C_{2})\). Consider a graph \(G^{\prime}\) obtained from \(G\) by removing all the vertices in the interior of \(\mathsf{Disc}(C_{1})\), inserting a vertex \(u\) inside \(C_{1}\) adjacent to the entire \(V(C_{1})\), removing all the vertices in the exterior of \(\mathsf{Disc}(C_{2})\) and inserting a vertex \(v\) outside \(C_{2}\) adjacent to the entire \(V(C_{2})\). Then \(S\) is an inclusion-minimal \((u,v)\)-separator in \(G^{\prime}\), disjoint from \(u,v\). The claim follows from Lemma 3.4.

## 4 Polynomial kernel for parameter \(k+\mathsf{tw}\)

In this section we prove Theorem 2.1, which is a generalization of Theorem 1.4. We begin by providing additional preliminaries about linkages. Next, we present the radial diameter reduction (Section 4.2), analyzing the two cases described in the outline, and then combining them into a procedure for the irrelevant edge detection. Afterwards, we deal with the single-face case (Section 4.3) and then apply it to process the low-radial-diameter instances in Section 4.4.

### 4.1 Preliminaries for processing linkages

We gather several useful facts about linkages that will form our toolbox for proving Theorem 2.1. This is mostly a compilation of known facts, adapted to our setting. We begin with the concept of linkage-equivalency and explain how it helps in compressing subgraphs without terminals.

**Definition 4.1**.: _Two graphs \(G_{1},G_{2}\) sharing a set of vertices \(X\) are \(X\)-linkage-equivalent if for every set of disjoint pairs \(\mathcal{T}\subseteq X^{2}\), \(\mathcal{T}\) is realizable in \(G_{1}\) if and only if \(\mathcal{T}\) is realizable in \(G_{2}\)._

**Lemma 4.2**.: _Let \(G\) be a graph and \(X,Y\subseteq V(G)\), \(U\subseteq V(G)\setminus(X\cup Y)\), \(N_{G}(U)\subseteq Y\). Suppose that there is an edge \(e\in E(G[U\cup Y])\) such that \(G[U\cup Y]\setminus e\) is \(Y\)-linkage-equivalent to \(G[U\cup Y]\). Then \(G\setminus e\) is \(X\)-linkage-equivalent to \(G\)._

Proof.: Let \(\mathcal{P}\) be an \(X\)-linkage in \(G\). For a path \(P\in\mathcal{P}\) let \(\Gamma(P)\) be the family of maximal subpaths of \(P\) with vertex sets contained in \(U\cup Y\). Since \(U\cap X=\emptyset\) and \(N_{G}(U)\subseteq Y\), every path in \(\Gamma(P)\) is a \((Y,Y)\)-path. Therefore, the linkage \(\mathcal{P}_{U}\) given by the union of \(\Gamma(P)\) over \(P\in\mathcal{P}\) is a \(Y\)-linkage in \(G[U\cup Y]\). Because \(G[U\cup Y]\setminus e\) is \(Y\)-linkage-equivalent to \(G[U\cup Y]\), there exists a linkage \(\mathcal{P}_{U}^{\prime}\) in \(G[U\cup Y]\setminus e\) that is aligned with \(\mathcal{P}_{U}\). For \(P\in\mathcal{P}\) let \(\widehat{P}\) be obtained from \(P\) by replacing each subpath from \(\Gamma(P)\) with its counterpart from \(\mathcal{P}_{U}^{\prime}\). Then \(\{\widehat{P}\mid P\in\mathcal{P}\}\) is a linkage in \(G\setminus e\) that is aligned with \(\mathcal{P}\).

The following concept has been introduced in the work about treewidth reduction for Planar Disjoint Paths [2].

**Definition 4.3** (Tight concentric cycles).: _Let \(G\) be a plane graph, \(X_{in},X_{out}\subseteq V(G)\), and \(C_{1},\ldots,C_{m}\) be a sequence of cycles in \(G\).
We call \(C_{1},\ldots,C_{m}\) concentric, if for all \(i\in[m-1]\), the cycle \(C_{i}\) is contained in the interior of \(\mathsf{Disc}(C_{i+1})\). When additionally \(X_{in}\subseteq\mathsf{int}(\mathsf{Disc}(C_{1}))\) and \(X_{out}\cap\mathsf{Disc}(C_{m})=\emptyset\), then we call it an \((X_{in},X_{out})\)-sequence of concentric cycles._

_An \((X_{in},X_{out})\)-sequence of concentric cycles is tight if, in addition, for every \(i\in[m-1]\), \(\mathsf{Disc}(C_{i+1})\setminus\mathsf{Disc}(C_{i})\) does not contain a cycle \(C\) with \(\mathsf{Disc}(C_{i})\subseteq\mathsf{Disc}(C)\subsetneq\mathsf{Disc}(C_{i+1})\), and \(\mathsf{Disc}(C_{1})\setminus X_{in}\) does not contain a cycle \(C\) with \(X_{in}\subseteq\mathsf{Disc}(C)\subsetneq\mathsf{Disc}(C_{1})\)._

When a plane graph \(G\) properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) is clear from the context, we denote \(V_{in}=V(G)\cap I_{in}\) and \(V_{out}=V(G)\cap I_{out}\).

**Lemma 4.4**.: _Consider a graph \(G\) properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) with \(d=\mathsf{rdist}_{G}(V_{in},V_{out})\). Let \(X_{in}\subseteq V_{in}\) and \(X_{out}\subseteq V_{out}\). Then there exists a tight \((X_{in},X_{out})\)-sequence of concentric cycles \(C_{1},\ldots,C_{d-1}\)._

Proof.: Consider a graph \(G^{\prime}\) obtained from \(G\) by inserting a vertex \(v_{in}\) inside \(I_{in}\) adjacent to the entire \(X_{in}\) and a vertex \(v_{out}\) outside \(I_{out}\) adjacent to the entire \(X_{out}\). The sets \(X_{in}\cup\{v_{in}\}\) and \(X_{out}\cup\{v_{out}\}\) induce connected subgraphs of \(G^{\prime}\) and their radial distance is at least \(d\). By Lemma 3.3 there exist cycles \(C_{1},\ldots,C_{d-1}\) in \(G-(X_{in}\cup X_{out})\) that are \((X_{in},X_{out})\)-separators in \(G\). By Lemma 3.4 each \(C_{i}\) has \(X_{in}\) in its interior and \(X_{out}\) in its exterior, so \(C_{1},\ldots,C_{d-1}\) are concentric. As long as this sequence is not tight, we can find a local refinement pushing some cycle closer to \(X_{in}\). After a finite number of refinements we obtain a tight \((X_{in},X_{out})\)-sequence of concentric cycles.

**Lemma 4.5**.: _Consider a graph \(G\) properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\). Let \(C_{1},\ldots,C_{m}\) be a \((V_{in},V_{out})\)-sequence of concentric cycles and \(S\) be an inclusion-minimal \((V_{in},V_{out})\)-separator. Then there exists an interval \(J\subseteq[m]\) of size at most \(|S|\) so that \(S\cap V(C_{j})\neq\emptyset\) implies \(j\in J\)._

Proof.: The minimality of \(S\) implies that there exists a \(G\)-noose \(\gamma\) with \(\gamma\cap V(G)=S\). Therefore each pair of consecutive vertices on \(\gamma\) shares a face and the maximal radial distance between vertices of \(S\) is at most \(|S|-1\). On the other hand, for \(u\in C_{i}\), \(v\in C_{j}\), we have \(\mathsf{rdist}_{G}(u,v)\geq|i-j|\). The lemma follows.

When working with linkages traversing concentric cycles, it is convenient to assume that each path in a linkage intersects each cycle exactly once. We can enforce this property as long as the sequence of concentric cycles is tight. The following proof is an adaptation of [56, Lemma 6.15].

**Lemma 4.6**.: _Consider a graph \(G\) properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\), \(X_{in}\subseteq V_{in}\), and \(X_{out}\subseteq V_{out}\).
Let \(C_{1},\ldots,C_{m}\) be a tight \((X_{in},X_{out})\)-sequence of concentric cycles and \(\mathcal{P}\) be an \((X_{in},X_{out})\)-linkage in \(G\) such that each vertex in \(X_{in}\) is an endpoint of a path in \(\mathcal{P}\). Then there exists a linkage \(\mathcal{P}^{\prime}\) aligned with \(\mathcal{P}\) such that the intersection of each \(P\in\mathcal{P}^{\prime}\) and each \(C_{i}\) has exactly one connected component._

Proof.: Let \(\mathcal{P}^{\prime}\) be a linkage aligned with \(\mathcal{P}\) that minimizes the number of edges in \(\bigcup_{P\in\mathcal{P}^{\prime}}E(P)\) that do not belong to any cycle \(C_{i}\). Suppose there is \(i\in[m]\) and \(P\in\mathcal{P}^{\prime}\) such that \(P\cap C_{i}\) has at least two connected components. Choose the minimal \(i\in[m]\) with this property. Then \(P\) contains a subpath \(Q\) with endpoints on \(C_{i}\) and internal vertices disjoint from \(V(C_{i})\). First suppose that these vertices lie in the interior of \(\mathsf{Disc}(C_{i})\). By the choice of \(i\) either \(i=1\) or the path \(Q\) does not intersect \(C_{i-1}\). If \(i=1\) then by the assumption that each vertex in \(X_{in}\) is an endpoint of some path we infer that \(Q\) cannot intersect \(X_{in}\). In both cases we could use \(Q\) to construct a cycle \(C\) enclosing \(X_{in}\) (resp. \(C_{i-1}\)) with \(\mathsf{Disc}(C)\subsetneq\mathsf{Disc}(C_{i})\), contradicting the tightness of the sequence \(C_{1},\ldots,C_{m}\).

Now suppose that the internal vertices of \(Q\) are disjoint from \(\mathsf{Disc}(C_{i})\). Let \(D\) be the bounded region of \(\mathbb{R}^{2}\setminus(C_{i}\cup Q)\) incident to \(Q\) and \(Q^{\prime}\subseteq C_{i}\) be the path within \(C_{i}\) whose image is \(C_{i}\cap\partial D\); then \(Q^{\prime}\) connects the endpoints of \(Q\) (see Figure 7). If the internal vertices of \(Q^{\prime}\) were disjoint from all the other paths in \(\mathcal{P}^{\prime}\) we could replace \(Q\) with \(Q^{\prime}\) (and remove some redundant vertices if we get a self-crossing of \(P\)) but this would contradict the choice of \(\mathcal{P}^{\prime}\). Therefore there is some path \(P^{\prime}\in\mathcal{P}^{\prime}\), \(P^{\prime}\neq P\), using a vertex \(v\in V(C_{i})\) being internal in \(Q^{\prime}\). When \(i>1\), the \((X_{in},v)\)-subpath of \(P^{\prime}\) must intersect \(C_{i-1}\) so by the choice of \(i\) the \((v,X_{out})\)-subpath of \(P^{\prime}\) cannot intersect \(C_{i-1}\). As \(P^{\prime}\) is disjoint from \(Q\) it must contain a subpath with endpoints on \(C_{i}\) and internal vertices in \(\mathsf{int}(\mathsf{Disc}(C_{i})\setminus\mathsf{Disc}(C_{i-1}))\). When \(i=1\) we obtain a subpath of \(P^{\prime}\) with endpoints on \(C_{1}\) and internal vertices in \(\mathsf{int}(\mathsf{Disc}(C_{1}))\setminus X_{in}\) because \(P^{\prime}\) does not have internal vertices from \(X_{in}\). In both cases we get a contradiction with the tightness of \(C_{1},\ldots,C_{m}\).

We will work with the notion of a cylindrical grid, which can be regarded as an outcome of identifying the opposite sides of a standard grid.

Figure 7: An illustration of the two cases in Lemma 4.6. Left: The green subpath \(Q\) of \(P\) is contained in \(\mathsf{Disc}(C_{i})\setminus\mathsf{Disc}(C_{i-1})\), which contradicts the tightness of the sequence. Right: \(Q\) is internally disjoint from \(\mathsf{Disc}(C_{i})\).
Since the chosen linkage minimizes the number of used edges that do not belong to any cycle, some other path \(P^{\prime}\) must visit the arc of \(C_{i}\) between the endpoints of \(Q\), again leading to a contradiction.

**Definition 4.7**.: _Let \(k\geq 3\), \(m\geq 1\). The \((k,m)\)-cylindrical grid \(C_{k}^{m}\) is a plane graph constructed as follows. We draw \(m\) concentric cycles referred to as \(C^{1},\ldots,C^{m}\), counting from the innermost one. Then we draw \(k\) pairwise disjoint lines connecting \(C^{1}\) to \(C^{m}\); these lines are called \(C_{1},\ldots,C_{k}\), counting clockwise. We turn each intersection of \(C_{i}\) and \(C^{j}\) into a vertex, referred to as \(c_{i}^{j}\)._

We require \(k\geq 3\) in order to avoid using parallel edges and restrict our arguments only to simple graphs. The \((4,8)\)-cylindrical grid is shown in Figure 8. It is known [48, Lemma 4.2] that a large linkage between two sets that are far apart in the radial distance entails a minor model of a large cylindrical grid. We need an intuitive strengthening of this fact relating the endpoints in this linkage to the branch sets in the minor model.

**Lemma 4.8**.: _Consider a graph \(G\) properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) and \(m=\mathsf{rdist}_{G}(V_{in},V_{out})\geq 2\). Suppose that \(\mathcal{T}=\{(s_{1},t_{1}),\ldots,(s_{k},t_{k})\}\) is realizable in \(G\), where \(k\geq 3\), and \(s_{1},\ldots,s_{k}\) lie in this cyclic order on \(I_{in}\) and \(t_{1},\ldots,t_{k}\) lie in this cyclic order on \(I_{out}\) (both counted clockwise). Then \(G\) contains a minor model of the cylindrical grid \(C_{k}^{m-1}\) such that for each \(i\in[k]\) the vertex \(s_{i}\) belongs to the branch set of \(c_{i}^{1}\) and \(t_{i}\) belongs to the branch set of \(c_{i}^{m-1}\)._

Proof.: Let \(X_{in}=V_{\mathcal{T}}\cap V_{in}\) and \(X_{out}=V_{\mathcal{T}}\cap V_{out}\); then \(\mathsf{rdist}_{G}(X_{in},X_{out})\geq m\). By Lemma 4.4 there exists a tight \((X_{in},X_{out})\)-sequence of concentric cycles \(C_{1},\ldots,C_{m-1}\) in \(G\). We can apply Lemma 4.6 to obtain that there exists a \(\mathcal{T}\)-linkage \(\mathcal{P}\) such that the intersection of each \(P\in\mathcal{P}\) and each \(C_{i}\) is a single segment of \(C_{i}\). Let \(G^{\prime}\) be the union of all paths in \(\mathcal{P}\) and the cycles \(C_{1},\ldots,C_{m-1}\). Let \(P_{i}\) denote the \((s_{i},t_{i})\)-path in \(\mathcal{P}\); then \(P_{1},\ldots,P_{k}\) are ordered clockwise. The set \(V(P_{i})\cap V(C_{j})\) induces a connected subgraph of \(G^{\prime}\); we contract it into a single vertex referred to as \(c_{i}^{j}\). Let \(G^{\prime\prime}\) be the graph obtained by these contractions; then \(G^{\prime\prime}\) is a minor of \(G\) and each vertex \(c_{i}^{j}\) has exactly four neighbors in \(G^{\prime\prime}\). Next, we contract every degree-2 vertex with one of its neighbors. Finally, we contract each degree-1 vertex with its only neighbor so \(s_{i}\) gets contracted with \(c_{i}^{1}\) and \(t_{i}\) gets contracted with \(c_{i}^{m-1}\). The claim follows.

A linkage in a cylindrical grid can be transformed into a linkage in graph \(G\) using the following observation.

**Observation 4.9**.: _Let \(G,H\) be graphs and \((V_{h})_{h\in V(H)}\) be a minor model of \(H\) in \(G\). Consider a set of pairs \((s_{1},t_{1}),\ldots,(s_{k},t_{k})\) from \(V(G)\times V(G)\) with all the vertices distinct. Suppose there exists an injection \(\pi:\bigcup_{i=1}^{k}\{s_{i},t_{i}\}\to V(H)\) such that \(s_{i}\in V_{\pi(s_{i})}\) and \(t_{i}\in V_{\pi(t_{i})}\). If \((\pi(s_{1}),\pi(t_{1})),\ldots,(\pi(s_{k}),\pi(t_{k}))\) is realizable in \(H\) then \((s_{1},t_{1}),\ldots,(s_{k},t_{k})\) is realizable in \(G\)._
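For concreteness, the adjacency structure of \(C_{k}^{m}\) from Definition 4.7 is straightforward to generate explicitly. In the following illustrative Python sketch the vertex \(c_{i}^{j}\) is encoded as the pair \((i,j)\):

```python
import networkx as nx

def cylindrical_grid(k, m):
    # Builds C_k^m (Definition 4.7): the pair (i, j) stands for c_i^j, the
    # intersection of the radial line C_i with the concentric cycle C^j.
    G = nx.Graph()
    for j in range(1, m + 1):
        for i in range(1, k + 1):
            G.add_edge((i, j), (i % k + 1, j))   # clockwise edge on cycle C^j
            if j < m:
                G.add_edge((i, j), (i, j + 1))   # edge along the line C_i
    return G

G = cylindrical_grid(4, 8)                        # the grid from Figure 8
assert all(deg in (3, 4) for _, deg in G.degree)  # boundary 3, interior 4
```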
Because \(\mathbf{tw}(C_{t}^{t})\geq t\) [23, §7.7.1] and treewidth is a monotone measure with respect to taking minors, we also obtain the following corollary from Lemma 4.8. Note that \(|i-j|\geq t+1\) implies \(\mathsf{rdist}(C_{i},C_{j})\geq t+1\).

**Corollary 4.10**.: _Let \(G\) be a plane graph and \(C_{1},\ldots,C_{m}\) be a concentric sequence of cycles. Suppose there exist \(i,j\in[m]\) such that \(|i-j|\geq t+1\) and \(\mu_{G}(C_{i},C_{j})\geq t\). Then \(\mathbf{tw}(G)\geq t\)._

A \((V_{in}\cup V_{out})\)-linkage may contain paths (or subpaths) with both endpoints in \(V_{in}\) or both in \(V_{out}\). The next lemma assures that we can always assume that such paths do not go "too deep" inside the graph. The proof uses similar ideas as [2, Lemma 3].

**Lemma 4.11**.: _Let \(G\) be a plane graph properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) and \(t=\max(|V_{in}|,|V_{out}|)\). Let \(C_{1},\ldots,C_{m}\) be a \((V_{in},V_{out})\)-sequence of concentric cycles. Then for every \((V_{in}\cup V_{out})\)-linkage \(\mathcal{P}\) there exists a linkage \(\mathcal{P}^{\prime}\) aligned with \(\mathcal{P}\) such that_

1. _every inclusion-minimal_ \((V_{in}\cup V_{out})\)_-subpath of a path in_ \(\mathcal{P}^{\prime}\)_, that is a_ \((V_{in},V_{in})\)_-path, intersects at most_ \(t\) _first cycles in_ \(C_{1},\ldots,C_{m}\)_, and_
2. _every inclusion-minimal_ \((V_{in}\cup V_{out})\)_-subpath of a path in_ \(\mathcal{P}^{\prime}\)_, that is a_ \((V_{out},V_{out})\)_-path, intersects at most_ \(t\) _last cycles in_ \(C_{1},\ldots,C_{m}\)_._

Proof.: Let \(\mathcal{P}^{\prime}\) be a linkage aligned with \(\mathcal{P}\) that minimizes the number of edges in \(\bigcup_{P\in\mathcal{P}^{\prime}}E(P)\) that do not belong to any cycle \(C_{i}\). Let \(\widehat{\mathcal{P}}\) be the family of the inclusion-minimal \((V_{in}\cup V_{out})\)-subpaths of paths in \(\mathcal{P}^{\prime}\). Then \(\widehat{\mathcal{P}}\) is a family of internally disjoint paths and each endpoint can be shared by at most two paths. For a \((V_{in},V_{in})\)-path \(P\in\widehat{\mathcal{P}}\) we define \(R(P)\) as the bounded region of \(\mathbb{R}^{2}\setminus(P\cup I_{in})\) incident to \(P\). Let \(h(P)\) be the number of paths from \(\widehat{\mathcal{P}}\), different from \(P\), which are contained in \(R(P)\). We have \(h(P)\leq t-1\) and for every path \(P^{\prime}\neq P\) contained in \(R(P)\) it holds that \(h(P^{\prime})<h(P)\). We show inductively that when \(h(P)=\ell\) then \(P\) intersects at most \(\ell+1\) first cycles in \(C_{1},\ldots,C_{m}\). First consider \(\ell=0\) and suppose that \(P\) intersects \(C_{2}\). Let \(P^{\prime}\) be a \((C_{1},C_{1})\)-subpath of \(P\) with internal vertices disjoint from \(\mathsf{Disc}(C_{1})\). Then there exists a path \(P^{\prime\prime}\subset C_{1}\cap R(P)\) with the same endpoints as \(P^{\prime}\). We can replace \(P^{\prime}\) with \(P^{\prime\prime}\) in \(P\) and, as a result, obtain a linkage aligned with \(\mathcal{P}^{\prime}\) which uses fewer edges not belonging to any cycle \(C_{i}\). This gives a contradiction.
Assume now that the claim holds for \(h(P)<\ell\) and consider \(P\in\widehat{\mathcal{P}}\) with \(h(P)=\ell\) that intersects \(C_{\ell+2}\). Let \(P^{\prime}\) be a \((C_{\ell+1},C_{\ell+1})\)-subpath of \(P\) with internal vertices disjoint from \(\mathsf{Disc}(C_{\ell+1})\). Then there exists a path \(P^{\prime\prime}\subset C_{\ell+1}\cap R(P)\) with the same endpoints as \(P^{\prime}\). By the assumption, this path is disjoint from all paths in \(R(P)\) different from \(P\). Again, by a replacement argument we obtain a linkage aligned with \(\mathcal{P}^{\prime}\) with a smaller cost. This concludes the proof of the first part. The second part, concerning \((V_{out},V_{out})\)-paths in \(\widehat{\mathcal{P}}\), is symmetric.

### 4.2 Radial diameter reduction

As outlined in Section 2, we will reduce the radial diameter of the graph \(G\) by repeatedly removing irrelevant edges. We focus on the scenario where a subgraph of \(G\), devoid of terminals, can be properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) and \(\mathsf{rdist}(V_{in},V_{out})\) is large. We inspect two cases, first analyzing _non-maximal_ linkages, in which the number of \((V_{in},V_{out})\)-paths is less than \(\mu(V_{in},V_{out})\). Later on we will be armed with two strategies for detecting an irrelevant edge, each applicable in a different setting.

#### 4.2.1 Rerouting a non-maximal linkage

We are going to show that when \(\mathcal{T}\subseteq V_{in}\times V_{out}\), \(|\mathcal{T}|<\mu(V_{in},V_{out})\), and \(\mathsf{rdist}(V_{in},V_{out})\) is large, then the cut-condition \(|\mathcal{T}|\leq\mu(V_{\mathcal{T}}\cap V_{in},V_{\mathcal{T}}\cap V_{out})\) is sufficient for a \(\mathcal{T}\)-linkage to exist. We begin with an argument for cylindrical grids.

**Lemma 4.12**.: _Consider the cylindrical grid \(C_{k}^{m}\) with \(m\geq k^{2}\), \(k\geq 3\). Let vertices \(s_{1},\ldots,s_{k-1}\) lie in this cyclic order on \(C^{1}\) and vertices \(t_{1},\ldots,t_{k-1}\) lie in this cyclic order on \(C^{m}\) (both counted clockwise). Then \(\{(s_{1},t_{1}),\ldots,(s_{k-1},t_{k-1})\}\) is realizable in \(C_{k}^{m}\)._

Proof.: For two vertices \(u,v\in V(C_{k}^{m})\) we define \(\mathsf{Shift}(u,v)\) as follows. Let \(i,j\in[k]\) be such that \(u\in V(C_{i})\) and \(v\in V(C_{j})\). Then \(\mathsf{Shift}(u,v)\) is the minimum non-negative integer \(\ell\) satisfying \((i+\ell)\bmod k=j\bmod k\). In other words, it is the number of clockwise jumps needed to reach \(C_{j}\) from \(C_{i}\). We will show the following claim by induction on \(j\).

**Claim 4.13**.: _Consider the cylindrical grid \(C_{k}^{m}\) where \(k\geq 3\), \(m=(j+1)k\), for some \(j\geq 0\). Let vertices \(s_{1},\ldots,s_{k-1}\) lie in this cyclic order on \(C^{1}\) and vertices \(t_{1},\ldots,t_{k-1}\) lie in this cyclic order on \(C^{m}\) (both counted clockwise). If \(\mathsf{Shift}(s_{1},t_{1})=j\) then \(\{(s_{1},t_{1}),\ldots,(s_{k-1},t_{k-1})\}\) is realizable in \(C_{k}^{m}\)._

Proof.: First consider the basic case \(j=0\) in which \(m=k\). Since \(\mathsf{Shift}(s_{1},t_{1})=0\) we can assume w.l.o.g. that \(s_{1},t_{1}\) both lie on \(C_{1}\), that is, \(s_{1}=c_{1}^{1}\), \(t_{1}=c_{1}^{k}\). Let \(s^{*}\) be the unique vertex in \(C^{1}\setminus\{s_{1},\ldots,s_{k-1}\}\) and \(t^{*}\) be the unique vertex in \(C^{k}\setminus\{t_{1},\ldots,t_{k-1}\}\). For clarity of presentation we examine only the extremal case \(s^{*}=c_{k}^{1}\), \(t^{*}=c_{2}^{k}\) in detail; the other cases are analogous.
The pair \((s_{1},t_{1})\) can be connected directly via the path \(P_{1}=C_{1}\). By the assumption on the cyclic order we have that for each \(i\in[2,k-1]\) the vertex \(s_{i}\) lies on \(C_{i}\) and \(t_{i}\) lies on \(C_{i+1}\). For \(i\in[2,k-1]\) we define the path \(P_{i}\) as a concatenation of

1. the subpath of \(C_{i}\) from \(s_{i}=c_{i}^{1}\) to \(c_{i}^{k-i+1}\),
2. the edge \(c_{i}^{k-i+1}c_{i+1}^{k-i+1}\),
3. the subpath of \(C_{i+1}\) from \(c_{i+1}^{k-i+1}\) to \(c_{i+1}^{k}=t_{i}\).

See Figure 8. Note that none of these paths intersects \(P_{1}=C_{1}\). Furthermore, \(P_{i+1}\cap C_{i+1}\) is contained in the disc enclosed by \(C^{k-i-1}\) (inclusively) while \(P_{i}\cap C_{i+1}\) is disjoint from the interior of the disc enclosed by \(C^{k-i}\). Hence, the paths \(P_{1},\ldots,P_{k-1}\) are vertex-disjoint. The construction for different \(s^{*},t^{*}\) is analogous. This concludes the analysis for the case \(j=0\).

Suppose now that \(j>0\). Let \(\pi(i)\in[k]\) be such that \(s_{i}=c_{\pi(i)}^{1}\) for \(i\in[k-1]\). Let \(s^{\prime}_{i}\) be the vertex \(c_{j^{\prime}}^{k}\) where \(j^{\prime}=\pi(i)+1\) modulo \(k\). Then \(\mathsf{Shift}(s^{\prime}_{1},t_{1})=j-1\). By the inductive assumption, the \((k,jk)\)-cylindrical grid induced by the cycles \(C^{k+1},C^{k+2},\ldots,C^{m}\) contains a linkage connecting pairs \((s^{\prime}_{1},t_{1}),\ldots,(s^{\prime}_{k-1},t_{k-1})\). Therefore, it suffices to construct a linkage \((P_{1},\ldots,P_{k-1})\) in \(C^{k}_{k}\) connecting pairs \((s_{1},s^{\prime}_{1}),\ldots,(s_{k-1},s^{\prime}_{k-1})\). Due to the assumption on the cyclic order we can assume w.l.o.g. that \(\pi(i)=i\) for each \(i\in[k-1]\). Then the path \(P_{i}\) for \(i\in[k-1]\) is given by the same concatenation formula as in the case \(j=0\). Again, these paths are vertex-disjoint, which yields the claim. \(\blacksquare\)

Figure 8: A visualization of the cylindrical grid \(C_{4}^{8}\) and the proof of Lemma 4.12. Here \(\mathsf{Shift}(s_{1},t_{1})=j=1\) so we need at least \(2\cdot 4=8\) concentric cycles in Claim 4.13. The three \((s_{i},t_{i})\)-paths are drawn in colors. The four inner cycles illustrate the inductive argument for \(j>0\) where each path gets shifted once clockwise. The four outer cycles show the argument for \(j=0\) where the origin of the path ending at \(t_{1}\) is already correctly positioned and we might only need to shift the remaining ones.

The lemma follows from the observation that \(\mathsf{Shift}(s_{1},t_{1})<k\) so the claim can be applied. \(\Box\)

Now we generalize the argument from a cylindrical grid to the general case.

**Lemma 4.14**.: _Let \(G\) be a graph properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) and \(r<p\) be integers. Suppose that \(\mu_{G}(V_{in},V_{out})\geq p\) and \(\mathsf{rdist}_{G}(V_{in},V_{out})\geq p^{2}+1\). Let vertices \(s_{1},\ldots,s_{r}\) lie in this cyclic order on \(I_{in}\) and vertices \(t_{1},\ldots,t_{r}\) lie in this cyclic order on \(I_{out}\) (both counted clockwise). Then \(\mathcal{T}=\{(s_{1},t_{1}),\ldots,(s_{r},t_{r})\}\) is realizable in \(G\) if and only if \(\mu_{G}(\{s_{1},\ldots,s_{r}\},\{t_{1},\ldots,t_{r}\})\geq r\)._

Proof.: The lemma is trivial for \(r=1\) so we will assume \(r\geq 2\). The condition \(\mu_{G}(V_{\mathcal{T}}\cap V_{in},V_{\mathcal{T}}\cap V_{out})\geq r\) is clearly necessary for a \(\mathcal{T}\)-linkage to exist. Suppose that this condition holds and let \(\mathcal{P}\) be some \((V_{\mathcal{T}}\cap V_{in},V_{\mathcal{T}}\cap V_{out})\)-linkage of size \(r\).
Since \(r<\mu_{G}(V_{in},V_{out})\), by Lemma 3.2 there exist vertices \(s^{*}\in V_{in}\setminus V_{\mathcal{T}}\), \(t^{*}\in V_{out}\setminus V_{\mathcal{T}}\), and a linkage \(\mathcal{P}^{\prime}\) connecting sets \(S=\{s_{1},\ldots,s_{r},s^{*}\}\) and \(T=\{t_{1},\ldots,t_{r},t^{*}\}\). Note that \(3\leq r+1=|S|=|T|\leq p\). By Lemma 4.8 the graph \(G\) contains a minor model of \(C^{m}_{r+1}\), where \(m=p^{2}\geq(r+1)^{2}\), and there are bijections \(\pi_{S}\colon S\to C^{1}\), \(\pi_{T}\colon T\to C^{m}\) that preserve the cyclic ordering, such that \(s\in S\) belongs to the branch set of \(\pi_{S}(s)\) and \(t\in T\) belongs to the branch set of \(\pi_{T}(t)\). We can thus assume w.l.o.g. that \(\pi_{S}(s_{1}),\ldots,\pi_{S}(s_{r})\) lie in this cyclic order on \(C^{1}\) and \(\pi_{T}(t_{1}),\ldots,\pi_{T}(t_{r})\) lie in this cyclic order on \(C^{m}\), counted clockwise. By Lemma 4.12 the set of pairs \(\{(\pi_{S}(s_{1}),\pi_{T}(t_{1})),\ldots,(\pi_{S}(s_{r}),\pi_{T}(t_{r}))\}\) is realizable in \(C^{m}_{r+1}\). Then the lemma follows from Observation 4.9. \(\Box\)

Our goal is to detect an edge that can be safely removed without modifying the family of possible non-maximal linkages. We can assume that the \((V_{in},V_{in})\)-paths and the \((V_{out},V_{out})\)-paths intersect only a few cycles in the concentric family, so the main challenge is to preserve the non-maximal \((V_{in},V_{out})\)-linkages. As we know that the cut-condition is sufficient for such a linkage to exist, it remains to find an edge \(e\) whose removal does not affect any cut-condition. We show that this can be guaranteed by two requirements: (a) removing \(e\) does not decrease \(\mu(V_{in},V_{out})\) and (b) \(e\) has sufficiently large radial distance from both \(V_{in}\) and \(V_{out}\).

**Proposition 4.15**.: _Let \(G\) be a graph properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\), \(t=\max(|V_{in}|,|V_{out}|)\), and \(s=\mu_{G}(V_{in},V_{out})\). Let \(C_{1},\ldots,C_{m}\) be a \((V_{in},V_{out})\)-sequence of concentric cycles in \(G\) and \(m\geq(t+2)^{2}\)._

_Consider \(i\in[2t+1,m-2t]\) and an edge \(e\in E(C_{i})\) such that \(\mu_{G\setminus e}(V_{in},V_{out})=\mu_{G}(V_{in},V_{out})=s\). Let \(\mathcal{T}\subseteq(V_{in}\cup V_{out})^{2}\). Suppose that \(\mathcal{T}\) contains fewer than \(s\) pairs with one element in \(V_{in}\) and one in \(V_{out}\). Then \(\mathcal{T}\) is realizable in \(G\) if and only if \(\mathcal{T}\) is realizable in \(G\setminus e\)._

_Furthermore, there exists at least one edge \(e\) satisfying the requirements above and it can be found in polynomial time._

Proof.: When \(\mathcal{T}\) is realizable in \(G\) then, by Lemma 4.11, there exists a \(\mathcal{T}\)-linkage \(\mathcal{P}\) in \(G\) such that every \((V_{in},V_{in})\)-path in \(\mathcal{P}\) intersects at most \(t\) of the first cycles in the sequence \(C_{1},\ldots,C_{m}\), while every \((V_{out},V_{out})\)-path in \(\mathcal{P}\) intersects at most \(t\) of the last cycles in \(C_{1},\ldots,C_{m}\). Let \(\mathcal{P}_{\mathrm{long}}\subseteq\mathcal{P}\) be the subfamily of paths from \(\mathcal{P}\) with one endpoint in \(V_{in}\) and one in \(V_{out}\). By the assumption \(|\mathcal{P}_{\mathrm{long}}|<s\). If \(\mathcal{P}_{\mathrm{long}}=\emptyset\) then we are done; suppose that this is not the case.
The graph obtained from \(G\) by removing the paths from \(\mathcal{P}\setminus\mathcal{P}_{\mathrm{long}}\) has exactly one connected component containing the paths from \(\mathcal{P}_{\mathrm{long}}\); let \(G^{\prime}\) denote this component. Let \(V^{\prime}_{in}\subseteq V(G^{\prime})\) be the set of vertices lying on the inner face of \(G^{\prime}\) containing \(I_{in}\) and \(V^{\prime}_{out}\subseteq V(G^{\prime})\) be the set of vertices lying on the outer face of \(G^{\prime}\).

**Claim 4.16**.: _It holds that \(\mu_{G^{\prime}}(V^{\prime}_{in},V^{\prime}_{out})\geq s\)._

Proof.: Let \(\mathcal{P}_{max}\) be a \((V_{in},V_{out})\)-linkage of size \(s\) in \(G\). Then each path \(P\) from \(\mathcal{P}_{max}\) has a non-empty intersection with \(V(G^{\prime})\). In particular, \(P\) contains a subpath being a \((V^{\prime}_{in},V^{\prime}_{out})\)-path in \(G^{\prime}\). This gives a \((V^{\prime}_{in},V^{\prime}_{out})\)-linkage in \(G^{\prime}\) of size \(s\).

All the cycles \(C_{t+1},\ldots,C_{m-t}\) are contained in \(G^{\prime}\) and so is \(e\). Hence \(\mathsf{rdist}_{G^{\prime}}(V^{\prime}_{in},V^{\prime}_{out})\geq m-2t\geq t^{2}+2\) and \(\mathsf{rdist}_{G^{\prime}\setminus e}(V^{\prime}_{in},V^{\prime}_{out})\geq t^{2}+1\geq s^{2}+1\). Let \(e=v_{1}v_{2}\). Then each of \(v_{1},v_{2}\) is separated from each of \(V^{\prime}_{in},V^{\prime}_{out}\) by at least \(t\) cycles from \(C_{t+1},\ldots,C_{m-t}\).

**Claim 4.17**.: _It holds that \(\mu_{G^{\prime}\setminus e}(V^{\prime}_{in},V^{\prime}_{out})\geq s\)._

Proof.: Suppose otherwise. Then there exists a \((G^{\prime}\setminus e)\)-noose \(\gamma\) in the plane which separates \(V^{\prime}_{in}\) from \(V^{\prime}_{out}\) and intersects \(G^{\prime}\setminus e\) on at most \(s-1\) vertices; let \(S=\gamma\cap V(G^{\prime}\setminus e)\). By Claim 4.16 there are no such small \((V^{\prime}_{in},V^{\prime}_{out})\)-separators in \(G^{\prime}\). Therefore \(\gamma\) must intersect the image of \(e\) and so \(S\) contains a vertex \(u\) that lies on a common face with \(e\) in \(G^{\prime}\). Suppose that \(\gamma\) intersects \(C_{t+1}\); then it must also intersect \(C_{i-1}\). Lemma 4.5 implies that \(|(i-1)-(t+1)|=|i-t-2|\leq s-2\leq t-2\), but this contradicts the assumption that \(i\geq 2t+1\). By the symmetric argument, \(\gamma\) cannot intersect \(C_{m-t}\). Therefore, the curve \(\gamma\) must be contained in the interior of \(\mathsf{Ring}(C_{t+1},C_{m-t})\) and so \(S\) is a \((C_{t+1},C_{m-t})\)-separator in \(G^{\prime}\setminus e\); see Figure 9. As a consequence, \(S\) is also a \((V_{in},V_{out})\)-separator in \(G\setminus e\). This implies that \(\mu_{G\setminus e}(V_{in},V_{out})\leq s-1\) and contradicts the assumption that \(\mu_{G\setminus e}(V_{in},V_{out})=\mu_{G}(V_{in},V_{out})\).

Figure 9: An illustration for Proposition 4.15. The nooses \(I_{in},I_{out}\) are dotted and the \((V_{in},V_{out})\)-sequence of concentric cycles \(C_{1},\ldots,C_{m}\) is gray. The cycles \(C_{t+1},C_{m-t}\) are highlighted. To obtain graph \(G^{\prime}\), we remove from \(G\) the paths in \(\mathcal{P}\setminus\mathcal{P}_{\mathrm{long}}\) (which do not intersect \(C_{t+1},C_{m-t}\)) together with the shaded area. The edge \(e\) is drawn solid red. The proposition relies on the observation that any inclusion-minimal separator \(S\) in \(G^{\prime}\setminus e\) between a vertex inside \(C_{t+1}\) and a vertex outside \(C_{m-t}\), such that \(S\) is not present in \(G^{\prime}\), is represented by a noose (dashed) intersecting \(e\). Since \(e\in C_{i}\), where \(i\) is sufficiently far from \(t+1\) and \(m-t\), this noose cannot intersect \(C_{t+1},C_{m-t}\) so it must be a \((C_{t+1},C_{m-t})\)-separator as well.
Note that \(\mathcal{P}_{\mathrm{long}}\) is a \((V^{\prime}_{in},V^{\prime}_{out})\)-linkage in \(G^{\prime}\). Let \(T_{in}\subseteq V^{\prime}_{in},T_{out}\subseteq V^{\prime}_{out}\) be the sets of endpoints of paths from \(\mathcal{P}_{\mathrm{long}}\).

**Claim 4.18**.: _It holds that \(\mu_{G^{\prime}\setminus e}(T_{in},T_{out})\geq|\mathcal{P}_{long}|\)._

Proof.: Clearly, \(\mu_{G^{\prime}}(T_{in},T_{out})\geq|\mathcal{P}_{\mathrm{long}}|\). Suppose that this inequality does not hold in \(G^{\prime}\setminus e\). Similarly as in Claim 4.17, there exists a \((G^{\prime}\setminus e)\)-noose \(\gamma\) in the plane which separates \(T_{in}\) from \(T_{out}\) and intersects \(G^{\prime}\setminus e\) on at most \(|\mathcal{P}_{\mathrm{long}}|-1\) vertices; let \(S=\gamma\cap V(G^{\prime}\setminus e)\). By the same argument as before we obtain that \(\gamma\) is contained in the interior of \(\mathsf{Ring}(C_{t+1},C_{m-t})\) and \(S\) is a \((V_{in},V_{out})\)-separator in \(G\setminus e\). Recall that \(e=v_{1}v_{2}\). Observe that any \((V_{in},V_{out})\)-path in \(G-S\) must go through \(e\) so \(S\cup\{v_{1}\}\) is a \((V_{in},V_{out})\)-separator in \(G\). Hence, \(\mu_{G}(V_{in},V_{out})\leq|S|+1\leq|\mathcal{P}_{\mathrm{long}}|\). This contradicts the assumption that \(|\mathcal{P}_{\mathrm{long}}|<s=\mu_{G}(V_{in},V_{out})\).

The two claims above allow us to apply the criterion from Lemma 4.14 to \(G^{\prime}\setminus e,V^{\prime}_{in},V^{\prime}_{out}\), \(\mathcal{P}_{\mathrm{long}}\) with \(p=s\), \(r=|\mathcal{P}_{\mathrm{long}}|<p\), and \(\mathsf{rdist}_{G^{\prime}\setminus e}(V^{\prime}_{in},V^{\prime}_{out})\geq p^{2}+1\). We derive that there exists a linkage in \(G^{\prime}\setminus e\) aligned with \(\mathcal{P}_{\mathrm{long}}\). By the construction of \(G^{\prime}\) this implies that there exists a linkage in \(G\setminus e\) aligned with \(\mathcal{P}\).

It remains to justify that \(e\) can be efficiently found. First, \((t+2)^{2}-4t>0\) so the interval \([2t+1,m-2t]\) is non-empty. A \((V_{in},V_{out})\)-linkage \(\mathcal{P}_{max}\) of size \(s\) can be found in polynomial time. Then \(e\) can be chosen as any edge on \(C_{i}\) that is not used by \(\mathcal{P}_{max}\).

#### 4.2.2 Rerouting a maximal linkage

We move on to the scenario in which the number of \((V_{in},V_{out})\)-paths in a linkage equals \(\mu(V_{in},V_{out})\). The crucial special case occurs when \(|V_{in}|=|V_{out}|=\mu(V_{in},V_{out})\). This is the same setting that has been studied by Robertson and Seymour [87] as a subroutine in their FPT algorithm for Planar Disjoint Paths. We shall adopt the same perspective for analyzing this case, based on the following convenient plane embedding.

**Definition 4.19**.: _A plane graph \(G\) is called \(k\)-cylindrical if:_

1. _It is properly embedded in_ \(\mathsf{Ring}(I_{in},I_{out})\) _where_ \(I_{in}=\{(x,y)\in\mathbb{R}^{2}\mid x^{2}+y^{2}=1\}\) _and_ \(I_{out}=\{(x,y)\in\mathbb{R}^{2}\mid x^{2}+y^{2}=4\}\)_;_
2. _The sets_ \(V_{in}=V(G)\cap I_{in}\) _and_ \(V_{out}=V(G)\cap I_{out}\) _have size_ \(k\) _each;_
3. \(V_{in}=\{(1,\frac{2j}{k}\pi)\}_{0\leq j<k}\) _and_ \(V_{out}=\{(2,\frac{2j}{k}\pi)\}_{0\leq j<k}\) _in polar coordinates, and_
4. \(\mu_{G}(V_{in},V_{out})=k\)_._

We refer to the elements of \(V_{in}\) as \(s_{0},s_{1},\ldots,s_{k-1}\) so that \(s_{j}\) has polar coordinates \((1,\frac{-2\pi}{k}j)\). Similarly, \(t_{0},t_{1},\ldots,t_{k-1}\) are the elements of \(V_{out}\) and \(t_{j}=(2,\frac{-2\pi}{k}j)\).

**Definition 4.20**.: _For a path \(P\) connecting \(s\in V_{in}\) and \(t\in V_{out}\) we define its winding number \(\theta(P)\in\mathbb{Z}\) as \(\frac{k}{2\pi}\) times the total angle traversed by the curve corresponding to \(P\) (measured clockwise)._

See Figure 2 on page 2 for an example. Intuitively, the winding number measures how many times a path winds around the ring (and in which direction) and what is the difference in the angles of its endpoints.

**Definition 4.21**.: _A cylindrical linkage in \(G\) is a \((V_{in},V_{out})\)-linkage of size \(k\). When \(\mathcal{P}\) is cylindrical then every path \(P\in\mathcal{P}\) has the same winding number and we refer to it as \(\theta(\mathcal{P})\). We say that \(\theta\) is feasible in \(G\) if there is a cylindrical linkage in \(G\) with the winding number \(\theta\)._

We remark that Robertson and Seymour [87] defined the winding number of \(P\) as \(-\theta(P)/k\) but we choose this convention so that we can work with integers and the more intuitive clockwise ordering.

**Lemma 4.22** ([87, Lem. 5.9]).: _Let \(G\) be a \(k\)-cylindrical graph. If \(\theta_{1}<\theta_{2}<\theta_{3}\) and \(\theta_{1},\theta_{3}\) are feasible in \(G\), then so is \(\theta_{2}\)._

For a \(k\)-cylindrical graph \(G\) let \(\Theta^{G}\) be the set of all feasible values of \(\theta\). By Lemma 4.22 the set \(\Theta^{G}\) forms an interval of integers and it is non-empty because \(\mu_{G}(V_{in},V_{out})=k\). The set \(\Theta^{G}\) is always finite and it can be enumerated efficiently.

**Lemma 4.23** ([87, Lem. 5.11]).: _There is a polynomial-time algorithm that, given a \(k\)-cylindrical graph \(G\), enumerates the set \(\Theta^{G}\)._

We define \(\theta_{1}^{G},\theta_{2}^{G}\) as follows. If \(|\Theta^{G}|<k\) then \(\theta_{1}^{G}=\min\Theta^{G}\) and \(\theta_{2}^{G}=\max\Theta^{G}\). Otherwise, we set \(\theta_{1}^{G}=\min\Theta^{G}\) and \(\theta_{2}^{G}=\theta_{1}^{G}+k-1\).

**Observation 4.24**.: _Let \(G\) be a \(k\)-cylindrical graph. Then \([\theta_{1}^{G},\theta_{2}^{G}]\subseteq\Theta^{G}\). Furthermore, if \(\theta\in\Theta^{G}\) then there exists \(\theta^{\prime}\in[\theta_{1}^{G},\theta_{2}^{G}]\) such that \(\theta^{\prime}\equiv\theta\mod k\)._

For \(j\in[0,k-1]\) let \(\mathcal{T}_{j}\subseteq V_{in}\times V_{out}\) be the set of pairs \((s_{i},t_{i+j\mod k})_{i\in[0,k-1]}\). Clearly, \(\mathcal{T}_{j}\) is realizable in \(G\) if and only if there exists \(\theta\in\Theta^{G}\) such that \((\theta\mod k)=j\). This is equivalent to the existence of \(\theta\in[\theta_{1}^{G},\theta_{2}^{G}]\) with \((\theta\mod k)=j\).
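Schematically, the realizable shift patterns are exactly the residues modulo \(k\) covered by the interval \([\theta_{1}^{G},\theta_{2}^{G}]\); a small illustrative Python sketch:

```python
def realizable_patterns(theta1, theta2, k):
    # T_j pairs s_i with t_{(i + j) mod k}; T_j is realizable iff some
    # feasible winding number theta satisfies theta mod k == j.  By
    # Observation 4.24 it suffices to scan the interval [theta1, theta2].
    shifts = sorted({theta % k for theta in range(theta1, theta2 + 1)})
    return {j: [(i, (i + j) % k) for i in range(k)] for j in shifts}
```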
Combining all these observations with the strategy for coping with non-maximal linkages yields a criterion for an edge to be irrelevant in a \(k\)-cylindrical graph.

**Lemma 4.25**.: _Let \(G\) be a \(k\)-cylindrical graph and \(C_{1},\ldots,C_{m}\) be a \((V_{in},V_{out})\)-sequence of concentric cycles in \(G\) with \(m\geq(k+2)^{2}\). Consider \(i\in[2k+1,m-2k]\) and an edge \(e\in E(C_{i})\) such that \(\theta_{1}^{G},\theta_{2}^{G}\) are feasible in \(G\setminus e\). Then \(G\setminus e\) is \((V_{in}\cup V_{out})\)-linkage-equivalent to \(G\)._

Proof.: The assumptions imply that \(\mu_{G\setminus e}(V_{in},V_{out})=\mu_{G}(V_{in},V_{out})=k\). Consider some \(\mathcal{T}\subseteq(V_{in}\cup V_{out})^{2}\) that is realizable in \(G\). Let \(\ell\) be the number of pairs in \(\mathcal{T}\) with one endpoint in \(V_{in}\) and one in \(V_{out}\). If \(\ell<k\) then \(\mathcal{T}\) is realizable in \(G\setminus e\) due to Proposition 4.15. Suppose that \(\ell=k\). Then \(\mathcal{T}=\mathcal{T}_{j}\) for some \(j\in[0,k-1]\) for which there exists \(\theta\in\Theta^{G}\) such that \((\theta\mod k)=j\). By Observation 4.24 we can assume that \(\theta\in[\theta_{1}^{G},\theta_{2}^{G}]\). As both \(\theta_{1}^{G},\theta_{2}^{G}\) are feasible in \(G\setminus e\), it follows from Lemma 4.22 that so is \(\theta\), and hence \(\mathcal{T}_{j}\) is realizable in \(G\setminus e\). \(\Box\)

Disentangling cylindrical linkages. While finding an edge not required by a single linkage is simple, finding a single edge that is not needed by two linkages is more challenging. A priori, it could be the case that the union of any linkages \(\mathcal{P}_{1}\) with \(\theta(\mathcal{P}_{1})=\theta_{1}^{G}\) and \(\mathcal{P}_{2}\) with \(\theta(\mathcal{P}_{2})=\theta_{2}^{G}\) is the entire graph. We show that this is not the case by constructing linkages \(\mathcal{P}_{1},\mathcal{P}_{2}\) whose intersection pattern is relatively simple. We need two additional tools to achieve this goal. We begin with ordering \((u,v)\)-paths in a \(k\)-cylindrical graph in a clockwise fashion.

**Definition 4.26**.: _Let \(G\) be a \(k\)-cylindrical graph, \(u\in V_{in}\), and \(v\in V_{out}\). Consider two distinct \((u,v)\)-paths \(P_{1},P_{2}\) oriented from \(u\) to \(v\); let \(w\) be the last vertex on their longest common prefix._

_When \(w=u\), let \(e_{1},\ldots,e_{d}\) be the clockwise ordering of \(E_{G}(u)\) such that \(e_{1},e_{d}\) are incident with the face containing \(I_{in}\). We write \(P_{1}\sqsubset P_{2}\) when the first edge of \(P_{1}\) appears earlier in \(e_{1},\ldots,e_{d}\) than the first edge of \(P_{2}\)._

_When \(w\neq u\), let \(e\) be the edge preceding \(w\) in both \(P_{1},P_{2}\) and \(e_{1},\ldots,e_{d}\) be the clockwise ordering of \(E(w)\setminus e\) such that \(e\) lies between \(e_{d},e_{1}\). We write \(P_{1}\sqsubset P_{2}\) when the edge following \(w\) in \(P_{1}\) appears earlier in \(e_{1},\ldots,e_{d}\) than the edge following \(w\) in \(P_{2}\)._

See Figure 10 for an example. The relation \(\sqsubset\) is transitive and it yields a linear order on the family of \((u,v)\)-paths in \(G\).

**Definition 4.27** (Handle, clockwise-tightness).: _Let \(G\) be a \(k\)-cylindrical graph, \(u\in V_{in}\), and \(v\in V_{out}\). A path \(Q\) is called a handle of \(P\) if the endpoints of \(Q\) lie on \(P\) and \(Q\) is internally disjoint from \(P\). Let \(P^{Q}\) be the \((u,v)\)-path obtained from \(P\) by replacing the subpath between the endpoints of \(Q\) with the path \(Q\). We say that \(Q\) is a clockwise handle when \(P\sqsubset P^{Q}\)._

_We say that a cylindrical linkage \(\mathcal{P}\) is clockwise-tight if no \(P\in\mathcal{P}\) contains a clockwise handle internally disjoint from all the paths in \(\mathcal{P}\)._

Intuitively, being clockwise-tight means that internal points of the paths in the linkage are maximally "bent" in the clockwise direction while maintaining disjointness. We show that every cylindrical linkage can be modified to be clockwise-tight.

**Lemma 4.28**.: _Let \(k\geq 2\) and \(G\) be a \(k\)-cylindrical graph and \(\theta\in\Theta^{G}\).
There exists a clockwise-tight cylindrical linkage \(\mathcal{P}\) in \(G\) with the winding number \(\theta\)._

Figure 10: Three \((u,v)\)-paths satisfying \(P_{1}\sqsubset P_{2}\sqsubset P_{3}\). The ring has been deformed for a better presentation. The dashed path \(Q\) is a clockwise handle of \(P_{3}\).

Proof.: Let \(\mathcal{P}\) be any cylindrical linkage in \(G\) with the winding number \(\theta\). We exhaustively apply the following modification to \(\mathcal{P}\): while there exists a path \(P\in\mathcal{P}\) with a clockwise handle \(Q\) internally disjoint from \(\mathcal{P}\), replace \(P\) with \(P^{Q}\). After such a replacement, we obtain a new cylindrical linkage in \(G\) with the same winding number \(\theta\) (here we use the assumption that \(k\geq 2\)). We claim that this process must terminate in a finite number of steps. If not, infinitely many replacements happen to a single path \(P\in\mathcal{P}\). Hence, there exists an infinite sequence of \((u,v)\)-paths \(P^{1}\sqsubset P^{2}\sqsubset\dots\). This is impossible because the relation \(\sqsubset\) is a linear order and there are only finitely many different \((u,v)\)-paths in \(G\).

The second tool is based on the following concept from topology, used in the analysis of topological spaces with "holes", like a torus. We only provide simple definitions, tailored for our applications.

**Definition 4.29** (Covering).: _The covering of \(\mathsf{Ring}(I_{in},I_{out})\) is a function \(\tau\colon[1,2]\times\mathbb{R}\to\mathbb{C}\) defined as \(\tau((x,y))=x\cdot\exp\left(\frac{-2i\pi}{k}\cdot y\right)\). We identify the image of \(\tau\) with \(\mathsf{Ring}(I_{in},I_{out})\)._

**Observation 4.30** (Lifting).: _Let \(G\) be a \(k\)-cylindrical graph, \(u=(1,\frac{-2\pi}{k}\cdot p)\), \(v=(2,\frac{-2\pi}{k}\cdot q)\) (in polar coordinates), and \(P\) be a \((u,v)\)-path. Then for every \(\ell\in\mathbb{Z}\) there is a unique curve \(P^{\prime}\) in \([1,2]\times\mathbb{R}\), called a lifting of \(P\), that starts at \((1,\,\ell\cdot k+p)\), ends at \((2,\,\ell\cdot k+p+\theta(P))\), and \(\tau(P^{\prime})=P\). It holds that \(p+\theta(P)\equiv q\mod k\). Moreover, any liftings of two vertex-disjoint paths are disjoint._

These notions are depicted in Figure 12. Now we can analyze two linkages with different winding numbers through their liftings in \([1,2]\times\mathbb{R}\). Here, we can take advantage of the fact that when two disjoint curves connect points on a boundary of a topological disc, then these points cannot be intrinsically crossing.

**Observation 4.31**.: _Consider a curve \(P\) in \([1,2]\times\mathbb{R}\) which starts at \((1,x)\), ends at \((2,y)\), and is internally contained in \((1,2)\times\mathbb{R}\). Let \(D\) be the closure of the connected component of \(([1,2]\times\mathbb{R})\setminus P\) containing the point \((1,x-1)\). Next, let \(x_{0}<x_{1}\leq x\) and \(p_{0},p_{1}\in P\). Suppose that there exist disjoint curves \(Q_{0},Q_{1}\) in \(D\) such that \(Q_{0}\) connects \((1,x_{0})\) to \(p_{0}\) and \(Q_{1}\) connects \((1,x_{1})\) to \(p_{1}\). Then \(p_{0}\) occurs later than \(p_{1}\) on \(P\) when considered oriented from \((1,x)\) to \((2,y)\)._

This observation is illustrated in Figure 11. We are ready to prove the main technical lemma about cylindrical graphs, showing that any two cylindrical linkages can be "disentangled".

**Lemma 4.32**.: _Let \(G\) be a \(k\)-cylindrical graph and \(\theta_{1}\leq\theta_{2}=\theta_{1}+\ell\), where \(\ell<k\).
If \(\theta_{1},\theta_{2}\) are feasible in \(G\) then there exist cylindrical linkages \(\mathcal{P},\mathcal{R}\) in \(G\) such that \(\theta(\mathcal{P})=\theta_{1}\), \(\theta(\mathcal{R})=\theta_{2}\), and for each \(P\in\mathcal{P},R\in\mathcal{R}\) the intersection of \(P\) and \(R\) comprises at most one path._

Figure 11: An illustration of Observation 4.31. The interior of the set \(D\) is highlighted. The curves \(Q_{0},Q_{1}\) are disjoint so their endpoints on the boundary of \(D\) cannot cross.

**Figure 12:** An illustration for the proof of Lemma 4.32 with \(k=4\), \(\ell=2\). Top left: Two cylindrical linkages in a 4-cylindrical graph. For simplicity the linkage \(\mathcal{P}\) is drawn as straight dashed lines and the linkage \(\mathcal{Q}\) is drawn in colors. Top right: The covering of \(\mathsf{Ring}(I_{in},I_{out})\) and the liftings of the paths. The curves \(Q^{\prime}_{0},Q^{\prime}_{1},Q^{\prime}_{2},Q^{\prime}_{3}\) are drawn in colors matching their images on the left. Note that the curve \(Q^{\prime}_{-1}\) coincides with \(Q^{\prime}_{3}\) modulo a shift. The same applies to \(Q^{\prime}_{0}\) and \(Q^{\prime}_{4}\). Since \(\mathcal{P}\) is clockwise-tight, no path \(P\in\mathcal{P}\) can have a clockwise handle; hence there cannot be any curve like the red dotted one. Middle right: The curves \(Q^{2}_{0}\), \(Q^{1}_{1}\) (red), and \(Q^{3}_{2}\), \(Q^{2}_{3}\) (blue). The third one is an example of a curve \(Q^{\ell+1}_{i}\) which is not necessarily contained in \(Q^{\prime}_{i}\) due to the last straight segment. This forms a special case for property (P5) but this choice of definition guarantees property (P3). The relative position of the black disks located on \(P^{\prime}_{1}\) and \(P^{\prime}_{3}\) is the subject of Claim 4.33. Bottom right: The curves \(\widehat{Q}^{1}_{i}\) are drawn in red, while the curves \(\widehat{Q}^{2}_{i}\) are blue. Together with the green segments they form the paths \(R^{\prime}_{0},R^{\prime}_{1},R^{\prime}_{2},R^{\prime}_{3}\). Bottom left: The images of paths \(R^{\prime}_{i}\) form the sought family \(\mathcal{R}\).

Proof.: If \(k=1\) or \(\theta_{1}=\theta_{2}\), then the claim is trivial so we can assume \(k\geq 2,\ell\geq 1\). Let \(\mathcal{P},\mathcal{Q}\) be cylindrical linkages with the winding numbers \(\theta_{1},\theta_{2}\). By Lemma 4.28 we can assume that \(\mathcal{P}\) is clockwise-tight. We order the linkages in a clockwise manner: \(\mathcal{P}=P_{0},\ldots,P_{k-1}\), \(\mathcal{Q}=Q_{0},\ldots,Q_{k-1}\), so that \(P_{i}\) and \(Q_{i}\) start at \((1,\frac{-2\pi}{k}i)\). Consider the covering \(\tau\colon[1,2]\times\mathbb{R}\to\mathsf{Ring}(I_{in},I_{out})\) and the liftings of \(\mathcal{P},\mathcal{Q}\). More precisely, we consider the unique infinite family \((P^{\prime}_{i})_{i\in\mathbb{Z}}\) of disjoint curves in \([1,2]\times\mathbb{R}\) such that \(P^{\prime}_{i}\) is a path from \((1,i)\) to \((2,i+\theta_{1})\) and \(\tau(P^{\prime}_{i})=P_{(i\mod k)}\). Similarly we define the lifting \((Q^{\prime}_{i})_{i\in\mathbb{Z}}\) of \(\mathcal{Q}\). Note that each curve \(P^{\prime}_{i},Q^{\prime}_{i}\) is internally contained in \((1,2)\times\mathbb{R}\). Each curve \(Q^{\prime}_{i}\) must intersect \(P^{\prime}_{i},\ldots,P^{\prime}_{i+\ell}\). For \(i\in\mathbb{Z}\) and \(j\in[1,\ell]\) we define \(Q^{j}_{i}\) as the minimal prefix of \(Q^{\prime}_{i}\) which ends at \(P^{\prime}_{i+j}\).
Furthermore, let \(\widehat{Q}^{j}_{i}\) be the minimal suffix of \(Q^{j}_{i}\) which starts at \(P^{\prime}_{i+j-1}\). In order to cover the corner cases, we define both \(Q^{0}_{i}\) and \(\widehat{Q}^{0}_{i}\) to be the trivial path from \((1,i)\) to \((1,i)\), \(\widehat{Q}^{\ell+1}_{i}\) as the trivial path from \((2,i+\theta_{2})\) to \((2,i+\theta_{2})\) (the last point on \(Q^{\prime}_{i}\)), and \(Q^{\ell+1}_{i}\) as the concatenation of \(Q^{\ell}_{i}\) with the subpath of \(P^{\prime}_{i+\ell}\) from the endpoint of \(Q^{\ell}_{i}\) to \((2,i+\theta_{2})\). We make note of the following properties that hold for each \(i\in\mathbb{Z}\) and \(j\in[0,\ell+1]\):

(P1) \(\widehat{Q}^{j}_{i}\subseteq Q^{j}_{i}\),
(P2) \(\widehat{Q}^{j}_{i}\subseteq Q^{\prime}_{i}\),
(P3) \(Q^{j}_{i}\) is internally disjoint from \(P^{\prime}_{i+j}\),
(P4) \(\tau(Q^{j}_{i})\) is a walk in \(G\),
(P5) if \(j\leq\ell\) then \(Q^{j}_{i}\subseteq Q^{\prime}_{i}\).

For \(i\in\mathbb{Z}\) we define \(R^{\prime}_{i}\) as the unique path from \((1,i)\) to \((2,i+\theta_{2})\) which is contained in \(P^{\prime}_{i}\cup\widehat{Q}^{1}_{i}\cup P^{\prime}_{i+1}\cup\widehat{Q}^{2}_{i}\cup\cdots\cup\widehat{Q}^{\ell}_{i}\cup P^{\prime}_{i+\ell}\) (see Figure 12, bottom right). The intersection of \(R^{\prime}_{i}\) with \(P^{\prime}_{i+j}\), for \(j\in[0,\ell]\), is then a subpath of \(P^{\prime}_{i+j}\) between the endpoints of \(\widehat{Q}^{j}_{i}\) and \(\widehat{Q}^{j+1}_{i}\). It holds that \(\theta(\tau(R^{\prime}_{i}))=\theta_{2}\).

**Claim 4.33**.: _Let \(i\in\mathbb{Z}\) and \(j\in[1,\ell]\). Consider points \(p_{0},p_{1}\in[1,2]\times\mathbb{R}\) such that \(p_{0}\in Q^{j+1}_{i}\cap P^{\prime}_{i+j}\) and \(p_{1}\in Q^{j}_{i+1}\cap P^{\prime}_{i+j}\). Then \(p_{0}\) occurs later than \(p_{1}\) on \(P^{\prime}_{i+j}\), when \(P^{\prime}_{i+j}\) is oriented from \((1,i+j)\) to \((2,i+j+\theta_{1})\)._

Proof.: Here we exploit the fact that \(\mathcal{P}\) is clockwise-tight. First consider the case \(j<\ell\) as then, by property (P5), the paths \(Q^{j+1}_{i}\) and \(Q^{j}_{i+1}\) are disjoint as subpaths of \(Q^{\prime}_{i},Q^{\prime}_{i+1}\). By property (P3) both \(Q^{j+1}_{i}\) and \(Q^{j}_{i+1}\) are internally disjoint from \(P^{\prime}_{i+j+1}\). Let \(\tilde{Q}^{\prime}_{i}\) be the prefix of \(Q^{j+1}_{i}\) ending at \(p_{0}\) and \(\tilde{Q}^{\prime}_{i+1}\) be the prefix of \(Q^{j}_{i+1}\) ending at \(p_{1}\). Next, let \(D\) be the closure of the connected component of \(([1,2]\times\mathbb{R})\setminus P^{\prime}_{i+j}\) containing the point \((1,i+j-1)\). Suppose that \(\tilde{Q}^{\prime}_{i}\) or \(\tilde{Q}^{\prime}_{i+1}\) contains a point \(y\not\in D\). Then \(\tilde{Q}^{\prime}_{i}\) or \(\tilde{Q}^{\prime}_{i+1}\) has a subpath \(Q^{\prime\prime}\) with both endpoints on \(P^{\prime}_{i+j}\) and internally contained in the region of \([1,2]\times\mathbb{R}\) between \(P^{\prime}_{i+j}\) and \(P^{\prime}_{i+j+1}\). By property (P4) the image \(\tau(Q^{\prime\prime})\) is a walk in \(G\) and it contains a clockwise handle of \(\tau(P^{\prime}_{i+j})\) which is disjoint from \(\mathcal{P}\); this contradicts \(\mathcal{P}\) being clockwise-tight. We obtain that \(\tilde{Q}^{\prime}_{i}\), \(\tilde{Q}^{\prime}_{i+1}\) lie entirely within \(D\). The claim follows from Observation 4.31. Finally, consider the case \(j=\ell\) where \(Q^{\ell+1}_{i}\) is not necessarily a subpath of \(Q^{\prime}_{i}\).
However, for \(p_{0}\in Q^{\ell+1}_{i}\) chosen as the unique point on \(Q^{\ell}_{i}\cap P^{\prime}_{i+\ell}\) the path \(\tilde{Q}^{\prime}_{i}\), defined as above, is a subpath of \(Q^{\prime}_{i}\) due to property (P5). See Figure 12, middle right. In this case \(Q^{\prime}_{i},Q^{\prime}_{i+1}\) are again disjoint and the same argument applies. The general claim follows from the observation that any other point \(p\in Q_{i}^{\ell+1}\cap P_{i+\ell}^{\prime}\) occurs later than \(p_{0}\) on \(P_{i+\ell}^{\prime}\).

**Claim 4.34**.: _The paths \((R_{i}^{\prime})_{i\in\mathbb{Z}}\) are pairwise disjoint._

Proof.: It suffices to show that for every \(i\in\mathbb{Z}\) the paths \(R_{i}^{\prime},R_{i+1}^{\prime}\) are disjoint. By property (P2) the paths of the form \(\widehat{Q}_{i}^{j}\), \(\widehat{Q}_{i+1}^{j^{\prime}}\) belong to the disjoint paths \(Q_{i}^{\prime},Q_{i+1}^{\prime}\) so they cannot intersect each other. If \(R_{i}^{\prime},R_{i+1}^{\prime}\) intersect then there must be \(j\in[1,\ell]\) so that \(R_{i}^{\prime}\cap P_{i+j}^{\prime}\) and \(R_{i+1}^{\prime}\cap P_{i+j}^{\prime}\) intersect. This may happen only if the subpath of \(P_{i+j}^{\prime}\) between the endpoints of \(\widehat{Q}_{i}^{j},\widehat{Q}_{i}^{j+1}\) and the subpath of \(P_{i+j}^{\prime}\) between the endpoints of \(\widehat{Q}_{i+1}^{j-1},\widehat{Q}_{i+1}^{j}\) have a non-empty intersection. This is impossible due to property (P1) and Claim 4.33.

It follows from the construction that \(\tau(R_{i}^{\prime})=\tau(R_{j}^{\prime})\) whenever \(i\equiv j\mod k\). We can thus define \(R_{0},\ldots,R_{k-1}\) as the path family in \(G\) such that \(\tau(R_{j}^{\prime})=R_{(j\mod k)}\) for each \(j\in\mathbb{Z}\). Since \(\ell<k\), no path \(R_{i}^{\prime}\) intersects both \(P_{j}^{\prime}\) and \(P_{j+k}^{\prime}\) for any \(j\in\mathbb{Z}\). As each intersection \(R_{i}^{\prime}\cap P_{j}^{\prime}\) is either empty or a single path, we infer that also each intersection \(R_{i}\cap P_{j}\) is either empty or a single path. Consider \(0\leq i<j<k\): from Claim 4.34 we know that \(R_{i}^{\prime}\), \(R_{j}^{\prime}\) are disjoint. Moreover, the path \(R_{j}^{\prime}\) is contained in the region \(D\) of \([1,2]\times\mathbb{R}\) between \(R_{i}^{\prime}\) and \(R_{i+k}^{\prime}\), exclusively. Because \(\tau(R_{i}^{\prime})=\tau(R_{i+k}^{\prime})=R_{i}\) and \(\tau(D)\cap R_{i}=\emptyset\), we obtain that \(R_{i}\) and \(R_{j}\) form disjoint subsets of \(\mathsf{Ring}(I_{in},I_{out})\); hence they are vertex-disjoint paths. We conclude that \(\mathcal{R}=R_{0},\ldots,R_{k-1}\) is the desired linkage with the winding number \(\theta_{2}\) and single-path intersections with \(\mathcal{P}\).

Since the union of two disentangled linkages cannot contain too many concentric cycles, we can now find an edge to which the criterion from Lemma 4.25 applies.

**Proposition 4.35**.: _Let \(G\) be a \(k\)-cylindrical graph with \(\mathsf{rdist}_{G}(V_{in},V_{out})\geq(k+2)^{2}\). Then there exists an edge \(e\in E(G)\) such that \(G\setminus e\) is \((V_{in}\cup V_{out})\)-linkage-equivalent to \(G\). Furthermore, such an edge can be found in polynomial time._

Proof.: Let \(C_{1},\ldots,C_{m}\) be a \((V_{in},V_{out})\)-sequence of concentric cycles in \(G\) with \(m=(k+2)^{2}-1\). We need to show that there exists an edge satisfying the requirements of Lemma 4.25. Note that \(\theta_{2}^{G}<\theta_{1}^{G}+k\).
Let \(\mathcal{P}_{1},\mathcal{P}_{2}\) be the linkages from Lemma 4.32 such that \(\theta(\mathcal{P}_{1})=\theta_{1}^{G}\), \(\theta(\mathcal{P}_{2})=\theta_{2}^{G}\), and \(G^{\prime}\) be the union of \(\mathcal{P}_{1},\mathcal{P}_{2}\). We claim that there exists \(i\in[2k+1,m-2k]\) and \(e\in E(C_{i})\) such that \(e\not\in E(G^{\prime})\). Suppose otherwise. Then there is a \((V_{in},V_{out})\)-sequence of \(m-4k\) concentric cycles in \(G^{\prime}\) and so \(\mathsf{rdist}_{G^{\prime}}(V_{in},V_{out})\geq m-4k\geq k+3\). Due to Lemma 4.32, for each \(P_{1}\in\mathcal{P}_{1},P_{2}\in\mathcal{P}_{2}\) the intersection of \(P_{1}\) and \(P_{2}\) comprises at most one path. Consider a region \(R\) of \(\mathsf{Ring}(I_{in},I_{out})\) between two consecutive paths from \(\mathcal{P}_{1}\). Each path from \(\mathcal{P}_{2}\) has at most one subpath intersecting \(R\), which gives at most \(k\) subpaths in total. As a consequence, \(\mathsf{rdist}_{G^{\prime}}(V_{in},V_{out})\leq k+1\), a contradiction. Therefore, there exists \(i\in[2k+1,m-2k]\) and \(e\in E(C_{i})\) with \(e\not\in E(G^{\prime})\), i.e., both linkages \(\mathcal{P}_{1},\mathcal{P}_{2}\) are present in \(G\setminus e\). As both \(\theta_{1}^{G},\theta_{2}^{G}\) are feasible in \(G\setminus e\), the criterion from Lemma 4.25 applies. In order to detect such an edge, we simply enumerate all edges in \(G\) and check for each \(e\) whether \(e\) satisfies the requirements of Lemma 4.25. This can be done in polynomial time with Lemma 4.23.

#### 4.2.3 Finding an irrelevant edge

We can now combine the two strategies for detecting an irrelevant edge to process any graph properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) with sufficiently many concentric cycles. Note that the notation in the following lemma differs slightly from that in the outline. The separators \(V_{in},V_{out}\) therein become here \(S^{1}_{in},S^{1}_{out}\), the separators \(S_{in},S_{out}\) become \(S^{2}_{in},S^{2}_{out}\), and \(S\) corresponds to \(S^{3}_{in}\).

**Lemma 4.36**.: _Let \(G\) be a plane graph properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) and \(C_{1},\ldots,C_{m}\) be a \((V_{in},V_{out})\)-sequence of concentric cycles. Suppose that \(m\geq 3(t+4)^{2}\) where \(t=\boldsymbol{tw}(G)+1\). Then there exists an edge \(e\in E(G)\) such that \(G\setminus e\) is \((V_{in}\cup V_{out})\)-linkage-equivalent to \(G\). Furthermore, such an edge can be found in polynomial time._

Proof.: Let \(J^{1}_{in}=[1,t+2]\) and \(J^{1}_{out}=[m-t-1,m]\). By Corollary 4.10 there exists a \((C_{1},C_{t+2})\)-separator \(S^{1}_{in}\) of size less than \(t\) and a \((C_{m-t-1},C_{m})\)-separator \(S^{1}_{out}\) of size less than \(t\). By Lemma 3.5 there exist \(G\)-nooses \(N^{1}_{in},N^{1}_{out}\) contained respectively in \(\mathsf{Ring}(C_{1},C_{t+2})\), \(\mathsf{Ring}(C_{m-t-1},C_{m})\), so that \(N^{1}_{in}\cap V(G)=S^{1}_{in}\) and likewise for \(N^{1}_{out}\). Let \(G_{1}\) be the subgraph of \(G\) induced by the vertices located in \(\mathsf{Ring}(N^{1}_{in},N^{1}_{out})\). By Lemma 4.2 it suffices to find an edge \(e\in E(G_{1})\) so that \(G_{1}\setminus e\) is \((S^{1}_{in}\cup S^{1}_{out})\)-linkage-equivalent to \(G_{1}\). This simplifies our task because \(|S^{1}_{in}|,|S^{1}_{out}|<t\). There are at least \(m-2(t+1)\geq 3(t+3)^{2}+2t\) cycles from \(C_{1},\ldots,C_{m}\) lying entirely between \(N^{1}_{in}\) and \(N^{1}_{out}\).
Let \(S^{2}_{in},S^{2}_{out}\) be the minimum-size \((S^{1}_{in},S^{1}_{out})\)-separators in \(G_{1}\) that are closest to \(S^{1}_{in},S^{1}_{out}\), respectively. They can be found in polynomial time (Theorem 3.1).

Figure 13: An illustration for the proof of Lemma 4.36. The planar structure is discarded here (cf. Figure 2) and the innermost layers of the graph \(G\) are portrayed to the left. The separators \(S^{1}_{in},S^{2}_{in},S^{2}_{out},S^{1}_{out}\) are sketched gray while the separator \(S^{3}_{in}\) is light blue. The vertical lines represent the family of cycles \(C_{1},\ldots,C_{m}\), where the dashed lines are the cycles intersecting one of the separators above. An \((S^{1}_{in},S^{1}_{out})\)-linkage \(\mathcal{P}\) illustrates the argument for the case where \(J^{2}_{in}\) is large. The blue linkage \(\mathcal{P}_{3}\) is given by the intersections of \((S^{1}_{in},S^{1}_{out})\)-paths from \(\mathcal{P}\) with the subgraph \(G_{3}\). Since \(S^{2}_{in}\) is the minimal \((S^{1}_{in},S^{1}_{out})\)-separator closest to \(S^{1}_{in}\), there exists an \((S^{1}_{in},S^{3}_{in})\)-linkage larger than \(\mathcal{P}_{3}\): this is shown with the dashed paths. This observation allows us to use Proposition 4.15 to find an edge in \(G_{3}\) that is not needed for any choice of \(\mathcal{P}\).

Let \(p\) denote \(|S^{2}_{in}|=|S^{2}_{out}|\). By Corollary 4.10 we have \(p<t\). By Lemma 4.5, each of \(S^{2}_{in},S^{2}_{out}\) intersects at most \(p\) consecutive cycles from \(C_{1},\ldots,C_{m}\). Let \(J^{2}_{in},J^{2}_{mid},J^{2}_{out}\) denote the intervals representing indices of cycles in \(C_{1},\ldots,C_{m}\) which lie respectively: between \(S^{1}_{in}\) and \(S^{2}_{in}\), between \(S^{2}_{in}\) and \(S^{2}_{out}\), between \(S^{2}_{out}\) and \(S^{1}_{out}\). Note that the separators \(S^{2}_{in},S^{2}_{out}\) may intersect; in this case \(J^{2}_{mid}=\emptyset\). We have \(|J^{2}_{in}|+|J^{2}_{mid}|+|J^{2}_{out}|\geq 3(t+3)^{2}\). One of these intervals must contain at least \((t+3)^{2}\) elements. We distinguish two scenarios.

**Deep cylindrical subgraph in the middle.** First suppose that \(|J^{2}_{mid}|\geq(t+3)^{2}\). Let \(N^{2}_{in},N^{2}_{out}\) be the \(G_{1}\)-nooses corresponding to the separators \(S^{2}_{in},S^{2}_{out}\) and \(G_{2}\) be the subgraph of \(G_{1}\) induced by the vertices lying in \(\mathsf{Ring}(N^{2}_{in},N^{2}_{out})\). Note that \(\mu_{G_{2}}(S^{2}_{in},S^{2}_{out})=p\) due to the choice of \(S^{2}_{in},S^{2}_{out}\). We can thus transform \(G_{2}\) in a homotopic way to a \(p\)-cylindrical graph \(H\). There may be many ways to obtain \(H\) which differ by a relative cyclic shift between \(S^{2}_{in}\), \(S^{2}_{out}\) and give different sets \(\Theta^{H}\) but we can choose an arbitrary one. Proposition 4.35 works regardless of the chosen embedding of \(H\) and it gives a polynomial-time algorithm to find an edge \(e\in E(H)=E(G_{2})\) such that \(G_{2}\setminus e\) is \((S^{2}_{in}\cup S^{2}_{out})\)-linkage-equivalent to \(G_{2}\). Since \(S^{2}_{in}\cup S^{2}_{out}\) separates the endpoints of \(e\) from \(S^{1}_{in}\cup S^{1}_{out}\), we obtain from Lemma 4.2 that \(G_{1}\setminus e\) is \((S^{1}_{in}\cup S^{1}_{out})\)-linkage-equivalent to \(G_{1}\).

**Deep subgraph with a non-maximal linkage.** Suppose now that \(|J^{2}_{in}|\geq(t+3)^{2}\) or \(|J^{2}_{out}|\geq(t+3)^{2}\). These cases are symmetric so we only examine the first one.
Let \(J^{\prime}\) be the subinterval of \(J^{2}_{in}\) comprising its last \(t+2\) elements (representing the cycles closest to \(S^{2}_{in}\)). By the same argument as before, there exists an \((S^{1}_{in},S^{2}_{in})\)-separator \(S^{3}_{in}\) of size at most \(t\) given by a \(G_{1}\)-noose \(N^{3}_{in}\) which may intersect only these cycles from \(C_{1},\ldots,C_{m}\) with indices within \(J^{\prime}\). Let \(G_{3}\) be the subgraph of \(G_{1}\) induced by the vertices lying in \(\mathsf{Ring}(N^{1}_{in},N^{3}_{in})\). There are at least \((t+3)^{2}-(t+2)\geq(t+2)^{2}\) cycles from \(C_{1},\ldots,C_{m}\) lying in the interior of \(\mathsf{Ring}(N^{1}_{in},N^{3}_{in})\). Since \(S^{2}_{in}\) was chosen as the minimum \((S^{1}_{in},S^{1}_{out})\)-separator in \(G_{1}\) closest to \(S^{1}_{in}\), there are no such separators within \(\mathsf{Ring}(N^{1}_{in},N^{3}_{in})\) of size \(p\) or smaller. This implies that \(\mu_{G_{3}}(S^{1}_{in},S^{3}_{in})\geq p+1\). As a result, \(G_{3}\) satisfies the preconditions of Proposition 4.15 with \(\max(|S^{1}_{in}|,|S^{3}_{in}|)\leq t\) and \(s\geq p+1\); let \(e\in E(G_{3})\) be an edge provided by that proposition. We will show that \(G_{1}\setminus e\) is \((S^{1}_{in}\cup S^{1}_{out})\)-linkage-equivalent to \(G_{1}\).

Let \(\mathcal{P}\) be an \((S^{1}_{in}\cup S^{1}_{out})\)-linkage in \(G_{1}\). Recall that \(|S^{1}_{in}|,|S^{1}_{out}|\leq t\). Because the interval \(J^{2}_{in}\setminus J^{\prime}\) is sufficiently long and due to Lemma 4.11, there exists a linkage \(\mathcal{P}^{\prime}\) in \(G_{1}\) that is aligned with \(\mathcal{P}\) and in which every inclusion-minimal \((S^{1}_{in}\cup S^{1}_{out})\)-subpath of \(P\in\mathcal{P}^{\prime}\) which is an \((S^{1}_{in},S^{1}_{in})\)-path does not intersect \(S^{3}_{in}\). Let \(\mathcal{P}^{\prime}_{\text{long}}\) be the family of inclusion-minimal \((S^{1}_{in}\cup S^{1}_{out})\)-subpaths of paths in \(\mathcal{P}^{\prime}\) which are \((S^{1}_{in},S^{1}_{out})\)-paths. There are at most \(p\) paths in \(\mathcal{P}^{\prime}_{\text{long}}\) because every such path must intersect the separator \(S^{2}_{in}\) of size \(p\). Observe that every minimal \((S^{1}_{in}\cup S^{1}_{out})\)-subpath in \(\mathcal{P}^{\prime}\) that visits both \(S^{1}_{in},S^{3}_{in}\) belongs to \(\mathcal{P}^{\prime}_{\text{long}}\). Let \(\mathcal{P}_{3}\) be a linkage in \(G_{3}\) given by the maximal intersections of paths from \(\mathcal{P}^{\prime}\) with \(V(G_{3})\); then \(\mathcal{P}_{3}\) is an \((S^{1}_{in}\cup S^{3}_{in})\)-linkage (see Figure 13). By the observation above, each \((S^{1}_{in},S^{3}_{in})\)-path \(Q\in\mathcal{P}_{3}\) intersects some path \(Q^{\prime}\in\mathcal{P}^{\prime}_{\text{long}}\) and the mapping \(Q\to Q^{\prime}\) is injective. We infer that \(\mathcal{P}_{3}\) contains at most \(p\) many \((S^{1}_{in},S^{3}_{in})\)-paths. We apply Proposition 4.15 with \(s\geq p+1\) to derive that there exists a linkage in \(G_{3}\setminus e\) aligned with \(\mathcal{P}_{3}\). As a result, \(G_{1}\setminus e\) contains a linkage aligned with \(\mathcal{P}\). This concludes the proof.

Finally, we show that when a plane graph \(G\) has sufficiently large radial diameter, then we can find a subgraph of \(G\) to which Lemma 4.36 applies. This allows us to detect an irrelevant edge in \(G\).

**Proposition 4.37**.: _Let \(G\) be a plane graph, \(X\subseteq V(G)\) be of size \(k\), and \(t=\textbf{tw}(G)+1\). Suppose that the radial diameter of \(G\) is at least \(7(k+1)(t+5)^{2}\).
Then we can find, in polynomial time, an edge \(e\in E(G)\) such that \(G\setminus e\) is \(X\)-linkage-equivalent to \(G\)._

Proof.: Let \(v\) be a vertex on the outer face of \(G\). By the triangle inequality we obtain that there must exist a vertex \(u\) with \(\mathsf{rdist}_{G}(u,v)>3(k+1)(t+5)^{2}\). By Lemma 3.3 there exists a \((\{u\},\{v\})\)-sequence of concentric cycles \(C_{1},\ldots,C_{m}\), where \(m=3(k+1)(t+5)^{2}\). For \(i\in[m-1]\) let \(V_{i}=V(G)\cap(\mathsf{Disc}(C_{i+1})\setminus\mathsf{Disc}(C_{i}))\). These sets are disjoint and there may be at most \(k\) of them containing a vertex from \(X\). Hence, there is an interval \(J\subseteq[m-1]\) of length \((m-1-k)/(k+1)\geq 3(t+4)^{2}+1\) where \(V_{j}\cap X=\emptyset\) for \(j\in J\). Let \(i=\min(J)\), \(j=\max(J)\), and \(U\) be the set of vertices lying between \(C_{i}\) and \(C_{j+1}\). Then \(U\cap X=\emptyset\). Let \(G^{\prime}=G[U\cup V(C_{i})\cup V(C_{j+1})]\); then \(\textbf{tw}(G^{\prime})\leq\textbf{tw}(G)=t-1\) and \(G^{\prime}\) is properly embedded in \(\mathsf{Ring}(I_{in},I_{out})\) for some curves \(I_{in},I_{out}\) such that \(I_{in}\cap G^{\prime}=C_{i}\) and \(I_{out}\cap G^{\prime}=C_{j+1}\). Furthermore, \(C_{i+1},\ldots,C_{j}\) forms a \((V(C_{i}),V(C_{j+1}))\)-sequence of concentric cycles in \(G^{\prime}\) of length at least \(3(t+4)^{2}\). We apply Lemma 4.36 to find an edge \(e\in E(G^{\prime})\) such that \(G^{\prime}\setminus e\) is \((V(C_{i})\cup V(C_{j+1}))\)-linkage equivalent to \(G^{\prime}\). It follows from Lemma 4.2 that \(G\setminus e\) is \(X\)-linkage-equivalent to \(G\).

### Single-face case

Let \(G\) be a plane graph properly embedded in \(\mathsf{Disc}(I)\) and \(X=V(G)\cap I\). We say that a set of disjoint pairs \(\mathcal{T}\subseteq X^{2}\) is _cross-free_ if it does not contain pairs \((a,c),(b,d)\) so that \(a,b,c,d\) lie on \(I\) in this order. A division \(X=X_{1}\cup X_{2}\) is called _canonical_ if both \(X_{1},X_{2}\) are non-empty and there are points \(y_{1},y_{2}\in I\) so that \(X_{1},X_{2}\) belong to different connected components of \(I\setminus\{y_{1},y_{2}\}\). We define \(\mu_{\mathcal{T}}(X_{1},X_{2})\) to be the number of pairs in \(\mathcal{T}\) with one element in \(X_{1}\) and the other one in \(X_{2}\). It turns out that when a graph is properly embedded in a disc and all the terminals occur at the boundary of the disc, then again the cut-condition is sufficient for a linkage to exist.

**Lemma 4.38** ([87, Lem. 3.6]).: _Let \(G\) be properly embedded in \(\mathsf{Disc}(I)\), \(V(G)\cap I=X\), and \(\mathcal{T}\subseteq X^{2}\). Then \(\mathcal{T}\) is realizable in \(G\) if and only if \(\mathcal{T}\) is cross-free and for every canonical division \((X_{1},X_{2})\) of \(X\) it holds that \(\mu_{G}(X_{1},X_{2})\geq\mu_{\mathcal{T}}(X_{1},X_{2})\)._

Our goal now is to compress a given graph with terminal set \(X\) located on the boundary of the disc to an \(X\)-linkage-equivalent graph of size \(|X|^{\mathcal{O}(1)}\). By the lemma above, it is sufficient to preserve the sizes of minimum \((X_{1},X_{2})\)-separators for all canonical divisions \((X_{1},X_{2})\). Our strategy is to mark \(|X|^{\mathcal{O}(1)}\) vertices covering all the relevant separators and then replace the remaining parts of the graph with gadgets that are "at least as good".

**Lemma 4.39**.: _Let \(I\) be a noose and \(X\subseteq I\) be a finite set of size \(k\).
There exists a plane graph \(H\) properly embedded in \(\mathsf{Disc}(I)\) on at most \(k^{2}\) vertices such that \(H\cap I=X\) and every cross-free \(\mathcal{T}\subseteq X^{2}\) is realizable in \(H\). This graph can be constructed in time polynomial in \(k\)._

Proof.: The lemma is trivial for \(k\leq 2\), so we will assume \(k\geq 3\). We use the \((k,k)\)-cylindrical grid \(C_{k}^{k}\) (Definition 4.7), which has \(k^{2}\) vertices, and identify the vertices on the outer cycle \(C^{k}\) with the points from \(X\). It is easy to see that \(C_{k}^{k}\) can be properly embedded in \(\mathsf{Disc}(I)\) in such a way that \(C_{k}^{k}\cap I=C^{k}\). We argue that \(C_{k}^{k}\) satisfies the lemma using the criterion from Lemma 4.38. We claim that for any canonical division \((X_{1},X_{2})\) of \(X=C^{k}\) it holds that \(\mu_{C_{k}^{k}}(X_{1},X_{2})=\min(|X_{1}|,|X_{2}|)\). Let \(p=\min(|X_{1}|,|X_{2}|)\) and assume w.l.o.g. that \(X_{1}=\{c_{i}^{k}\mid i\in[p]\}\). For \(i\in[p]\) let \(P_{i}\) be the unique \((c_{i}^{k},c_{k+1-i}^{k})\)-path contained in the paths \(C_{i}\), \(C_{k+1-i}\) and the subpath of \(C^{k+1-i}\) between \(C_{i}\) and \(C_{k+1-i}\) that contains the vertex \(c_{1}^{k+1-i}\). Then \(P_{1},\ldots,P_{p}\) form an \((X_{1},X_{2})\)-linkage implying that \(\mu_{C_{k}^{k}}(X_{1},X_{2})\geq p\). In fact, we get equality because each of the sets \(X_{1},X_{2}\) is an \((X_{1},X_{2})\)-separator. For any \(\mathcal{T}\subseteq X^{2}\) it holds that \(\mu_{\mathcal{T}}(X_{1},X_{2})\leq\min(|X_{1}|,|X_{2}|)\). Hence any cross-free \(\mathcal{T}\) is realizable in \(C_{k}^{k}\) due to Lemma 4.38.

The following fact will come in useful for estimating the number of necessary gadgets.

**Lemma 4.40** ([36, Lem. 13.3]).: _Let \(G\) be a planar graph, \(X\subseteq V(G)\), and let \(N_{3}\) be a set of vertices from \(V(G)\setminus X\) such that every vertex from \(N_{3}\) has at least three neighbors in \(X\). Then \(|N_{3}|\leq 2\cdot|X|\)._

**Proposition 4.41**.: _Let \(G\) be properly embedded in \(\mathsf{Disc}(I)\), \(V(G)\cap I=X\), and \(k=|X|\). One can construct, in polynomial time, a graph \(\widehat{G}\) on \(\mathcal{O}(k^{6})\) vertices, properly embedded in \(\mathsf{Disc}(I)\), that is \(X\)-linkage-equivalent to \(G\) and with \(V(\widehat{G})\cap I=X\)._

Proof.: For each canonical division \((X_{1},X_{2})\) of \(X\) we compute a minimum-size \((X_{1},X_{2})\)-separator \(S(X_{1},X_{2})\). Clearly, \(|S(X_{1},X_{2})|\leq k\). When one of the sets \(X_{1},X_{2}\) is a singleton \(\{v\}\) then we can assume that \(S(X_{1},X_{2})=\{v\}\). Otherwise the separator \(S(X_{1},X_{2})\) can be represented by a simple curve \(N(X_{1},X_{2})\subseteq\mathsf{Disc}(I)\) connecting two points on \(I\) so that \(N(X_{1},X_{2})\cap G=S(X_{1},X_{2})\) and every curve within \(\mathsf{Disc}(I)\) connecting points from \(X_{1}\) and \(X_{2}\) must intersect \(N(X_{1},X_{2})\). Let \(N\) be the union of all the curves \(N(X_{1},X_{2})\) and \(S\) be the union of all the sets \(S(X_{1},X_{2})\). By the argument above, we have \(X\subseteq S\). There are at most \(k^{2}\) canonical divisions so \(|S|\leq k^{3}\). We say that \(F\) is a face of \((I,N)\) if it is a closure of an inclusion-maximal subset of \(\mathsf{Disc}(I)\setminus N\). Every face \(F\) is of the form \(F=\mathsf{Disc}(\partial F)\). For a face \(F\) let \(V_{F}=V(G)\cap\mathsf{int}(F)\), and \(X_{F}=V(G)\cap\partial F\); clearly \(X_{F}\subseteq S\).
Note that \(G\cap F=G[V_{F}\cup X_{F}]\) is properly embedded in \(F\). If \(V_{F}\neq\emptyset\), we replace the subgraph \(G\cap F\) with a graph \(H_{F}\) given by Lemma 4.39 applied to \(\partial F\) and \(X_{F}\). Then \(H_{F}\cap\partial F=X_{F}\) and \(|V(H_{F})|\leq|X_{F}|^{2}\). Observe that this modification does not affect \(G\cap N\). Let \(G^{\prime}\) be obtained from \(G\) by applying this modification to every face \(F\) of \((I,N)\).

**Claim 4.42**.: _For every canonical division \((X_{1},X_{2})\) of \(X\) it holds that \(\mu_{G^{\prime}}(X_{1},X_{2})\leq\mu_{G}(X_{1},X_{2})\)._

Proof.: By construction \(G^{\prime}\cap N(X_{1},X_{2})=G\cap N(X_{1},X_{2})=S(X_{1},X_{2})\). Therefore \(\mu_{G^{\prime}}(X_{1},X_{2})\leq|S(X_{1},X_{2})|=\mu_{G}(X_{1},X_{2})\).

**Claim 4.43**.: _For every canonical division \((X_{1},X_{2})\) of \(X\) it holds that \(\mu_{G^{\prime}}(X_{1},X_{2})\geq\mu_{G}(X_{1},X_{2})\)._

Proof.: Let \(\mathcal{P}\) be an \((X_{1},X_{2})\)-linkage in \(G\) of size \(\mu_{G}(X_{1},X_{2})\). We claim that there exists an \((X_{1},X_{2})\)-linkage \(\mathcal{P}^{\prime}\) in \(G^{\prime}\) of the same size. Let \(\mathcal{P}=\{P_{1},\ldots,P_{s}\}\). Let \(F\) be a face of \((I,N)\) and \(\mathcal{P}_{F}\) be the family of maximal subpaths of \(P_{1},\ldots,P_{s}\) that traverse \(F\). This is an \(X_{F}\)-linkage in \(G\cap F\). Let \(\mathcal{T}_{F}\) be the set of pairs representing the endpoints of \(\mathcal{P}_{F}\); then \(\mathcal{T}_{F}\) is cross-free with respect to \(\partial F\). As a consequence of Lemma 4.39, \(\mathcal{T}_{F}\) is realizable in \(H_{F}\). By applying this argument to every \(F\), we turn \(\mathcal{P}\) into a linkage \(\mathcal{P}^{\prime}\) in \(G^{\prime}\) connecting the same pairs of vertices in \(X\). Hence, \(\mu_{G^{\prime}}(X_{1},X_{2})\geq|\mathcal{P}^{\prime}|=|\mathcal{P}|=\mu_{G}(X_{1},X_{2})\).

With the two claims above we apply Lemma 4.38 to infer that \(G^{\prime}\) is \(X\)-linkage-equivalent to \(G\). Finally, we apply two reduction rules to bound the size of \(G^{\prime}\). When there exists a vertex set \(C\subseteq V(G^{\prime})\setminus S\) such that \(N_{G^{\prime}}(C)\subseteq S\) and (a) \(|N_{G^{\prime}}(C)|=1\), then remove \(C\); or (b) \(N_{G^{\prime}}(C)=\{u,v\}\), then replace \(C\) with the edge \(uv\) (when such an edge already exists, do nothing). Let \(\widehat{G}\) be the result of applying these rules to \(G^{\prime}\). These modifications preserve \(X\)-linkages and \(\widehat{G}\) remains properly embedded in \(\mathsf{Disc}(I)\). Moreover, for every face \(F\) of \((I,N)\) with a non-empty set \(V(\widehat{G})\cap\mathsf{int}(F)\) it holds that \(|X_{F}|\geq 3\).

**Claim 4.44**.: _The graph \(\widehat{G}\) has \(\mathcal{O}(k^{6})\) vertices._

Proof.: Consider a graph \(\widehat{G}^{c}\) obtained from \(\widehat{G}\) by contracting each connected component of \(\widehat{G}-S\) into a single vertex. Let \(B\) be the set of the vertices created due to contractions. Each vertex from \(B\) corresponds to some face \(F\) of \((I,N)\) with \(V(\widehat{G})\cap\mathsf{int}(F)\neq\emptyset\) and \(|X_{F}|\geq 3\); moreover, after the exhaustive application of the reduction rules, every vertex of \(B\) has at least three neighbors in \(S\). By Lemma 4.40 the size of \(B\) is at most \(2|S|\leq 2k^{3}\). Due to planarity, the number of edges in \(\widehat{G}^{c}\) is at most \(3\cdot(|B|+|S|)=\mathcal{O}(k^{3})\). Let \(\mathcal{F}\) be the set of faces \(F\) of \((I,N)\) with \(V(\widehat{G})\cap\mathsf{int}(F)\neq\emptyset\).
We have \(\sum_{F\in\mathcal{F}}|X_{F}|\leq|E(\widehat{G}^{c})|\). In turn, \(\sum_{F\in\mathcal{F}}|V(H_{F})|\leq\sum_{F\in\mathcal{F}}|X_{F}|^{2}\leq(\sum_{F\in\mathcal{F}}|X_{F}|)^{2}=\mathcal{O}(k^{6})\). This entails the claimed bound on the size of \(\widehat{G}\).

The construction of \(\widehat{G}\) can be easily performed in polynomial time. The proposition follows.

### Cutting the graph open

In this section, we finalize the construction of a polynomial kernel. After reducing the radial diameter, we can find a tree of moderate size spanning the set of terminals \(X\) in the radial graph. We shall _cut the graph open_ alongside this tree to reduce the problem to the case where all the terminals lie on a single face. The following transformation has been used in the algorithms for Steiner Tree [12, 84] and Vertex Multiway Cut [53] on planar graphs.

**Definition 4.45** (Cut alongside a tree).: _Let \(G\) be a plane graph and \(T\) be a tree in the radial graph of \(G\). The plane graph \(G^{T}\) is obtained from \(G\) as follows. Consider an Euler tour of \(T\) that traverses each edge twice in different directions, and respects the plane embedding of \(T\). We replace each vertex \(v\in V(T)\cap V(G)\) with \(\deg_{T}(v)\) many copies, reflecting its occurrences on the Euler tour, and distribute the copies in the plane, creating a new face incident to all the created copy-vertices (we refer to the set of these vertices as \(V_{T}\subseteq V(G^{T})\)). For \(v\in V(T)\cap V(G)\) let \(\Gamma_{T}(v)\subseteq V_{T}\) be the set of copies of \(v\) created during this process._

This construction is depicted in Figure 1 on page 1. Since the sum of vertex degrees in a tree is at most twice the number of its vertices, we obtain the following observation.

**Observation 4.46**.: _For a plane graph \(G\) and a tree \(T\) in the radial graph of \(G\), we have \(|V_{T}|\leq 2\cdot|V(T)|\)._

We show that for the sake of obtaining an equivalent instance of Planar Disjoint Paths, we can focus on the new instance obtained via the cutting operation.

**Lemma 4.47**.: _Let \(G_{1},G_{2}\) be plane graphs sharing a vertex set \(Y\). Next, let \(T_{1},T_{2}\) be trees in the radial graphs of \(G_{1},G_{2}\), respectively, such that \(Y=V(T_{1})\cap V(G_{1})=V(T_{2})\cap V(G_{2})\), \(V_{T_{1}}=V_{T_{2}}\) (we refer to this set as \(Y^{\prime}\)) and for each \(v\in Y\) it holds that \(\Gamma_{T_{1}}(v)=\Gamma_{T_{2}}(v)\). If \(G_{1}^{T_{1}}\) and \(G_{2}^{T_{2}}\) are \(Y^{\prime}\)-linkage-equivalent, then \(G_{1}\) and \(G_{2}\) are \(Y\)-linkage-equivalent._

Proof.: By symmetry, it suffices to prove that when \(\mathcal{T}\subseteq Y\times Y\) is realizable in \(G_{1}\), then it is also realizable in \(G_{2}\). Let \(\mathcal{P}_{1}\) be a \(\mathcal{T}\)-linkage in \(G_{1}\). We say that a path \(Q\) in \(G_{1}\) does not cross \(T_{1}\) if for every \(v\in Y\) and each pair of consecutive edges \(e_{1},e_{2}\in E_{G_{1}}(v)\) on \(Q\) there is a single \(v\)-copy \(v^{\prime}\in\Gamma_{T_{1}}(v)\) such that \(e_{1},e_{2}\in E_{G_{1}^{T_{1}}}(v^{\prime})\). Note that when a subpath \(Q\) of a \((Y,Y)\)-path does not cross \(T_{1}\) then \(Q\) corresponds to a unique path in \(G_{1}^{T_{1}}\). We partition each path \(P\in\mathcal{P}_{1}\) into maximal subpaths that do not cross \(T_{1}\); let \(\Gamma_{1}(P)\) denote the family of corresponding paths in \(G_{1}^{T_{1}}\). We define \(\mathcal{T}^{\prime}\subseteq Y^{\prime}\times Y^{\prime}\) as follows.
First, we insert into \(\mathcal{T}^{\prime}\) the endpoints of each path from \(\Gamma_{1}(P)\) for every \(P\in\mathcal{P}_{1}\). Next, when \(v^{\prime}\in V_{T_{1}}\) does not belong to any of the paths above, we insert the pair \((v^{\prime},v^{\prime})\) into \(\mathcal{T}^{\prime}\); we then say that \(v^{\prime}\) is _blocked_. Clearly, \(\mathcal{T}^{\prime}\) is realizable in \(G_{1}^{T_{1}}\) and, by the assumption, it is realizable in \(G_{2}^{T_{2}}\) as well.

Let \(\mathcal{P}_{2}^{\prime}\) be a \(\mathcal{T}^{\prime}\)-linkage in \(G_{2}^{T_{2}}\). We need to show that it can be merged back to a \(\mathcal{T}\)-linkage in \(G_{2}\). For \(P\in\mathcal{P}_{1}\) let \(\Gamma_{2}(P)\subseteq\mathcal{P}_{2}^{\prime}\) be the linkage aligned with \(\Gamma_{1}(P)\); these families are disjoint for distinct \(P\in\mathcal{P}_{1}\). The paths from \(\Gamma_{2}(P)\) can be merged into a path \(\widehat{P}\) in \(G_{2}\) with the same endpoints as \(P\); let \(\mathcal{P}_{2}\) be the union of such paths. We argue that different paths from \(\mathcal{P}_{2}\) are vertex-disjoint. There is a 1-1 mapping between the vertices from \(V(G_{2}^{T_{2}})\setminus Y^{\prime}\) and \(V(G_{2})\setminus Y\) so we only need to check that no vertices from \(Y\) collide. Consider \(v\in Y\); if \(v\) is not being visited by any path from \(\mathcal{P}_{1}\), then for each \(v^{\prime}\in\Gamma_{T_{2}}(v)\) the pair \((v^{\prime},v^{\prime})\) belongs to \(\mathcal{T}^{\prime}\) (i.e., \(v^{\prime}\) is blocked) and so \(v\) cannot belong to any path from \(\mathcal{P}_{2}\). Suppose that \(v\in V(P)\) for some \(P\in\mathcal{P}_{1}\); we consider two cases. If \(v\) is an endpoint of a maximal subpath of \(P\) that does not cross \(T_{1}\), then either \(v\) is an endpoint of \(P\) or there are two vertices \(v^{\prime},v^{\prime\prime}\in\Gamma_{T_{2}}(v)\) which are endpoints of paths in \(\Gamma_{2}(P)\) whereas all the remaining vertices from \(\Gamma_{T_{2}}(v)\) are blocked. Therefore \(v\) is being visited only by \(\widehat{P}\). In the last case, \(v\) is being visited by \(P\) but no vertex from \(\Gamma_{T_{2}}(v)\) is an endpoint of a path in \(\Gamma_{2}(P)\). Then there is exactly one vertex \(v^{\prime}\in\Gamma_{T_{2}}(v)\) and one path \(P^{\prime}\in\Gamma_{1}(P)\) that visits \(v^{\prime}\) while the remaining vertices from \(\Gamma_{T_{2}}(v)\) are blocked. So also in this case \(v\) is being visited only by \(\widehat{P}\). We infer that \(\mathcal{P}_{2}\) is a \(\mathcal{T}\)-linkage in \(G_{2}\) aligned with \(\mathcal{P}_{1}\); this concludes the proof.

We are ready to prove Theorem 2.1 which, in turn, implies Theorem 1.4.

**Theorem 2.1**.: _Let \(G\) be a planar graph of treewidth \(\mathsf{tw}\) and \(X\subseteq V(G)\) be of size \(k\). Then we can construct, in polynomial time, a planar graph \(G^{\prime}\) with \(X\subseteq V(G^{\prime})\) such that \(|V(G^{\prime})|=\mathcal{O}(k^{12}\mathsf{tw}^{12})\) and \(G^{\prime}\) is \(X\)-linkage-equivalent to \(G\)._

Proof.: Consider an arbitrary plane embedding of \(G\). While the radial diameter of \(G\) is larger than \(7(k+1)(\mathsf{tw}+6)^{2}\), we apply Proposition 4.37 to find an irrelevant edge and reduce the size of \(G\) while maintaining \(X\)-linkage-equivalency. By applying this reduction exhaustively, we can assume that \(G\) has radial diameter \(d=\mathcal{O}(k\cdot\mathsf{tw}^{2})\).
We greedily construct a Steiner tree \(T\) of \(X\) in the radial graph of \(G\): we order the vertices \(x_{1},x_{2},\ldots,x_{k}\) of \(X\) arbitrarily, start from the tree \(T_{1}\) consisting of the single vertex \(x_{1}\), and for \(i=2,\ldots,k\) we construct a tree \(T_{i}\) by augmenting \(T_{i-1}\) with a shortest radial path between \(x_{i}\) and a vertex of \(T_{i-1}\). In each step we augment the tree with a path of length at most \(2d=\mathcal{O}(k\cdot\mathsf{tw}^{2})\), so, eventually, \(|V(T)|=\mathcal{O}(k^{2}\cdot\mathsf{tw}^{2})\). We cut \(G\) open alongside \(T\) obtaining a graph \(G^{T}\). It can be properly embedded in \(\mathsf{Disc}(I)\) for some noose \(I\) in such a way that \(G^{T}\cap I=V_{T}\) (this requires flipping the embedding to turn the newly created face into the outer face). Observation 4.46 implies that \(|V_{T}|=\mathcal{O}(k^{2}\cdot\mathsf{tw}^{2})\). We apply Proposition 4.41 to replace \(G^{T}\) with a graph \(H\) on \(\mathcal{O}(|V_{T}|^{6})=\mathcal{O}(k^{12}\cdot\mathsf{tw}^{12})\) vertices, properly embedded in \(\mathsf{Disc}(I)\), such that \(V(H)\cap I=V_{T}\) and \(H\) is \(V_{T}\)-linkage-equivalent to \(G^{T}\). Then we merge the split vertices back together: for each \(v\in V(T)\cap V(G)\) we identify the vertices from \(\Gamma_{T}(v)\) in \(H\). The graph \(G^{\prime}\) obtained this way remains planar. Since \(X\subseteq V(T)\cap V(G)\), Lemma 4.47 implies that \(G^{\prime}\) is \(X\)-linkage-equivalent to \(G\). The theorem follows.

## 5 Kernelization hardness for parameter \(k\)

In this section, we prove Theorems 1.1, 1.2, and 1.3. First, we introduce the intermediate problem Non-crossing Multicommodity Flow and present a reduction to it from Set Cover. In Section 5.2 we construct the most internal gadget, allowing us to encode a large family of sets using homotopy classes. We use it as a building block to design a subset gadget in Section 5.3. In order to introduce the ideas gradually, we first give a simplified construction working under an overly-optimistic assumption (Section 5.3.1), followed by a proper one (Section 5.3.2), reflecting the exposition in Section 2. The reduction to Non-crossing Multicommodity Flow is finalized in Section 5.4. Afterwards, we explain how to get rid of the weighted requests (Section 5.5) and to translate the hardness to Planar (Edge-)Disjoint Paths (Section 5.6). Note that several definitions in this section differ from their simplified versions in Section 2.

### Non-crossing multicommodity flow

We introduce the concept of a non-crossing flow and establish some notation to work with it.

**Definition 5.1**.: _Let \(G\) be a plane multigraph. Consider a vertex \(v\in V(G)\) and two pairs of edges \((e_{1},f_{1})\) and \((e_{2},f_{2})\), such that \(\{e_{1},f_{1},e_{2},f_{2}\}\subseteq E_{G}(v)\). We say that these pairs cross if \(e_{1},e_{2},f_{1},f_{2}\) appear in this, or opposite, order in the cyclic ordering \(\pi_{G}(v)\) of \(E_{G}(v)\). Two edge-disjoint walks \(W_{1},W_{2}\) in \(G\) are non-crossing if there are no pairs of consecutive edges \((e_{1},f_{1})\) in \(W_{1}\) and \((e_{2},f_{2})\) in \(W_{2}\) that cross._

For two walks \(W_{1}=(v_{1},e_{1},\ldots,e_{p},v_{p+1})\), \(W_{2}=(u_{1},f_{1},\ldots,f_{q},u_{q+1})\) with \(v_{p+1}=u_{1}\), we define their concatenation \(W_{1}+W_{2}\) as a walk given by \((v_{1},e_{1},\ldots,e_{p},u_{1},f_{1},\ldots,f_{q},u_{q+1})\).

**Definition 5.2**.: _Let \(G\) be a plane multigraph and \(\mathcal{T}\) be a multiset of triples from \(V(G)\times V(G)\times\mathbb{N}\).
A family \(\mathcal{P}\) of edge-disjoint walks in \(G\) is a \(\mathcal{T}\)-flow if for every triple \((s_{i},t_{i},d_{i})\in\mathcal{T}\) there exists a subfamily \(\mathcal{P}_{i}\subseteq\mathcal{P}\) of \(d_{i}\) many \((s_{i},t_{i})\)-walks, and these subfamilies are disjoint for distinct triples from \(\mathcal{T}\)._

_A \(\mathcal{T}\)-flow \(\mathcal{P}\) is called non-crossing if (a) each pair of walks in \(\mathcal{P}\) is non-crossing, and (b) for any \((s_{i},t_{i},d_{i})\in\mathcal{T}\), any two walks \(W_{1},W_{2}\in\mathcal{P}_{i}\) and a walk \(W^{\prime}\in\mathcal{P}\setminus\mathcal{P}_{i}\), the walks \(W_{1}+W_{2}\) and \(W^{\prime}\) are non-crossing._

The last condition enforces that all the walks from \(\mathcal{P}_{i}\) can "touch" each other at vertex \(s_{i}\) (or \(t_{i}\)) without the need to cross the other walks. One can imagine each vertex to have a positive area so a non-crossing flow can be depicted as a family of disjoint curves on the plane. By connecting the images of the endpoints of paths in \(\mathcal{P}_{i}\) we can draw \(E(\mathcal{P}_{1}),\ldots,E(\mathcal{P}_{k})\) as connected pairwise-disjoint subsets of the plane (see Figure 14).

Figure 14: Left: An example of a non-crossing flow. The vertex \(b\) is a terminal for the three blue walks. Because of the condition (b) in Definition 5.2, these walks can be connected within the image of \(b\) without crossing the other walks. The edges \(ab\) and the one incident to \(c\) (solid gray lines) have multiplicities \(2\) so the walks are pairwise edge-disjoint. Right: After contracting \(b,c,d\) into a single vertex, we still obtain a non-crossing flow. Note that the edges \(eb\) and \(ed\) now become parallel.

This interpretation leads to the following observation.

**Observation 5.3**.: _Let \(G\) be a plane multigraph, \(\mathcal{T}\) be a multiset of triples from \(V(G)\times V(G)\times\mathbb{N}\), and \(D\subseteq\mathbb{R}^{2}\) be a topological disc such that \(G[V(G)\cap D]\) is connected. Next, let \(G^{\prime}\) be obtained from \(G\) by contracting the set \(V(G)\cap D\) to a single vertex \(s\) and \(\mathcal{T}^{\prime}\) be obtained from \(\mathcal{T}\) by replacing each occurrence of a vertex from \(V(G)\cap D\) with \(s\). Suppose that there exists a non-crossing \(\mathcal{T}\)-flow in \(G\). Then there exists a non-crossing \(\mathcal{T}^{\prime}\)-flow in \(G^{\prime}\)._

Observe that this property would not hold if we replaced "walks" in the definition of a non-crossing flow with "paths" because a path might enter and exit \(V(G)\cap D\) multiple times and all these visits may be necessary even after contraction to avoid crossings. This is the main reason why we prefer to work with walks. Also note that the opposite implication in Observation 5.3 does not necessarily hold even if \(D\) contains no vertices from \(\mathcal{T}\).

Non-crossing Multicommodity Flow **Parameter:** \(k\)
**Input:** Plane multigraph \(G\), set \(\mathcal{T}\) of \(k\) vertex-disjoint requests \((s_{1},t_{1},d_{1}),\ldots,(s_{k},t_{k},d_{k})\in V(G)\times V(G)\times\mathbb{N}\).
**Task:** Determine whether there exists a non-crossing \(\mathcal{T}\)-flow in \(G\).

We will refer to each triple \((s_{i},t_{i},d_{i})\in\mathcal{T}\) as a _request_ and to the integer \(d_{i}\) as the _demand_ of this request. All vertices occurring in \(\mathcal{T}\) are referred to as _terminals_.
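The crossing test from Definition 5.1 is purely combinatorial: two pairs of edges cross precisely when their elements interleave in the cyclic ordering around \(v\). The following Python sketch makes this concrete; it is an illustration only, and the function name and the list representation of \(\pi_{G}(v)\) are our own assumptions.

```python
def pairs_cross(rotation, pair1, pair2):
    """Check whether two edge pairs at a vertex cross (cf. Definition 5.1).

    rotation -- the cyclic ordering pi_G(v) of E_G(v), given as a list;
                the four edges are assumed to be distinct entries of it.
    pair1, pair2 -- the pairs (e1, f1) and (e2, f2).

    The pairs cross iff pair2 separates pair1 on the cycle, i.e. exactly
    one of e2, f2 lies on the arc strictly between e1 and f1.
    """
    pos = {e: i for i, e in enumerate(rotation)}
    a, b = sorted((pos[pair1[0]], pos[pair1[1]]))
    inside = lambda e: a < pos[e] < b
    return inside(pair2[0]) != inside(pair2[1])


# e1, e2, f1, f2 interleave around v, so the pairs cross:
assert pairs_cross(["e1", "e2", "f1", "f2"], ("e1", "f1"), ("e2", "f2"))
# Here the pairs are nested rather than interleaved:
assert not pairs_cross(["e1", "f1", "e2", "f2"], ("e1", "f1"), ("e2", "f2"))
```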
An instance of Non-crossing Multicommodity Flow is called _unitary_ if each demand \(d_{i}\) equals \(1\) and every terminal has degree \(1\).

### Vector-containment gadget

We begin describing the reduction from the innermost gadget. In order to provide an interface that will be consistent with the other gadgets, we need to impose a technical condition on the desired flow. Whereas in the problem definition we require that each vertex may occur in at most one request, here we will consider families \(\mathcal{T}\) that violate this condition. Therefore, we need to specify in which order the walks enter a terminal.

**Definition 5.4**.: _Let \(\mathcal{P}\) be a non-crossing flow in a plane multigraph \(G\) whose outer face is confined by a simple cycle, \(U\subseteq V(G)\), and \(v\in V(G)\setminus U\) lie on the outer face of \(G\). Next, let \(\mathcal{P}_{v,U}\subseteq\mathcal{P}\) be the family of walks in \(\mathcal{P}\) with one endpoint at \(v\) and the other one at a vertex in \(U\). We define \(\pi\) as the clockwise order on \(\mathcal{P}_{v,U}\) given by the ordering of edges incident to \(v\), starting from the one next to the outer face._

_We say that \(v\) sees vertices from \(U\) in the order \(u_{1},u_{2},\ldots,u_{k}\) (with respect to flow \(\mathcal{P}\)) if (a) for each \(i\in[k]\) the occurrences of \((v,u_{i})\)-walks in \(\mathcal{P}_{v,U}\) form a continuous interval with respect to \(\pi\), and (b) the order of these intervals matches the order \(u_{1},u_{2},\ldots,u_{k}\)._

A visualization of this property is given in Figure 15.

**Definition 5.5**.: _Consider \(k\in\mathbb{N}\), \(\gamma\colon\{0,1\}^{k}\to\mathbb{N}\), and \(Z\subseteq\{0,1\}^{k}\). A plane multigraph \(G\) is a \((k,\gamma,Z)\)-Vector Containment Gadget if the following conditions hold._

1. _\(G\) has distinguished vertices \(z_{1},z_{2},\ldots,z_{k}\) and \(w_{0},w_{1}\), where the last two lie on the outer face._
2. _Let \(\mathbf{b}\in\{0,1\}^{k}\), \(d\in\mathbb{N}\), and \(\mathcal{T}_{\mathbf{b},d}\) be the family of the following requests:_
    * _\((w_{0},z_{i},2^{k})\) for each \(i\in[k]\) with \(\mathbf{b}_{i}=0\),_
    * _\((w_{1},z_{i},2^{k})\) for each \(i\in[k]\) with \(\mathbf{b}_{i}=1\),_
    * _\(d\) copies of the request \((w_{0},w_{1},1)\)._

    _Then the following conditions are equivalent:_
    * _\(d\leq\gamma(\mathbf{b})+1_{[\mathbf{b}\in Z]}\),_
    * _there exists a \(\mathcal{T}_{\mathbf{b},d}\)-flow in \(G\),_
    * _there exists a non-crossing \(\mathcal{T}_{\mathbf{b},d}\)-flow in \(G\), in which \(w_{0}\) sees \(\{z_{i}\mid b_{i}=0\}\) in the order of decreasing \(i\) and \(w_{1}\) sees \(\{z_{i}\mid b_{i}=1\}\) in the order of increasing \(i\)._

For the existence of a \(\mathcal{T}_{\mathbf{b},d}\)-flow, using \(d\) copies of the request \((w_{0},w_{1},1)\) is equivalent to using a single request \((w_{0},w_{1},d)\). This distinction, however, does matter for the existence of a non-crossing \(\mathcal{T}_{\mathbf{b},d}\)-flow. The reason for considering \(d\) copies of \((w_{0},w_{1},1)\) instead of just \((w_{0},w_{1},d)\) comes from condition (b) in Definition 5.2: we want to allow the \((w_{0},w_{1})\)-walks to be arbitrarily intertwined with the other walks at \(w_{0}\) or \(w_{1}\) (see Figure 15). On the other hand, we require those other walks to be well-structured. Basically, the vector \(\mathbf{b}\) specifies for each terminal \(z_{i}\) whether it should send the flow to the left \((w_{0})\) or to the right \((w_{1})\).
In turn, the condition \(\mathbf{b}\in Z\) governs how many \((w_{0},w_{1})\)-walks can be allocated on top of the walks above. The rest of Section 5.2 is devoted to a construction of a \((k,\gamma,Z)\)-Vector Containment Gadget of size \(2^{\mathcal{O}(k)}\) for a certain function \(\gamma\).

Figure 15: A conceptual sketch of a \((4,\gamma,Z)\)-Vector Containment Gadget. The flow on the picture corresponds to \(\mathbf{b}=(1010)\): the vertex \(z_{i}\) sends the blue flow to \(w_{0}\) when \(\mathbf{b}_{i}=0\) or to \(w_{1}\) when \(\mathbf{b}_{i}=1\). The vertex \(w_{0}\) sees vertices \(z_{4},z_{2}\) in this order while \(w_{1}\) sees vertices \(z_{1},z_{3}\) in this order. The amount of the available \((w_{0},w_{1})\)-flow (green) depends on \(\gamma(\mathbf{b})\) and on whether \(\mathbf{b}\in Z\). Observe that each path in the flow must cross the red dashed curve whose homotopy class (with respect to \(z_{1},z_{2},z_{3},z_{4}\)) agrees with the vector \(\mathbf{b}\). We will rely on this observation when constructing the gadget.

#### 5.2.1 Homotopy classes and shortest paths

Instead of constructing a vector-containment gadget directly, we begin from a prototype of its dual. Its most important property is the uniqueness of a shortest \((s,t)\)-path in each homotopy class (to be defined later).

**Operations on bit vectors.** We number the coordinates in a size-\(k\) vector from \(1\) to \(k\). Consider a binary vector \(\mathbf{b}=(b_{1},b_{2},\ldots,b_{k})\). When referring to indices or performing arithmetic, we implicitly use the big-endian binary decoding \(\{0,1\}^{k}\to[0,2^{k})\) given as \(\sum_{i=1}^{k}b_{i}\cdot 2^{k-i}\). For \(i\in[k]\) let \(\mathbf{b}^{[i]}\) denote a binary string obtained from \(\mathbf{b}\) by reversing its prefix of length \(i\). Note that for every \(i\in[k]\) the mapping \(\mathbf{b}\to\mathbf{b}^{[i]}\) is a bijection; for \(i=1\) it is the identity.

**Construction of the graph \(H_{k}\).** We define a plane graph \(H_{k}\) as follows. We draw \(2k\) vertical lines \(Q_{1},Q^{\prime}_{1},Q_{2},Q^{\prime}_{2},\ldots,Q_{k},Q^{\prime}_{k}\), in this order from left to right, and mark \(2^{k}\) vertices on each of them. The vertices on \(Q_{i}\) are referred to as \(v_{i,j}\), where \(j\in[0,2^{k})\), counting from the top to the bottom. Similarly, vertices on \(Q^{\prime}_{i}\) are referred to as \(v^{\prime}_{i,j}\). We add two additional vertices: \(s\) to the left of \(Q_{1}\), and \(t\) to the right of \(Q^{\prime}_{k}\). For each \(\mathbf{b}\in\{0,1\}^{k}\) we draw a curve \(P_{\mathbf{b}}\) which starts at \(s\), crosses all the lines \(Q_{1},Q^{\prime}_{1},Q_{2},Q^{\prime}_{2},\ldots,Q_{k},Q^{\prime}_{k}\), in this order, and ends at \(t\). The curve \(P_{\mathbf{b}}\) crosses the line \(Q_{i}\) (resp. \(Q^{\prime}_{i}\)) at the vertex \(v_{i,\mathbf{b}^{[i]}}\) (resp. \(v^{\prime}_{i,\mathbf{b}^{[i]}}\)). Each segment of \(P_{\mathbf{b}}\) between two consecutive vertical lines is a straight line. The plane graph \(H_{k}\) is obtained from this drawing by turning every crossing of curves \(P_{\mathbf{b}}\), \(P_{\mathbf{b}^{\prime}}\) into a vertex. See Figure 16 for an illustration. We retain the names \(P_{\mathbf{b}},Q_{i},Q^{\prime}_{i}\) to denote the corresponding paths in \(H_{k}\).

**Observation 5.6**.: _Let \(\mathbf{b},\mathbf{v}\in\{0,1\}^{k}\) be distinct and \(j\in[k-1]\).
The paths \(P_{\mathbf{b}}\) and \(P_{\mathbf{v}}\) in \(H_{k}\) intersect between \(Q^{\prime}_{j}\) and \(Q_{j+1}\) if and only if \(\mathbf{b}^{[j]}-\mathbf{v}^{[j]}\) and \(\mathbf{b}^{[j+1]}-\mathbf{v}^{[j+1]}\) have different signs._

We will study the geometric relations between \(P_{\mathbf{b}}\) and \(P_{\mathbf{v}}\) through bit substrings in \(\mathbf{b}\) and \(\mathbf{v}\).

**Lemma 5.7**.: _Let \(\mathbf{b},\mathbf{v}\in\{0,1\}^{k}\) be distinct and \(j\in[k-1]\). If \(\mathbf{b}^{[j]}<\mathbf{v}^{[j]}\) and \(\mathbf{b}^{[j+1]}>\mathbf{v}^{[j+1]}\) then \(\mathbf{b}_{j+1}=1\) and \(\mathbf{v}_{j+1}=0\). Furthermore, there exists \(i\in[j]\) for which \(\mathbf{b}_{i}=0\), \(\mathbf{v}_{i}=1\), and \(\mathbf{b}_{h}=\mathbf{v}_{h}\) for \(i<h<j+1\)._

Proof.: First, targeting a contradiction, suppose that \(\mathbf{b}_{j+1}=\mathbf{v}_{j+1}\). Observe that \(\mathbf{b}^{[j]}\) can be obtained from \(\mathbf{b}^{[j+1]}\) by moving the first bit to the position \(j+1\), and likewise for \(\mathbf{v}^{[j]},\mathbf{v}^{[j+1]}\). When \(\mathbf{b}_{j+1}=\mathbf{v}_{j+1}\) then this operation does not affect the "\(<\)" relation between the encoded integers, so this implies \(\mathbf{b}^{[j]}>\mathbf{v}^{[j]}\), contrary to the assumption. Hence \(\mathbf{b}_{j+1}\neq\mathbf{v}_{j+1}\) and it must be \(\mathbf{b}_{j+1}>\mathbf{v}_{j+1}\). Now, suppose that \(\mathbf{b},\mathbf{v}\) coincide on the first \(j\) coordinates. Since \(\mathbf{b}_{j+1}>\mathbf{v}_{j+1}\), this implies that \(\mathbf{b}^{[j]}>\mathbf{v}^{[j]}\), a contradiction. As a consequence, there is an index in \([j]\) at which \(\mathbf{b},\mathbf{v}\) differ; let \(i\) denote the last such index. Then, the choice of \(i\) implies \(\mathbf{b}_{h}=\mathbf{v}_{h}\) for \(i<h<j+1\). Consider the first coordinate at which \(\mathbf{b}^{[j]},\mathbf{v}^{[j]}\) differ: for the first vector this bit equals \(\mathbf{b}_{i}\) and for the second one it is \(\mathbf{v}_{i}\). Since \(\mathbf{v}^{[j]}>\mathbf{b}^{[j]}\), this implies \(\mathbf{v}_{i}>\mathbf{b}_{i}\).

Figure 16: The graph \(H_{4}\). The vertical lines are \(Q_{1},Q^{\prime}_{1},\ldots,Q_{4},Q^{\prime}_{4}\), counting from left to right. For each \(i\in[4]\) the vertices \(v_{i,j}\) and \(v^{\prime}_{i,j}\) are drawn with black boundary when \(j<2^{3}\) and with brown boundary when \(j\geq 2^{3}\). For \(\mathbf{b}=(1001)\) the path \(P_{\mathbf{b}}\) is highlighted in orange and for \(\mathbf{v}=(1110)\) the path \(P_{\mathbf{v}}\) is highlighted in green. Observe that the positions of the first vertices on \(P_{\mathbf{b}},P_{\mathbf{v}}\) correspond to the numbers encoded by \(\mathbf{b},\mathbf{v}\) in binary. The violet path is an example of a different \(\mathbf{v}\)-homotopic path.

We will refer to the structure observed in the last lemma as a _crossing pair_.

**Definition 5.8**.: _Consider two vectors \(\mathbf{b},\mathbf{v}\in\{0,1\}^{k}\). We say that \((i,j)\in[k]^{2}\) is a crossing pair for \((\mathbf{b},\mathbf{v})\) if (a) \(i<j\), (b) the pair \(((\mathbf{b}_{i},\mathbf{v}_{i}),(\mathbf{b}_{j},\mathbf{v}_{j}))\) equals either \(((0,1),(1,0))\) or \(((1,0),(0,1))\) and (c) \(\mathbf{b}_{h}=\mathbf{v}_{h}\) for each \(i<h<j\). We define \(C(\mathbf{b},\mathbf{v})\) to be the set of crossing pairs for \((\mathbf{b},\mathbf{v})\)._

As an example, consider \(\mathbf{b}=(1001101)\) and \(\mathbf{v}=(0101000)\). They differ at positions \(1,2,5,7\) and \(C(\mathbf{b},\mathbf{v})=\{(1,2),(2,5)\}\).
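Since a crossing pair consists of two consecutive differing positions that carry opposite bits of \(\mathbf{b}\), the set \(C(\mathbf{b},\mathbf{v})\) is straightforward to enumerate. Below is a minimal Python sketch (illustrative only; the function name is ours) that recovers the example above.

```python
def crossing_pairs(b, v):
    """Enumerate C(b, v) from Definition 5.8 for 0/1 tuples b, v of equal length.

    A pair (i, j), i < j, qualifies iff b and v differ at both i and j with
    opposite patterns -- equivalently b[i] != b[j] -- and agree strictly
    between i and j, i.e. i and j are consecutive differing positions.
    Positions are numbered from 1, as in the text.
    """
    diff = [i for i in range(len(b)) if b[i] != v[i]]
    return {(i + 1, j + 1) for i, j in zip(diff, diff[1:]) if b[i] != b[j]}


# The example from the text:
assert crossing_pairs((1, 0, 0, 1, 1, 0, 1),
                      (0, 1, 0, 1, 0, 0, 0)) == {(1, 2), (2, 5)}
```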
**Observation 5.9**.: _When \((i_{1},j_{1})\) and \((i_{2},j_{2})\) are different crossing pairs for some \((\mathbf{b},\mathbf{v})\) then \(j_{1}\leq i_{2}\) or \(j_{2}\leq i_{1}\)._

We would like to employ some notion of a homotopy class for the \((s,t)\)-paths. However, instead of working with the topological notion of homotopy, we introduce a simpler definition, tailored just for our analysis of the graph \(H_{k}\). Let \(E^{i}\subseteq E(H_{k})\) denote the set of edges between \(V(Q_{i})\) and \(V(Q^{\prime}_{i})\). Each edge from \(E^{i}\) is of the form \(v_{i,j}v^{\prime}_{i,j}\) for \(j\in[0,2^{k})\). For \(b\in\{0,1\}\) we define \(\mathsf{Half}(k,b)\subseteq[0,2^{k})\) to be \([0,2^{k-1})\) when \(b=0\) and \([2^{k-1},2^{k})\) when \(b=1\). Next, we define \(E^{i}_{b}\subseteq E^{i}\) as \(\{v_{i,j}v^{\prime}_{i,j}\mid j\in\mathsf{Half}(k,b)\}\). The edges from \(E^{i}_{0}\) have black circles as endpoints in Figure 16 whereas the ones from \(E^{i}_{1}\) have brown circles as endpoints.

**Definition 5.10**.: _For \(\mathbf{b}\in\{0,1\}^{k}\) we say that an \((s,t)\)-path \(P\) in \(H_{k}\) is \(\mathbf{b}\)-homotopic if \(E(P)\cap E^{i}\subseteq E^{i}_{\mathbf{b}_{i}}\) for each \(i\in[k]\)._

A canonical example of a \(\mathbf{b}\)-homotopic path is \(P_{\mathbf{b}}\). Note that it might be the case that some \((s,t)\)-path is not \(\mathbf{b}\)-homotopic for any \(\mathbf{b}\in\{0,1\}^{k}\) according to our definition. We can now express some geometric properties of \((s,t)\)-paths in terms of crossing pairs.

**Lemma 5.11**.: _Consider two vectors \(\mathbf{b},\mathbf{v}\in\{0,1\}^{k}\). Let \(R\) be a \(\mathbf{v}\)-homotopic \((s,t)\)-path in \(H_{k}\). The graph given by the intersection \(R\cap P_{\mathbf{b}}\) contains at least \(|C(\mathbf{b},\mathbf{v})|\) connected components disjoint from \(s\) and \(t\)._

Proof.: Let \((i,j)\in C(\mathbf{b},\mathbf{v})\). The path \(R\) must contain a subpath \(R^{i,j}\) that starts at \(Q^{\prime}_{i}\), ends at \(Q_{j}\), and is internally contained between \(Q^{\prime}_{i}\) and \(Q_{j}\). Similarly, let \(P^{i,j}_{\mathbf{b}}\) be the subpath of \(P_{\mathbf{b}}\) between \(Q^{\prime}_{i}\) and \(Q_{j}\). By the definition of a crossing pair, the endpoints of \(R^{i,j}\), \(P^{i,j}_{\mathbf{b}}\) are all distinct and they lie in different orders on \(Q^{\prime}_{i}\) and \(Q_{j}\); hence these paths must intersect between \(Q^{\prime}_{i}\) and \(Q_{j}\), exclusively. From Observation 5.9 we obtain that the paths \(R^{i,j}\) constructed for distinct crossing pairs are disjoint. Therefore, each \((i,j)\in C(\mathbf{b},\mathbf{v})\) contributes at least one connected component of \(R\cap P_{\mathbf{b}}\) that is disjoint from \(s,t\).

For the special case \(R=P_{\mathbf{v}}\) we can make a stronger observation.

**Lemma 5.12**.: _Consider distinct vectors \(\mathbf{b},\mathbf{v}\in\{0,1\}^{k}\). The number of internal vertices shared by \(P_{\mathbf{b}}\) and \(P_{\mathbf{v}}\) equals \(|C(\mathbf{b},\mathbf{v})|\)._

Proof.: Clearly, \(P_{\mathbf{b}}\) and \(P_{\mathbf{v}}\) cannot intersect between \(Q_{i}\) and \(Q^{\prime}_{i}\) for any \(i\in[k]\). Let \(J\subseteq[k-1]\) be the set of indices \(j\) for which \(\mathbf{b}^{[j]}-\mathbf{v}^{[j]}\) and \(\mathbf{b}^{[j+1]}-\mathbf{v}^{[j+1]}\) have different signs. By Observation 5.6, the number of common internal vertices in \(P_{\mathbf{b}}\) and \(P_{\mathbf{v}}\) equals \(|J|\). Let \(j\in J\) and assume w.l.o.g.
that \(\mathbf{b}^{[j+1]}>\mathbf{v}^{[j+1]}\) and \(\mathbf{b}^{[j]}<\mathbf{v}^{[j]}\). By Lemma 5.7 we have \(\mathbf{b}_{j+1}=1\) and \(\mathbf{v}_{j+1}=0\). Furthermore, \(\mathbf{b}_{i}=0\) and \(\mathbf{v}_{i}=1\), where \(i\) is the last index in \([j]\) at which \(\mathbf{b},\mathbf{v}\) differ. Hence \((i,j+1)\) forms a crossing pair for \((\mathbf{b},\mathbf{v})\). The crossing pairs obtained for different \(j_{1},j_{2}\in J\) must be different, which implies \(|J|\leq|C(\mathbf{b},\mathbf{v})|\). The equality follows from Lemma 5.11, as the number of shared internal vertices is no less than the number of components in \(P_{\mathbf{b}}\cap P_{\mathbf{v}}\) disjoint from \(s\) and \(t\).

As a next step, we compute the length of the path \(P_{\mathbf{b}}\). It can be expressed with a very convenient formula which will come in useful later.

**Definition 5.13**.: _We define function \(\gamma_{k}\colon\{0,1\}^{k}\to\mathbb{N}\) as follows._

\[\gamma_{k}(b_{1}b_{2}\dots b_{k})=\sum_{1\leq j<i\leq k}1_{[b_{i}\neq b_{j}]}\cdot 2^{k-i+j-1}.\]

When the parameter \(k\) is clear from the context, we abbreviate \(\gamma=\gamma_{k}\).

**Lemma 5.14**.: _For each \(\mathbf{b}\in\{0,1\}^{k}\) the length of the path \(P_{\mathbf{b}}\) in \(H_{k}\) equals \(2k+1+\gamma(\mathbf{b})\)._

Proof.: The length of \(P_{\mathbf{b}}\) equals the total number of its crossings with \(P_{\mathbf{v}}\) for \(\mathbf{v}\neq\mathbf{b}\) plus the number of crossings with \(Q_{i},Q^{\prime}_{i}\) (\(2k\) in total) plus one. By Lemma 5.12 it suffices to show that the sum of \(|C(\mathbf{b},\mathbf{v})|\) over \(\mathbf{v}\neq\mathbf{b}\) equals \(\gamma(\mathbf{b})\). To this end, we change the order of summation and, for each pair \(1\leq i<j\leq k\), we count the number of vectors \(\mathbf{v}\neq\mathbf{b}\) for which \((i,j)\in C(\mathbf{b},\mathbf{v})\).

So, consider some pair \(1\leq i<j\leq k\). First, note that if \((i,j)\in C(\mathbf{b},\mathbf{v})\) for some \(\mathbf{v}\), then \(\mathbf{b}_{i}\neq\mathbf{b}_{j}\). In this case, \((i,j)\in C(\mathbf{b},\mathbf{v})\) if and only if \(\mathbf{v}_{i}\neq\mathbf{b}_{i}\), \(\mathbf{v}_{j}\neq\mathbf{b}_{j}\), and \(\mathbf{b},\mathbf{v}\) coincide between \(i\) and \(j\). These conditions fix exactly \((j-i+1)\) coordinates of \(\mathbf{v}\) and on the remaining \(k-(j-i+1)\) coordinates \(\mathbf{v}\) may be arbitrary. Therefore there are exactly \(2^{k-j+i-1}\) vectors \(\mathbf{v}\) for which \((i,j)\in C(\mathbf{b},\mathbf{v})\). This agrees with the definition of the function \(\gamma\).

Finally, we prove the most crucial property of the graph \(H_{k}\): the uniqueness of a shortest \((s,t)\)-path in each homotopy class.

**Lemma 5.15**.: _For each \(\mathbf{b}\in\{0,1\}^{k}\) the path \(P_{\mathbf{b}}\) is the unique shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\)._

Proof.: Let \(R\neq P_{\mathbf{b}}\) be a \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\). We are going to show that \(|R|>|P_{\mathbf{b}}|\). First observe that \(R\) cannot be the path \(P_{\mathbf{v}}\) for any \(\mathbf{v}\neq\mathbf{b}\) as then it would not be \(\mathbf{b}\)-homotopic. Let \(\mathcal{P}\) be the family of all paths of the form \(P_{\mathbf{v}}\), \(Q_{i}\), \(Q^{\prime}_{i}\). Every vertex \(v\in V(H_{k})\setminus\{s,t\}\) is an intersection of some two paths \(P_{1},P_{2}\in\mathcal{P}\).
Finally, we prove the most crucial property of the graph \(H_{k}\): the uniqueness of a shortest \((s,t)\)-path in each homotopy class.

**Lemma 5.15**.: _For each \(\mathbf{b}\in\{0,1\}^{k}\) the path \(P_{\mathbf{b}}\) is the unique shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\)._

Proof.: Let \(R\neq P_{\mathbf{b}}\) be a \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\). We are going to show that \(|R|>|P_{\mathbf{b}}|\). First observe that \(R\) cannot be the path \(P_{\mathbf{v}}\) for any \(\mathbf{v}\neq\mathbf{b}\) as then it would not be \(\mathbf{b}\)-homotopic. Let \(\mathcal{P}\) be the family of all paths of the form \(P_{\mathbf{v}}\), \(Q_{i}\), \(Q^{\prime}_{i}\). Every vertex \(v\in V(H_{k})\setminus\{s,t\}\) is an intersection of some two paths \(P_{1},P_{2}\in\mathcal{P}\).

For \(P\in\mathcal{P}\) let \(\Gamma_{R}(P)\) be the set of internal vertices \(v\) in \(R\) such that \(v\in V(P)\) but the predecessor of \(v\) on \(R\) does not belong to \(V(P)\). Observe that when \(v\) is an internal vertex of \(R\) and \(v\) is an intersection of paths \(P_{1},P_{2}\in\mathcal{P}\) then the predecessor of \(v\) is either \(s\) (which belongs to exactly one of \(V(P_{1})\), \(V(P_{2})\)) or it is an intersection of paths \(P^{\prime}_{1},P^{\prime}_{2}\in\mathcal{P}\) with \(|\{P_{1},P_{2}\}\cap\{P^{\prime}_{1},P^{\prime}_{2}\}|=1\). Consequently, the sets \(\Gamma_{R}(P)\) are pairwise disjoint and every internal vertex of \(R\) belongs to some set \(\Gamma_{R}(P)\). We infer that the length of \(R\) equals \(\sum_{P\in\mathcal{P}}|\Gamma_{R}(P)|+1\).

When \(P=P_{\mathbf{v}}\) for some \(\mathbf{v}\in\{0,1\}^{k}\) then \(|\Gamma_{R}(P_{\mathbf{v}})|\) is lower bounded by the number of connected components of \(R\cap P_{\mathbf{v}}\) disjoint from \(s,t\), which in turn is at least \(|C(\mathbf{b},\mathbf{v})|\) due to Lemma 5.11. So \(|\Gamma_{R}(P_{\mathbf{v}})|\geq|C(\mathbf{b},\mathbf{v})|\). When \(P=Q_{i}\) or \(P=Q_{i}^{\prime}\) then \(|\Gamma_{R}(P)|\geq 1\). If in both cases we always had equalities then the length of \(R\) would be the same as that of \(P_{\mathbf{b}}\) (by Lemma 5.12). Therefore, it is sufficient to show that for some \(\mathbf{v}\in\{0,1\}^{k}\) we have the strict inequality \(|\Gamma_{R}(P_{\mathbf{v}})|>|C(\mathbf{b},\mathbf{v})|\).

Let \(\mathbf{v}\) be the vector for which the last edge on \(R\) (the one incident to \(t\)) belongs to \(P_{\mathbf{v}}\) (possibly \(\mathbf{v}=\mathbf{b}\)). Since \(R\neq P_{\mathbf{v}}\), we obtain that \(R\cap P_{\mathbf{v}}\) has a connected component of size at least 2 containing \(t\) but not \(s\). The first vertex (with respect to \(R\)) in this component belongs to \(\Gamma_{R}(P_{\mathbf{v}})\) but this component has not been taken into account in the bound \(|\Gamma_{R}(P_{\mathbf{v}})|\geq|C(\mathbf{b},\mathbf{v})|\) above (because it contains \(t\)). Therefore, \(|\Gamma_{R}(P_{\mathbf{v}})|>|C(\mathbf{b},\mathbf{v})|\), which implies \(|R|>|P_{\mathbf{b}}|\). This concludes the proof.

#### 5.2.2 Dual flows

Before we are ready to finish the construction of a vector-containment gadget, we need to establish a method for constructing non-crossing flows with certain properties in a dual graph.

**Lemma 5.16** ([79, Prop. 2.6.4]).: _Let \(G\) be a 2-connected plane multigraph and \(G^{*}\) be its dual. Suppose that \(S\subseteq E(G)\) is an inclusion-minimal \((u,v)\)-edge-separator for some \(u,v\in V(G)\). Then \(S^{*}\) is an edge set of a cycle in \(G^{*}\) such that the vertices \(u,v\) belong to different connected components of \(\mathbb{R}^{2}\setminus S^{*}\)._

**Definition 5.17**.: _Let \(G\) be a connected plane multigraph and \(s,t\in V(G)\) lie on the outer face. Let \(G_{st}\) be obtained from \(G\) by inserting the edge \(st\) within the outer face and \(G_{st}^{*}\) be the dual of \(G_{st}\). Let \(\widehat{s},\widehat{t}\) denote the endpoints of the edge \((st)^{*}\) in \(G_{st}^{*}\) so that \(\widehat{s}\) corresponds to the face incident to the edge \(st\) on the right when considering the orientation of \(st\) from \(s\) to \(t\). The \((s,t)\)-dual of \(G\) is the triple \((G_{st}^{*}\setminus(st)^{*},\widehat{s},\widehat{t})\)._

Figure 17: An \((s,t)\)-dual \((G^{\circ},\widehat{s},\widehat{t})\) of a multigraph \(G\).
The vertices of \(G\) are black whereas the vertices of \(G^{\circ}\) are hollow. A shortest \((s,t)\)-path \(P\) in \(G\) is drawn with solid green lines. A non-crossing family of three edge-disjoint \((\widehat{s},\widehat{t})\)-paths in \(G^{\circ}\) is highlighted with colors. Each of these paths must cross some edge of \(P\). They illustrate the construction from Lemma 5.18. We have indicated the distances from \(s\) in \(G\) for the vertices on the paths \(P^{s}\) and \(P^{t}\). See Figure 17 for an example of an \((s,t)\)-dual. We use notation \(G^{\circ}\) to refer to the graph \(G^{*}_{st}\setminus(st)^{*}\) (when \(s,t\) are clear from context). For an edge \(e\in E(G)\) we refer to its counterpart in \(G^{\circ}\) as \(e^{\circ}\). Similarly, for an internal face \(f\) in \(G\) we refer to the corresponding vertex in \(G^{\circ}\) as \(f^{\circ}\). We will utilize the correspondence between the length of the shortest \((s,t)\)-path in \(G\) and the maximal size of an \((\widehat{s},\widehat{t})\)-flow in the \((s,t)\)-dual of \(G\). The following lemma also reveals which of the edges incident to \(\widehat{s},\widehat{t}\) are used in this flow. **Lemma 5.18**.: _Let \(G\) be a 2-connected plane multigraph whose outer face is confined by a simple cycle \(C\), \(s,t\in V(C)\), and \(d=\mathsf{dist}_{G}(s,t)\). Let \((G^{\circ},\widehat{s},\widehat{t})\) be the \((s,t)\)-dual of \(G\). Next, let \(P^{s},P^{t}\) be the \((s,t)\)-paths in \(C\) such that the edges of \(P^{s}\) (resp. \(P^{t}\)) are incident to \(\widehat{s}\) (resp. \(\widehat{t}\))._ _For \(i\in[d]\) let \(v^{s}_{i}\) be the last vertex on \(P^{s}\) with \(\mathsf{dist}_{G}(s,v^{s}_{i})<i\) and \(u^{s}_{i}\) be its successor on \(P^{s}\). Analogously we define vertices \(v^{t}_{i},u^{t}_{i}\in V(P^{t})\). Then there exists a non-crossing family of \(d\) edge-disjoint \((\widehat{s},\widehat{t})\)-paths \(P^{\circ}_{1},P^{\circ}_{2},\ldots,P^{\circ}_{d}\) in \(G^{\circ}\), such that \((v^{s}_{i}u^{s}_{i})^{\circ}\in P^{\circ}_{i}\) and \((v^{t}_{i}u^{t}_{i})^{\circ}\in P^{\circ}_{i}\) for each \(i\in[d]\)._ Proof.: For \(i\in[d]\) let \(\widehat{V}_{i}=\{v\in V(G)\mid\mathsf{dist}_{G}(s,v)\geq i\}\) and let \(V_{i}\subseteq\widehat{V}_{i}\) induce the connected component of \(G[\widehat{V}_{i}]\) that contains \(t\). We set \(S_{i}=E(V_{i},V(G)\setminus V_{i})\). The vertices \(u^{s}_{i},u^{t}_{i}\) belong to \(V_{i}\) because they can be connected to \(t\) with subpaths of \(P^{s},P^{t}\) contained in \(G[\widehat{V}_{i}]\). Therefore, \(v^{s}_{i}u^{s}_{i}\in S_{i}\) and \(v^{t}_{i}u^{t}_{i}\in S_{i}\). We claim that \(S_{i}\) is an inclusion-minimal \((s,t)\)-separator. Let \(vu\in S_{i}\) and \(u\in V_{i}\), \(v\not\in V_{i}\). Observe that \(v\not\in\widehat{V}_{i}\) because otherwise it would belong to the connected component of \(G[\widehat{V}_{i}]\) containing \(t\), which would imply \(v\in V_{i}\). Hence \(\mathsf{dist}_{G}(s,v)=i-1\) and there is an \((s,v)\)-path in \(G\setminus S_{i}\). As \(G[V_{i}]\) is connected, this implies that \(S_{i}\setminus vu\) is not an \((s,t)\)-separator, hence \(S_{i}\) is minimal. Recall that \(G_{st}\) is obtained from \(G\) by inserting the edge \(st\) and \(G^{*}_{st}\) is the dual of \(G_{st}\). Then \(S_{i}\cup\{st\}\) is an inclusion-minimal \((s,t)\)-separator in \(G_{st}\). By Lemma 5.16 the set \(S^{*}_{i}\cup\{(st)^{*}\}\subseteq E(G^{*}_{st})\) forms a cycle in \(G^{*}_{st}\) separating the vertices \(s\) and \(t\) on the plane. 
This cycle goes through vertices \(\widehat{s}\), \(\widehat{t}\), and the edge \(\widehat{st}=(st)^{*}\). Therefore \(S^{*}_{i}\) forms an edge set of an \((\widehat{s},\widehat{t})\)-path in \(G^{\circ}=G^{*}_{st}\setminus(st)^{*}\); this shall be the path \(P^{\circ}_{i}\). We have \((v^{s}_{i}u^{s}_{i})^{\circ}\in P^{\circ}_{i}\), \((v^{t}_{i}u^{t}_{i})^{\circ}\in P^{\circ}_{i}\), and these paths are edge-disjoint because the sets \(S_{1},\ldots,S_{d}\) are disjoint.

Finally we argue that \(P^{\circ}_{1},P^{\circ}_{2},\ldots,P^{\circ}_{d}\) are non-crossing. Let \(D_{i}\subset\mathbb{R}^{2}\) be the connected component of \(\mathbb{R}^{2}\setminus(S^{*}_{i}\cup\{(st)^{*}\})\) containing \(t\) (it may be unbounded). We have \(D_{i}\cap V(G_{st})=V_{i}\) for each \(i\in[d]\). Therefore, \(D_{i}\) is a union of the faces in the dual \(G^{*}_{st}\) corresponding to vertices from \(V_{i}\). Observe that for \(i<d\) the set \(V_{i+1}\) is contained in \(V_{i}\). Consequently, we have \(D_{1}\supset D_{2}\supset\cdots\supset D_{d}\). Since the path \(P^{\circ}_{i}\) is an arc of \(\partial D_{i}\), these paths cannot cross. See Figure 17 for an illustration.

#### 5.2.3 Construction of a non-crossing flow

A direct approach to construct a vector-containment gadget would be to consider the \((s,t)\)-dual \((H^{\circ},\widehat{s},\widehat{t})\) of the graph \(H_{k}\) and set \(z_{i}\) to be the vertex corresponding to the face between the last edge from \(E^{i}_{0}\) and the first edge from \(E^{i}_{1}\). Consider some \(\mathbf{b}\in\{0,1\}^{k}\) and a flow \(\mathcal{P}\) in \(H^{\circ}\) that consists of (a) \((\widehat{s},\widehat{t})\)-paths, (b) \((\widehat{s},z_{i})\)-paths for \(\mathbf{b}_{i}=0\), and (c) \((\widehat{t},z_{i})\)-paths for \(\mathbf{b}_{i}=1\). Then every path in \(\mathcal{P}\) must cross any \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\) (see Figure 15) and so the length of the shortest such path upper bounds the size of \(\mathcal{P}\). Since \(P_{\mathbf{b}}\) is the unique shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\), subdividing its first edge (the one incident to \(s\)) increases the length of the shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path (see Figure 18). This allows \(\mathcal{P}\) to have one more element. We could thus encode the set \(Z\) by subdivisions of the edges incident to \(s\), increasing the upper bound for the size of \(\mathcal{P}\) exactly when \(\mathbf{b}\in Z\).

It is more complicated, though, to obtain the implication in the other direction: when \(\mathbf{b}\in Z\) we want to construct a non-crossing flow \(\mathcal{P}\) satisfying certain requests of the three mentioned types. Performing such a construction "by hand" would be very tedious and instead we will take advantage of Lemma 5.18. To this end, we need to first subdivide more edges in \(H_{k}\) to make it amenable to this lemma.

**Definition 5.19**.: _Let \(Z\subseteq\{0,1\}^{k}\). The graph \(H^{\prime}_{k,Z}\) is obtained from \(H_{k}\) as follows._

1. _For each_ \(i\in[k]\) _and_ \(j\in[0,2^{k})\)_, each of the two edges incident to_ \(v^{\prime}_{i,j}\) _but not contained in_ \(Q^{\prime}_{i}\) _gets subdivided_ \(2^{k}-1\) _times._
2. _For each_ \(j\in Z\) _the edge_ \(sv_{1,j}\) _gets subdivided once._

An example is given in Figure 18. There is a 1-1 correspondence between \((s,t)\)-paths in \(H_{k}\) and in \(H^{\prime}_{k,Z}\); therefore, by a slight abuse of notation, we can consider \(\mathbf{b}\)-homotopic \((s,t)\)-paths in \(H^{\prime}_{k,Z}\).
**Lemma 5.20**.: _Let \(Z\subseteq\{0,1\}^{k}\) and \(\mathbf{b}\in\{0,1\}^{k}\). The length of the shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H^{\prime}_{k,Z}\) equals \(k\cdot 2^{k+1}+\gamma(\mathbf{b})+2\) if \(\mathbf{b}\in Z\) and \(k\cdot 2^{k+1}+\gamma(\mathbf{b})+1\) otherwise._

Proof.: First consider an intermediate graph \(H^{\prime}_{k}\) obtained from \(H_{k}\) by the first modification from Definition 5.19. Every \((s,t)\)-path \(P\) in \(H_{k}\) must cross \(Q^{\prime}_{i}\) for each \(i\in[k]\) and so it must contain two edges incident to \(Q^{\prime}_{i}\). Due to the subdivisions, the length of \(P\) in \(H^{\prime}_{k}\) increases by at least \(2k\cdot(2^{k}-1)\). The length of \(P_{\mathbf{b}}\) increases by exactly \(2k\cdot(2^{k}-1)\) and so its length in \(H^{\prime}_{k}\) becomes \(k\cdot 2^{k+1}+\gamma(\mathbf{b})+1\). Since \(P_{\mathbf{b}}\) is the unique shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k}\), it is also the unique shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H^{\prime}_{k}\).

Now consider the second modification from Definition 5.19. Clearly, it cannot decrease the length of any path. If \(\mathbf{b}\not\in Z\) then this modification does not affect any edge on \(P_{\mathbf{b}}\) so its length in \(H^{\prime}_{k,Z}\) is again \(k\cdot 2^{k+1}+\gamma(\mathbf{b})+1\). However, if \(\mathbf{b}\in Z\) then we have subdivided an edge on the unique shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path so now the length of the shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path becomes \(k\cdot 2^{k+1}+\gamma(\mathbf{b})+2\).

Figure 18: Graph \(H^{\prime}_{4,Z}\) for \(Z=\{1,2,5,8,9,13\}\). The red edges are subdivided once and the blue edges are subdivided \(2^{4}-1=15\) times. For legibility only one blue edge on the top is drawn subdivided. The gray edges are not subdivided. The faces \(f_{1},f_{2},f_{3},f_{4}\) are highlighted in light blue.

Carving off the cavities. We introduce some additional notation for the two following lemmas. For \(i\in[k]\) we distinguish face \(f_{i}\) as the face between \(Q_{i}\) and \(Q^{\prime}_{i}\) incident to the vertices \(v_{i,2^{k-1}-1}\) and \(v_{i,2^{k-1}}\) (the four highlighted faces in Figure 18). Recall that \(\mathsf{Half}(k,b)\) stands for \([0,2^{k-1})\) when \(b=0\) and for \([2^{k-1},2^{k})\) when \(b=1\). For \(\mathbf{b}\in\{0,1\}^{k}\) we define \(H^{\mathbf{b}}_{k,Z}\) as the plane graph obtained from \(H^{\prime}_{k,Z}\) by removing all the internal vertices in the subdivided \((v_{i,j},v^{\prime}_{i,j})\)-edge for each \(i\in[k]\) and \(j\in\mathsf{Half}(k,1-\mathbf{b}_{i})\). See Figure 19 for a visualization. An edge \(e\in E(H^{\mathbf{b}}_{k,Z})\) is called _exposed_ if \(e\) belongs to the subdivided \((v_{i,j},v^{\prime}_{i,j})\)-edge where \(j=2^{k-1}-1\) if \(b_{i}=0\) and \(j=2^{k-1}\) if \(b_{i}=1\), i.e., the kept subdivided edge closest to the carved-off cavity. Note that every exposed edge is incident to the outer face of \(H^{\mathbf{b}}_{k,Z}\). An edge \(e^{\circ}\) in an \((s,t)\)-dual of \(H^{\mathbf{b}}_{k,Z}\) is called exposed if \(e\) is exposed in \(H^{\mathbf{b}}_{k,Z}\).

In order to construct a non-crossing \(\mathcal{T}_{\mathbf{b},d}\)-flow in the \((s,t)\)-dual of \(H^{\prime}_{k,Z}\), we first construct a flow in the \((s,t)\)-dual of \(H^{\mathbf{b}}_{k,Z}\) and then translate it to the dual above. We will work with flows consisting of paths instead of walks, which obviously meets our definition of a non-crossing flow.
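To keep the indexing of the two halves and of the exposed edges straight, here is a minimal Python sketch; the helper names are ours, and the second function simply restates the convention fixed above.

```python
def half(k, b):
    # Half(k, b): the first half of [0, 2^k) when b = 0, the second when b = 1.
    return range(0, 2 ** (k - 1)) if b == 0 else range(2 ** (k - 1), 2 ** k)

def exposed_index(k, b_i):
    # The subdivided (v_{i,j}, v'_{i,j})-edges with j in Half(k, 1 - b_i) are
    # removed in H^b_{k,Z}; the exposed edge is the kept one bordering the
    # cavity (and hence the outer face).
    return 2 ** (k - 1) - 1 if b_i == 0 else 2 ** (k - 1)

for k in range(1, 8):
    for b_i in (0, 1):
        j = exposed_index(k, b_i)
        assert j in half(k, b_i)  # the exposed edge survives the carving
        assert (j + 1 if b_i == 0 else j - 1) in half(k, 1 - b_i)  # borders it
```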
**Lemma 5.21**.: _Let \(Z\subseteq\{0,1\}^{k}\), \(\mathbf{b}\in\{0,1\}^{k}\), and \(d=k\cdot 2^{k+1}+\gamma(\mathbf{b})+1+1_{[\mathbf{b}\in Z]}\). Furthermore, let \((H^{\circ\mathbf{b}},\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\) be the \((s,t)\)-dual of \(H^{\mathbf{b}}_{k,Z}\)._

_Then there exists a non-crossing family of \(d\) edge-disjoint \((\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\)-paths \(P_{1}^{\circ},P_{2}^{\circ},\ldots,P_{d}^{\circ}\) in \(H^{\circ\mathbf{b}}\) such that (1) every exposed edge belongs to some path \(P_{i}^{\circ}\), and (2) every path \(P_{i}^{\circ}\) contains at most one exposed edge._

Figure 19: Left: The graph \(H^{\mathbf{b}}_{4,Z}\) for \(\mathbf{b}=(1001)\) and arbitrary \(Z\) (the edge subdivisions are omitted here). The edges on the purple paths are _exposed_. The areas highlighted in color correspond to vertices \(\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}}\) in the \((s,t)\)-dual of \(H^{\mathbf{b}}_{4,Z}\). The blue and green curvy lines are examples of the paths from the family constructed in Lemma 5.21. The crux of the lemma is that this family does not contain paths like the red one. The green paths belong to subfamilies \(\mathcal{P}_{3},\mathcal{P}_{4}\) from Lemma 5.25 while the blue ones belong to \(\mathcal{P}_{long}\).

Proof.: The distance between \(s\) and \(t\) in \(H_{k,Z}^{\mathbf{b}}\) equals the length of the shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path in \(H_{k,Z}^{\prime}\), which is \(d=k\cdot 2^{k+1}+\gamma(\mathbf{b})+1+1_{[\mathbf{b}\in Z]}\) due to Lemma 5.20. Let \(P^{s},P^{t}\) be the \((s,t)\)-paths within the outer cycle of \(H_{k,Z}^{\mathbf{b}}\), defined as in Lemma 5.18. We also reuse the definitions of vertices \(v_{i}^{s},u_{i}^{s},v_{i}^{t},u_{i}^{t}\) for \(i\in[d]\). In order to derive the claim from Lemma 5.18 we need to prove the following two properties.

(P1) Every exposed edge is of the form \(v_{i}^{s}u_{i}^{s}\) or \(v_{i}^{t}u_{i}^{t}\) for some \(i\in[d]\).

(P2) For each \(i\in[d]\) only one of the edges \(v_{i}^{s}u_{i}^{s}\), \(v_{i}^{t}u_{i}^{t}\) can be exposed.

All distances considered in this proof are measured with respect to the graph \(H_{k,Z}^{\mathbf{b}}\).

**Claim 5.22**.: _For each \(i\in[k]\) and any vertices \(x\in V(Q_{i})\), \(y\in V(Q_{i}^{\prime})\), it holds that \(\mathsf{dist}(s,x)<\mathsf{dist}(s,y)\). Furthermore, for \(i\in[k-1]\) and any vertices \(x\in V(Q_{i}^{\prime})\), \(y\in V(Q_{i+1})\) it holds that \(\mathsf{dist}(s,x)<\mathsf{dist}(s,y)\)._

Proof.: We will use the following three observations. First, for each \(i\in[k]\) any path from \(s\) to \(V(Q_{i}^{\prime})\) must intersect \(V(Q_{i})\) and, when \(i>1\), any path from \(s\) to \(V(Q_{i})\) must intersect \(V(Q_{i-1}^{\prime})\). Next, the minimal distance between \(V(Q_{i})\) and \(V(Q_{i}^{\prime})\) or \(V(Q_{i-1}^{\prime})\) is \(2^{k}\). Finally, for each two \(u,v\in V(Q_{i})\) (resp. \(u,v\in V(Q_{i}^{\prime})\)) we have \(\mathsf{dist}(u,v)<2^{k}\).

We prove only the first claim in detail, as the second one has an analogous proof. Let \(y\) be the vertex from \(V(Q_{i}^{\prime})\) that minimizes the distance from \(s\) and \(P\) be the shortest \((s,y)\)-path in \(H_{k,Z}^{\mathbf{b}}\). Then \(V(P)\cap V(Q_{i}^{\prime})=\{y\}\). Let \(x^{\prime}\) be a vertex from \(V(Q_{i})\cap V(P)\). Since the minimal distance between \(V(Q_{i})\) and \(V(Q_{i}^{\prime})\) is \(2^{k}\), we have \(\mathsf{dist}(s,x^{\prime})\leq\mathsf{dist}(s,y)-2^{k}\).
Now, for any other vertex \(x\in V(Q_{i})\) we have \(\mathsf{dist}(x,x^{\prime})<2^{k}\), which implies \(\mathsf{dist}(s,x)\leq\mathsf{dist}(s,x^{\prime})+\mathsf{dist}(x^{\prime},x)<\mathsf{dist}(s,y)\) and proves the claim due to the choice of \(y\).

**Claim 5.23**.: _For each \(i\in[k]\) and \(j\in\mathsf{Half}(k,b_{i})\), it holds that \(\mathsf{dist}(s,v_{i,j}^{\prime})=\mathsf{dist}(s,v_{i,j})+2^{k}\)._

Proof.: By the triangle inequality we have \(\mathsf{dist}(s,v_{i,j}^{\prime})\leq\mathsf{dist}(s,v_{i,j})+2^{k}\). Let \(P\) be a shortest \((s,v_{i,j}^{\prime})\)-path in \(H_{k,Z}^{\mathbf{b}}\). Let \(j^{\prime}\in\mathsf{Half}(k,b_{i})\) be such that \(v_{i,j^{\prime}}^{\prime}\) is the first vertex from \(V(Q_{i}^{\prime})\) on \(P\). Then \(P\) visits \(v_{i,j^{\prime}}\) as well and so \(\mathsf{dist}(s,v_{i,j^{\prime}})=\mathsf{dist}(s,v_{i,j}^{\prime})-\mathsf{dist}(v_{i,j^{\prime}},v_{i,j}^{\prime})=\mathsf{dist}(s,v_{i,j}^{\prime})-2^{k}-|j-j^{\prime}|\). Next, \(\mathsf{dist}(s,v_{i,j})\leq\mathsf{dist}(s,v_{i,j^{\prime}})+|j-j^{\prime}|=\mathsf{dist}(s,v_{i,j}^{\prime})-2^{k}\), which gives the inequality in the other direction.

It follows that every exposed edge is of the form \(vu\) where \(\mathsf{dist}(s,u)=\mathsf{dist}(s,v)+1\). Moreover, when \(v\) is the \(\ell\)-th vertex on the subdivided \((v_{i,j},v_{i,j}^{\prime})\)-edge (counting from \(v_{i,j}\)) then \(\mathsf{dist}(s,v)=\mathsf{dist}(s,v_{i,j})+\ell\).

**Claim 5.24**.: _For each \(i\in[k]\) and any vertex \(x\in V(Q_{i})\), it holds that \(\mathsf{dist}(s,x)+2^{k}<\mathsf{dist}(s,t)\)._

Proof.: Due to Claim 5.22 it suffices to consider \(i=k\). Let \(x\in V(Q_{k})\). Let \(P\) be a shortest \((s,t)\)-path in \(H_{k,Z}^{\mathbf{b}}\). This path must intersect \(V(Q_{k})\); let \(x^{\prime}\) be a vertex from \(V(Q_{k})\cap V(P)\). The minimal distance from \(V(Q_{k})\) to \(t\) is \(2^{k+1}\), hence \(\mathsf{dist}(s,x^{\prime})\leq\mathsf{dist}(s,t)-2^{k+1}\). Since \(\mathsf{dist}(x^{\prime},x)<2^{k}\), the claim follows from the triangle inequality.

Let \(vu\) be an exposed edge and \(u\) be the vertex with \(\ell=\mathsf{dist}(s,u)=\mathsf{dist}(s,v)+1\). Suppose w.l.o.g. that \(vu\in E(P^{s})\). From Claims 5.22 and 5.23 we obtain that for every vertex \(v^{\prime}\) that lies further than \(v\) on \(P^{s}\) it holds \(\mathsf{dist}(s,v^{\prime})\geq\ell\). Claim 5.24 implies that \(\ell\leq\mathsf{dist}(s,t)=d\). We therefore obtain property (P1): \(vu=v_{\ell}^{s}u_{\ell}^{s}\) for some \(\ell\in[d]\). Similarly, when \(vu\in E(P^{t})\) then \(vu=v_{\ell}^{t}u_{\ell}^{t}\) for some \(\ell\in[d]\).

Finally, consider \(vu\in E(P^{s})\), \(v^{\prime}u^{\prime}\in E(P^{t})\), such that \(vu,v^{\prime}u^{\prime}\) are exposed. Then there is \(i\in[k]\) such that \(V(Q_{i}^{\prime})\) separates \(v\) from \(v^{\prime}\) and one of \(v,v^{\prime}\) belongs to the same connected component of \(H_{k,Z}^{\mathbf{b}}-V(Q_{i}^{\prime})\) as \(s\). By Claims 5.22 and 5.23 the distances \(\mathsf{dist}(s,v)\) and \(\mathsf{dist}(s,v^{\prime})\) are different, which implies property (P2).

We apply Lemma 5.18 to obtain a family of \((\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\)-paths that satisfies the conditions of the lemma.

As the last step, we want to employ Lemma 5.21 to construct a certain flow in the \((s,t)\)-dual of \(H_{k,Z}^{\prime}\).
The only modification needed involves extending the paths from the \((s,t)\)-dual of \(H_{k,Z}^{\mathbf{b}}\) that end at \(\widehat{s}^{\mathbf{b}}\) or \(\widehat{t}^{\mathbf{b}}\) but, when considered in the \((s,t)\)-dual of \(H_{k,Z}^{\prime}\), this endpoint corresponds to an internal face of \(H_{k,Z}^{\prime}\). Note that when an \((\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\)-path in the \((s,t)\)-dual of \(H_{k,Z}^{\mathbf{b}}\) reaches its endpoint through an exposed edge then this endpoint corresponds to \((f_{i})^{\circ}\) for some \(i\in[k]\) in the \((s,t)\)-dual of \(H_{k,Z}^{\prime}\). For the remaining cases, we will extend the path to reach \(\widehat{s}\) or \(\widehat{t}\) using the subdivided edges between \(Q_{i}\) and \(Q_{i}^{\prime}\).

**Lemma 5.25**.: _Let \(k\in\mathbb{N}\), \(Z\subseteq\{0,1\}^{k}\), \(\mathbf{b}\in\{0,1\}^{k}\), and \(d=k\cdot 2^{k}+\gamma(\mathbf{b})+1+1_{[\mathbf{b}\in Z]}\). Let \((H^{\circ},\widehat{s},\widehat{t})\) be the \((s,t)\)-dual of \(H_{k,Z}^{\prime}\) and \(\mathcal{T}_{\mathbf{b},d}\) be the family of the following requests:_

1. \((\widehat{s},f_{i}^{\circ},2^{k})\) _for each_ \(i\) _with_ \(b_{i}=0\)_,_
2. \((\widehat{t},f_{i}^{\circ},2^{k})\) _for each_ \(i\) _with_ \(b_{i}=1\)_,_
3. \(d\) _copies of the request_ \((\widehat{s},\widehat{t},1)\)_._

_Then there exists a non-crossing \(\mathcal{T}_{\mathbf{b},d}\)-flow in \(H^{\circ}\), in which \(\widehat{s}\) sees \(\{f_{i}^{\circ}\mid b_{i}=0\}\) in the order of decreasing \(i\) and \(\widehat{t}\) sees \(\{f_{i}^{\circ}\mid b_{i}=1\}\) in the order of increasing \(i\) (recall Definition 5.4)._

Proof.: Let \((H^{\circ\mathbf{b}},\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\) be the \((s,t)\)-dual of \(H_{k,Z}^{\mathbf{b}}\). Let \(\ell=d+k\cdot 2^{k}\) and, for \(i\in[k]\), \(E_{i}^{\circ\mathbf{b}}\subseteq E(H^{\circ\mathbf{b}})\) be the set of \(2^{k}\) exposed edges located between \(Q_{i}\) and \(Q_{i}^{\prime}\). We apply Lemma 5.21 to obtain a non-crossing \((\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\)-flow \(\mathcal{P}=\{P_{1}^{\circ},\ldots,P_{\ell}^{\circ}\}\) in \(H^{\circ\mathbf{b}}\). For each \(i\in[k]\) there is a subfamily \(\mathcal{P}_{i}\subseteq\mathcal{P}\) of \(2^{k}\) paths containing an edge from \(E_{i}^{\circ\mathbf{b}}\). Moreover, the subfamilies \(\mathcal{P}_{1},\ldots,\mathcal{P}_{k}\) are disjoint. Let \(\mathcal{P}_{long}=\mathcal{P}\setminus(\mathcal{P}_{1}\cup\cdots\cup\mathcal{P}_{k})\).

Every internal face of \(H_{k,Z}^{\mathbf{b}}\) is also an internal face of \(H_{k,Z}^{\prime}\). Therefore, every path in \(H^{\circ\mathbf{b}}\) that is internally disjoint from \(\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}}\) is also a path in \(H^{\circ}\). We can thus consider the flow \(\mathcal{P}\) in \(H^{\circ}\). When \(P\in\mathcal{P}_{i}\) for some \(i\in[k]\) then one of its endpoints (incident to an exposed edge) becomes \(f_{i}^{\circ}\). The other endpoint (incident to a non-exposed edge) will be either \(\widehat{s}\), \(\widehat{t}\) or \(g^{\circ}\), where \(g\) is an internal face of \(H_{k,Z}^{\prime}\) that is not present in \(H_{k,Z}^{\mathbf{b}}\). When \(P\in\mathcal{P}_{long}\) then the latter scenario applies to both endpoints of \(P\).
Let \(g_{1}^{i},\ldots,g_{2^{k-1}}^{i}\) be the internal faces of \(H_{k,Z}^{\prime}\) between \(Q_{i}\) and \(Q_{i}^{\prime}\) that are not present in \(H_{k,Z}^{\mathbf{b}}\), ordered in such a way that \(g_{1}^{i}\) is incident to the outer face, \(g_{2^{k-1}}^{i}=f_{i}\), and \(g_{j+1}^{i}\) shares an edge with \(g_{j}^{i}\). When \(e\in E(H^{\circ\mathbf{b}})\) is incident to \(\widehat{s}^{\mathbf{b}}\) or \(\widehat{t}^{\mathbf{b}}\) in \(H^{\circ\mathbf{b}}\) then either \(e\) is incident to \(\widehat{s}\), \(\widehat{t}\), or some \((g_{j}^{i})^{\circ}\) in \(H^{\circ}\). In the last case, when \(e\) is non-exposed then \(e\) crosses \(Q_{i}\) or \(Q_{i}^{\prime}\). For each \(i\in[k]\), \(j\in[2^{k-1}]\), there is exactly one edge in \(H^{\circ}\) incident to \((g_{j}^{i})^{\circ}\) that crosses \(Q_{i}\) and exactly one that crosses \(Q^{\prime}_{i}\); in total \(2^{k}\) for fixed \(i\). Observe that for each \(j\in[1,2^{k-1}-1]\) there are \(2^{k}\) parallel edges between the vertices \((g_{j}^{i})^{\circ}\) and \((g_{j+1}^{i})^{\circ}\). Therefore, paths from \(\mathcal{P}\) that reach some \((g_{j}^{i})^{\circ}\) via a non-exposed edge can be extended to reach \(\widehat{s}\) (resp. \(\widehat{t}\)) in a non-crossing manner (see Figure 19, right, and the caption below). As a result, the paths from families \(\mathcal{P}_{i}\) are being extended to satisfy requests of types (1, 2), while the paths from \(\mathcal{P}_{long}\) are being extended to satisfy requests of type (3). Finally, we argue that \(\widehat{s}\) (resp. \(\widehat{t}\)) sees the vertices \(f_{1}^{\circ},f_{2}^{\circ},\ldots,f_{k}^{\circ}\) in the right order. First, consider the flow \(\mathcal{P}\) in the graph \(H^{\circ\mathbf{b}}\), in which every path is an \((\widehat{s}^{\mathbf{b}},\widehat{t}^{\mathbf{b}})\)-path. Observe that each subfamily \(\mathcal{P}_{i}\subseteq\mathcal{P}\) forms a continuous interval in \(\mathcal{P}\) ordered with respect to the ordering of edges incident to \(\widehat{s}^{\mathbf{b}}\) (or \(\widehat{t}^{\mathbf{b}}\)). Consider now \(i<j\) with \(\mathbf{b}_{i}=\mathbf{b}_{j}=0\), and let \(P_{i}\in\mathcal{P}_{i}\), \(P_{j}\in\mathcal{P}_{j}\). Let \(e_{i},e_{j}\in E(H^{\circ\mathbf{b}})\) be the edges on respectively \(P_{i},P_{j}\) that are incident to \(\widehat{s}^{\mathbf{b}}\). Then \(e_{i}\) occurs later than \(e_{j}\) in the clockwise ordering of edges incident to \(\widehat{s}^{\mathbf{b}}\) (starting from the edge next to the outer face of \(H^{\circ\mathbf{b}}\)), reflecting the relative order of the edges at the other ends of \(P_{i},P_{j}\). The transformation between the flow in \(H^{\circ\mathbf{b}}\) and the flow in \(H^{\circ}\) preserves the relative order of paths \(P_{i},P_{j}\), yielding the same relation with respect to \(\widehat{s}\). The analysis for the paths ending at \(\widehat{t}\) is symmetric: when \(i<j\) and \(\mathbf{b}_{i}=\mathbf{b}_{j}=1\) then any path from \(\mathcal{P}_{i}\) occurs earlier than any path from \(\mathcal{P}_{j}\) with respect to the clockwise ordering of edges incident to \(\widehat{t}^{\mathbf{b}}\). This concludes the proof. The flow considered above is exactly the one that we require in the vector-containment gadget. We can therefore summarize the construction. **Proposition 5.26**.: _Let \(\widehat{\gamma}_{k}\colon\{0,1\}^{k}\to\mathbb{N}\) be defined as \(\widehat{\gamma}_{k}(\mathbf{b})=k\cdot 2^{k}+\gamma_{k}(\mathbf{b})+1\). 
For each \(k\) and \(Z\subseteq\{0,1\}^{k}\) there exists a \((k,\widehat{\gamma}_{k},Z)\)-Vector Containment Gadget of size \(2^{\mathcal{O}(k)}\) and it can be constructed in time \(2^{\mathcal{O}(k)}\)._ Proof.: Let \((H^{\circ},\widehat{s},\widehat{t})\) be the \((s,t)\)-dual of the plane graph \(H^{\prime}_{k,Z}\). Both graphs can be constructed in time polynomial in their size, which is \(2^{\mathcal{O}(k)}\). We construct a \((k,\widehat{\gamma}_{k},Z)\)-Vector Containment Gadget using the graph \(H^{\circ}\). We set \(w_{0}=\widehat{s}\), \(w_{1}=\widehat{t}\), and \(z_{i}=(f_{i})^{\circ}\) for \(i\in[k]\). Let \(\mathcal{T}_{\mathbf{b},d}\) be the family of requests defined as in Lemma 5.25. We need to show the following conditions to be equivalent: 1. \(d\leq\widehat{\gamma}_{k}(\mathbf{b})+1_{[\mathbf{b}\in Z]}\), 2. there exists a \(\mathcal{T}_{\mathbf{b},d}\)-flow in \(H^{\circ}\), 3. there exists a non-crossing \(\mathcal{T}_{\mathbf{b},d}\)-flow in \(H^{\circ}\), in which \(\widehat{s}\) sees \(\{f_{i}^{\circ}\mid b_{i}=0\}\) in the order of decreasing \(i\) and \(\widehat{t}\) sees \(\{f_{i}^{\circ}\mid b_{i}=1\}\) in the order of increasing \(i\). The implication (c) \(\Rightarrow\) (b) is trivial. To see (b) \(\Rightarrow\) (a), consider some shortest \(\mathbf{b}\)-homotopic \((s,t)\)-path \(P\) in \(H^{\prime}_{k,Z}\). By Lemma 5.20 the length of \(P\) equals \(k\cdot 2^{k+1}+\gamma(\mathbf{b})+1+1_{[\mathbf{b}\in Z]}\). Each walk \(Q\) in a \(\mathcal{T}_{\mathbf{b},d}\)-flow in \(H^{\circ}\) must cross the path \(P\), i.e., there exists \(e\in E(P)\) such that \(e^{\circ}\in E(Q)\). Therefore the number of paths in a \(\mathcal{T}_{\mathbf{b},d}\)-flow, that is \(k\cdot 2^{k}+d\), cannot be greater than the length of \(P\). This implies \(d\leq\widehat{\gamma}_{k}(\mathbf{b})+1_{[\mathbf{b}\in Z]}\). The last implication (a) \(\Rightarrow\) (c) is proven in Lemma 5.25. ### Subset gadget The aim of the following gadget is to determine whether a given set \(F\subseteq[k]\) is a subset of one of \(2^{r}\) sets from a family \(\mathcal{S}\). To make the notation consistent with the previous gadget, we encode the set family as a function from the bit vectors to the subsets of \([k]\). We use the notation \(2^{[k]}\) to distinguish the family of subsets of \([k]\) from the family of vectors of length \(k\). **Definition 5.27**.: _Let \(r,k\) be integers and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\) be a function. We say that a pair \((G,\mathcal{T})\) is an \((r,k,\mathcal{S})\)-\(\mathsf{Subset}\mathsf{Gadget}\) if the following conditions hold._ 1. \(G\) _is a plane multigraph with_ \(2k\) _distinguished vertices_ \(s_{1},t_{1},s_{2},t_{2},\ldots,s_{k},t_{k}\) _lying on the outer face in this clockwise order._ 2. \(\mathcal{T}\) _is a set of triples from_ \(V(G)\times V(G)\times\mathbb{N}\)_._ 3. _For_ \(F\subseteq[k]\) _let_ \(\mathcal{T}_{F}=\{(s_{i},t_{i},1)\mid i\in F\}\)_. Then there exists a non-crossing_ \((\mathcal{T}\cup\mathcal{T}_{F})\)_-flow in_ \(G\) _if and only if there exists_ \(\mathbf{b}\in\{0,1\}^{r}\) _for which_ \(F\subseteq\mathcal{S}(\mathbf{b})\)_._ Our goal is to construct an \((r,k,\mathcal{S})\)-\(\mathsf{Subset}\mathsf{Gadget}\) of size \(2^{\mathcal{O}(r)}\cdot k^{\mathcal{O}(1)}\) with \(|\mathcal{T}|=(r+k)^{\mathcal{O}(1)}\). #### 5.3.1 The first attempt Let \(\gamma^{0}\colon\{0,1\}^{r}\to\mathbb{N}\) be the zero-function, i.e., \(\gamma^{0}(\mathbf{b})=0\) for all \(\mathbf{b}\in\{0,1\}^{r}\). 
In this section we present a simplified construction working under the assumption that for any set \(Z\subseteq\{0,1\}^{r}\) there exists an \((r,\gamma^{0},Z)\)-\(\mathsf{Vector}\mathsf{Containment}\mathsf{Gadget}\) of size \(2^{\mathcal{O}(r)}\). Of course, this assumption is overly optimistic but it allows us to first present the pattern propagation mechanism alone. We strongly encourage the Reader to first familiarize themselves with this simplified construction before reading the proper proof in Section 5.3.2. The proper proof builds atop this construction in an incremental way.

We use the following conventions to describe the constructed graphs. When \(H\) is a graph with a distinguished vertex named \(v\) and a graph \(G\) is constructed using explicit vertex-disjoint copies of the graph \(H\), referred to as \(H_{1},H_{2},\ldots,H_{\ell}\), we refer to the copy of \(v\) within the subgraph \(H_{i}\) as \(H_{i}[v]\in V(G)\). When vertices \(u,v\) are connected by multiple parallel edges, we refer to the number of such edges as the capacity of \(uv\).

The ladder. An _\(r\)-ladder_ is a plane multigraph defined as follows (see Figure 20). We begin the construction with \(r+1\) disjoint paths \((v_{1,1},v_{1,2},v_{1,3}),\ldots,(v_{r+1,1},v_{r+1,2},v_{r+1,3})\), followed by adding additional edges forming paths \((v_{1,1},v_{2,1},\ldots,v_{r+1,1})\) and \((v_{1,3},v_{2,3},\ldots,v_{r+1,3})\). Next, we duplicate each edge \(2^{3r+5}\) times (i.e., we place this many parallel edges). We create vertices \(u_{0},u_{1}\) on the outer face adjacent respectively to \(v_{1,2}\) and \(v_{r+1,2}\). Let \(f_{1},\ldots,f_{r}\) be the internal faces in the already constructed graph, numbered in such a way that \(v_{1,1}\) is incident to \(f_{1}\) and for each \(i\in[r-1]\) the faces \(f_{i},f_{i+1}\) share an edge. For each \(j\in[r]\) we create vertices \(x_{j},y_{j}\) within the face \(f_{j}\), and then insert edges \(x_{j}v_{j+1,2}\) and \(y_{j}v_{j,2}\).

The ring. Let \(\mathcal{H}=H_{1},\ldots,H_{k}\) be a sequence of plane multigraphs that satisfy condition (1) of Definition 5.5: each \(H_{i}\) has \(r+2\) distinguished vertices \(z_{1},\ldots,z_{r},w_{0},w_{1}\) so that the last two lie on the outer face. We build the graph \(\mathsf{Ring}(r,k,\mathcal{H})\) from \(k\) blocks arranged in a ring-like structure, so that \(H_{i}\) will be installed inside the \(i\)-th block. For \(i\in[k]\) we start the construction of the plane multigraph \(R_{i}\) from two copies of an \(r\)-ladder, \(L_{i}^{+},L_{i}^{-}\). For \(\odot\in\{+,-\}\) we duplicate the edges incident to \(L_{i}^{\odot}[u_{0}],L_{i}^{\odot}[u_{1}]\) \(2^{3r+4}\) times. For \(j\in[r]\) we duplicate the edge incident to \(L_{i}^{\odot}[x_{j}]\) \(2^{2r+j}\) times and the edge incident to \(L_{i}^{-}[y_{j}]\) \(2^{r}\) times (so that the capacities match the request counts below). Next, we create six vertices: \(s_{i},t_{i},h_{i}^{+},\widehat{h}_{i}^{+},h_{i}^{-},\widehat{h}_{i}^{-}\). For \(\odot\in\{+,-\}\) we insert \(2^{3r+4}+2^{2r}\) parallel edges \(h_{i}^{\odot}\widehat{h}_{i}^{\odot}\) and \(2^{3r+4}+2^{3r+1}+2\) parallel edges \(h_{i}^{+}h_{i}^{-}\).

Figure 20: Top left: a 3-ladder labeled with vertices' and faces' names. Right: a fragment of the graph \(\mathsf{Ring}(r,k,\mathcal{H})\) for \(r=3\). The subgraph \(R_{i}\) is labeled with the vertices' names while the subgraph \(R_{i+1}\) is labeled with the edge capacities and the numbers of walks requested between each terminal pair (on a colorful background). The ladder edges with capacities \(2^{3r+5}\) are drawn with thicker lines. The vertices that need to be connected in a \(\mathcal{T}_{r,k}\)-flow share common colors and shapes. They are also labeled with letters indicating the types of requests. The gray ovals in the bottom represent the subgraphs \(H_{i}\), \(H_{i+1}\) (the vector-containment gadgets). Bottom left: a sketch of the ring structure obtained from combining the subgraphs \(R_{1},R_{2},\ldots,R_{k}\).
Next, we put \(2^{3r+1}\) parallel edges between \(h_{i}^{+}\) and \(L_{i}^{+}[v_{1,1}]\) (the bottom-left corner vertex of the upper ladder), and \(2^{3r+1}\) parallel edges between \(h_{i}^{-}\) and \(L_{i}^{-}[v_{r+1,1}]\) (the upper-left corner vertex of the lower ladder). We insert a single edge between \(s_{i}\) (resp. \(t_{i}\)) and \(L_{i}^{+}[v_{r+1,1}]\) (resp. \(L_{i}^{+}[v_{r+1,3}]\)). Recall that \(H_{i}\) has vertices \(H_{i}[w_{0}],H_{i}[w_{1}]\) on its outer face. We connect \(H_{i}[w_{0}]\) (resp. \(H_{i}[w_{1}]\)) to \(L_{i}^{-}[v_{1,1}]\) (resp. \(L_{i}^{-}[v_{1,3}]\)) (the bottom corners of the lower ladder) via \(2^{2r}\) parallel edges. The arrangement of the vertices on the plane is depicted in Figure 20.

Finally, we arrange the multigraphs \(R_{1},R_{2},\ldots,R_{k}\) into a ring. For \(i\in[k-1]\) we insert \(2^{3r+1}\) parallel edges between \(L_{i}^{+}[v_{1,3}]\) and \(h_{i+1}^{+}\), as well as between \(L_{i}^{-}[v_{r+1,3}]\) and \(h_{i+1}^{-}\). The graphs \(R_{k},R_{1}\) get connected in the same way. The constructed ring encloses a bounded region incident to the minus-sides of the multigraphs \(R_{1},R_{2},\ldots,R_{k}\). Next, we define the requests of the gadget, divided into four groups. Although it is possible to achieve the same properties with slightly smaller demands, we choose to use the same numbers that appear in Section 5.3.2 in order to reduce the edit distance between the proofs.

The requests. We define a family \(\mathcal{T}_{r,k}\) of requests over \(\mathsf{Ring}(r,k,\mathcal{H})\); a code sketch of this family follows below.

(A) In each ladder \(L\) of the form \(L_{i}^{+},L_{i}^{-}\) we create a request \((L[u_{0}],L[u_{1}],2^{3r+4})\), \(2k\) in total.

(B) For each \(i\in[k]\) we create a request \((\widehat{h}_{i}^{+},\widehat{h}_{i}^{-},2^{3r+4}+2^{2r})\).

(C) For each \(i\in[k]\) and \(j\in[r]\) we create a request \((L_{i}^{+}[x_{j}],L_{i}^{-}[x_{j}],2^{2r+j})\).

(D) For each \(i\in[k]\) and \(j\in[r]\) we create a request \((L_{i}^{-}[y_{j}],H_{i}[z_{j}],2^{r})\).

For a \(\mathcal{T}_{r,k}\)-flow \(\mathcal{P}\) we use variables \(\mathcal{P}_{i}^{A+}\), \(\mathcal{P}_{i}^{A-}\), \(\mathcal{P}_{i}^{B}\), \(\mathcal{P}_{i,j}^{C}\), \(\mathcal{P}_{i,j}^{D}\) to refer to subfamilies of \(\mathcal{P}\) satisfying the respective types of requests.

**Observation 5.28**.: _For every vertex \(v\) of the form \(\widehat{h}_{i}^{\odot}\), \(L_{i}^{\odot}[u_{j}]\), \(L_{i}^{\odot}[x_{j}]\), \(L_{i}^{-}[y_{j}]\), the number of edges incident to \(v\) equals the number of walks in a \(\mathcal{T}_{r,k}\)-flow that have an endpoint at \(v\)._

This observation allows us to exclude cases where one walk would start at a vertex \(v\) and another walk would only pass through \(v\).
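As a quick check on the bookkeeping, the family \(\mathcal{T}_{r,k}\) can be generated mechanically. The following minimal Python sketch (with symbolic vertex names of our own choosing) also confirms that the number of requests is polynomial, namely \(k\cdot(2r+3)\).

```python
def ring_requests(r, k):
    # The four request groups (A)-(D) of T_{r,k}; vertices are symbolic strings.
    reqs = []
    for i in range(1, k + 1):
        for side in "+-":                                            # group (A)
            reqs.append((f"L{i}{side}[u0]", f"L{i}{side}[u1]", 2 ** (3 * r + 4)))
        reqs.append((f"hhat{i}+", f"hhat{i}-",                       # group (B)
                     2 ** (3 * r + 4) + 2 ** (2 * r)))
        for j in range(1, r + 1):
            reqs.append((f"L{i}+[x{j}]", f"L{i}-[x{j}]", 2 ** (2 * r + j)))  # (C)
            reqs.append((f"L{i}-[y{j}]", f"H{i}[z{j}]", 2 ** r))             # (D)
    return reqs

assert len(ring_requests(3, 4)) == 4 * (2 * 3 + 3)  # k * (2r + 3) requests
```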
Intuition. The aim of the (A) requests is to draw a pattern through each ladder which splits it into the left and the right side (see Figure 21). Then the (C) requests must be satisfied by walks between the upper ladder and the lower ladder that traverse the middle belt either through the left or the right side, according to the pattern above. The (B) requests work as guards and ensure that no other walks of type (C) are possible (ruling out the possibility that some walk winds around the entire ring). Since the (C) requests encode powers of two, a partition of them encodes an integer from \(0\) to \(2^{r}-1\). Because the blocks are arranged in a ring structure and the number of walks that can cross \(h_{i}^{+}h_{i}^{-}\) is limited, these integers must coincide. As a consequence, the pattern drawn in each ladder must be the same; in this way, the flow "chooses" a vector \(\mathbf{b}\in\{0,1\}^{r}\).

The (D) requests originate from the same faces in the lower ladders as their (C) counterparts, so they also need to stay either on the left or right side of the ladder. This determines which entrance to the vector-containment gadget \(H_{i}\) they can use (left or right). Then the vertex \(H_{i}[z_{j}]\) must be connected to \(H_{i}[w_{0}]\) when \(\mathbf{b}_{j}=0\) or to \(H_{i}[w_{1}]\) when \(\mathbf{b}_{j}=1\). The vector-containment gadget governed by the zero-function \(\gamma^{0}\) alters its behavior depending on whether \(\mathbf{b}\) belongs to a certain set: it allows one additional walk between \(H_{i}[w_{0}]\) and \(H_{i}[w_{1}]\) if and only if the containment occurs. Subsequently, this leaves space for an \((s_{i},t_{i})\)-walk, which must go through \(H_{i}\) because the walks of type (A) and (B) block the other passages. By supplying appropriate sets to the vector-containment gadgets, we enforce that an \((s_{i},t_{i})\)-walk can be accommodated exactly when \(i\in\mathcal{S}(\mathbf{b})\).

For \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\) and \(i\in[k]\) we define \(Z_{i}^{\mathcal{S}}\subseteq\{0,1\}^{r}\) as the set of vectors \(\mathbf{b}\) for which \(i\in\mathcal{S}(\mathbf{b})\). Assume that for each \(i\in[k]\) there exists a plane multigraph \(H_{i}\) which is an \((r,\gamma^{0},Z_{i}^{\mathcal{S}})\)-VectorContainmentGadget. We define \(\mathcal{H}_{\mathcal{S}}^{vcg}\) as \(H_{1},\ldots,H_{k}\). We are going to show that then \((\mathsf{Ring}(r,k,\mathcal{H}_{\mathcal{S}}^{vcg}),\mathcal{T}_{r,k})\) forms an \((r,k,\mathcal{S})\)-Subset Gadget.

We need to prove two implications to establish condition (3) of Definition 5.27. We begin with the easier one: when \(F\subseteq\mathcal{S}(\mathbf{b})\) for some vector \(\mathbf{b}\), then the desired non-crossing flow exists. For a flow \(\mathcal{P}\) and a walk \(W\), we say that \(W\) is non-crossing with \(\mathcal{P}\) if \(W\) does not cross or share an edge with any walk in \(\mathcal{P}\).

**Lemma 5.29**.: _Consider \(r,k\in\mathbb{N}\) and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\). Assume that the sequence of multigraphs \(\mathcal{H}_{\mathcal{S}}^{vcg}\) exists. For \(F\subseteq[k]\) let \(\mathcal{T}_{F}=\{(s_{j},t_{j},1)\mid j\in F\}\). If \(F\subseteq\mathcal{S}(\mathbf{b})\) for some \(\mathbf{b}\in\{0,1\}^{r}\) then there exists a non-crossing \((\mathcal{T}_{r,k}\cup\mathcal{T}_{F})\)-flow in \(\mathsf{Ring}(r,k,\mathcal{H}_{\mathcal{S}}^{vcg})\)._

Proof.: The construction is depicted in Figure 21. We begin by describing which vertices are being visited by each walk and later we check that the edge capacities are sufficient to accommodate all the walks. First, consider \(i\in[k]\) and \(\odot\in\{+,-\}\). Let \(P\) be a path that traverses the ladder \(L_{i}^{\odot}\) in such a way that the face \(f_{j}\) is to the right of \(P\) exactly when \(\mathbf{b}_{j}=1\).
Each walk from \(\mathcal{P}_{i}^{A\odot}\) traverses \(L_{i}^{\odot}\) through the same vertices as \(P\). Next, every walk in the family \(\mathcal{P}_{i}^{B}\) is of the form \((\widehat{h}_{i}^{+},h_{i}^{+},h_{i}^{-},\widehat{h}_{i}^{-})\).

Now we describe the families \(\mathcal{P}_{i,j}^{C}\), \(\mathcal{P}_{i,j}^{D}\). First, suppose that \(i\not\in F\). Whenever \(\mathbf{b}_{j}=0\) all the walks from \(\mathcal{P}_{i,j}^{C}\) go through an edge \(h_{i}^{+}h_{i}^{-}\) and otherwise they go through \(h_{i+1}^{+}h_{i+1}^{-}\) (counting modulo \(k\)). We arrange them from left to right in such a way that the families \(\mathcal{P}_{i,j}^{C}\) appear in the order of increasing \(j\) (see Figure 21). For each \(j\in[r]\) there is a common internal face of \(L_{i}^{-}\) incident to \(L_{i}^{-}[x_{j}]\) (the endpoint for \(\mathcal{P}_{i,j}^{C}\)) and \(L_{i}^{-}[y_{j}]\) (the endpoint for \(\mathcal{P}_{i,j}^{D}\)). When constructing the walks in a top-bottom fashion, one can imagine that the walks from \(\mathcal{P}_{i,j}^{D}\) replace the ones from \(\mathcal{P}_{i,j}^{C}\) in the same position among the other walks. The walks from the families \(\mathcal{P}_{i,j}^{D}\) with \(\mathbf{b}_{j}=0\) reach the vertex \(H_{i}[w_{0}]\) in a monotone order (with respect to \(j\)). Similarly, the walks from \(\mathcal{P}_{i,j}^{D}\) with \(\mathbf{b}_{j}=1\) reach the vertex \(H_{i}[w_{1}]\), however this time we arrange them from left to right in the order of decreasing \(j\). It remains to connect \(H_{i}[z_{j}]\) to \(H_{i}[w_{0}]\) (when \(\mathbf{b}_{j}=0\)) or to \(H_{i}[w_{1}]\) (when \(\mathbf{b}_{j}=1\)) in a non-crossing way. Since we do not need to accommodate any additional \(H_{i}[w_{0}]H_{i}[w_{1}]\)-walks, the value of \(d\) in Definition 5.5 is \(0\) and the desired non-crossing flow exists. The condition (2c) in the definition ensures that the order of walks entering \(w_{0}\) (resp. \(w_{1}\)) in \(H_{i}\) matches the order of walks outside \(H_{i}\).

When \(i\in F\) we begin from constructing a non-crossing \(\mathcal{T}_{\mathbf{b},1}\)-flow in \(H_{i}\): we request \(2^{r}\) walks from \(H_{i}[z_{j}]\) to \(H_{i}[w_{0}]\) (when \(\mathbf{b}_{j}=0\)) or to \(H_{i}[w_{1}]\) (when \(\mathbf{b}_{j}=1\)) and a single \(H_{i}[w_{0}]H_{i}[w_{1}]\)-walk. Because \(i\in F\subseteq\mathcal{S}(\mathbf{b})\) we have \(\mathbf{b}\in Z_{i}^{\mathcal{S}}\). By the definition of an \((r,\gamma^{0},Z_{i}^{\mathcal{S}})\)-VectorContainmentGadget, there exists a non-crossing \(\mathcal{T}_{\mathbf{b},1}\)-flow in \(H_{i}\). We construct the remainders of walks from \(\mathcal{P}_{i,j}^{D}\), as well as walks from \(\mathcal{P}_{i,j}^{C}\), similarly as before. The only difference is that we place an \(H_{i}[w_{0}]s_{i}\)-walk \(W_{i}^{0}\) in between the left-side walks and an \(H_{i}[w_{1}]t_{i}\)-walk \(W_{i}^{1}\) in between the right-side walks, creating an \(s_{i}t_{i}\)-walk as a result. We keep the same relative position of \(W_{i}^{0}\) (resp. \(W_{i}^{1}\)) among the families \(\mathcal{P}_{i,j}^{C}\), \(\mathcal{P}_{i,j}^{D}\) as the position on which it leaves the subgraph \(H_{i}\) (see Figure 21).

Figure 21: Two examples of solutions constructed in Lemma 5.29. The walks from families \(\mathcal{P}^{A\odot}_{i}\) and \(\mathcal{P}^{B}_{i}\) are drawn in black and blue, respectively. The colors red, orange, and green represent walks from families \(\mathcal{P}^{C}_{i,j}\) and \(\mathcal{P}^{D}_{i,j}\) (single color for each \(j\in\{1,2,3\}\)).
In each solution, the choice of which colors go through the left or right passage is fixed for all \(i\in[k]\) because otherwise some edge passing the middle belt (along the blue walks from \(\mathcal{P}^{B}_{i}\)) would be overloaded. The \((s_{i},t_{i})\)-walk is drawn in cyan.

Finally, we check that we have a sufficient number of parallel edges. First consider the edges \(h_{i}^{+}h_{i}^{-}\): there are \(2^{3r+4}+2^{3r+1}+2\) parallel copies. There are \(2^{3r+4}+2^{2r}\) walks in \(\mathcal{P}_{i}^{B}\). Next, for each \(j\in[r]\), exactly one of the families \(\mathcal{P}_{i-1,j}^{C}\), \(\mathcal{P}_{i,j}^{C}\) goes through \(h_{i}^{+}h_{i}^{-}\). These sum up to

\[\sum_{j=1}^{r}2^{2r+j}=2^{2r+1}\cdot(2^{r}-1)=2^{3r+1}-2^{2r+1}.\]

The only additional walks that might go through \(h_{i}^{+}h_{i}^{-}\) are \(W_{i-1}^{1}\) and \(W_{i}^{0}\). In total we obtain at most

\[(2^{3r+4}+2^{2r})+(2^{3r+1}-2^{2r+1})+2=2^{3r+4}+2^{3r+1}-2^{2r}+2\leq 2^{3r+4}+2^{3r+1}+2\]

walks, so the capacity of \(h_{i}^{+}h_{i}^{-}\) suffices. The number of walks passing between a vertex \(h_{i}^{\odot}\) and any ladder is at most \(\sum_{j=1}^{r}2^{2r+j}+1\) (the additive 1 depends on whether \(i\in F\)) which is bounded by \(2^{3r+1}\), the capacity of this passage. The number of walks going from \(H_{i}[w_{0}]\) to \(L_{i}^{-}[v_{1,1}]\) (resp. from \(H_{i}[w_{1}]\) to \(L_{i}^{-}[v_{1,3}]\)) is at most \(r2^{r}+1\leq 2^{2r}\). Within each ladder, every pair of adjacent vertices of the form \(v_{x,y}\) is connected via \(2^{3r+5}\) parallel edges, which upper bounds the total number of walks in \(\mathcal{P}_{i}^{A\odot}\), \(\mathcal{P}_{i,j}^{C}\), \(\mathcal{P}_{i,j}^{D}\), together with \(W_{i}^{0},W_{i}^{1}\). This concludes the proof.
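The capacity bookkeeping above is easy to verify mechanically. Here is a minimal Python sanity check of the three bottlenecks (the helper name is ours; the \(+2\) accounts for the optional walks \(W_{i-1}^{1},W_{i}^{0}\)):

```python
def check_capacities(r):
    # Passage h_i^+ h_i^-: all B-walks, one (C)-family per j in [r], plus two
    # possible (s, t)-walk segments, against its 2^(3r+4)+2^(3r+1)+2 edges.
    c_walks = sum(2 ** (2 * r + j) for j in range(1, r + 1))
    load = (2 ** (3 * r + 4) + 2 ** (2 * r)) + c_walks + 2
    assert load <= 2 ** (3 * r + 4) + 2 ** (3 * r + 1) + 2

    # Passage between h_i and a ladder: all (C)-families of one block plus one.
    assert c_walks + 1 <= 2 ** (3 * r + 1)

    # Entrance of the gadget H_i: all (D)-walks on one side plus one walk.
    assert r * 2 ** r + 1 <= 2 ** (2 * r)

for r in range(1, 16):
    check_capacities(r)
```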
We have thus established the first implication in the proof of correctness. Next, we show that any non-crossing \(\mathcal{T}_{r,k}\)-flow must obey certain properties. They will allow us to prove the second implication: when adding requests encoding a set \(F\subseteq[k]\) results in a satisfiable instance, then \(F\) must be a subset of some \(\mathcal{S}(\mathbf{b})\).

**Lemma 5.30**.: _Consider \(r,k\in\mathbb{N}\) and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\). Let \(\mathcal{H}=H_{1},\ldots,H_{k}\) be arbitrary and \(\mathcal{P}\) be a non-crossing \(\mathcal{T}_{r,k}\)-flow in \(\mathsf{Ring}(r,k,\mathcal{H})\). Then the following hold._

1. _Let_ \(i\in[k]\)_,_ \(\odot\in\{+,-\}\)_, and_ \(W\) _be an_ \((h_{i}^{\odot},h_{i+1}^{\odot})\)_-walk internally contained in_ \(L_{i}^{\odot}\)_. Then_ \(W\) _cannot be non-crossing with_ \(\mathcal{P}_{i}^{A\odot}\)_._
2. _For each_ \(i\in[k]\) _the family_ \(\mathcal{P}_{i}^{B}\) _contains a walk on vertices_ \(\{\widehat{h}_{i}^{+},h_{i}^{+},h_{i}^{-},\widehat{h}_{i}^{-}\}\)_. Moreover, every walk from_ \(\mathcal{P}_{i}^{B}\) _goes through an edge_ \(h_{i}^{+}h_{i}^{-}\)_._
3. _There exists a vector_ \(\mathbf{b}\in\{0,1\}^{r}\) _such that for each_ \(i\in[k]\)_,_ \(j\in[r]\)_, every walk_ \(P\in\mathcal{P}_{i,j}^{D}\) _contains an_ \((H_{i}[w_{0}],H_{i}[z_{j}])\)_-walk in_ \(H_{i}\) _when_ \(\mathbf{b}_{j}=0\)_, or an_ \((H_{i}[w_{1}],H_{i}[z_{j}])\)_-walk in_ \(H_{i}\) _when_ \(\mathbf{b}_{j}=1\)_._

Proof.: Within this proof, whenever we perform addition \(+1\) or subtraction \(-1\) on \(i\in[k]\), we do it modulo \(k\), that is, we adopt the convention that \(k+1=1\) and \(1-1=k\).

Proof of (1).: First we argue that there exists a walk \(Q\in\mathcal{P}_{i}^{A\odot}\) contained entirely within \(L_{i}^{\odot}\). To see this, we count the total number of edges leaving \(L_{i}^{\odot}\); there are at most \(2\cdot 2^{3r+1}+2\cdot 2^{2r}\) of them, which is less than \(|\mathcal{P}_{i}^{A\odot}|=2^{3r+4}\). Therefore, at least one walk from \(\mathcal{P}_{i}^{A\odot}\) never leaves the subgraph \(L_{i}^{\odot}\). Now suppose that \(W,Q\) are edge-disjoint and non-crossing; then \(W\) must go through either \(L_{i}^{\odot}[u_{0}]\) or \(L_{i}^{\odot}[u_{1}]\). But the number of edges incident to each of these vertices equals the number of walks in \(\mathcal{P}_{i}^{A\odot}\). Therefore \(W\) cannot be edge-disjoint with every walk in \(\mathcal{P}_{i}^{A\odot}\); a contradiction.

Proof of (2).: As before, we count the total number of edges leaving the subgraph induced by \(\{\widehat{h}^{+}_{i},h^{+}_{i},h^{-}_{i},\widehat{h}^{-}_{i}\}\): there are \(4\cdot 2^{3r+1}=2^{3r+3}\) of them. Since this number is less than the size of the family \(\mathcal{P}^{B}_{i}\), at least one walk from \(\mathcal{P}^{B}_{i}\) does not leave the vertex set \(\{\widehat{h}^{+}_{i},h^{+}_{i},h^{-}_{i},\widehat{h}^{-}_{i}\}\).

Suppose now that \(P\in\mathcal{P}^{B}_{i}\) does not go through any edge \(h^{+}_{i}h^{-}_{i}\). Then \(P\) goes through the vertex \(h^{+}_{i+1}\) or \(h^{+}_{i-1}\). Assume w.l.o.g. the first scenario, so \(P\) needs to traverse \(L^{+}_{i}\). Then \(P\) contains a subwalk that meets the specification of Part (1) of the lemma. This contradicts the assumption that \(\mathcal{P}\) is a non-crossing flow.

We need two intermediate observations to reach the last claim of the lemma.

**Claim 5.31**.: _For each \(i\in[k]\) there exists a vector \(\mathbf{b}^{i}\in\{0,1\}^{r}\) such that when \(\mathbf{b}^{i}_{j}=0\) then all the walks from \(\mathcal{P}^{C}_{i,j}\) go through an edge \(h^{+}_{i}h^{-}_{i}\) and when \(\mathbf{b}^{i}_{j}=1\) then all the walks from \(\mathcal{P}^{C}_{i,j}\) go through an edge \(h^{+}_{i+1}h^{-}_{i+1}\)._

Proof.: First we argue that for every \(j\in[r]\) each walk \(P\in\mathcal{P}^{C}_{i,j}\) goes either through \(h^{+}_{i}h^{-}_{i}\) or \(h^{+}_{i+1}h^{-}_{i+1}\). The edges \(\widehat{h}^{+}_{i}h^{+}_{i}\) are saturated by the walks from \(\mathcal{P}^{B}_{i}\), so \(P\) cannot visit the vertex \(\widehat{h}^{+}_{i}\). By Part (2), there is a walk \(W\in\mathcal{P}^{B}_{i}\) on vertex set \(\{\widehat{h}^{+}_{i},h^{+}_{i},h^{-}_{i},\widehat{h}^{-}_{i}\}\). Because \(P\) and \(W\) do not cross, \(P\) is blocked from the left by the path \((\widehat{h}^{+}_{i},h^{+}_{i},h^{-}_{i},\widehat{h}^{-}_{i})\). Similarly, \(P\) is blocked from the right by the path \((\widehat{h}^{+}_{i+1},h^{+}_{i+1},h^{-}_{i+1},\widehat{h}^{-}_{i+1})\). Therefore \(P\) must proceed alongside one of these paths.

Suppose now that for some \(j\in[r]\) the family \(\mathcal{P}^{C}_{i,j}\) contains a walk \(W_{0}\) that does not go through any edge \(h^{+}_{i+1}h^{-}_{i+1}\) and a walk \(W_{1}\) that does not go through any edge \(h^{+}_{i}h^{-}_{i}\). By Definition 5.2 of a non-crossing flow, the concatenation \(W_{0}+W_{1}\) does not cross any walk from \(\mathcal{P}^{A+}_{i}\). But then \(W_{0}+W_{1}\) contains a subwalk that meets the specification of Part (1). This contradicts the assumption that \(\mathcal{P}\) is a non-crossing flow. Therefore for each \(j\in[r]\) the choice whether to go via the left passage or the right one is fixed.

Let us keep the variable \(\mathbf{b}^{i}\) to indicate the vector defined in Claim 5.31.
**Claim 5.32**.: _There exists a single vector \(\mathbf{b}\in\{0,1\}^{r}\) so that \(\mathbf{b}^{i}=\mathbf{b}\) for all \(i\in[k]\)._

Proof.: We define \(\tau(b_{1}b_{2}\ldots b_{r})=\sum_{h=1}^{r}b_{h}\cdot 2^{h-1}\). Suppose that the claim does not hold. Because we work on a ring structure, there exists \(i\in[k]\) for which \(\tau(\mathbf{b}^{i})>\tau(\mathbf{b}^{i+1})\). By Claim 5.31 the number of walks from \(\mathcal{P}^{C}_{i,1}\cup\mathcal{P}^{C}_{i,2}\cup\ldots\cup\mathcal{P}^{C}_{i,r}\) that go through an edge \(h^{+}_{i+1}h^{-}_{i+1}\) equals \(2^{2r+1}\cdot\tau(\mathbf{b}^{i})\). On the other hand, the number of walks from \(\mathcal{P}^{C}_{i+1,1}\cup\mathcal{P}^{C}_{i+1,2}\cup\ldots\cup\mathcal{P}^{C}_{i+1,r}\) that go through an edge \(h^{+}_{i+1}h^{-}_{i+1}\) equals \(2^{2r+1}\cdot(2^{r}-1-\tau(\mathbf{b}^{i+1}))\). Since \(\tau(\mathbf{b}^{i})>\tau(\mathbf{b}^{i+1})\), this quantity is at least \(2^{2r+1}\cdot(2^{r}-\tau(\mathbf{b}^{i}))\). In total, we obtain at least \(2^{3r+1}\) walks that go through \(h^{+}_{i+1}h^{-}_{i+1}\). Due to Part (2) of the lemma, all \(2^{3r+4}+2^{2r}\) walks from \(\mathcal{P}^{B}_{i+1}\) also go through \(h^{+}_{i+1}h^{-}_{i+1}\). But there are only \(2^{3r+4}+2^{3r+1}+2\) parallel edges \(h^{+}_{i+1}h^{-}_{i+1}\), which are too few to accommodate all \(2^{3r+4}+2^{3r+1}+2^{2r}\) walks above, and so we arrive at a contradiction.

Proof of (3).: Let \(\mathbf{b}\in\{0,1\}^{r}\) be the vector from Claim 5.32. Consider some \(i\in[k]\) and \(j\in[r]\). When \(\mathbf{b}_{j}=0\) then any walk from \(\mathcal{P}^{C}_{i,j}\) must enter \(L^{-}_{i}\) via \(L^{-}_{i}[v_{r+1,1}]\), that is, the upper left corner, due to Claim 5.31. Since \(|\mathcal{P}^{C}_{i,j}|>2^{2r}\), there is at least one walk in \(\mathcal{P}^{C}_{i,j}\) that does not use any edge \(L^{-}_{i}[v_{1,1}]H_{i}[w_{0}]\) (there are only \(2^{2r}\) such parallel edges). This walk includes an \((h^{-}_{i},L^{-}_{i}[x_{j}])\)-walk internally contained in \(L^{-}_{i}\). Note that both the vertices \(L^{-}_{i}[x_{j}]\), \(L^{-}_{i}[y_{j}]\) lie on the face \(f_{j}\) of \(L^{-}_{i}\) and the number of edges incident to each of them equals the number of walks in \(\mathcal{P}^{C}_{i,j}\), \(\mathcal{P}^{D}_{i,j}\), respectively. Therefore no walk from \(\mathcal{P}^{A-}_{i}\) can visit \(L^{-}_{i}[x_{j}]\) or \(L^{-}_{i}[y_{j}]\).

Consequently, when \(\mathbf{b}_{j}=0\) then each walk \(W\in\mathcal{P}^{D}_{i,j}\) must also stay "on the left" of any walk from \(\mathcal{P}^{A-}_{i}\). The walk \(W\) cannot pass through \(L^{+}_{i}\) or cross the path \((\widehat{h}^{+}_{i},h^{+}_{i},h^{-}_{i},\widehat{h}^{-}_{i})\) by the same argument as in Claim 5.31. Therefore, the only possibility for \(W\) to reach \(H_{i}[z_{j}]\) is to enter \(H_{i}\) through \(H_{i}[w_{0}]\) and utilize some \((H_{i}[w_{0}],H_{i}[z_{j}])\)-walk in \(H_{i}\). The argument for the case \(\mathbf{b}_{j}=1\) is analogous. This concludes the proof of Lemma 5.30.
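The counting in Claim 5.32 can also be confirmed mechanically. A minimal Python sketch (helper names are ours; it ignores the at most two optional \((s_{i},t_{i})\)-walk segments, which would only make the overload worse):

```python
from itertools import product

def tau(b):
    # tau from Claim 5.32: the integer encoded by the left/right choices.
    return sum(bit << h for h, bit in enumerate(b))

def passage_load(r, b_i, b_next):
    # Walks forced through h_{i+1}^+ h_{i+1}^-: the B-walks of block i+1,
    # the (C)-walks routed right by block i, and those routed left by i+1.
    right = sum(2 ** (2 * r + j) for j in range(1, r + 1) if b_i[j - 1] == 1)
    left = sum(2 ** (2 * r + j) for j in range(1, r + 1) if b_next[j - 1] == 0)
    return (2 ** (3 * r + 4) + 2 ** (2 * r)) + right + left

for r in range(1, 5):
    capacity = 2 ** (3 * r + 4) + 2 ** (3 * r + 1) + 2
    for b_i, b_next in product(product((0, 1), repeat=r), repeat=2):
        if tau(b_i) > tau(b_next):        # mismatched patterns overload...
            assert passage_load(r, b_i, b_next) > capacity
        if b_i == b_next:                 # ...while equal patterns fit
            assert passage_load(r, b_i, b_next) <= capacity
```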
Having imposed this structure on any non-crossing \(\mathcal{T}_{r,k}\)-flow, we can finish the correctness proof.

**Lemma 5.33**.: _Consider \(r,k\in\mathbb{N}\) and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\). Assume that the sequence of multigraphs \(\mathcal{H}^{vcg}_{\mathcal{S}}\) exists. For \(F\subseteq[k]\) let \(\mathcal{T}_{F}=\{(s_{j},t_{j},1)\mid j\in F\}\). Suppose that there exists a non-crossing \((\mathcal{T}_{r,k}\cup\mathcal{T}_{F})\)-flow in \(\mathsf{Ring}(r,k,\mathcal{H}^{vcg}_{\mathcal{S}})\). Then there exists \(\mathbf{b}\in\{0,1\}^{r}\) for which \(F\subseteq\mathcal{S}(\mathbf{b})\)._

Proof.: Let \(\mathcal{P}\) be a \(\mathcal{T}_{r,k}\)-flow and \(\mathcal{P}_{F}\) be a \(\mathcal{T}_{F}\)-flow so that \(\mathcal{P}\cup\mathcal{P}_{F}\) is non-crossing in \(\mathsf{Ring}(r,k,\mathcal{H}^{vcg}_{\mathcal{S}})\). We apply Lemma 5.30 to \(\mathcal{P}\); let \(\mathbf{b}\in\{0,1\}^{r}\) be the vector given by Part (3) of the lemma. Fix \(i\in F\). We obtain that when \(\mathbf{b}_{j}=0\) then each walk \(P\in\mathcal{P}^{D}_{i,j}\subseteq\mathcal{P}\) contains an \((H_{i}[w_{0}],H_{i}[z_{j}])\)-walk within \(H_{i}\), and when \(\mathbf{b}_{j}=1\) each walk \(P\in\mathcal{P}^{D}_{i,j}\) contains an \((H_{i}[w_{1}],H_{i}[z_{j}])\)-walk within \(H_{i}\). Therefore, the subwalks of \(\mathcal{P}^{D}_{i,j}\) within \(H_{i}\) satisfy request \((H_{i}[w_{0}],H_{i}[z_{j}],2^{r})\) when \(\mathbf{b}_{j}=0\) or request \((H_{i}[w_{1}],H_{i}[z_{j}],2^{r})\) when \(\mathbf{b}_{j}=1\).

Now consider the \((s_{i},t_{i})\)-walk \(P_{i}\in\mathcal{P}_{F}\). By Lemma 5.30(2), the walk \(P_{i}\) can cross neither the path \((\widehat{h}^{+}_{i},h^{+}_{i},h^{-}_{i},\widehat{h}^{-}_{i})\) nor the path \((\widehat{h}^{+}_{i+1},h^{+}_{i+1},h^{-}_{i+1},\widehat{h}^{-}_{i+1})\). Next, due to Lemma 5.30(1), the walk \(P_{i}\) cannot contain any subwalk that traverses \(L^{+}_{i}\) nor \(L^{-}_{i}\) from left to right. Hence \(P_{i}\) must go through the following vertices:

\[L^{+}_{i}[v_{r+1,1}],L^{+}_{i}[v_{1,1}],L^{-}_{i}[v_{r+1,1}],L^{-}_{i}[v_{1,1}],H_{i}[w_{0}],H_{i}[w_{1}],L^{-}_{i}[v_{1,3}],L^{-}_{i}[v_{r+1,3}],L^{+}_{i}[v_{1,3}],L^{+}_{i}[v_{r+1,3}].\]

Consequently, \(P_{i}\) contains an \((H_{i}[w_{0}],H_{i}[w_{1}])\)-walk contained in \(H_{i}\). Because \(H_{i}\) is an \((r,\gamma^{0},Z^{\mathcal{S}}_{i})\)-VectorContainmentGadget and \(\gamma^{0}(\mathbf{b})=0\), this implies \(\mathbf{b}\in Z^{\mathcal{S}}_{i}\) (Definition 5.5, (2b) \(\Rightarrow\) (2a)). The argument above works for every \(i\in F\), and so the definition of \(Z^{\mathcal{S}}_{i}\) implies that \(i\in\mathcal{S}(\mathbf{b})\) whenever \(i\in F\).

Lemmas 5.29 and 5.33 imply that if the \((r,\gamma^{0},Z^{\mathcal{S}}_{i})\)-VectorContainmentGadgets existed, then indeed \((\mathsf{Ring}(r,k,\mathcal{H}^{vcg}_{\mathcal{S}}),\mathcal{T}_{r,k})\) would form an \((r,k,\mathcal{S})\)-SubsetGadget.

#### 5.3.2 Dynamic flow generators

We will now get rid of the unrealistic assumption that an \((r,\gamma^{0},Z^{\mathcal{S}}_{i})\)-VectorContainmentGadget exists. The construction from the previous section could be easily extended to a setting where the function \(\gamma\) is constant for fixed \(r\), i.e., \(\gamma_{r}(\mathbf{b})=f(r)\) for some function \(f\). One could then simply insert additional requests of the form \((s_{i},t_{i},f(r))\) to generate this many additional units of flow to be pushed through each vector-containment gadget. The real issue is that Proposition 5.26 provides us with a gadget governed by the following function

\[\widehat{\gamma}_{r}(b_{1}b_{2}\ldots b_{r})=r\cdot 2^{r}+1+\sum_{1\leq p<q\leq r}1_{[b_{p}\neq b_{q}]}\cdot 2^{r-q+p-1}.\]

This means that the amount of additional flow passing through the vector-containment gadget must depend on the pattern encoded by the bit vector \(\mathbf{b}=b_{1}b_{2}\ldots b_{r}\).
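The key point is that this dependence is additive over bit pairs, which is exactly what the generator blocks below exploit. A minimal Python sketch (reusing gamma from the earlier snippet; all names are ours):

```python
def gamma_hat(b):
    # gamma_hat_r from Proposition 5.26, with k renamed to r as in the text.
    r = len(b)
    return r * 2 ** r + 1 + sum(2 ** (r - q + p - 1)
                                for p in range(1, r + 1)
                                for q in range(p + 1, r + 1)
                                if b[p - 1] != b[q - 1])

from itertools import product
for r in range(1, 7):
    for b in product((0, 1), repeat=r):
        # The pairwise sum coincides with gamma_r(b) from Definition 5.13
        # (only the roles of the indices are swapped), so generator blocks
        # (i, p, q) contributing 2^(r-q+p-1) units exactly when bits p and q
        # differ realize gamma_hat on top of the constant part r * 2^r + 1.
        assert gamma_hat(b) == r * 2 ** r + gamma(b) + 1
```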
We will take advantage of the special form of the function \(\widehat{\gamma}_{r}\) to extend the previous construction with "dynamic flow generators": new requests that could be satisfied either locally, within their ladder, or via walks passing through \(H_{i}\). We are going to insert \(\binom{r}{2}\) new blocks between each pair of blocks in the ring structure. Using the pattern propagation mechanism, we will guarantee that the new block inserted after the \(i\)-th one, labeled with a triple \((i,p,q)\), generates \(2^{r-q+p-1}\) additional units of flow exactly when the \(p\)-th bit and the \(q\)-th bit in the pattern differ, matching the formula for \(\widehat{\gamma}_{r}\). The extended ring. Similarly to before, for \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\) and \(i\in[k]\) we define \(Z^{\mathcal{S}}_{i}\subseteq\{0,1\}^{r}\) as the set of vectors \(\mathbf{b}\) for which \(i\in\mathcal{S}(\mathbf{b})\). For \(i\in[k]\) let \(H_{i}\) be the \((r,\widehat{\gamma}_{r},Z^{\mathcal{S}}_{i})\)-Vector Containment Gadget provided by Proposition 5.26. We construct the graph \(\mathsf{ExRing}(r,k,\mathcal{S})\) by extending the building blocks of \(\mathsf{Ring}(r,k,\mathcal{H})\) from the previous construction (see Figure 22). Since the family \(H_{1},\ldots,H_{k}\) is now fixed for given \(\mathcal{S}\), we directly pass \(\mathcal{S}\) as a parameter of the construction. We reuse the notion of \(r\)-ladder from Section 5.3.1. Let \(\Gamma_{r}\) be the set of pairs \((a,b)\in[r]^{2}\) with \(1\leq a<b\leq r\), plus one special element \(\bot\). We have \(|\Gamma_{r}|=\binom{r}{2}+1\). We define an ordering on \(\Gamma_{r}\) so that \(\bot\) is the smallest element and the pairs are ordered lexicographically. For \(i\in[k]\) and \(q\in\Gamma_{r}\) we start constructing the plane multigraph \(R_{i,q}\) from two copies of an \(r\)-ladder, \(L^{+}_{i,q},L^{-}_{i,q}\). For \(\odot\in\{+,-\}\) we duplicate the edges incident to \(L^{\odot}_{i,q}[u_{0}],L^{\odot}_{i,q}[u_{1}]\) times \(2^{3r+4}\). For \(j\in[r]\) we duplicate the edge incident to \(L^{-}_{i,q}[x_{j}]\) times \(2^{2r+j}\). We duplicate the edge incident to \(L^{-}_{i,\bot}[y_{j}]\) (only for \(q=\bot\)) times \(2^{r}\), for all \(j\in[r]\). When \(q=(a,b)\) for some \(1\leq a<b\leq r\), we duplicate the edges incident to \(L^{-}_{i,q}[y_{a}]\), \(L^{-}_{i,q}[y_{b}]\) times \(2^{r-b+a-1}\). Next, for each \(i\in[k]\) and \(q\in\Gamma_{r}\) we create four vertices: \(h^{+}_{i,q},\widehat{h}^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q}\). For \(\odot\in\{+,-\}\) we insert \(2^{3r+4}+2^{2r+1}\) parallel edges \(h^{\odot}_{i,q}\widehat{h}^{\odot}_{i,q}\) and \(2^{3r+4}+2^{3r+1}+2^{2r}\) parallel edges \(h^{+}_{i,q}h^{-}_{i,q}\). Note that this is different from the previous construction where the third summand was just \(2\). Next, we put \(2^{3r+1}\) parallel edges between \(h^{+}_{i,q}\) and \(L^{+}_{i,q}[v_{1,1}]\) (the bottom-left corner vertex of the upper ladder), and \(2^{3r+1}\) parallel edges between \(h^{-}_{i,q}\) and \(L^{-}_{i,q}[v_{r+1,1}]\) (the upper-left corner vertex of the lower ladder). The arrangement of the vertices on the plane is presented in Figure 22. We connect \(H_{i}[w_{0}]\) (resp. \(H_{i}[w_{1}]\)) to \(L^{-}_{i,\bot}[v_{1,1}]\) (resp. \(L^{-}_{i,\bot}[v_{1,3}]\)) (the bottom corners of the lower ladder) via \(2^{2r}\) parallel edges. 
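For orientation: for \(r=3\) the index set is \(\Gamma_{3}=\{\bot,(1,2),(1,3),(2,3)\}\), ordered \(\bot<(1,2)<(1,3)<(2,3)\), so every block of the ring consists of \(|\Gamma_{3}|=4\) consecutive subgraphs \(R_{i,\bot},R_{i,(1,2)},R_{i,(1,3)},R_{i,(2,3)}\); this is precisely the fragment depicted in Figure 22.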
We create vertices \(s_{i},t_{i}\) and connect each of them with \(r\cdot 2^{r}+2\) parallel edges to \(L^{+}_{i,\bot}[v_{r+1,1}]\) or \(L^{+}_{i,\bot}[v_{r+1,3}]\), respectively. These steps are omitted for \(q\neq\bot\). As before, we arrange the multigraphs \(R_{i,q}\) into a ring. We consider the lexicographic order on the set \([k]\times\Gamma_{r}\). For each \(i\in[k]\) and \(q\in\Gamma_{r}\) let \((i^{\rightarrow},q^{\rightarrow})\) denote the successor of \((i,q)\) in this order. When \((i,q)\) is the last element in \([k]\times\Gamma_{r}\), then \((i^{\rightarrow},q^{\rightarrow})\) becomes the first element, i.e., \((1,\bot)\). We insert \(2^{3r+1}\) parallel edges between \(L^{+}_{i,q}[v_{1,3}]\) and \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\), as well as between \(L^{-}_{i,q}[v_{r+1,3}]\) and \(h^{-}_{i^{\rightarrow},q^{\rightarrow}}\). The constructed ring encloses a bounded region incident to the minus-sides of the multigraphs \(R_{i,q}\). The last step is novel compared to the previous construction. For each \(i\in[k]\) and \(q\in\Gamma_{r}\), \(q\neq\bot\), we create vertices \(g^{+}_{i,q},g^{-}_{i,q}\), connected via \(r\cdot 2^{r}\) parallel edges to \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\) or \(h^{-}_{i,q}\), respectively. The new edges incident to \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\) (resp. \(h^{-}_{i,q}\)) are located between the edges to \(L^{+}_{i,q}[v_{1,3}]\) and \(\widehat{h}^{+}_{i^{\rightarrow},q^{\rightarrow}}\) (resp. between \(\widehat{h}^{-}_{i,q}\) and \(L^{-}_{i,q}[v_{1,1}]\)). For \(\odot\in\{+,-\}\) we connect \(g^{\odot}_{i,q}\) via \(r\cdot 2^{r}\) parallel edges to its predecessor in the ordering given by \(\Gamma_{r}\), unless \(q=(1,2)\) (i.e., \(q\) is first in the ordering). The vertex \(g^{+}_{i,(1,2)}\) gets connected to \(h^{+}_{i,(1,2)}\) via \(r\cdot 2^{r}\) parallel edges while \(g^{-}_{i,(1,2)}\) gets connected to \(h^{-}_{i,\perp}\) via \(r\cdot 2^{r}\) parallel edges, in an analogous way as before.

Figure 22: A fragment of the graph \(\mathsf{ExRing}(r,k,\mathcal{S})\) for \(r=3\), comprising subgraphs \(R_{i,\perp}\), \(R_{i,(1,2)}\), \(R_{i,(1,3)}\), \(R_{i,(2,3)}\), \(R_{i+1,\perp}\). The edges incident to vertices of the form \(g_{i,q}^{\odot}\), which are not present in the previous construction, are dotted. The gray ovals in the bottom represent the subgraphs \(H_{i}\), \(H_{i+1}\) (the vector-containment gadgets). The vertices' names and edges' capacities (i.e., numbers of parallel edges) are provided. The pairs of vertices that need to be connected in a \(\widehat{\mathcal{T}}_{r,k}\)-flow share common colors and shapes. The colorful letters indicate the requests' types. The number of walks requested between each terminal pair is given on a colorful background. In each ladder of the form \(L_{i,(a,b)}^{-}\) the faces \(f_{a},f_{b}\) are highlighted. Note that the lower part of the figure becomes the interior of the ring structure, so the vertices \(s_{i},t_{i}\) end up on the outer face.

The requests. We define a family \(\widehat{\mathcal{T}}_{r,k}\) of requests over \(\mathsf{ExRing}(r,k,\mathcal{S})\). The first four groups are analogous to those from Section 5.3.1. 1. (\(L[u_{0}]\), \(L[u_{1}]\), \(2^{3r+4}\)) for each ladder \(L\) of the form \(L^{+}_{i,q},L^{-}_{i,q}\). 2. (\(\widehat{h}^{+}_{i,q}\), \(\widehat{h}^{-}_{i,q}\), \(2^{3r+4}+2^{2r+1}\)) for each \(i\in[k]\), \(q\in\Gamma_{r}\). 3. (\(L^{+}_{i,q}[x_{j}]\), \(L^{-}_{i,q}[x_{j}]\), \(2^{2r+j}\)) for each \(i\in[k]\), \(q\in\Gamma_{r}\), \(j\in[r]\). 4. 
(\(L^{-}_{i,\perp}[y_{j}]\), \(H_{i}[z_{j}]\), \(2^{r}\)) for each \(i\in[k]\), \(j\in[r]\). 5. (\(L^{-}_{i,(a,b)}[y_{a}]\), \(L^{-}_{i,(a,b)}[y_{b}]\), \(2^{r-b+a-1}\)) for each \(i\in[k]\), \(1\leq a<b\leq r\). 6. (\(s_{i}\), \(t_{i}\), \(r\cdot 2^{r}+1\)) for each \(i\in[k]\). For a \(\widehat{\mathcal{T}}_{r,k}\)-flow \(\mathcal{P}\) we use variables \(\mathcal{P}^{A+}_{i,q}\), \(\mathcal{P}^{A-}_{i,q}\), \(\mathcal{P}^{B}_{i,q}\), \(\mathcal{P}^{C}_{i,q,j}\), \(\mathcal{P}^{D}_{i,j}\), \(\mathcal{P}^{E}_{i,a,b}\), \(\mathcal{P}^{F}_{i}\) to refer to subfamilies of \(\mathcal{P}\) satisfying the respective types of requests. We make note of an observation analogous to Observation 5.28 from the previous construction. **Observation 5.34**.: _For every vertex \(v\) of the form \(\widehat{h}^{\odot}_{i,q}\), \(L^{\odot}_{i,q}[u_{j}]\), \(L^{\odot}_{i,q}[x_{j}]\), \(L^{-}_{i,\perp}[y_{j}]\), \(L^{-}_{i,(a,b)}[y_{a}]\), \(L^{-}_{i,(a,b)}[y_{b}]\), the number of edges incident to \(v\) equals the number of walks in a \(\widehat{\mathcal{T}}_{r,k}\)-flow that have an endpoint at \(v\)._ In order to keep the calculations as clean as possible, we will neglect small values of \(r\) (for which we will be able to solve the instance we reduce from in polynomial time) and work in the setting where the following convenient inequalities hold. They compare the maximal amount of flow that needs to go through particular edges with these edges' capacities. **Lemma 5.35**.: _For \(r\geq 6\), \(k\geq 1\), and a \(\widehat{\mathcal{T}}_{r,k}\)-flow \(\mathcal{P}\), the following bounds hold for each fixed \(i\in[k]\), \(q\in\Gamma_{r}\), and \(\odot\in\{+,-\}\), with summation over all \(q^{\prime}\in\Gamma_{r}\) and \(j\in[r]\)._ 1. \(\sum|\mathcal{P}^{E}_{i,q^{\prime}}|\leq r\cdot 2^{r}\) 2. \(|\mathcal{P}^{B}_{i,q}|+\sum|\mathcal{P}^{C}_{i,q,j}|+4\cdot(\sum|\mathcal{P}^{E}_{i,q^{\prime}}|+|\mathcal{P}^{F}_{i}|+1)\leq 2^{3r+4}+2^{3r+1}+2^{2r}\) 3. \(\sum|\mathcal{P}^{D}_{i,j}|+\sum|\mathcal{P}^{E}_{i,q^{\prime}}|+|\mathcal{P}^{F}_{i}|+1\leq 2^{2r}\) 4. \(|\mathcal{P}^{A\odot}_{i,q}|+\sum|\mathcal{P}^{C}_{i,q,j}|+\sum|\mathcal{P}^{D}_{i,j}|+4\cdot(\sum|\mathcal{P}^{E}_{i,q^{\prime}}|+|\mathcal{P}^{F}_{i}|+1)\leq 2^{3r+5}\) 5. \(\sum|\mathcal{P}^{C}_{i,q,j}|+2\cdot(\sum|\mathcal{P}^{E}_{i,q^{\prime}}|+|\mathcal{P}^{F}_{i}|+1)\leq 2^{3r+1}\) Proof.: (Part 1.) We estimate \[\sum_{1\leq a<b\leq r}|\mathcal{P}^{E}_{i,(a,b)}|=\sum_{1\leq a<b\leq r}2^{r-b+a-1}=\sum_{b=1}^{r}\left(2^{r-b}\cdot\sum_{a=1}^{b-1}2^{a-1}\right)\leq\sum_{b=1}^{r}2^{r-b}\cdot 2^{b}=r\cdot 2^{r}.\] (Part 2.) Using the bound above, we obtain \[\sum_{1\leq a<b\leq r}|\mathcal{P}^{E}_{i,(a,b)}|+|\mathcal{P}^{F}_{i}|+1\leq r\cdot 2^{r+1}+2.\] Starting from \(r=6\) the right-hand side becomes bounded by \(2^{2r-2}\). By multiplying this by \(4\) we obtain \(2^{2r}\). It remains to inspect the remaining summands. \[|\mathcal{P}^{B}_{i,q}|+\sum_{j=1}^{r}|\mathcal{P}^{C}_{i,q,j}|=(2^{3r+4}+2^{2r+1})+\sum_{j=1}^{r}2^{2r+j}=(2^{3r+4}+2^{2r+1})+(2^{3r+1}-2^{2r+1})=2^{3r+4}+2^{3r+1}.\] (Part 3.) We have already established that \(\sum_{1\leq a<b\leq r}|\mathcal{P}^{E}_{i,(a,b)}|+|\mathcal{P}^{F}_{i}|+1\leq r\cdot 2^{r+1}+2\leq 2^{2r-2}\). From the second inequality we can derive \(\sum_{j=1}^{r}|\mathcal{P}^{D}_{i,j}|=r\cdot 2^{r}\leq 2^{2r-2}\). Summing these two terms leads to the claimed bound. (Part 4.) We have \(|\mathcal{P}^{A\odot}_{i,q}|=2^{3r+4}\) and \(\sum_{j=1}^{r}|\mathcal{P}^{C}_{i,q,j}|\leq 2^{3r+1}\). 
From the previous calculations we get \(\sum_{j=1}^{r}|\mathcal{P}^{D}_{i,j}|\leq 2^{2r-2}\) and the last term is bounded by \(2^{2r}\). In total, these numbers clearly cannot exceed \(2^{3r+5}\). (Part 5.) We have \(\sum_{j=1}^{r}|\mathcal{P}^{C}_{i,q,j}|=2^{3r+1}-2^{2r+1}\) while the second term can be at most \(2^{2r}\). The bound follows. Recall that \((i^{\rightarrow},q^{\rightarrow})\) stands for the successor of \((i,q)\) in the cyclic ordering of \([k]\times\Gamma_{r}\); we will also denote by \((i^{\leftarrow},q^{\leftarrow})\) the predecessor of \((i,q)\). We are going to show that \((\mathsf{ExRing}(r,k,\mathcal{S}),\widehat{\mathcal{T}}_{r,k})\) forms a truly working \((r,k,\mathcal{S})\)-\(\mathsf{Subset}\mathsf{Gadget}\). We move on to the first implication in the proof of correctness. The sketch of the construction is provided in Figures 23, 24, and 6 (on page 16) but we need to check (in a rather tedious way) that the edge capacities suffice to accommodate the flow. **Lemma 5.36**.: _Consider \(r\geq 6\), \(k\geq 1\), and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\). For \(F\subseteq[k]\) let \(\mathcal{T}_{F}=\{(s_{j},t_{j},1)\mid j\in F\}\). If \(F\subseteq\mathcal{S}(\mathbf{b})\) for some \(\mathbf{b}\in\{0,1\}^{r}\), then there exists a non-crossing \((\widehat{\mathcal{T}}_{r,k}\cup\mathcal{T}_{F})\)-flow in \(\mathsf{ExRing}(r,k,\mathcal{S})\)._ Proof.: We deal with the requests of types A, B, C, D similarly to Lemma 5.29. Again, we begin by describing which vertices are being visited by each walk and later we check that the edge capacities are large enough to receive all the walks. For each \(i\in[k]\), \(q\in\Gamma_{r}\), and \(\odot\in\{+,-\}\), we consider a path \(P\) that traverses the ladder \(L^{\odot}_{i,q}\) in such a way that the face \(f_{j}\) is to the right of \(P\) exactly when \(\mathbf{b}_{j}=1\). Each walk from \(\mathcal{P}^{A\odot}_{i,q}\) traverses \(L^{\odot}_{i,q}\) through the same vertices as \(P\). Every walk in the family \(\mathcal{P}^{B}_{i,q}\) is of the form \((\widehat{h}^{+}_{i,q},h^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q})\). Now consider \(i\in[k]\), \(q\in\Gamma_{r}\), and \(j\in[r]\). When \(\mathbf{b}_{j}=0\) then all the walks from the family \(\mathcal{P}^{C}_{i,q,j}\) go through an edge \(h^{+}_{i,q}h^{-}_{i,q}\) and otherwise they go through \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}h^{-}_{i^{\rightarrow},q^{\rightarrow}}\). When \(\mathbf{b}_{j}=0\) then all the walks from the family \(\mathcal{P}^{D}_{i,j}\) go through an edge \(L^{-}_{i,\perp}[v_{1,1}]H_{i}[w_{0}]\) and otherwise through \(L^{-}_{i,\perp}[v_{1,3}]H_{i}[w_{1}]\). Let \(d_{i}=\widehat{\gamma}_{r}(\mathbf{b})+1_{[i\in\mathcal{S}(\mathbf{b})]}\). Note that the condition \(i\in\mathcal{S}(\mathbf{b})\) is equivalent to \(\mathbf{b}\in Z^{\mathcal{S}}_{i}\). We define family \(\mathcal{T}_{i}\) following Definition 5.5: for each \(j\in[r]\) with \(\mathbf{b}_{j}=0\) we add request \((H_{i}[w_{0}],H_{i}[z_{j}],2^{r})\), for each \(j\in[r]\) with \(\mathbf{b}_{j}=1\) we add request \((H_{i}[w_{1}],H_{i}[z_{j}],2^{r})\), and finally we add request \((H_{i}[w_{0}],H_{i}[w_{1}],d_{i})\). From the definition of an \((r,\widehat{\gamma}_{r},Z^{\mathcal{S}}_{i})\)-\(\mathsf{Vector}\mathsf{Containment}\mathsf{Gadget}\), we obtain that there exists a non-crossing \(\mathcal{T}_{i}\)-flow \(\mathcal{P}^{H}_{i}\) in \(H_{i}\). 
Moreover, in this flow the vertices \(H_{i}[w_{0}]\), \(H_{i}[w_{1}]\) see the vertices \(H_{i}[z_{j}]\) in the order consistent with the ordering of families \(\mathcal{P}^{D}_{i,j}\) on the edges leaving \(H_{i}\) (recall Definition 5.4). However, we have no control over how the \((H_{i}[w_{0}],H_{i}[w_{1}])\)-walks in \(\mathcal{P}^{H}_{i}\) are intertwined with the other walks at \(H_{i}[w_{0}]\) and at \(H_{i}[w_{1}]\). Therefore, we will adjust the routing of the remaining walks to fit between the families \(\mathcal{P}^{D}_{i,j}\) in the same fashion.

Figure 23: An illustration for the proof of Lemma 5.36. For a less detailed version with only walks of types (E, F) see Figure 6. Due to the abundance of different walks in the flow, the walks are only roughly sketched with the colorful curves. Their colors represent the types of requests: black (A), blue (B), green (C, D), orange (C, D), red (C, D), and purple (E, F). For \(a=1,b=2\) the request of type (E) cannot be satisfied within the ladder \(L_{1,2}^{-}\) because the vertices \(L_{1,2}^{-}[y_{1}]\), \(L_{1,2}^{-}[y_{2}]\) lie on different sides of the black curve (the flow for a request of type (A)). The same applies to \(a=2,b=3\). The respective flow must use the lower dotted edges to reach the subgraph \(R_{i,\perp}\), traverse the subgraph \(H_{i}\), and proceed through the upper dotted edges. The general strategy of bundling the walks on parallel edges stays the same as in Figure 21, while the detailed view on how the purple walks are routed is provided in Figure 24.

We move on to the requests of type (E). Consider \(i\in[k]\) and \(1\leq a<b\leq r\). If \(\mathbf{b}_{a}=\mathbf{b}_{b}\) then the faces \(f_{a},f_{b}\) of the ladder \(L^{-}_{i,(a,b)}\) are on the same side of the walks from family \(\mathcal{P}^{A-}_{i,(a,b)}\). The walks in family \(\mathcal{P}^{E}_{i,(a,b)}\) must connect \(L^{-}_{i,(a,b)}[y_{a}]\) to \(L^{-}_{i,(a,b)}[y_{b}]\). The only issue is to avoid crossing the walks from families \(\mathcal{P}^{C}_{i,(a,b),j}\), \(j\in[r]\), which also need to reach vertices within \(L^{-}_{i,(a,b)}\). Observe that both vertices \(L^{-}_{i,(a,b)}[y_{a}]\), \(L^{-}_{i,(a,b)}[y_{b}]\) can be reached from \(L^{-}_{i,(a,b)}[v_{1,1}]\) (when \(\mathbf{b}_{a}=\mathbf{b}_{b}=0\)) or from \(L^{-}_{i,(a,b)}[v_{1,3}]\) (when \(\mathbf{b}_{a}=\mathbf{b}_{b}=1\)) without crossing the other walks. Hence every walk in \(\mathcal{P}^{E}_{i,(a,b)}\) can be obtained via a concatenation of two such walks. Suppose now that \(\mathbf{b}_{a}\neq\mathbf{b}_{b}\) and assume w.l.o.g. that \(\mathbf{b}_{a}=0\) and \(\mathbf{b}_{b}=1\). First, one can reach \(L^{-}_{i,(a,b)}[v_{r+1,1}]\) from \(L^{-}_{i,(a,b)}[y_{a}]\) without crossing other walks, and then one can move to \(h^{-}_{i,(a,b)}\). Due to the ordering of edges incident to \(h^{-}_{i,(a,b)}\), the walks can proceed "down" to \(g^{-}_{i,(a,b)}\) and then follow the "lower" path from \(g^{-}_{i,(a,b)}\) to \(h^{-}_{i,\perp}\) (see Figure 23). Looking from the other end, starting at \(L^{-}_{i,(a,b)}[y_{b}]\), one first reaches \(L^{-}_{i,(a,b)}[v_{r+1,3}]\), then \(h^{-}_{i^{\rightarrow},q^{\rightarrow}}\), \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\), \(g^{+}_{i,(a,b)}\), and follows the "upper" path towards \(h^{+}_{i,(1,2)}\) (the vertex just to the right of the subgraph \(R_{i,\perp}\)). 
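As a worked example matching the setting of Figure 24, take \(r=4\) and \(\mathbf{b}=1010\). The pairs with \(\mathbf{b}_{a}=\mathbf{b}_{b}\), namely \((1,3)\) and \((2,4)\), are resolved inside their own ladders, whereas the pairs \((1,2),(2,3),(3,4),(1,4)\) must send their flow through \(H_{i}\), in the total amount of \[2^{4-2+1-1}+2^{4-3+2-1}+2^{4-4+3-1}+2^{4-4+1-1}=4+4+4+1=13\] units.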
So far we have explained how to reach the left side of the subgraph \(R_{i,\perp}\) from each \(L^{-}_{i,(a,b)}[y_{a}]\) with \(\mathbf{b}_{a}=0\) and how to reach the right side of \(R_{i,\perp}\) from each \(L^{-}_{i,(a,b)}[y_{b}]\) with \(\mathbf{b}_{b}=1\). The total amount of flow from families \(\mathcal{P}^{E}_{i,(a,b)}\) that needs to traverse \(R_{i,\perp}\) equals \[\sum_{1\leq a<b\leq r}1_{[\mathbf{b}_{a}\neq\mathbf{b}_{b}]}\cdot 2^{r-b+a-1}.\] We also need to take care of the request \((s_{i},t_{i},r\cdot 2^{r}+1)\) and, when \(i\in\mathcal{S}(\mathbf{b})\), the request \((s_{i},t_{i},1)\) from \(\mathcal{T}_{F}\). In total, we need to push exactly \(d_{i}\) units of flow through \(R_{i,\perp}\), from left to right. The flow \(\mathcal{P}^{H}_{i}\) in \(H_{i}\) already includes this many \((H_{i}[w_{0}],H_{i}[w_{1}])\)-walks. It remains to group the walks coming from the left side (that is, from \(s_{i}\) and \(h^{-}_{i,\perp}\)) into bundles, reflecting the bundles of \((H_{i}[w_{0}],H_{i}[w_{1}])\)-walks in \(\mathcal{P}^{H}_{i}\) between the \((H_{i}[w_{0}],H_{i}[z_{j}])\)-walks at vertex \(H_{i}[w_{0}]\), and accommodate these bundles respectively between the families \(\mathcal{P}^{C}_{i,\perp,j}\). Similarly, we group the walks coming from the right side (that is, from \(t_{i}\) and \(h^{+}_{i,(1,2)}\)) according to the bundles of \((H_{i}[w_{0}],H_{i}[w_{1}])\)-walks in \(\mathcal{P}^{H}_{i}\) between the \((H_{i}[w_{1}],H_{i}[z_{j}])\)-walks at vertex \(H_{i}[w_{1}]\). The order of walks in these two families is symmetric, so one can match the requested endpoints (see Figure 24). Finally, we check that the edge capacities are sufficient to accommodate all the walks. The edges incident to vertices of the form \(g^{\odot}_{i,q}\) have capacity \(r\cdot 2^{r}\). In an extreme case, such an edge might be utilized by all families \(\mathcal{P}^{E}_{i,(a,b)}\) for \(1\leq a<b\leq r\) and fixed \(i\). The total number of walks in these families is bounded by \(r\cdot 2^{r}\) due to Lemma 5.35(1). Now consider the edges \(h^{+}_{i,q}h^{-}_{i,q}\): there are \(2^{3r+4}+2^{3r+1}+2^{2r}\) copies of each. In our construction each walk from family \(\mathcal{P}^{B}_{i,q}\) uses one of these edges. For each \(j\in[r]\), exactly one of the families \(\mathcal{P}^{C}_{i^{\leftarrow},q^{\leftarrow},j}\), \(\mathcal{P}^{C}_{i,q,j}\) goes through \(h^{+}_{i,q}h^{-}_{i,q}\). This gives a number of walks equal to \(\sum_{j=1}^{r}|\mathcal{P}^{C}_{i,q,j}|\). We might also need to accommodate the walks of types (E), (F), and the \((s_{i},t_{i})\)-walk requested by \(\mathcal{T}_{F}\). Each of these walks might go at most twice through \(h^{+}_{i,q}h^{-}_{i,q}\). For \(q=\perp\) there might be also walks indexed with \((i^{\leftarrow},q^{\leftarrow})\) that traverse \(h^{+}_{i,q}h^{-}_{i,q}\) to the left of family \(\mathcal{P}^{B}_{i,q}\). We upper bound the number of all these walks by multiplying \(\sum_{1\leq a<b\leq r}|\mathcal{P}^{E}_{i,(a,b)}|+|\mathcal{P}^{F}_{i}|+1\) by 4. In total we get a quantity bounded by \(2^{3r+4}+2^{3r+1}+2^{2r}\), due to Lemma 5.35(2). The edges of the form \(H_{i}[w_{0}]L^{-}_{i,\perp}[v_{1,1}]\) or \(H_{i}[w_{1}]L^{-}_{i,\perp}[v_{1,3}]\) have capacity \(2^{2r}\). Such an edge might be used by all families \(\mathcal{P}^{D}_{i,j}\), \(\mathcal{P}^{E}_{i,(a,b)}\), \(\mathcal{P}^{F}_{i}\), for fixed \(i\), and the \((s_{i},t_{i})\)-walk requested in \(\mathcal{T}_{F}\). By Lemma 5.35(3), the total number of these walks is at most \(2^{2r}\). 
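As a quick sanity check of the invoked bound for the smallest admissible value \(r=6\): we have \(\sum_{j}|\mathcal{P}^{D}_{i,j}|=r\cdot 2^{r}=384\), \(\sum_{q^{\prime}}|\mathcal{P}^{E}_{i,q^{\prime}}|\leq r\cdot 2^{r}=384\), and \(|\mathcal{P}^{F}_{i}|+1=r\cdot 2^{r}+2=386\), so the number of walks in question is at most \(1154\), comfortably below the capacity \(2^{2r}=4096\).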
For each \(i\in[k]\), \(q\in\Gamma_{r}\), and \(\odot\in\{+,-\}\), the edges within the ladder \(L^{\odot}_{i,q}\), which are not adjacent to any terminal, might need to accommodate all the respective families of types (A), (C), (D), (E), (F), and the \((s_{i},t_{i})\)-walk requested by \(\mathcal{T}_{F}\). The walks of the last three types might traverse up to four copies of a single edge: by going "up" and "down" on each of two sides of the ladder. We use Lemma 5.35(4) to bound the total number of necessary parallel edges by \(2^{3r+5}\), that is, the number of copies for each edge. The last non-trivial case is the passage between a vertex of the form \(h^{\odot}_{i,q}\) and a ladder. The edges therein are utilized by the walks of type (C) for a fixed pair \((i,q)\) and possibly the walks of types (E), (F) plus the \((s_{i},t_{i})\)-walk requested by \(\mathcal{T}_{F}\). We multiply by 2 the total amount of flow from the last three types to cover potential detours of walks. By Lemma 5.35(5), we need no more than \(2^{3r+1}\) parallel edges, which is exactly the capacity. This concludes the construction of a non-crossing \((\widehat{\mathcal{T}}_{r,k}\cup\mathcal{T}_{F})\)-flow. We move on to the second implication in the correctness proof. It will be convenient to formally define a _pattern_ of a walk \(Q\in\mathcal{P}^{A\odot}_{i,q}\). We do not use the term "homotopy" in order to avoid confusion with Definition 5.10.

Figure 24: A topological view on the construction from Lemma 5.36 for \(r=4\) and \(\mathbf{b}=(1010)\), illustrating walks traversing the vector-containment gadget \(H_{i}\). The length of the red dashed curve, measuring how many \((H_{i}[w_{0}],H_{i}[w_{1}])\)-walks can pass through the gadget, is governed by the choice of the vector \(\mathbf{b}\). The walks of types (C) and (D) are drawn in black, with a single curve representing each family \(\mathcal{P}^{C}_{i,\perp,j}\) or \(\mathcal{P}^{D}_{i,j}\). The labels \(x_{j},y_{j}\) mark vertices \(L^{-}_{i,\perp}[x_{j}]\), \(L^{-}_{i,\perp}[y_{j}]\). For \((a,b)\in\{(1,3),(2,4)\}\) the vertices \(L^{-}_{i,(a,b)}[y_{a}]\), \(L^{-}_{i,(a,b)}[y_{b}]\) lie on the same side of the pattern drawn by walks of type (A), so they can be connected within the ladder \(L^{-}_{i,(a,b)}\). For the remaining pairs \((a,b)\), the flow \(\mathcal{P}^{E}_{i,(a,b)}\) must go through the subgraph \(H_{i}\). Each such family, as well as the family of \((s_{i},t_{i})\)-walks, is represented by a single curve, except for \(\mathcal{P}^{E}_{i,(3,4)}\). In that last case, two walks are drawn to demonstrate that even walks satisfying a single request may exhibit different behavior when traversing \(H_{i}\) (they pass vertex \(H_{i}[z_{4}]\) from different sides). The coloring of the curves illustrates that all the requests can be satisfied in a non-crossing way.

**Definition 5.37**.: _Consider \(i\in[k]\), \(q\in\Gamma_{r}\), and \(\odot\in\{+,-\}\). Let \(Q\) be an \((L^{\odot}_{i,q}[u_{0}],L^{\odot}_{i,q}[u_{1}])\)-walk in \(\mathsf{ExRing}(r,k,\mathcal{S})\) and \(\mathbf{b}\in\{0,1\}^{r}\). We say that \(Q\) has \(\operatorname{pattern}\)\((i,q,\odot,\mathbf{b})\) if \(Q\) is contained in the subgraph \(L^{\odot}_{i,q}\) and for each \(j\in[r]\) there exists a walk \(W_{j}\) such that:_ 1. \(W_{j}\) _starts at_ \(L^{\odot}_{i,q}[x_{j}]\) _and ends at_ \(h^{\odot}_{i,q}\) _(when_ \(\mathbf{b}_{j}=0\)_) or_ \(h^{\odot}_{i^{\to},q^{\to}}\) _(when_ \(\mathbf{b}_{j}=1\)_);_ 2. \(W_{j}\) _is internally contained in_ \(L^{\odot}_{i,q}\)_;_ 3. 
\(W_{j}\) _and_ \(Q\) _are non-crossing._ Intuitively, the face \(f_{j}\) is to the left of \(Q\) when \(\mathbf{b}_{j}=0\) and to the right when \(\mathbf{b}_{j}=1\). The next lemma plays the same role as Lemma 5.30 in Section 5.3.1. We prove that any non-crossing \(\widehat{\mathcal{T}}_{r,k}\)-flow must enjoy essentially the same structure as the solution constructed in Lemma 5.36. The difference with Lemma 5.30 is that now some walks of types (A), (B), (C) may potentially use the new connections through the \(g\)-vertices. We argue that their numbers are large enough, compared to those edges' capacities, to ensure that the majority of walks exhibits the same behaviour as before. **Lemma 5.38**.: _Consider \(r\geq 6\), \(k\geq 1\), and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\). Let \(\mathcal{P}\) be a non-crossing \(\widehat{\mathcal{T}}_{r,k}\)-flow in \(\mathsf{ExRing}(r,k,\mathcal{S})\). Then the following hold._ 1. _Let_ \(i\in[k]\)_,_ \(q\in\Gamma_{r}\)_,_ \(\odot\in\{+,-\}\)_, and_ \(W\) _be an_ \((h^{\odot}_{i,q},h^{\odot}_{i^{\to},q^{\to}})\)_-walk internally contained in_ \(L^{\odot}_{i,q}\)_. Then_ \(W\) _crosses with_ \(\mathcal{P}^{A\odot}_{i,q}\)_._ 2. _For each_ \(i\in[k]\)_,_ \(q\in\Gamma_{r}\)_, the family_ \(\mathcal{P}^{B}_{i,q}\) _contains a walk on vertices_ \(\{\widehat{h}^{+}_{i,q},h^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q}\}\)_. Moreover, there are at least_ \(|\mathcal{P}^{B}_{i,q}|-r\cdot 2^{r}\) _walks in_ \(\mathcal{P}^{B}_{i,q}\) _that go through an edge_ \(h^{+}_{i,q}h^{-}_{i,q}\)_._ 3. _There exists a vector_ \(\mathbf{b}\in\{0,1\}^{r}\) _such that for each_ \(i\in[k]\)_,_ \(q\in\Gamma_{r}\)_, and_ \(\odot\in\{+,-\}\)_, there is a walk_ \(Q^{\odot}_{i,q}\in\mathcal{P}^{A\odot}_{i,q}\) _with pattern_ \((i,q,\odot,\mathbf{b})\)_._ Proof.: The proof of Part (1) is analogous to the one of Lemma 5.30(1). Proof of (2).: We count the total number of edges leaving the subgraph induced by \(\{\widehat{h}^{+}_{i,q},h^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q}\}\) to be \(4\cdot 2^{3r+1}+2r\cdot 2^{r}\). For \(r\geq 6\) this is less than \(2^{3r+4}<|\mathcal{P}^{B}_{i,q}|\). Hence there is at least one walk from \(\mathcal{P}^{B}_{i,q}\) that does not leave the vertex set \(\{\widehat{h}^{+}_{i,q},h^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q}\}\). Suppose now that \(P\in\mathcal{P}^{B}_{i,q}\) does not go through any edge \(h^{+}_{i,q}h^{-}_{i,q}\) nor any edge between \(h^{+}_{i,q}\) and a \(g\)-vertex. Then \(P\) needs to traverse \(L^{+}_{i,q}\) or \(L^{+}_{i^{\leftarrow},q^{\leftarrow}}\). This means that \(P\) contains a subwalk that meets the specification of Part (1) of the lemma. This contradicts the assumption that \(\mathcal{P}\) is a non-crossing flow. Since the number of edges between \(h^{+}_{i,q}\) and a \(g\)-vertex is \(r\cdot 2^{r}\), we obtain that at least \(|\mathcal{P}^{B}_{i,q}|-r\cdot 2^{r}\) walks in \(\mathcal{P}^{B}_{i,q}\) go through \(h^{+}_{i,q}h^{-}_{i,q}\). Similarly to the proof of Lemma 5.30, we first establish two intermediate claims. We say that a walk \(W\) leaves a subgraph \(H\) through vertex \(v\) if exactly one of the endpoints of \(W\) belongs to \(V(H)\) and \(v\) is the first vertex on \(W\) (counting from this endpoint) that does not belong to \(V(H)\). Unlike Claim 5.31 in the previous section, we first only specify the vertex through which the walks from \(\mathcal{P}^{C}_{i,q,j}\) leave \(L^{+}_{i,q}\), and then inspect the next edge on these walks in Claim 5.40. 
**Claim 5.39**.: _For each \(i\in[k]\) and \(q\in\Gamma_{r}\) there exists a vector \(\mathbf{b}^{i,q}\in\{0,1\}^{r}\) such that when \(\mathbf{b}^{i,q}_{j}=0\) then all the walks from \(\mathcal{P}^{C}_{i,q,j}\) leave \(L^{+}_{i,q}\) through vertex \(h^{+}_{i,q}\), and when \(\mathbf{b}^{i,q}_{j}=1\) then all the walks from \(\mathcal{P}^{C}_{i,q,j}\) leave \(L^{+}_{i,q}\) through vertex \(h^{+}_{i^{\to},q^{\to}}\)._ Proof.: Suppose that there are walks \(W_{0},W_{1}\in\mathcal{P}^{C}_{i,q,j}\) so that \(W_{0}\) leaves \(L^{+}_{i,q}\) through \(h^{+}_{i,q}\) and \(W_{1}\) leaves \(L^{+}_{i,q}\) through \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\). By Definition 5.2 of a non-crossing flow, the concatenation \(W_{0}+W_{1}\) does not cross any walk from \(\mathcal{P}^{A+}_{i,q}\). But then \(W_{0}+W_{1}\) contains a subwalk that meets the specification of Part (1). This contradicts the assumption that \(\mathcal{P}\) is a non-crossing flow. Therefore for each \(j\in[r]\) the choice whether to leave \(L^{+}_{i,q}\) through \(h^{+}_{i,q}\) or \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\) is fixed. **Claim 5.40**.: _There exists a single vector \(\mathbf{b}\in\{0,1\}^{r}\) so that \(\mathbf{b}^{i,q}=\mathbf{b}\) for all \(i\in[k]\) and \(q\in\Gamma_{r}\)._ Proof.: We define \(\tau(b_{1}b_{2}\ldots b_{r})=\sum_{h=1}^{r}b_{h}\cdot 2^{h-1}\). Suppose that the claim does not hold. Because we work on a ring structure, there exist \(i\in[k]\), \(q\in\Gamma_{r}\), for which \(\tau(\mathbf{b}^{i,q})>\tau(\mathbf{b}^{i^{\rightarrow},q^{\rightarrow}})\). By Claim 5.39 the number of walks from \(\mathcal{P}^{C}_{i,q,1}\cup\mathcal{P}^{C}_{i,q,2}\cup\ldots\cup\mathcal{P}^{C}_{i,q,r}\) that leave \(L^{+}_{i,q}\) through \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\) equals \(2^{2r+1}\cdot\tau(\mathbf{b}^{i,q})\). On the other hand, the number of walks from \(\mathcal{P}^{C}_{i^{\rightarrow},q^{\rightarrow},1}\cup\mathcal{P}^{C}_{i^{\rightarrow},q^{\rightarrow},2}\cup\ldots\cup\mathcal{P}^{C}_{i^{\rightarrow},q^{\rightarrow},r}\) that leave \(L^{+}_{i^{\rightarrow},q^{\rightarrow}}\) through \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\) equals \(2^{2r+1}\cdot(2^{r}-1-\tau(\mathbf{b}^{i^{\rightarrow},q^{\rightarrow}}))\). Since \(\tau(\mathbf{b}^{i,q})>\tau(\mathbf{b}^{i^{\rightarrow},q^{\rightarrow}})\), this quantity is at least \(2^{2r+1}\cdot(2^{r}-\tau(\mathbf{b}^{i,q}))\). In total, we obtain at least \(2^{3r+1}\) walks from the two ladders that meet at \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\). By Part (1), when a walk of type (C) leaves the ladder \(L^{+}_{i,q}\) (resp. leaves \(L^{+}_{i^{\rightarrow},q^{\rightarrow}}\)) from the right side (resp. the left side), it must at some point reach a neighbor of \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\) that does not belong to \(V(L^{+}_{i,q})\) (resp. \(V(L^{+}_{i^{\rightarrow},q^{\rightarrow}})\)). There are at most \(r\cdot 2^{r}\) walks that can use an edge \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}g^{+}_{i,q}\) and no (C)-type walk can use an edge \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}\widehat{h}^{+}_{i^{\rightarrow},q^{\rightarrow}}\) because they are all used by walks from \(\mathcal{P}^{B}_{i^{\rightarrow},q^{\rightarrow}}\). Also, due to Part (2), at least \(2^{3r+4}+2^{2r+1}-r\cdot 2^{r}\) walks from \(\mathcal{P}^{B}_{i^{\rightarrow},q^{\rightarrow}}\) go through \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}h^{-}_{i^{\rightarrow},q^{\rightarrow}}\). 
Since the remaining walks of type (C) need to go through \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}h^{-}_{i^{\rightarrow},q^{\rightarrow}}\) as well, in total we have at least \(2^{3r+4}+2^{3r+1}+2^{2r+1}-r\cdot 2^{r+1}\) walks that need to go through this passage. On the other hand, there are only \(2^{3r+4}+2^{3r+1}+2^{2r}\) parallel edges \(h^{+}_{i^{\rightarrow},q^{\rightarrow}}h^{-}_{i^{\rightarrow},q^{\rightarrow}}\). Since for \(r\geq 6\) we have \(r\cdot 2^{r+1}<2^{2r}\), there are too few edges to accommodate all the walks above, and so we arrive at a contradiction. Proof of (3).: Let \(\mathbf{b}\) be the vector from Claim 5.40. Fix \(i\in[k]\), \(q\in\Gamma_{r}\), and \(\odot\in\{+,-\}\). First we argue that there exists a walk \(Q^{\odot}_{i,q}\in\mathcal{P}^{A\odot}_{i,q}\) entirely contained in \(L^{\odot}_{i,q}\). This follows from counting the edges leaving \(L^{\odot}_{i,q}\): there are at most \(2\cdot 2^{3r+1}+2\cdot 2^{2r}\) of them, which is less than \(|\mathcal{P}^{A\odot}_{i,q}|\). To see that \(Q^{+}_{i,q}\) has pattern \((i,q,+,\mathbf{b})\), fix \(j\in[r]\). The existence of the walk \(W_{j}\) in Definition 5.37 follows from Claim 5.39: it can be chosen as a subwalk of any walk from \(\mathcal{P}^{C}_{i,q,j}\). The argument that \(Q^{-}_{i,q}\) has pattern \((i,q,-,\mathbf{b})\) is the same. This concludes the proof of Lemma 5.38. We can now take advantage of the structure imposed on a non-crossing \(\widehat{\mathcal{T}}_{r,k}\)-flow to analyze which walks need to go through the subgraphs \(H_{i}\). The following lemma is based on observations analogous to those behind Lemma 5.33. **Lemma 5.41**.: _Consider \(r\geq 6\), \(k\geq 1\), and \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\). For \(F\subseteq[k]\) let \(\mathcal{T}_{F}=\{(s_{j},t_{j},1)\mid j\in F\}\). If there exists a non-crossing \((\widehat{\mathcal{T}}_{r,k}\cup\mathcal{T}_{F})\)-flow in \(\mathsf{ExRing}(r,k,\mathcal{S})\), then \(F\subseteq\mathcal{S}(\mathbf{b})\) for some \(\mathbf{b}\in\{0,1\}^{r}\)._ Proof.: Let \(\mathcal{P}\) be a \(\widehat{\mathcal{T}}_{r,k}\)-flow and \(\mathcal{P}_{F}\) be a \(\mathcal{T}_{F}\)-flow so that \(\mathcal{P}\cup\mathcal{P}_{F}\) is non-crossing in \(\mathsf{ExRing}(r,k,\mathcal{S})\). We apply Lemma 5.38 to \(\mathcal{P}\); let \(\mathbf{b}\in\{0,1\}^{r}\) be the vector given by Part (3) of the lemma. Fix \(i\in[k]\) for the rest of the proof. For each \(q\in\Gamma_{r}\) and \(\odot\in\{+,-\}\) we apply Lemma 5.38(1) to obtain that no walk \(W\) that traverses the ladder \(L^{\odot}_{i,q}\) from left to right can be non-crossing with \(\mathcal{P}^{A\odot}_{i,q}\). Let \(j\in[r]\). By the same concatenation argument as in Claim 5.39 we arrive at the following. **Observation 5.42**.: _If there is a walk \(W^{\prime}_{j}\), non-crossing with \(\mathcal{P}^{A\odot}_{i,q}\), that starts at \(L^{\odot}_{i,q}[x_{j}]\) and leaves the ladder through vertex \(h^{\odot}_{i,q}\), then \(\mathbf{b}_{j}=0\). Symmetrically, if \(W^{\prime}_{j}\) leaves the ladder through vertex \(h^{\odot}_{i^{\rightarrow},q^{\rightarrow}}\), then \(\mathbf{b}_{j}=1\)._ When \(L^{-}_{i,q}[y_{j}]\) occurs as a terminal in \(\widehat{\mathcal{T}}_{r,k}\) (that is, when (I) \(q=\bot\) or (II) \(q=(a,b)\) and \(j\in\{a,b\}\)) then the number of edges incident to \(L^{-}_{i,q}[x_{j}]\) or \(L^{-}_{i,q}[y_{j}]\) equals the number of walks in \(\mathcal{P}\) ending at this vertex. Therefore, no walk from \(\mathcal{P}^{A-}_{i,q}\) can visit \(L^{-}_{i,q}[x_{j}]\) nor \(L^{-}_{i,q}[y_{j}]\). 
Since these two vertices share a face, they are both to the left or both to the right of the walk \(Q^{A-}_{i,q}\in\mathcal{P}^{A-}_{i,q}\) with pattern \((i,q,-,\mathbf{b})\). This means that in these cases Observation 5.42 remains true if we replace \(L^{-}_{i,q}[x_{j}]\) with \(L^{-}_{i,q}[y_{j}]\). By Lemma 5.38(2), each family \(\mathcal{P}^{B}_{i,q}\) contains a walk on vertices \(\{\widehat{h}^{+}_{i,q},h^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q}\}\). Because the number of edges incident to \(\widehat{h}^{+}_{i,q}\) or \(\widehat{h}^{-}_{i,q}\) equals \(|\mathcal{P}^{B}_{i,q}|\), no walk from \(\mathcal{P}\) can cross the path \((\widehat{h}^{+}_{i,q},h^{+}_{i,q},h^{-}_{i,q},\widehat{h}^{-}_{i,q})\). Together with Observation 5.42, this rules out all the connections between the two sides of a ladder other than the way through \(H_{i}\). **Observation 5.43**.: _For each \(q\in\Gamma_{r}\) and \(a,b\in[r]\) with \(\mathbf{b}_{a}\neq\mathbf{b}_{b}\), any \((L^{-}_{i,q}[y_{a}],L^{-}_{i,q}[y_{b}])\)-walk \(W\), that is non-crossing with the walks in \(\mathcal{P}\) of types (A) and (B), must contain an \((H_{i}[w_{0}],H_{i}[w_{1}])\)-subwalk._ The sum of \(|\mathcal{P}^{E}_{i,(a,b)}|\) over \((a,b)\) satisfying \(\mathbf{b}_{a}\neq\mathbf{b}_{b}\) equals \[\sum_{1\leq a<b\leq r}1_{[\mathbf{b}_{a}\neq\mathbf{b}_{b}]}\cdot 2^{r-b+a-1}.\] Next, each \((s_{i},t_{i})\)-walk in \(\mathcal{P}\cup\mathcal{P}_{F}\) must contain an \((H_{i}[w_{0}],H_{i}[w_{1}])\)-subwalk as well. In total, the number \(d_{i}\) of walks that traverse \(H_{i}\) from left to right equals \(\widehat{\gamma}_{r}(\mathbf{b})+1_{[i\in F]}\). Furthermore, the only way for the walks of type (D) to reach the vertices \(H_{i}[z_{j}]\) is to enter \(H_{i}\) from the same side of the walk \(Q^{A-}_{i,\bot}\) as \(L^{-}_{i,\bot}[y_{j}]\) is located. **Observation 5.44**.: _For each \(j\in[r]\), every walk \(P\in\mathcal{P}^{D}_{i,j}\) contains an \((H_{i}[w_{0}],H_{i}[z_{j}])\)-walk in \(H_{i}\) when \(\mathbf{b}_{j}=0\), or an \((H_{i}[w_{1}],H_{i}[z_{j}])\)-walk in \(H_{i}\) when \(\mathbf{b}_{j}=1\)._ Therefore, the subwalks of \(\mathcal{P}^{D}_{i,j}\) within \(H_{i}\) satisfy request \((H_{i}[w_{0}],H_{i}[z_{j}],2^{r})\) when \(\mathbf{b}_{j}=0\) or request \((H_{i}[w_{1}],H_{i}[z_{j}],2^{r})\) when \(\mathbf{b}_{j}=1\). Together with \(d_{i}\) units of the \((H_{i}[w_{0}],H_{i}[w_{1}])\)-flow, these match the flow requested in Definition 5.5. Since \(H_{i}\) is an \((r,\widehat{\gamma}_{r},Z^{\mathcal{S}}_{i})\)-Vector Containment Gadget, this implies that \(\mathbf{b}\in Z^{\mathcal{S}}_{i}\) whenever \(i\in F\). By the definition of \(Z^{\mathcal{S}}_{i}\), we obtain \(i\in F\Rightarrow i\in\mathcal{S}(\mathbf{b})\). This concludes the proof of the containment \(F\subseteq\mathcal{S}(\mathbf{b})\). We can finally summarize the entire construction of the subset gadget. **Proposition 5.45**.: _There is a polynomial-time algorithm that, given \(r\geq 6\), \(k\geq 1\), and a function \(\mathcal{S}\colon\{0,1\}^{r}\to 2^{[k]}\), outputs an \((r,k,\mathcal{S})\)-Subset Gadget \((G,\mathcal{T})\) with \(|V(G)|+|E(G)|=k\cdot 2^{\mathcal{O}(r)}\) and \(|\mathcal{T}|=\mathcal{O}(k\cdot r^{3})\). Moreover, for each request \((u_{i},v_{i},d_{i})\in\mathcal{T}\) it holds that \(d_{i}\leq\mathcal{O}(2^{3r})\)._ Proof.: For each \(i\in[k]\) we use Proposition 5.26 to construct an \((r,\widehat{\gamma}_{r},Z_{i}^{\mathcal{S}})\)-Vector Containment Gadget of size \(2^{\mathcal{O}(r)}\) in time polynomial in \(2^{r}\), which is the input size. 
This allows us to construct the graph \(\mathsf{ExRing}(r,k,\mathcal{S})\). It is divided into \(k\cdot\left(\binom{r}{2}+1\right)\) blocks, each equipped with \(\mathcal{O}(r)\) requests from \(\widehat{\mathcal{T}}_{r,k}\). Due to Lemmas 5.36 and 5.41, the pair \((\mathsf{ExRing}(r,k,\mathcal{S}),\widehat{\mathcal{T}}_{r,k})\) forms an \((r,k,\mathcal{S})\)-Subset Gadget.

### From a set cover to a non-crossing flow

In this section, we finish the reduction from Set Cover to Non-crossing Multicommodity Flow. We need two more simple gadgets, which are adaptations of the gadgets used in the NP-hardness proof of Planar Disjoint Paths [64]. The first gadget encodes which of \(\ell\) sets should cover an element \(i\in[k]\). **Definition 5.46**.: _For \(\ell\in\mathbb{N}\), a pair \((G,\mathcal{T})\) is an \(\ell\)-Existential Gadget if the following conditions hold._ 1. \(G\) _is a plane graph with_ \(2\ell\) _distinguished vertices_ \(s_{1},t_{1},s_{2},t_{2},\ldots,s_{\ell},t_{\ell}\) _lying on the outer face in this clockwise order._ 2. \(\mathcal{T}\) _is a set of triples from_ \(V(G)\times V(G)\times\{1\}\)_._ 3. _For_ \(F\subseteq[\ell]\)_, let_ \(\mathcal{T}_{F}=\{(s_{i},t_{i},1)\mid i\in F\}\)_. Then, there exists a non-crossing_ \((\mathcal{T}\cup\mathcal{T}_{F})\)_-flow in_ \(G\) _if and only if_ \(|F|<\ell\)_._ When seeking a set cover of size \(\ell\), we make a single copy of an \(\ell\)-Existential Gadget for each \(i\in[k]\). We will allow an index \(j\in[\ell]\) to belong to \(F\) when the element \(i\) is not covered by the set \(S_{j}\) in a solution \(S_{1},S_{2},\ldots,S_{\ell}\) to Set Cover. Condition (3) ensures that one of the indices will be missing in \(F\), implying that \(i\) gets covered. **Lemma 5.47**.: _For each \(\ell\geq 3\) there exists an \(\ell\)-Existential Gadget \((G_{\ell},\mathcal{T}_{\ell})\) with \(|V(G_{\ell})|+|E(G_{\ell})|+|\mathcal{T}_{\ell}|=\mathcal{O}(\ell)\). Furthermore, \((G_{\ell},\mathcal{T}_{\ell})\) can be constructed in time \(\ell^{\mathcal{O}(1)}\)._ Proof.: A construction of a \(3\)-Existential Gadget \((G_{3},\mathcal{T}_{3})\) is given in Figure 25. We refer to this figure in the arguments below.

Figure 25: An illustration for Lemma 5.47. Top left: A \(3\)-Existential Gadget \((G_{3},\mathcal{T}_{3})\) with \(\mathcal{T}_{3}\) given as \(\{(u_{1},v_{1},1),(u_{2},v_{2},1)\}\). Top right: Constructing a non-crossing \((\mathcal{T}_{3}\cup\mathcal{T}_{F})\)-flow in \(G_{3}\) is possible whenever \(F\) misses some element from \(\{1,2,3\}\). Bottom: A construction of an \((\ell+1)\)-Existential Gadget using an \(\ell\)-Existential Gadget and a \(3\)-Existential Gadget. The black vertices become the new terminals.

We define \(\mathcal{T}_{3}\) as \(\{(u_{1},v_{1},1),(u_{2},v_{2},1)\}\). We argue that \((G_{3},\mathcal{T}_{3})\) satisfies condition (3) of Definition 5.46. If \(F=[3]\), then each orange edge must be used by an \((s_{i},t_{i})\)-walk in any \((\mathcal{T}_{3}\cup\mathcal{T}_{F})\)-flow, as otherwise the walks in the flow would not be edge-disjoint. Let \(G^{\prime}_{3}\) be obtained from \(G_{3}\) by removing the orange edges. Then \(u_{1},u_{2},v_{1},v_{2}\) lie on the outer face of \(G^{\prime}_{3}\) in this order. Since the pairs \((u_{1},v_{1})\) and \((u_{2},v_{2})\) cross and each of these vertices has degree \(2\) in \(G^{\prime}_{3}\), no non-crossing \(\mathcal{T}_{3}\)-flow exists in \(G^{\prime}_{3}\). Consequently, no non-crossing \((\mathcal{T}_{3}\cup\mathcal{T}_{F})\)-flow exists in \(G_{3}\). 
On the other hand, whenever \(|F|<3\), then a non-crossing \((\mathcal{T}_{3}\cup\mathcal{T}_{F})\)-flow exists in \(G_{3}\): see the top right of the figure. Suppose now that an \(\ell\)-Existential Gadget \((G_{\ell},\mathcal{T}_{\ell})\) with the claimed size exists. We show inductively how to construct an \((\ell+1)\)-Existential Gadget \((G_{\ell+1},\mathcal{T}_{\ell+1})\). We build \(G_{\ell+1}\) from a disjoint union of \(G_{\ell}\) and \(G_{3}\), insert new vertices \(u,v\) on the outer face, and add edges \(uG_{\ell}[s_{\ell}]\), \(uG_{3}[t_{1}]\), \(vG_{\ell}[t_{\ell}]\), \(vG_{3}[s_{1}]\) (see Figure 25, bottom). We define \(\mathcal{T}_{\ell+1}\) as a union of the requests \(\mathcal{T}_{\ell}\) in \(G_{\ell}\) and \(\mathcal{T}_{3}\) in \(G_{3}\), together with request \((u,v,1)\). The distinguished vertices of \(G_{\ell+1}\) are: \(G_{\ell}[s_{1}],G_{\ell}[t_{1}],\ldots,G_{\ell}[s_{\ell-1}],G_{\ell}[t_{\ell- 1}],G_{3}[s_{2}],G_{3}[t_{2}],G_{3}[s_{3}],G_{3}[t_{3}]\). There are \((\ell+1)\) pairs of them; let \(\widehat{\mathcal{T}}\) denote a family of \(\ell+1\) unitary requests, one for each pair. Clearly, the terminals of \(G_{\ell+1}\) can be arranged on the outer face in the presented order. To establish condition (3) of Definition 5.46, we need to show that there is no non-crossing \((\mathcal{T}_{\ell+1}\cup\widehat{\mathcal{T}})\)-flow in \(G_{\ell+1}\) but removing any request from \(\widehat{\mathcal{T}}\) suffices to construct the flow. Suppose that there exists a non-crossing \((\mathcal{T}_{\ell+1}\cup\widehat{\mathcal{T}})\)-flow \(\mathcal{P}\) in \(G_{\ell+1}\). It contains a \((u,v)\)-walk \(W_{uv}\) that needs to go either through \(G_{\ell}\) or \(G_{3}\). Suppose w.l.o.g. the first scenario. Then \(W_{uv}\) contains a \((G_{\ell}[s_{\ell}],G_{\ell}[t_{\ell}])\)-subwalk \(W\) in \(G_{\ell}\). Let \(\widehat{\mathcal{T}}_{\ell}\) be the family of \(\ell-1\) requests from \(\widehat{\mathcal{T}}\) concerning the terminals of \(G_{\ell}\). Next, let \(\mathcal{P}_{\ell}\) be the non-crossing \((\mathcal{T}_{\ell}\cup\widehat{\mathcal{T}}_{\ell})\)-flow contained in \(\mathcal{P}\). Since no walk from \(\mathcal{P}_{\ell}\) can use edges from \(E(W_{uv})\), this flow must be entirely contained in the graph \(G_{\ell}\). Therefore, the non-crossing flow \(\mathcal{P}_{\ell}\cup\{W\}\) satisfies all the requests from \(\mathcal{T}_{\ell}\) and all of the form \((G_{\ell}[s_{i}],G_{\ell}[t_{i}])\) for \(i\in[\ell]\). This contradicts the assumption that \((G_{\ell},\mathcal{T}_{\ell})\) is an \(\ell\)-Existential Gadget. Next, consider a family \(\widehat{\mathcal{T}}^{\prime}\) obtained from \(\widehat{\mathcal{T}}\) by removal of any single request. We argue that there exists a non-crossing \((\mathcal{T}_{\ell+1}\cup\widehat{\mathcal{T}}^{\prime})\)-flow in \(G_{\ell+1}\). Suppose w.l.o.g. that \(\widehat{\mathcal{T}}^{\prime}\) is missing a request concerning a pair of terminals from \(G_{\ell}\). Let \(\widehat{\mathcal{T}}^{\prime}_{\ell}=\left(\widehat{\mathcal{T}}^{\prime} \setminus\{(G_{3}[s_{2}],G_{3}[t_{2}],1),(G_{3}[s_{3}],G_{3}[t_{3}],1)\} \right)\cup\{(G_{\ell}[s_{\ell}],G_{\ell}[t_{\ell}],1)\}\). Note that \(\widehat{\mathcal{T}}^{\prime}_{\ell}\) has less than \(\ell\) elements. By the definition of an \(\ell\)-Existential Gadget, there exists a non-crossing \((\mathcal{T}_{\ell}\cup\widehat{\mathcal{T}}^{\prime}_{\ell})\)-flow \(\mathcal{P}_{\ell}\) in \(G_{\ell}\). 
Next, let \(\widehat{\mathcal{T}}^{\prime}_{3}=\{(G_{3}[s_{2}],G_{3}[t_{2}],1),(G_{3}[s_{3}],G_{3}[t_{3}],1)\}\). Again by the definition, there exists a non-crossing \((\mathcal{T}_{3}\cup\widehat{\mathcal{T}}^{\prime}_{3})\)-flow \(\mathcal{P}_{3}\) in \(G_{3}\). We take the union of \(\mathcal{P}_{\ell}\) and \(\mathcal{P}_{3}\) and extend the \((G_{\ell}[s_{\ell}],G_{\ell}[t_{\ell}])\)-walk from \(\mathcal{P}_{\ell}\) with edges \(uG_{\ell}[s_{\ell}]\), \(vG_{\ell}[t_{\ell}]\), so it becomes a \((u,v)\)-walk. This forms a non-crossing \((\mathcal{T}_{\ell+1}\cup\widehat{\mathcal{T}}^{\prime})\)-flow in \(G_{\ell+1}\). We have thus established that \((G_{\ell+1},\mathcal{T}_{\ell+1})\) is indeed an \((\ell+1)\)-Existential Gadget. In the inductive step we increase the size of the graph and the number of requests by \(\mathcal{O}(1)\), so the claimed bound holds. The construction of \((G_{\ell},\mathcal{T}_{\ell})\) can be easily performed in time polynomial in the size of \(G_{\ell}\). Suppose we want to encode an instance \((k,\mathcal{S},\ell)\) of Set Cover with \(|\mathcal{S}|=2^{r}\). Here is the first (naive) attempt. We make \(k\) copies of an \(\ell\)-Existential Gadget, \(\ell\) copies of an \((r,k,\mathcal{S})\)-Subset Gadget and, for each \(i\in[k]\), \(j\in[\ell]\), we add terminals \(u_{i,j}\), \(v_{i,j}\), connected to the \(j\)-th pair of terminals in the \(i\)-th existential gadget and the \(i\)-th pair of terminals in the \(j\)-th subset gadget. For each created pair \(u_{i,j}\), \(v_{i,j}\), we demand a single unit of flow between \(u_{i,j}\) and \(v_{i,j}\). By condition (3) of Definition 5.46, for each \(i\in[k]\) there needs to be at least one \(j\in[\ell]\) for which the \((u_{i,j},v_{i,j})\)-walk goes through the \(j\)-th subset gadget. Next, condition (3) of Definition 5.27 implies that for each \(j\in[\ell]\) the set of such indices \(i\) forms a subset of some set from \(\mathcal{S}\). The problem is that already for \(\ell=k=3\) such a graph contains \(K_{3,3}\) as a minor, so it cannot be planar. To circumvent this issue, we need yet another gadget to allow the links between each \(i\)-th existential gadget and each \(j\)-th subset gadget to cross. Since the number of such links is only \(k\cdot\ell\), we can afford adding \(\mathcal{O}(1)\) new requests to implement such a crossing in a planar fashion. **Lemma 5.48**.: _There exists a pair \((G,\mathcal{T})\) (called a_ Junction Gadget_) with the following properties._ 1. \(G\) _is a plane graph with_ \(8\) _distinguished vertices_ \(s_{1},t_{1},s_{2},t_{2},s_{3},t_{3},s_{4},t_{4}\) _lying on the outer face in this clockwise order._ 2. \(\mathcal{T}\) _is a set of triples from_ \(V(G)\times V(G)\times\{1\}\)_._ 3. _For_ \(F\subseteq[4]\)_, let_ \(\mathcal{T}_{F}=\{(s_{i},t_{i},1)\mid i\in F\}\)_. Then, there exists a non-crossing_ \((\mathcal{T}\cup\mathcal{T}_{F})\)_-flow in_ \(G\) _if and only if_ \(\{1,3\}\not\subseteq F\) _and_ \(\{2,4\}\not\subseteq F\)_._ Proof.: The graph \(G\) is depicted in Figure 26. We refer to this figure in the arguments below. The family \(\mathcal{T}\) is given as \(\{(u_{1},v_{1},1),(u_{2},v_{2},1)\}\). Let \(F\subseteq[4]\). The three edges crossing the dotted line separate \(\{u_{1},u_{2},s_{2},t_{4}\}\) from \(\{v_{1},v_{2},t_{2},s_{4}\}\); so, when \(\{2,4\}\subseteq F\), there can be no \((\mathcal{T}\cup\mathcal{T}_{F})\)-flow in \(G\). Suppose now that \(\{1,3\}\subseteq F\) and there exists a non-crossing \((\mathcal{T}\cup\mathcal{T}_{F})\)-flow in \(G\). 
The orange edges must be utilized by the \((s_{1},t_{1})\)-walk and the \((s_{3},t_{3})\)-walk, as otherwise the walks would not be edge-disjoint with the \(\mathcal{T}\)-flow. Let \(G^{\prime}\) be obtained from \(G\) by removing the orange edges. Then \(u_{1},u_{2},v_{1},v_{2}\) lie on the outer face of \(G^{\prime}\) in this order. Since the pairs \((u_{1},v_{1})\) and \((u_{2},v_{2})\) cross and each of these vertices has degree \(2\) in \(G^{\prime}\), no non-crossing \(\mathcal{T}\)-flow exists in \(G^{\prime}\). Hence we arrive at a contradiction. The four flows on the right of the figure demonstrate that whenever \(\{1,3\}\not\subseteq F\) and \(\{2,4\}\not\subseteq F\), then a non-crossing \((\mathcal{T}\cup\mathcal{T}_{F})\)-flow exists in \(G\). We are ready to present the proper reduction. **Theorem 5.49**.: _There is a polynomial-time algorithm that, given an instance \((k,\mathcal{S},\ell)\) of Set Cover, outputs an equivalent instance \((G,\mathcal{T})\) of Non-crossing Multicommodity Flow with \(|\mathcal{T}|=\mathcal{O}(k^{5})\). The demands \(d_{i}\) for \((s_{i},t_{i},d_{i})\in\mathcal{T}\) are bounded by \(2^{\mathcal{O}(k)}\)._ Proof.: By removing duplicates in the family \(\mathcal{S}\), we can assume that it contains at most \(2^{k}\) sets. Next, by padding the family \(\mathcal{S}\) with empty sets and increasing its size at most twice, we can assume that the size of \(\mathcal{S}\) is a power of \(2\). Let \(r\in\mathbb{N}\) be such that \(|\mathcal{S}|=2^{r}\). We can solve \((k,\mathcal{S},\ell)\) in polynomial time when \(r<6\) or \(\ell<3\). Therefore, we can assume that \(6\leq r\leq k\) and \(\ell\geq 3\), so we will meet the preconditions of the used lemmas.

Figure 26: An illustration for Lemma 5.48: a Junction Gadget. The graph \(G\) is on the left and \(\mathcal{T}\) is given as \(\{(u_{1},v_{1},1),(u_{2},v_{2},1)\}\). The four flows on the right demonstrate that whenever \(F\) excludes one of \(1,3\) and one of \(2,4\), then we can construct a non-crossing \((\mathcal{T}\cup\mathcal{T}_{F})\)-flow.
The edges are chosen in such a way as to avoid any edge crossings; note that the ordering of terminals \(s_{1},t_{1},s_{2},t_{2},\dots\) for each gadget is clockwise around its outer face. For each such pair \((u,v)\), we insert a request \((u,v,1)\) to a family \(\mathcal{T}_{road}\). For a gadget \(H\in\mathcal{H}\), let \(\mathcal{T}[H]\) denote the family of its internal requests. We define \(\mathcal{T}\) as \(\mathcal{T}_{road}\cup\bigcup_{H\in\mathcal{H}}\mathcal{T}[H]\). This finishes the construction of the instance \((G,\mathcal{T})\). **Claim 5.50**.: _If there exist \(\ell\) vectors \(\mathbf{b}_{1},\dots,\mathbf{b}_{\ell}\in\{0,1\}^{r}\) for which \(\bigcup_{i=1}^{\ell}\mathcal{S}(\mathbf{b}_{i})=[k]\), then there exists a non-crossing \(\mathcal{T}\)-flow in \(G\)._ Proof.: Consider the \((i,j)\)-road for \(i\in[k]\), \(j\in[\ell]\). If \(i\in\mathcal{S}(\mathbf{b}_{j})\), then for each pair \((u,v)\) of terminals on the \((i,j)\)-road, we route the \((u,v)\)-walk through the gadget \(H\in\mathcal{H}\) that is closer to the subset gadget \(B_{j}\). Otherwise, we route the \((u,v)\)-walk through the gadget closer to the existential gadget \(A_{i}\). First, observe that in every junction gadget \(J\) we either use a connection between \(J[s_{1}]\) and \(J[t_{1}]\) or between \(J[s_{3}]\) and \(J[t_{3}]\). We also either use a connection between \(J[s_{2}]\) and \(J[t_{2}]\) or between \(J[s_{4}]\) Figure 27: (Figure 3 restated) A visualization of the reduction in Theorem 5.49 with \(k=3\), \(\ell=2\). The existential gadgets are on the top and the subset gadgets are on the right. The terminal pairs in each gadget are numbered in a clockwise manner. The three squares in the middle are the junction gadgets. The (1,1)-road is highlighted. The red \(\mathcal{T}_{road}\)-flow encodes a solution \(S_{1}=\{1\}\), \(S_{2}=\{2,3\}\). and \(J[t_{4}]\). By condition (3) of Lemma 5.48, such two connections can be realized via a non-crossing flow within \(J\) together with \(\mathcal{T}[J]\). Consider the existential gadget \(A_{i}\) for \(i\in[k]\). Let \(F_{i}\subseteq[\ell]\) indicate the indices \(j\) for which a \((u,v)\)-walk from the \((i,j)\)-road goes through \(A_{i}\). Since there is at least one \(j\in[\ell]\) with \(i\in\mathcal{S}(\mathbf{b}_{j})\), we obtain \(|F_{i}|<\ell\). By condition (3) of Definition 5.46, there exists a non-crossing \((\mathcal{T}[A_{i}]\cup\mathcal{T}_{F_{i}})\)-flow in \(A_{i}\). Finally, consider the subset gadget \(B_{j}\). Let \(F^{\prime}_{j}\subseteq[k]\) indicate the indices \(i\) for which a \((u,v)\)-walk from the \((i,j)\)-road goes through \(B_{j}\). By construction, we have \(F^{\prime}_{j}=\mathcal{S}(\mathbf{b}_{j})\). Hence by condition (3) of Definition 5.27, a non-crossing \((\mathcal{T}[B_{j}]\cup\mathcal{T}_{F^{\prime}_{j}})\)-flow exists in \(B_{j}\). The claim follows. **Claim 5.51**.: _If there exists a non-crossing \(\mathcal{T}\)-flow \(\mathcal{P}\) in \(G\), then there exist vectors \(\mathbf{b}_{1},\ldots,\mathbf{b}_{\ell}\in\{0,1\}^{r}\) for which \(\bigcup_{i=1}^{\ell}\mathcal{S}(\mathbf{b}_{i})=[k]\)._ Proof.: Since the walks from \(\mathcal{P}\) are edge-disjoint, the requests from family \(\mathcal{T}_{road}\) forbid any walk from family \(\mathcal{T}[H]\), for \(H\in\mathcal{H}\), to use edges outside the subgraph \(H\). Therefore, for every \(H\in\mathcal{H}\), the \(\mathcal{T}[H]\)-flow included in \(\mathcal{P}\) is entirely contained in \(H\). 
Let \(F_{i}\subseteq[\ell]\) indicate the indices \(j\) for which a \((u,v)\)-walk from the \((i,j)\)-road goes through \(A_{i}\). Similarly, let \(F^{\prime}_{j}\subseteq[k]\) indicate the indices \(i\) for which a \((u,v)\)-walk from the \((i,j)\)-road goes through \(B_{j}\). Consider the \((i,j)\)-road for \(i\in[k]\), \(j\in[\ell]\). By the properties of a junction gadget, no two pairs \((u_{1},v_{1})\), \((u_{2},v_{2})\) of terminals located on this road can make use of a single junction gadget. This implies that either \(j\in F_{i}\) or \(i\in F^{\prime}_{j}\) or possibly both conditions hold. By the properties of an existential gadget, for each \(i\in[k]\) there exists \(\tau(i)\in[\ell]\) such that \(\tau(i)\not\in F_{i}\). Consequently, \(i\in F^{\prime}_{\tau(i)}\). By the properties of a subset gadget, for each \(j\in[\ell]\) there exists a vector \(\mathbf{b}_{j}\) such that \(F^{\prime}_{j}\subseteq\mathcal{S}(\mathbf{b}_{j})\). We infer that an element \(i\in[k]\) is contained in the set \(\mathcal{S}(\mathbf{b}_{\tau(i)})\). Therefore, a set cover of size \(\ell\) exists. We have thus established that the instances \((k,\mathcal{S},\ell)\) and \((G,\mathcal{T})\) are equivalent. All the requests in \(\mathcal{T}\) are unitary apart from those in the subset gadget. Proposition 5.45 guarantees that the demands in this gadget are bounded by \(\mathcal{O}(2^{3r})=2^{\mathcal{O}(k)}\). It remains to count the number of requests in \(\mathcal{T}\). Each of \(k\) existential gadgets requires \(\mathcal{O}(\ell)\) requests. Each of \(\ell\) subsets gadgets requires \(\mathcal{O}(kr^{3})\) requests. The number of junction gadgets equals the number of crossings between the roads, which is at most the number of roads squared, that is, \(\mathcal{O}(k^{2}\ell^{2})\). Each such gadget contains only \(\mathcal{O}(1)\) requests. The size of the family \(\mathcal{T}_{road}\) is proportional to the number of junction gadgets, so also \(|\mathcal{T}_{road}|=\mathcal{O}(k^{2}\ell^{2})\). Since both \(r,\ell\) are bounded by \(k\), we obtain \(|\mathcal{T}|=\mathcal{O}(k^{5})\). Finally, each of the three types of gadgets can be constructed in time polynomial in the input size. This concludes the proof. ### Implementing weights As the last step, we need to get rid of large demands in the request family \(\mathcal{T}\). Our construction of the subset gadget requires demands as large as \(2^{\mathcal{O}(k)}\) and we cannot afford requesting this many vertex-disjoint paths in a meaningful reduction. Instead, we shall implement such a request with \(\mathcal{O}(r^{2})\) unitary requests by utilizing the construction by Adler and Krause [3]. We begin by simplifying the requests so that the demands are of the form \(2^{i}-1\) and the number of edges incident to each terminal \(v\) equals the number of paths starting at \(v\). We also place additional "guarding" requests of demand \(1\) that will come in handy in further topological arguments. This operation is depicted in Figure 28. **Definition 5.52** (Binary simplification).: _Let \((G,\mathcal{T})\) be an instance of Non-crossing Multicommodity Flow and \((s,t,d)\in\mathcal{T}\). We obtain an instance \((G^{\prime},\mathcal{T}^{\prime})\) from \((G,\mathcal{T})\) as follows. First, we replace the vertex \(s\) (resp. \(t\)) in \(G\) with a cycle \(C_{s}\) (resp. \(C_{t}\)) having a single vertex for each edge incident to \(s\) (resp. \(t\)). We multiply each of the edges on the cycle \(|E(G)|\) times. 
We pick an arbitrary vertex \(v\) on \(C_{s}\) (resp. \(C_{t}\)) and create a new vertex \(s^{\prime}\) (resp. \(t^{\prime}\)) in the interior of the cycle, connected to \(v\) via \(d\) parallel edges. Next, let \(D\subseteq\mathbb{N}\) be the set of 1's in the binary representation of \(d\), i.e., \(d=\sum_{i\in D}2^{i}\). For each \(i\in D\) we create vertices \(v^{i}_{s}\), \(u^{i}_{s}\), adjacent to \(s^{\prime}\), and vertices \(v^{i}_{t}\), \(u^{i}_{t}\), adjacent to \(t^{\prime}\). We place them in a clockwise manner around \(s^{\prime}\) and in a counter-clockwise manner around \(t^{\prime}\). For every vertex of the form \(u^{i}_{s},u^{i}_{t}\), we multiply the only edge incident to it \(2^{i}-1\) times. We remove \((s,t,d)\) from \(\mathcal{T}\) and replace it with \(\bigcup_{i\in D}\{(v^{i}_{s},v^{i}_{t},1),(u^{i}_{s},u^{i}_{t},2^{i}-1)\}\). We say that \((G^{\prime},\mathcal{T}^{\prime})\) is obtained from \((G,\mathcal{T})\) via binary simplification of \((s,t,d)\)._

**Lemma 5.53**.: _Let \((G^{\prime},\mathcal{T}^{\prime})\) be obtained from \((G,\mathcal{T})\) via binary simplification of \((s,t,d)\in\mathcal{T}\). Then these two instances of Non-crossing Multicommodity Flow are equivalent._

Proof.: Note that by the definition of the problem, no other terminals from \(\mathcal{T}\) coincide with \(s,t\). Consider a non-crossing \(\mathcal{T}\)-flow \(\mathcal{P}\) in \(G\) and let \(\mathcal{P}_{st}\subseteq\mathcal{P}\) be the family of \(d\) walks connecting \(s\) with \(t\). Since the vertices of the cycle \(C_{s}\) (resp. \(C_{t}\)) are connected via \(|E(G)|\) parallel edges and each walk from \(\mathcal{P}\) visiting \(s\) or \(t\) must use some edge incident to this vertex, there is enough space to route all the walks from \(\mathcal{P}\setminus\mathcal{P}_{st}\) along the cycle. The walks from \(\mathcal{P}_{st}\) can be translated into \((V(C_{s}),V(C_{t}))\)-walks in \(G^{\prime}\). Let us order them as \(P_{1},\ldots,P_{d}\) in a clockwise manner around \(C_{s}\), starting from an arbitrary one. Then \(P_{1},\ldots,P_{d}\) arrive at \(C_{t}\) in a counter-clockwise manner. By Definition 5.2 of a non-crossing flow, no two walks \(P_{i},P_{j}\) cross with each other or with any other walk from \(\mathcal{P}\) at \(C_{s}\) (resp. \(C_{t}\)). Therefore we can route them using the innermost edges on the cycles \(C_{s},C_{t}\) to reach \(s^{\prime}\) and \(t^{\prime}\) in an order reflecting the terminal pairs \((v^{i}_{s},v^{i}_{t})\) and \((u^{i}_{s},u^{i}_{t})\) (see Figure 28).

Figure 28: Left: An example of binary simplification of a request \((s,t,d)\), where \(d\) has two 1's in the binary representation. Right: A correspondence between non-crossing flows in the original multigraph and the multigraph after modification. Note that the edges on the cycles are duplicated. Two exemplary \((s,t)\)-paths are drawn in red.

Consider now a non-crossing \(\mathcal{T}^{\prime}\)-flow \(\mathcal{P}^{\prime}\) in \(G^{\prime}\). Let \(\mathcal{P}^{\prime}_{st}\subseteq\mathcal{P}^{\prime}\) be the family of walks realizing the requests created due to binary simplification of \((s,t,d)\). We contract the vertex set \(V(C_{s})\) together with all the vertices lying inside \(C_{s}\) into a single vertex \(s\), and similarly for \(C_{t}\), thus obtaining the graph \(G\) again. By Observation 5.3, this operation transforms \(\mathcal{P}^{\prime}\) into a non-crossing flow in \(G\). Because there are exactly \(d\) parallel edges between \(s^{\prime}\) and the fixed vertex on \(C_{s}\) (resp.
\(t^{\prime}\) and the fixed vertex on \(C_{t}\)), no paths from \(\mathcal{P}^{\prime}\setminus\mathcal{P}^{\prime}_{st}\) can visit \(s^{\prime}\) or \(t^{\prime}\). The only terminals contracted into \(s\) (resp. \(t\)) are the ones created due to binary simplification of \((s,t,d)\), so we obtain a non-crossing \(\mathcal{T}\)-flow.

The binary simplification imposes a convenient structure on the multigraph, which we will utilize next to replace each request of demand \(2^{i}-1\) with just \(i\) unitary requests. We will analyze the reduction using the following artificial problem.

**Definition 5.54** ([3, Def. 3]).: _Given a subset \(X\) of the plane and a set of \(k\) pairs of terminals \(\mathcal{T}\subseteq X^{2}\), the Topological Disjoint Paths problem is to determine whether there exist \(k\) pairwise disjoint curves in \(X\), such that each curve \(P_{i}\) is homeomorphic to \([0,1]\) and its ends are \(s_{i}\) and \(t_{i}\) where \((s_{i},t_{i})\in\mathcal{T}\)._

We will use this problem only for the sake of analysis, so we do not have to specify how the set \(X\) is encoded. For \(k\in\mathbb{N}\) we define an instance \((X_{k},\mathcal{T}_{k})\) of Topological Disjoint Paths. This is a concise version of [3, Definition 4], where the set \(X_{k}\) is called a _disc-with-edges_. See Figure 29 for an illustration.

**Definition 5.55**.: _Let \(L_{k}\) be an ordered list of elements from \(\{s_{1},t_{1},s_{2},t_{2},\ldots,s_{k},t_{k}\}\) defined inductively. For \(k=1\) we set \(L_{1}=(s_{1},t_{1})\). Assume that \(L_{k}\) is already constructed with \(t_{k}\) being its last element. We obtain \(L_{k+1}\) from \(L_{k}\) by inserting \(s_{k+1}\) before \(t_{k}\) and inserting \(t_{k+1}\) after \(t_{k}\). We define \(\mathcal{T}_{k}=\{(s_{i},t_{i})\mid i\in[k]\}\)._

_Now we construct the set \(X_{k}\subseteq\mathbb{R}^{2}\). Let \(D\subseteq\mathbb{R}^{2}\) be a closed disc. We place the elements of \(L_{k}\) on the boundary of \(D\) in the counter-clockwise order. For \(i\in[k]\) let \(S_{i},T_{i}\) be the connected components of \(\partial D\setminus L_{k}\) neighboring \(t_{i}\). For each \(i\in[k]\) let \(E_{i}\) be a family of \(2^{i-1}-1\) curves connecting \(S_{i}\) and \(T_{i}\) outside \(D\), in such a way that there are no crossings in \(\bigcup_{i=1}^{k}E_{i}\). We set \(X_{k}=D\cup\bigcup_{i=1}^{k}E_{i}\) and refer to the curves from \(E_{1},\ldots,E_{k}\) as the edges of \(X_{k}\)._

_Next, let \(Y_{k}\subseteq X_{k}\cap\partial D\) be the union of the set \(L_{k}\) and all the endpoints of edges in \(X_{k}\). We set \(Y_{k}^{t}=Y_{k}\cap(E_{k}\cup\{t_{k}\})\) and \(Y_{k}^{s}=Y_{k}\setminus Y_{k}^{t}\)._

Figure 29: Left: The instance \((X_{4},\mathcal{T}_{4})\) of Topological Disjoint Paths. The set \(X_{4}\) is gray and the points from \(\mathcal{T}_{4}\) are black discs. The black and hollow discs form the set \(Y_{4}\), which is divided into \(Y_{4}^{s}\) and \(Y_{4}^{t}\). Middle: The unique solution to \((X_{4},\mathcal{T}_{4})\) traverses the disc \(2^{4}-1\) times from left to right. Right: The gadgets \(G_{4}^{s}\) and \(G_{4}^{t}\). Note that \(2^{1-1}-1=0\) so \(E_{1}=\emptyset\).

The sets \(Y^{s}_{k},Y^{t}_{k}\) divide the distinguished points in \(X_{k}\) into the left side and the right side. Observe that \(|Y^{s}_{k}|=|Y^{t}_{k}|=2^{k}-1\). The instance \((X_{k},\mathcal{T}_{k})\) is designed to have a unique solution, depicted in Figure 29.
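The count \(2^{k}-1\) stated in Lemma 5.56 below can be anticipated by a short doubling argument; as a sketch, writing \(c_{i}\) for the number of times the curve connecting \(s_{i}\) and \(t_{i}\) traverses the disc:

\[c_{1}=1,\qquad c_{i}=2\,c_{i-1}=2^{i-1}\ \text{for}\ i\geq 2,\qquad\sum_{i=1}^{k}c_{i}=\sum_{i=1}^{k}2^{i-1}=2^{k}-1.\]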
In this solution, the \((s_{i},t_{i})\)-path must "go around" the \((s_{i-1},t_{i-1})\)-path, thus traversing the disc twice as many times. **Lemma 5.56** ([3, Lem. 1, Thm. 4]).: _For each \(k\in\mathbb{N}\) the instance \((X_{k},\mathcal{T}_{k})\) of Topological Disjoint Paths has a unique solution (up to homeomorphism). The curves in this solution contain \(2^{k}-1\) subcurves connecting a point in \(Y^{s}_{k}\) to a point in \(Y^{t}_{k}\)._ Adler and Krause [3] used this lemma to construct an instance of Planar \(k\)-Disjoint Paths with a \((2^{k}-1)\times(2^{k}-1)\)-grid, in which every vertex is used by the unique solution. This constitutes an example of a large-treewidth instance in which no vertex is irrelevant. We want to turn the instance \((X_{k},\mathcal{T}_{k})\) into two gadgets that can be plugged into a plane multigraph, enforcing a flow of size \(2^{k}-1\) between the gadgets by using only \(k\) terminal pairs. To this end, we need to translate the topological structure of the disc-with-edges into a graph structure. **Definition 5.57**.: _Consider the instance \((X_{k},\mathcal{T}_{k})\) of Topological Disjoint Paths. We define a gadget \(G^{s}_{k}\) by placing a vertex \(s^{\prime}\) in the interior of \(D\), inserting edges between \(s^{\prime}\) and each element in \(Y^{s}_{k}\), and adding the edges of \(X_{k}\) with both endpoints in \(Y^{s}_{k}\). Analogously we obtain a gadget \(G^{t}_{k}\) by placing a vertex \(t^{\prime}\) in the interior of \(D\), inserting edges between \(t^{\prime}\) and each element in \(Y^{t}_{k}\), and adding the edges of \(X_{k}\) with both endpoints in \(Y^{t}_{k}\). We refer to the vertices \(s^{\prime},t^{\prime}\) as the roots of the gadgets._ The gadgets are depicted in Figure 29. By a slight abuse of notation, we will treat \(s_{i},t_{i}\) as vertices from \(G^{s}_{k}\cup G^{t}_{k}\) and \(\mathcal{T}_{k}\) as a set of pairs of vertices. **Definition 5.58** (Weight implementation).: _Let \((G,\mathcal{T})\) be an instance of Non-crossing Multi-commodity Flow and \((s,t,d)\in\mathcal{T}\) be such that \(d=2^{i}-1\), \(i>0\), and each of \(s,t\) has exactly \(d\) incident edges, which are parallel. We obtain an instance \((\widehat{G},\widehat{\mathcal{T}})\) from \((G,\mathcal{T})\) as follows. Let \(s^{\prime},t^{\prime}\) be the only neighbors of \(s,t\), respectively, in \(G\). We replace \(s\) with the gadget \(G^{s}_{i}\) rooted at \(s^{\prime}\) and replace \(t\) with the gadget \(G^{t}_{i}\) rooted at \(t^{\prime}\). We replace \((s,t,d)\) in \(\mathcal{T}\) with the set \(\{(s_{j},t_{j},1)\mid(s_{j},t_{j})\in\mathcal{T}_{i}\}\)._ In order to prove correctness of this transformation, we need to show that \(\mathcal{T}_{i}\)-walks in a non-crossing \(\widehat{\mathcal{T}}\)-flow in \(\widehat{G}\) traverse \(2^{i}-1\) many times between the gadgets \(G^{s}_{i},G^{t}_{i}\). We will take advantage of the "guarding" request \((v^{i}_{s},v^{i}_{t},1)\) to reduce the analysis to the case where the \(\mathcal{T}_{i}\)-flow traverses a topological disc. Then we could apply Lemma 5.56 to reveal the structure of the flow. **Lemma 5.59**.: _Let \((G^{\prime},\mathcal{T}^{\prime})\) be obtained from \((G,\mathcal{T})\) via binary simplification of \((s,t,d)\in\mathcal{T}\) and \(i\in\mathbb{N}\) belong to the binary representation of \(d\). 
Next, let \((\widehat{G},\widehat{\mathcal{T}})\) be obtained from \((G^{\prime},\mathcal{T}^{\prime})\) by applying the transformation from Definition 5.58 to the request \((u^{i}_{s},u^{i}_{t},2^{i}-1)\in\mathcal{T}^{\prime}\). Then the instances \((G^{\prime},\mathcal{T}^{\prime})\) and \((\widehat{G},\widehat{\mathcal{T}})\) are equivalent._

Proof.: First, consider a non-crossing \(\mathcal{T}^{\prime}\)-flow \(\mathcal{P}^{\prime}\) in \(G^{\prime}\) and the family \(\mathcal{P}^{\prime}_{u}\subseteq\mathcal{P}^{\prime}\) realizing the \((u^{i}_{s},u^{i}_{t})\)-paths. They all must visit \(s^{\prime}\) and \(t^{\prime}\). Because \((v^{i}_{s},v^{i}_{t},1)\in\mathcal{T}^{\prime}\), there is a \((v^{i}_{s},v^{i}_{t})\)-path \(P_{v}\) in \(\mathcal{P}^{\prime}\), also visiting \(s^{\prime}\) and \(t^{\prime}\). Since \(P_{v}\) is non-crossing with \(\mathcal{P}^{\prime}_{u}\), the order in which the paths from \(\mathcal{P}^{\prime}_{u}\) enter \(u^{i}_{s}\) is symmetric to the order in which they enter \(u^{i}_{t}\). Therefore, \(\mathcal{P}^{\prime}_{u}\) can be translated into a non-crossing \(\mathcal{T}_{i}\)-flow.

Next, consider a non-crossing \(\widehat{\mathcal{T}}\)-flow \(\widehat{\mathcal{P}}\) in \(\widehat{G}\) and the family \(\widehat{\mathcal{P}}_{u}\subseteq\widehat{\mathcal{P}}\) being a \(\mathcal{T}_{i}\)-flow. We need to show that \(\widehat{\mathcal{P}}_{u}\) contains \(2^{i}-1\) subwalks connecting \(s^{\prime}\) and \(t^{\prime}\). We can flip the embedding of \(\widehat{G}\) to make the face incident to \(s^{\prime}\) the outer face, without changing the rotation system; this transformation preserves the property of being a non-crossing flow (see Figure 30). Let \(D_{G}\subseteq\mathbb{R}^{2}\) be the subset of the plane without the outer face of \(\widehat{G}\) and the face incident to \(t^{\prime}\). Due to the existence of a \((v_{s}^{i},v_{t}^{i})\)-path \(P_{v}\) in \(\widehat{\mathcal{P}}\), which is non-crossing with \(\widehat{\mathcal{P}}_{u}\) and has endpoints of degree \(1\), the flow \(\widehat{\mathcal{P}}_{u}\) can be drawn as a family of non-crossing curves in \(D_{G}\setminus P_{v}\). This set is a topological disc, i.e., there is a homotopy that translates \((D_{G}\setminus P_{v},\mathcal{T}_{i})\) into \((X_{i},\mathcal{T}_{i})\). Hence Lemma 5.56 implies that \(\widehat{\mathcal{P}}_{u}\) is equivalent to the unique solution to the Topological Disjoint Paths instance \((X_{i},\mathcal{T}_{i})\), and so it contains \(2^{i}-1\) walks connecting \(s^{\prime}\) and \(t^{\prime}\). This concludes the proof.

We are ready to summarize our reduction. Recall that an instance \((G,\mathcal{T})\) of Non-crossing Multicommodity Flow is called unitary if every demand in \(\mathcal{T}\) is \(1\) and every terminal occurring in \(\mathcal{T}\) has degree \(1\) in \(G\). Observe that the newly created requests are unitary and the new terminals are of degree \(1\). Hence applying both transformations to all original requests yields a unitary instance.

**Proposition 5.60**.: _Let \((G,\mathcal{T})\) be an instance of Non-crossing Multicommodity Flow and \(\ell\in\mathbb{N}\) be such that \(d_{i}\leq 2^{\ell}\) for each \((s_{i},t_{i},d_{i})\in\mathcal{T}\).
Then in polynomial time we can transform \((G,\mathcal{T})\) into an equivalent unitary instance \((\widehat{G},\widehat{\mathcal{T}})\) of Non-crossing Multicommodity Flow satisfying \(|\widehat{\mathcal{T}}|=\mathcal{O}(\ell^{2})\cdot|\mathcal{T}|\)._

Proof.: We first apply binary simplification to all requests in \(\mathcal{T}\), thus increasing the number of requests by a factor of \(\mathcal{O}(\ell)\). By Lemma 5.53, the obtained instance \((G^{\prime},\mathcal{T}^{\prime})\) is equivalent to \((G,\mathcal{T})\). Next, we apply weight implementation to each non-unitary request in \(\mathcal{T}^{\prime}\), obtaining a unitary instance \((\widehat{G},\widehat{\mathcal{T}})\). Again, the number of requests is multiplied by \(\mathcal{O}(\ell)\). The instance \((\widehat{G},\widehat{\mathcal{T}})\) is equivalent to \((G^{\prime},\mathcal{T}^{\prime})\) due to Lemma 5.59.

Figure 30: An illustration to the proof of Lemma 5.59 for \(d=4,i=2\). Left: A multigraph after binary simplification and weight implementation. The vertices \(s,t\) got replaced by cycles \(C_{s},C_{t}\). The vertices \(s^{\prime},t^{\prime}\) and the incident parallel edges are part of the gray area. Next, the vertices \(u_{s}^{2},u_{t}^{2}\) and request \((u_{s}^{2},u_{t}^{2},3)\) got replaced with the gadgets \(G_{2}^{s},G_{2}^{t}\) and requests \((s_{1},t_{1},1)\), \((s_{2},t_{2},1)\). The crux of the lemma is to show that the green walk cannot use the dotted shortcut and it needs to enter the cycle \(C_{t}\). Middle: After flipping the embedding we can assume that \(s^{\prime}\) lies on the outer face. Right: After removing the \((v_{s}^{2},v_{t}^{2})\)-walk, the remaining gray area becomes a topological disc. This reduces the analysis to the instance \((X_{2},\mathcal{T}_{2})\) where the unique solution traverses \(2^{i}-1=3\) times between \(C_{s}\) and \(C_{t}\).

We remark that almost all the requests used in our reduction from Set Cover have demands being powers of two. The only exceptions are the requests of type (F) in Section 5.3.2. While it is possible to replace each of them with \(r\) requests of demand \(2^{r}\) (without increasing the total number of requests significantly) and shave off a single \(\mathcal{O}(\ell)\)-factor in weight implementation, we have chosen not to further complicate the description of the already complex gadget. Besides, we think that Proposition 5.60 in its general form might find applications also outside our context.

### From a non-crossing flow to disjoint paths

Recall that for a vertex set \(X\subseteq V(G)\), a set of pairs \(\mathcal{T}\subseteq X^{2}\) is called realizable if there exists a \(\mathcal{T}\)-linkage (that is, a \(\mathcal{T}\)-family of vertex-disjoint paths) in \(G\). Also recall the notion of a proper embedding from Section 3 and the property of being cross-free from Section 4.3.

**Lemma 5.61**.: _Let \(I\) be a noose and \(X\subseteq I\) be a finite set of size \(k\). There exists a subcubic plane graph \(H\) properly embedded in \(\mathsf{Disc}(I)\) such that \(H\cap I=X\), \(\deg_{H}(x)\leq 2\) for every \(x\in X\), and every cross-free \(\mathcal{T}\subseteq X^{2}\) is realizable in \(H\). This graph can be constructed in time polynomial in \(k\)._

Proof.: The claim is trivial for \(k\leq 2\).
For \(k\geq 3\) we use a construction similar to that from Lemma 4.39, but instead of a \((k,k)\)-cylindrical grid (which has vertices of degree 4) we use a \(k\)-cylindrical wall (with \(k\) cycles and \(k\) edges between each pair of consecutive cycles; see Figure 3 on page 11 with 3 cycles) and identify the degree-2 vertices on the outer face with \(X\). A \(k\)-cylindrical wall is a subcubic graph. The argument that every cross-free \(\mathcal{T}\subseteq X^{2}\) is realizable is the same as for the cylindrical grid and follows from the criterion from Lemma 4.38.

Armed with this gadget, we can assume that a given graph is simple and subcubic.

**Lemma 5.62**.: _There is a polynomial-time algorithm that, given a unitary instance \((G,\mathcal{T})\) of Non-crossing Multicommodity Flow, transforms it into an equivalent unitary instance \((G^{\prime},\mathcal{T}^{\prime})\) such that \(G^{\prime}\) is simple, subcubic, and \(|\mathcal{T}^{\prime}|=|\mathcal{T}|\)._

Proof.: By the definition of a unitary instance, when \(v\in V(G)\) occurs as a terminal in \(\mathcal{T}\) then \(\deg_{G}(v)=1\). Therefore it suffices to reduce the degrees of non-terminal vertices. Let \(v\in V(G)\) be a vertex of degree \(k=\deg_{G}(v)\geq 2\); then \(v\) does not appear in \(\mathcal{T}\). We draw a noose \(I\) around \(v\), intersecting each edge from \(E_{G}(v)\) once, and no other edges or vertices of \(G\). Let \(X\) be the set of intersections of \(I\) with \(E_{G}(v)\). We replace \(v\) with a gadget from Lemma 5.61, creating a subcubic subgraph \(H\) properly embedded in \(\mathsf{Disc}(I)\). Since \(\deg_{H}(x)\leq 2\) for every \(x\in X\), the degree of \(x\) in \(G^{\prime}\) becomes at most 3.

We argue that this transformation yields an equivalent instance. Consider a non-crossing \(\mathcal{T}\)-flow \(\mathcal{P}\) in \(G\). Let \(\mathcal{T}_{v}\subseteq X^{2}\) be the set of pairs representing pairs of consecutive edges from \(E_{G}(v)\) traversed by walks from \(\mathcal{P}\) (recall that no path from \(\mathcal{P}\) has \(v\) as an endpoint). Then \(\mathcal{T}_{v}\) is cross-free with respect to \(H\). By Lemma 5.61, there exists a \(\mathcal{T}_{v}\)-linkage in \(H\). Since vertex-disjoint paths are clearly non-crossing, this allows us to transform \(\mathcal{P}\) into a non-crossing \(\mathcal{T}\)-flow in \(G^{\prime}\). On the other hand, when a non-crossing \(\mathcal{T}\)-flow exists in \(G^{\prime}\), it can be turned into a non-crossing \(\mathcal{T}\)-flow in \(G\) by simply contracting \(H\) back to a single vertex (see Observation 5.3).

We apply this modification to every vertex in \(G\) with degree at least \(2\), thus creating a subcubic graph. Note that every pair of parallel edges in \(G\) connects two vertices of degree at least \(2\), and during the transformation their endpoints become distinct. Hence the outcome is also a simple graph.

In a subcubic graph, the notions of (non-crossing) edge-disjointness and vertex-disjointness coincide, so we can treat an instance of Non-crossing Multicommodity Flow as an instance of Planar (Edge-)Disjoint Paths. Note that in the first problem we assume that a plane embedding is provided in the input, while in the last two we do not. However, once the reduction is done, we can discard the fixed embedding.
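For orientation, the parameter bookkeeping that the theorem below relies on can be summarized in one line; this is only a recap of Theorem 5.49 and Proposition 5.60 (here \(\ell\) denotes the demand exponent from Proposition 5.60, not the Set Cover budget):

\[|\mathcal{T}|=\mathcal{O}(k^{5}),\qquad d_{i}\leq 2^{\mathcal{O}(k)}\;\Rightarrow\;\ell=\mathcal{O}(k),\qquad|\widehat{\mathcal{T}}|=\mathcal{O}(\ell^{2})\cdot|\mathcal{T}|=\mathcal{O}(k^{2})\cdot\mathcal{O}(k^{5})=\mathcal{O}(k^{7}).\]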
**Theorem 5.63**.: _There is a polynomial-time algorithm that, given an instance \((k,\mathcal{S},\ell)\) of Set Cover, outputs an equivalent instance \((G,\mathcal{T})\) of Planar (Edge-)Disjoint Paths with \(|\mathcal{T}|=\mathcal{O}(k^{7})\)._ Proof.: Theorem 5.49 allows us to compute an instance \((G,\mathcal{T})\) of Non-crossing Multicommodity Flow with \(|\mathcal{T}|=\mathcal{O}(k^{5})\) that is equivalent to \((k,\mathcal{S},\ell)\). The demands in \((G,\mathcal{T})\) are bounded by \(2^{\mathcal{O}(k)}\). Next, we use Proposition 5.60 to transform \((G,\mathcal{T})\) into an equivalent unitary instance \((\widehat{G},\widehat{\mathcal{T}})\) with \(|\widehat{\mathcal{T}}|=\mathcal{O}(k^{7})\). Subsequently, \((\widehat{G},\widehat{\mathcal{T}})\) can be transformed into an equivalent unitary instance \((G^{\prime},\mathcal{T}^{\prime})\) such that \(G^{\prime}\) is simple, subcubic, and \(|\mathcal{T}^{\prime}|=|\widehat{\mathcal{T}}|\) (Lemma 5.62). Let \(\mathcal{T}_{DP}=\{(s,t)\mid(s,t,1)\in\mathcal{T}^{\prime}\}\). In a subcubic graph with all terminals of degree \(1\), every walk in a solution is a path and two paths are edge-disjoint if and only if they are vertex-disjoint. In turn, vertex-disjointness implies being non-crossing. Hence the following statements are equivalent. 1. There is a non-crossing \(\mathcal{T}^{\prime}\)-flow in \(G^{\prime}\). 2. There is a \(\mathcal{T}_{DP}\)-family of vertex-disjoint paths in \(G^{\prime}\). 3. There is a \(\mathcal{T}_{DP}\)-family of edge-disjoint paths in \(G^{\prime}\). As a consequence, \((G^{\prime},\mathcal{T}_{DP})\) is a yes-instance of Planar Disjoint Paths (resp. Planar Edge-Disjoint Paths) if and only if \((G^{\prime},T^{\prime})\) is a yes-instance of Non-crossing Multicommodity Flow. This concludes the proof. The Set Cover problem parameterized by the universe size is known not to admit a polynomial kernel unless coNP \(\subseteq\) NP/poly [27]. Theorem 5.63 implies the same for Planar (Edge-)Disjoint Paths parameterized by the number of the requests. Since Set Cover under this parameterization is also WK[1]-complete [50], we establish WK[1]-hardness of Planar (Edge-)Disjoint Paths as well. As a consequence, we obtain Theorems 1.1, 1.2, and 1.3. ## 6 Conclusion We conclude the paper with several open questions. Possibly, the ideas used in this paper will be useful in solving some of these questions. In particular, we believe that our construction of a \(2^{\mathcal{O}(k^{2})}\cdot n^{\mathcal{O}(1)}\)-time algorithm for Planar Disjoint Paths, based on the irrelevant edge rule, could be easier to generalize to bounded-genus graphs than the approach based on enumerating homotopy classes of [17, 71]. It is unclear, however, how to extend the treewidth reduction procedure [2] and the \(n^{\mathcal{O}(k)}\)-time algorithm [91]. This leads to the first open question. * Can Disjoint Paths on graph classes substantially larger than the class of planar graphs (such as bounded-genus or minor-free graphs) be solved in time \(2^{k^{\mathcal{O}(1)}}\cdot n^{\mathcal{O}(1)}\), and does it admit a polynomial kernel when parameterized by \(k+\mathsf{tw}\)? Currently, the best known running time for proper minor-closed graph classes is galactic as in the general case. * Can Planar Disjoint Paths be solved in time \(2^{o(k^{2})}\cdot n^{\mathcal{O}(1)}\) or even \(2^{\mathcal{O}(k)}\cdot n^{\mathcal{O}(1)}\)? 
We remark that the existing NP-hardness proof for Planar Disjoint Paths[64] only implies that the problem is not solvable in time \(2^{o(\sqrt{k})}\cdot n^{\mathcal{O}(1)}\) unless the ETH is false. Note that even though \(2^{o(\sqrt{k})}\) may seem a natural parameter dependency for a "genuinely planar" problem, for the related Planar Steiner Tree problem a \(2^{o(k)}\cdot n^{\mathcal{O}(1)}\) lower bound is known [78]. * Can the extension of Planar Disjoint Paths to directed graphs be solved in time \(2^{k^{\mathcal{O}(1)}}\cdot n^{\mathcal{O}(1)}\), and does it admit a polynomial kernel when parameterized by \(k+\mathsf{tw}\)? Currently, the best known running time is \(2^{2^{k^{\mathcal{O}(1)}}}\cdot n^{\mathcal{O}(1)}\)[24]. * While Disjoint Paths is long known to be in FPT, the existing algorithms are very complicated. Can one design a "simple" \(f(k)\cdot n^{\mathcal{O}(1)}\)-time (or even \(n^{f(k)}\)-time) algorithm for this problem? * Being a close relative of Disjoint Paths, can Topological Minor Testing on the class of planar graphs be solved in time \(2^{k^{\mathcal{O}(1)}}\cdot n^{\mathcal{O}(1)}\)? Currently, the best known running time is \(2^{2^{2^{k^{\mathcal{O}(1)}}}}\cdot n^{\mathcal{O}(1)}\)[37]. Note that the question is not asked for Minor Testing since it becomes "easy" on planar graphs (due to reasons that do not apply to Disjoint Paths and Topological Minor Testing on planar graphs, and not to Minor Testing in general) [1]. * The Min-Sum Disjoint Paths problem is the optimization version of Disjoint Paths where we do not only need to determine whether a solution exists, but, if the answer is positive, find one where the sum of the lengths of the solution paths is minimized. The Shortest Disjoint Paths is a restricted case of this problem where the task is to determine whether there exists a solution where every path is a shortest one between its endpoints. Although introduced more than 20 years ago [32], to date, all we know on Shortest Disjoint Paths is that it is W[1]-hard [69] (and hence also Min-Sum Disjoint Paths is W[1]-hard), in XP [69], and, on digraphs, in P when \(k=2\)[7] (in contrast to Disjoint Paths). The status of Min-Sum Disjoint Paths is grimmer: all we know is that it is in P when \(k=2\)[8], and, on digraphs, it is NP-hard even when \(k=2\) (because it generalizes Disjoint Paths). Specifically, we ask: Is Shortest Disjoint Paths (or even Min-Sum Disjoint Paths) on planar graphs in FPT? Is Min-Sum Disjoint Paths (on undirected graphs) in XP? Is Shortest Disjoint Paths on digraphs in XP? * Does Disjoint Paths admit a polynomial kernel when restricted to chordal graphs? Currently, a positive answer is known for split graphs [49, 95], and, more generally, well-partitioned chordal graphs [4]. * For which other problems that admit a treewidth reduction can we show the impossibility of a polynomial treewidth reduction? We refer to [37, 43, 45, 46, 72, 76] for some examples of problems other than Disjoint Paths and Minor Testing that are known to admit super-polynomial treewidth reductions. Lastly, we remark that it might be interesting to study lossy kernels and FPT approximation algorithms [34] for optimization versions of the above-mentioned problems in future works.
2309.01551
Is Your Learned Query Optimizer Behaving As You Expect? A Machine Learning Perspective
The current boom of learned query optimizers (LQO) can be explained not only by the general continuous improvement of deep learning (DL) methods but also by the straightforward formulation of a query optimization problem (QOP) as a machine learning (ML) one. The idea is often to replace dynamic programming approaches, widespread for solving QOP, with more powerful methods such as reinforcement learning. However, such a rapid "game change" in the field of QOP could not pass without consequences - other parts of the ML pipeline, except for predictive model development, have large improvement potential. For instance, different LQOs introduce their own restrictions on training data generation from queries, use an arbitrary train/validation approach, and evaluate on a voluntary split of benchmark queries. In this paper, we attempt to standardize the ML pipeline for evaluating LQOs by introducing a new end-to-end benchmarking framework. Additionally, we guide the reader through each data science stage in the ML pipeline and provide novel insights from the machine learning perspective, considering the specifics of QOP. Finally, we perform a rigorous evaluation of existing LQOs, showing that PostgreSQL outperforms these LQOs in almost all experiments depending on the train/test splits.
Claude Lehmann, Pavel Sulimov, Kurt Stockinger
2023-09-04T12:05:45Z
http://arxiv.org/abs/2309.01551v2
# Is Your Learned Query Optimizer Behaving As You Expect? A Machine Learning Perspective

###### Abstract.

The current boom of learned query optimizers (LQO) can be explained not only by the general continuous improvement of deep learning (DL) methods but also by the straightforward formulation of a query optimization problem (QOP) as a machine learning (ML) one. The idea is often to replace dynamic programming approaches, widespread for solving QOP, with more powerful methods such as reinforcement learning. However, such a rapid "game change" in the field of QOP could not pass without consequences - other parts of the ML pipeline, except for predictive model development, have large improvement potential. For instance, different LQOs introduce their own restrictions on training data generation from queries, use an arbitrary train/validation approach, and evaluate on a voluntary split of benchmark queries. In this paper, we attempt to standardize the ML pipeline for evaluating LQOs by introducing a new _end-to-end benchmarking framework_. Additionally, we guide the reader through each data science stage in the ML pipeline and provide novel insights from the machine learning perspective, considering the specifics of QOP. Finally, we perform a _rigorous evaluation of existing LQOs_, showing that PostgreSQL outperforms these LQOs in almost all experiments depending on the train/test splits.

## 1. Introduction

Over the last decade, machine learning (ML) approaches have heavily dominated classical query optimization methods. This trend can be explained by the increased spread of deep learning (DL) applications and the nature of the query optimization problem (QOP) itself. Having in total \(O(n!)\) possible logical plans in the worst case for queries where the join graph is a clique with \(n\) tables, the problem is classified as NP-hard [37]. This implies that exhaustive methods cannot solve the problem for a higher order of joins1, thus creating the need for heuristic approaches.

Footnote 1: PostgreSQL abandons exhaustive methods for queries with 12 or more FROM items.

In Figure 1, we compare typical pipelines for classical and learned query optimizers. The _classical approach_, implemented inside database management systems (DBMS), has the stages of query representation via logical and physical plans, with a follow-up search for an optimal plan using cardinality-based cost model estimations. In addition to dynamic programming-based methods, genetic algorithms [34] are also used, since they are proven to be more efficient for queries with a high number of joins [28]. The bottom part of Figure 1 shows _learned query optimizers_ (LQO), the most recent trend for end-to-end query optimization. These approaches require a more complicated pipeline because of the use of ML methods. Looking at it from the _ML perspective_, the pipeline should consist of several stages, namely (1) training data generation, (2) query & plan encoding, (3) ML model training, and (4) ML model evaluation.
The violation of theoretical ML principles [30] at each stage and the absence of a unified reproducible framework make it currently _impossible to fairly compare the results of LQOs_. Let us briefly describe what can go wrong at each stage, i.e., the major challenges of the ML pipeline for LQOs from both a data science and an engineering perspective, and how we solve them as contributions of this paper.

Figure 1. Comparison of classical and learned query optimizers (LQO) - see top and bottom halves, respectively. The stages (1) Training Data Generation, (3) LQO Training, and (4) LQO Evaluation are the primary components of our End-to-End Benchmarking Framework. Together with the (2) Query & Plan Encoding stage, they form the typical machine learning pipeline for a LQO.

\(\bullet\)**Training Data Generation** _When no ready-to-use training data is provided for benchmarking, opportunities for biased data creation appear._ For LQOs, we observed that only the queries are given as SQL statements for popular benchmarks such as JOB (Krishnan et al., 2017). The key problem is that these statements cannot be explicitly used as input for ML models without querying the databases (DB) and extracting metadata such as cardinalities or execution times. This implies a gap between the given benchmark data and the actual features used to train LQOs, which are strongly correlated with the parametric conditions when querying the database. Contribution: _We discuss general limitations that could hamper the process of similar training data creation and prove a lemma on fair training data generation in Section 3._

\(\bullet\)**Query & Plan Encoding** _Encoding the queries such that the principle of invariance3 is broken leads to inconsistencies in the performance._ For example, when different queries are encoded using column selectivities, it is possible that large sections of the encoding (or even the full vectors) are identical. This is because there exist many filter combinations that result in the same selectivity. Hence, the model would potentially suffer from the mismatch between features and target variables and will only perform well if this inconsistency is mitigated. Footnote 3: We formulate the principle of invariance (Krishnan et al., 2017) in data generation as follows: When given the same input, the data generating system should return the same output. Contribution: _We diagnose the invariance issues in particular methods and give encoding recommendations in Section 4._

\(\bullet\)**LQO Training** _Contravening basic training techniques and misapplying internal mathematical models makes your ML model behave unexpectedly._ Complicated DL models are hard to train, which makes hyperparameter tuning and validation procedures the cornerstone for gaining high predictive power. Moreover, injecting additional mathematical mechanisms can have adverse side effects that negatively impact the training itself and, in turn, the query performance. Contribution: _We propose enhancements to make the training process of LQO methods more stable and reliable in Section 5._

\(\bullet\)**LQO Evaluation** _When your model is trained and then evaluated on a non-fixed train/test split, comparisons become data-centric rather than model-centric_, i.e., the choice of the train/test split strongly correlates with the model's performance. For example, the performance on two different train/test splits of the same type (such as randomly splitting queries) is not comparable.
Explicit examples of this can be seen in Figure 5 of our experiments. Despite the existence of public query optimization benchmarks like the one in (Krishnan et al., 2017), it remains an open question which queries serve as the training data and which ones as the test data. Attempts to establish a unified evaluation procedure were made only recently in (Zhou et al., 2018), (Zhou et al., 2018) and (Zhou et al., 2018). Contribution: _We unify the ways of train/test data splitting for LQO and introduce a procedure to test different levels of generalization in Section 6._

\(\bullet\)**Reproducibility** _Any ML method developed in academia has negligible practical value if it cannot be reproduced on arbitrary software and hardware._ With ML approaches finding widespread use in academic research, navigating the realm of learned query optimization presents challenges, as it requires proficiency in the core subject of database research and numerous related engineering fields. These approaches typically require complex programming code and (ML) models with inherent stochasticity. Hence, reproducibility is becoming a growing concern in academia, particularly where ML is applied. Contribution: _We suggest an **End-to-End Benchmarking Framework** - a novel meta-benchmarking framework that is capable of equalizing the conditions under which the ML-based LQOs are trained and tested, guaranteeing consistency in comparisons in Section 7._

Main contribution: We perform an extensive evaluation of existing LQOs using our end-to-end benchmarking framework in Section 8. **Our results demonstrate that current LQOs often do not perform better than PostgreSQL.** These findings indicate that _novel research is required to make LQOs competitive_ compared to more traditional approaches - and not only in specific cases.

The paper is organized as follows: First, we briefly review recent LQO methods in Section 2. Then we dissect the data science stages in the ML pipeline applied to query optimization, not only discussing potential hurdles that can occur while processing each stage but also suggesting ways of mitigating them via theoretical and practical recommendations (see Sections 3-6). Based on the challenges in the reproducibility of ML approaches, we propose our End-to-End Benchmarking Framework (see Section 7). Afterward, we perform an elaborate experimental evaluation of recently released LQOs from an ML perspective (see Section 8). We conclude the paper in Section 9.

## 2. Related Work

Before end-to-end LQOs appeared, significant progress had been made toward using modern ML approaches for query optimization. For instance, DQ (Krishnan et al., 2017), ReJOIN (Zhou et al., 2018; Zhu et al., 2018), and others (Krishnan et al., 2017; Krishnan et al., 2017) apply reinforcement learning (RL) in an exploration-exploitation strategy with the goal of finding the optimal join order. These methods use a cost model to produce a "join score" reward for the learning agent. The first end-to-end LQO, Neo (Zhou et al., 2018), uses a neural network (NN) to estimate the latencies of a full query plan given a sub-plan as an input. The optimal plan is predicted via a greedy tree search in the join and scan space and consecutive bottom-up plan construction. RTOS (Zhou et al., 2018) assumes that the join graph is built as a sequence of join operations between two tables, ignoring scans, and applies a graph NN to train an RL agent. The predicted query plan is built similarly to Neo, though it applies a depth-first search.
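To make the plan-construction step concrete, the following is a minimal sketch of greedy bottom-up plan search in the spirit of Neo-style methods described above. It is not Neo's actual implementation: the value network `value_net` (a callable returning a predicted latency for a partial plan), the tuple-based plan representation, and the operator labels are illustrative placeholders.

```python
import itertools

def greedy_plan(tables, value_net):
    """Greedily build a join tree bottom-up, always merging the pair of
    sub-plans whose join the value network predicts to be cheapest."""
    frontier = [("scan", t) for t in tables]  # start with one scan per table
    while len(frontier) > 1:
        # Enumerate all candidate joins of two current sub-plans and
        # keep the single cheapest one according to the learned model.
        i, j, joined = min(
            ((i, j, ("join", frontier[i], frontier[j]))
             for i, j in itertools.combinations(range(len(frontier)), 2)),
            key=lambda cand: value_net(cand[2]),
        )
        frontier = [p for k, p in enumerate(frontier) if k not in (i, j)]
        frontier.append(joined)
    return frontier[0]
```

Since each step commits to the locally best join, errors of the value network propagate directly into the final plan, which is one reason the quality of the training data and encodings discussed below matters so much.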
Bao (Bao, 2018) sits on top of the PostgreSQL query optimizer, controlling the execution flow by enabling or disabling a subset of join and scan operations. These subsets are referred to as hint sets, and Bao provides neither the full join order nor which scan types are used for which table but rather advises the query optimizer about which operations not to use. Balsa (Balsa, 2018) is based on the same architecture as Neo. However, it introduces several modifications to the training pipeline: it pre-trains using the cost model estimations of a DBMS instead of real latencies, it uses timeouts during query executions, and it does not sample training data from the replay buffer but rather uses the data points produced by the most recent NN state. Lero (Lero, 2018) formulates the problem as a learning-to-rank (LTR) task and generates various candidate query plans from the DBMS by changing the internal cardinality estimations. The plan comparator module selects the better of two generated candidate plans, similarly choosing the optimal plan during inference. LEON (Lero, 2018) is another LTR method. Unlike Lero, it brute-forces many possible physical plans in a dynamic programming manner and prunes them before training. Training happens only on the top chosen SQL/query plan-pairs, ranked by their latency and posterior uncertainty estimation obtained from a Bayesian NN. LOGER (Lero et al., 2017) uses the conceptual ML model pipeline from RTOS, though extending the action space for join order recommendations by adding the join type. It restricts the operation recommendation, i.e., which join type not to use, and applies \(\epsilon\)-beam search for plan prediction. HybridQO (Luo et al., 2019) uses a mix of cost and latency estimations, like some other methods, but in a different manner: it first gets the candidate plans from the DBMS via hints. Those hints are obtained from the top levels of the query plan tree explored by a Monte-Carlo Tree Search (MCTS) with an upper confidence bound and using the cost as a target (the cost is estimated with an NN from RTOS). Then, the same network architecture is used to predict the latency and uncertainty from the candidate plans. A multi-head performance estimator makes the final plan selection.

In the recent paper (Zhou et al., 2019), the authors question whether training complicated and computationally costly LQOs is reasonable at all. As an alternative, they suggest combining look-ahead information passing (LIP), which uses adaptive semi-join techniques, with adaptive join algorithms (AJA). The latter checks whether a hash join should be replaced by a nested loop join at runtime.

In this paper, we introduce neither a new LQO nor a classical alternative. Instead, we provide _recommendations to improve LQOs_ based on a _vast evaluation of existing LQOs from an ML perspective_.

## 3. Training Data Generation

Typically, ML problems have publicly available benchmarks with ready-to-use training data that is identical for all participants. QOP benchmarks differ regarding the provided data and only serve as a source for generating the training data, suitable as an input into ML models. This makes the whole ML pipeline vulnerable to inconsistencies in the data generation process, namely:

(1) Having training data generated under unreasonable restrictions reduces the domain of data points available for training and potentially decreases the generalization of the ML models.
(2) The generated training data can result in cases where the same input leads to a different output (or target).

In this section, we first explain the choice of the benchmark and then discuss the issues around generating the training data from it.

### Dataset Choice

We use the IMDB database and the corresponding JOB benchmark from (Kang et al., 2017) for all the experiments in this paper. We do not use the STATS-CEB benchmark suggested in (Kang et al., 2017), as it was originally developed for challenges in cardinality estimation as opposed to end-to-end query optimization, which is the focus of this paper. We also do not use the TPC benchmark family (Zhou et al., 2019), as it has underlying assumptions of multivariate uniformity, which does not create reasonable challenges for LQO methods. A recent paper (Zhou et al., 2019) confirms this motivation, where the authors claim that the JOB benchmark is the most challenging one for LQOs.

### Reduced Complexity of Query Plans

During our evaluation of LQOs, we noticed that some authors suggest severely reducing the number of possible physical plans by, for example, disabling nested loop joins (as has been done in (Kang et al., 2017)). This might yield improvements for some queries but solves the query optimization challenge by using a data-dependent solution at the cost of reduced generalizability. As a logical continuation, we can formulate and prove the following lemma:

Lemma 3.1.: _Limitations such as disabling scan/join methods, using non-exact optimization, or restricting join tree types can increase the chance of finding a sub-optimal plan._

Proof.: We prove the statement by contradiction, showing that there is an example of a query for which a limited optimizer finds a slower plan than the one found without limitations. The PostgresPro Community (Kang et al., 2017) discussed that any of the join methods could have an advantage over others depending on the selectivity of subqueries. The authors of (Kang et al., 2017) show experimentally that disabling nested loop joins in PostgreSQL can improve the performance of query 16b or harm the performance of query 24b. For bitmap and tid scans, the Genetic Query Optimizer (GEQO), and bushy trees, we provide extensive experiments producing the counter-examples in Sections 8.4 to 8.6, respectively.

### Invariant Training Data Generation

The data used as an input into LQO ML models, which all have either a reward or a prediction value, has the canonical form of \((D,y)\)-pairs: \(D\) refers to the vector of _feature variables_, consisting of either an independent set of variables \(X\) for supervised methods, or \((s,a,s^{\prime})\) - a tuple of _state_, _action_, and _next state_, respectively - for RL methods. \(y\) is a _target variable_, which is either the _query latency_, _cost_, or the _ranking_, depending on the ML model used. In this subsection, we discuss why both types of variables are subject to the absence of invariance during training data generation.

#### 3.3.1. Feature Variables: Dynamic Optimization

The vast majority of LQOs use the pg_hint_plan extension (Kang et al., 2017) to _force PostgreSQL_ to _execute an explicit query plan_ rather than using a plan predicted by the built-in query optimizer. However, one should not expect that a hinted plan is really executed as specified. This is due to the dynamic updates of the plan during execution (Bauer et al., 2017), referred to as _dynamic optimization_.
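One pragmatic way to detect such discrepancies is to compare the operator tree the DBMS reports before execution with the one it reports after actually executing the query. The following is a minimal sketch, assuming a psycopg2-style cursor against a PostgreSQL instance with pg_hint_plan loaded (`hinted_sql` would carry a `/*+ ... */` hint comment); it is a sanity check of our own, not part of any LQO's code base.

```python
import json

def _json_plan(row):
    """EXPLAIN (FORMAT JSON) returns a json column; psycopg2 usually
    deserializes it already, but fall back to json.loads for safety."""
    return row if not isinstance(row, str) else json.loads(row)

def plan_shape(node):
    """Recursively reduce an EXPLAIN plan node to its operator tree."""
    return (node["Node Type"], [plan_shape(c) for c in node.get("Plans", [])])

def hint_respected(cur, hinted_sql):
    # The plan the optimizer reports without executing the query ...
    cur.execute("EXPLAIN (FORMAT JSON) " + hinted_sql)
    estimated = plan_shape(_json_plan(cur.fetchone()[0])[0]["Plan"])
    # ... versus the plan reported after an actual execution.
    cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + hinted_sql)
    executed = plan_shape(_json_plan(cur.fetchone()[0])[0]["Plan"])
    return estimated == executed
```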
All the LQOs we evaluated force the DBMS to execute their plans during the stage of plan encoding, hence potentially training on incorrect data. Dynamic optimization could also be the reason for a possible discrepancy between the executed plan and the output provided by EXPLAIN. This means that LQOs which rely on the cardinality estimations from EXPLAIN potentially introduce significantly inaccurate estimations.

_Recommendation:_ This could be mitigated via a _direct RL approach_, where the DBMS is treated as a "black box". The objective function is directly maximized via gradient descent without the need to learn transition probabilities (i.e., the stochastic behavior of the DBMS) and without the need to solve Bellman equations (Billman et al., 2016).

#### 3.3.2. Dependent Variables: Cold vs. Hot Cache

If a query is executed several times, the execution time decreases due to reading pre-calculated information from previous runs (hot cache) instead of creating everything from scratch (cold cache). We want to create a situation that yields comparable and consistent results for every query. Hence, the cache status should be either fully cold or fully hot, i.e., all potential caching has been performed. No intermediate "warm" cache should be allowed. However, it is unreasonable to expect a full cold cache situation (Friedman, 2017). Moreover, it is debatable whether it is fair to run queries with a cold cache, considering that it disables all the optimization techniques that the DBMS has based on cache buffers.

_Recommendation_: Taking into account potential correlations of queries inside workloads like JOB due to the use of base templates/patterns, we believe that _forcing a hot cache setting is fairer_, as it mitigates the influence of previously executed queries on the execution time of any particular query. The way of achieving a hot cache setup is discussed in detail in Section 7.3; conceptually, the same query is run repeatedly until its latency converges.

## 4. Query & Plan Encoding

In this section, we discuss which information can be extracted from SQL queries and their physical plans as input to the ML model. Moreover, we explain which principles should be followed so that LQO models are trained smoothly. The recent LQOs, to the best of our knowledge, are all _query-driven methods_, in contrast to data-driven methods used for cardinality estimation (Friedman, 2017; Dosov et al., 2017). In other words, LQOs use queries as an indispensable proxy to the data underneath the DBMS. This implies that the encoding schema for a query should be both expressive and robust. We will now discuss the _principles of encoding robustness and expressiveness_ and how we can achieve them.

### Encoding Robustness

Table 1 gives an overview of the main encoding components used by various LQOs. Note that we distinguish between _query encoding_ and _plan encoding_. For instance, the text attributes of the query can either be encoded based on their cardinality or using word2vec. Moreover, encodings can be aggregated using either stacking or pooling, sometimes with additional post-transformations. We notice that Bao and Lero do not use a query encoding but only a plan encoding. For instance, Bao does not identify which table is used in a particular node of the query plan, using only table cardinalities and costs. Such a representation can benefit from more schema-agnosticism and easier re-training when the database schema changes, though it violates the _principle of invariance_.
Let us consider the following thought experiments. Applying different filters in a query can result in the same cardinality for the table. Similarly, tables with the same cardinalities can have the same encoding. Since query latencies can differ substantially when accessing different tables, this plan encoding will result in a 1-to-many mapping of \((D,y)\)-data pairs. However, ideally, we want a 1-to-1 mapping between \(D\) and \(y\) to uniquely identify the latency or costs \(y\) of a given query and the respective plan. Moreover, even having the query encoding as an additional input cannot guarantee invariance under a single cardinality encoding of the attributes. As we have discussed in the example above, applying different filters for a given column can result in the same cardinality estimation, i.e., leads to the loss of invariance.

_Recommendation_: To avoid spoiling the training process by not having the ideal 1-to-1 mappings for \((D,y)\)-data pairs, one can _use embeddings instead of single-value representations_, e.g., embeddings for text attributes like in Neo, and explicit vectorization of filters like in RTOS.

### Encoding Expressiveness

The final set of features should clearly reflect both the global and local context. In query optimization, the global context is the _query_ (as it does not change throughout the physical plan space search), and the local context is the _query plan_. This concept comes from Graph CNNs (Golovolov et al., 2017). The basic idea is that applying more rounds of convolutions in the neural architectures will result in a graph node embedding with more global graph context and less local context. Here, we do not try to find the trade-off between the amount of query-based and plan-based information but rather try to use both as balanced input data. Continuing the idea of using graph NNs, graph transformers (Dosov et al., 2017) are used in LOGER in an adjacent context for query encoding aggregation. On the other hand, methods like Bao and Lero are missing the query encoding part, which increases the probability of converging to a local optimum (Chen et al., 2018).

_Recommendation_: We would suggest not only _using both the query and the plan encoding_ but also _applying feature extraction prior to the training loop_, which will result in better convergence.

## 5. Training Learned Query Optimizers

In this section, we discuss how the "brains" of LQOs work and what conditions should be met to make them work as expected. The key feature of recent LQOs is the possibility to learn the entire query optimization process with the help of ML models. From Table 1, it is visible how different the training pipelines are among LQOs. For example, a query plan having a tree representation structure implies two possibilities when processing: some can treat it as an image and apply Tree Convolutions (Vaswani et al., 2017), others treat it as a sequence of node pairs (i.e., text) and apply a Tree-LSTM (Vaswani et al., 2017). However, there is still no common ground, e.g., for the performance analysis during model training or the choice of the training method. In this section, we discuss the most widespread issues of LQOs at the training phase.

### Avoiding ML Model Overfitting

Overfitting is a typical ML problem when the model performance improves on the training data and at the same time deteriorates on the validation data (Vaswani et al., 2017).
From the definition, it is clear that RL-based methods do not suffer from this problem because they learn an optimal policy by maximizing or minimizing a non-stationary objective function that depends on the action policy itself. However, RL methods might get stuck in a sub-optimal policy without enough exploration (Vaswani et al., 2017). Contrarily, classical supervised methods are prone to converge to a suboptimal solution. To avoid overfitting, commonly _hyperparameter tuning via cross-validation (CV)_, _early stopping_ and _regularization_ are applied. Regularization techniques such as dropout (Vaswani et al., 2017) are straightforward and simply increase the number of hyperparameters that need to be tuned, though other techniques are harder to tweak. Among recent LQOs, only RTOS applies CV to measure final aggregated performance metrics, though this does not help choose the final model. Balsa uses early stopping with performance improvement on the non-fixed validation set. LEON applies a similar early-stopping procedure, though using accuracy as a target metric. Bao uses a continuous "time series" testing of the model on previously unseen queries.

_Recommendation_: For RL methods, one can still _use hyperparameter tuning_ as it would also help improve the general model performance. For QOPs, accuracy for both cost and latency is a suboptimal quality metric as we do not know the optimal plan in advance (at least for higher-order joins). Thus, using accuracy as an early-stopping or cross-validation criterion is undesirable. The holdout data should be fixed (not CV, not "time series"), as the measurement on it should be comparable (Balsa et al., 2017).

### Changing Target Variables On-the-Fly

Query optimization has interesting specifics regarding the target to be optimized, which could either be a cost or a latency. This results in a _trade-off between speed_ (as costs could be quickly estimated by an arbitrary cost model) _and accuracy_ (as latency gives the exact value for how long the query takes to execute). Some methods like HybridQO take advantage of both by first training the model that suggests plans based on cost and then training another model that chooses between candidates based on latencies. At the same time, methods like RTOS, Balsa, Lero, and LEON try to use a single predictive model that first pre-trains using costs and then continues training with latencies. A key issue of this approach is that latencies and costs have significantly different numerical properties, and any progress made in the pre-training phase is lost, as the model needs to adapt to an entirely new scale and variance of the target values (Balsa et al., 2017).

_Recommendation_: You can _exchange the cost and latency on-the-fly during training when using learning-to-rank models_, since the real values are transformed into relative rankings forming the target variable (Krishnan et al., 2017). Another approach is to _use an architecture that chains the ML models_ like in HybridQO, where different target variables are served to different models in the ML pipeline.

## 6. Evaluating Learned Query Optimizers

In this section, we outline the importance of choosing the right test set, how this decision influences the model's measured performance, and the concept of covariate shift.

### Test Set Choice

The _train/test split_ is a cornerstone of any supervised method.
This split is used to differentiate between which part of the data an ML model is allowed to see during training and which part is used to test its ability to perform on previously unseen data, measuring the generalization ability of the model. The extended JOB workload introduced by Neo (Sandes et al., 2017) was a first attempt to test the ability of models to deal with previously unseen queries that are distinct from the original JOB queries. In particular, the queries added in Ext-JOB exhibit additional operators that are not present in JOB (such as GROUP BY or ORDER BY operators). Due to the nature of merge joins (Sandes et al., 2017), the LQOs with a preference towards this join method tend to gain an advantage from including ORDER BY operators. As a result, the comparison between different methods is unfairly skewed.

Balsa introduced JOB-Slow, where the 19 slowest queries form the test set, and all other queries form the training set. This is an intuitively simple-to-understand train/test split that emphasizes the queries that have the most impact on the overall execution time for a full workload. However, all 19 queries of the JOB-Slow test set have 11 or fewer joins, while 11 queries have just 6 or fewer joins. Figure 2 shows a scatter plot of the execution time vs. the number of joins. We observe that queries having between 6 and 11 joins have the largest execution times and thus the highest potential for being optimized. At the same time, this is the range where non-exhaustive optimizers are typically disabled (such as PostgreSQL's GEQO by default being enabled only for 12 or more FROM items). Hence, exhaustive methods can still fully explore the space of possible plans.

Another approach for splitting queries was introduced by (Sandes et al., 2017), where the authors built train/test splits based on the number of joins.
For example, all queries with 3 or 4 joins form the test set, and all others form the training set. From Figure 2 it is clear that the number of joins is a poor proxy for execution time (a regression analysis yields \(R^{2}=-0.11\)). Thus, splitting queries in this way forms groups that are not aligned with the true optimization target, i.e., the execution time.

_Recommendation:_ We propose several _edge cases for train/test splits to cover different areas of generalization_, namely the generalization gap and sampling out-of-distribution (see Section 7.2).

**Table 1. Main encoding components of LQOs.** We distinguish between _query encoding_ and _plan encoding_. Both Bao and LOGER provide hints about what types of joins not to use. Bao also provides hints for scan types.

| LQO | Adjacency Matrix\({}^{1}\) | Numerical Attributes\({}^{2}\) | Text Attributes | Encoding Aggregation | Join Type | Scan Type | Table Identifier\({}^{3}\) | Table Data | ML Model | Architecture | Prediction | Model Testing | Extra Queries\({}^{4}\) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Neo | ✓ | cardinality | word2vec | stacking | ✓ | ✓ | ✓ | - | Regression | Tree-CNN | Plan | Static | - |
| RTOS | ✓ | filters, cardinality | - | FC + pooling | - | - | ✓ | - | Regression | Tree-LSTM | Plan | CV | - |
| Bao | - | - | - | - | ✓ | ✓ | - | ✓ | Regression | Tree-CNN | Hint set | Time Series | ✓ |
| Balsa | ✓ | cardinality | - | stacking | ✓ | ✓ | ✓ | - | Regression | Tree-CNN | Plan | Static | - |
| Lero | - | - | - | - | ✓ | ✓ | ✓ | ✓ | LTR | Tree-CNN | Plan | Static | ✓ |
| LEON | ✓ | cardinality | - | stacking | ✓ | ✓ | ✓ | - | LTR | Tree-CNN | Plan | Static | - |
| LOGER | ✓ | filters, cardinality | - | FC + pooling + GT | ✓ | - | ✓ | - | Regression | Tree-LSTM | Hint | Static | - |
| HybridQO | ✓ | cardinality | - | stacking + FC | ✓ | ✓ | ✓ | ✓ | Regression | Tree-LSTM | Plan | Static | - |

\({}^{1}\) One-hot encoding of the join subgraph for a particular (sub)query. \({}^{2}\) Filters explicitly encode \(<\), \(=\), and \(>\) symbols with min-max scaled filter values. \({}^{3}\) One-hot encoding of tables in the DBMS schema. \({}^{4}\) Whether the method uses additional queries (outside of the provided benchmark queries) for training data generation or not. Abbreviations: CV: cross-validation on JOB; FC: fully-connected layer in the neural network; GT: graph transformation; LTR: learning-to-rank; Static: static split of JOB; Time Series: sequential continuous testing on previously unseen queries.

### Covariate Shift

Another relevant topic for evaluating LQOs is the concept of _covariate shift_, i.e., a change in the database content away from the state on which a method was trained. DBMSes tackle this challenge by continuously updating their internal statistics. For an LQO, however, a change in the database content affects how a query is encoded and thus its prediction. For example, a query about movies with a release date greater than 2022 will continuously increase its result set size, as newly released movies are added to the DBMS.
While this topic is often mentioned as aspirational future work, methods like Bao have started to design their encoding to deal with covariate shift by omitting table and column identifiers from their encoding (see Section 4 for more details). However, as we show in an experiment in Section 8.3, updated cardinality estimates in the encoding are insufficient to keep up with changing database content.

_Recommendation:_ We propose that future methods should _include a simple experiment to measure the ability to deal with covariate shift_, as we have performed in Section 8.3.

## 7. Framework for Benchmarking Learned Query Optimizers

In this section, we present our framework for benchmarking and, thus, more consistently evaluating LQOs. The goal is to approach the benchmarking process holistically, fairly comparing methods in an end-to-end setting. To do this, our benchmark assumes a reproducible setup, particularly regarding engineering, including but not limited to (a) the content of the database underlying a benchmark workload, (b) the full code base of the LQO, (c) the version of the programming language, such as Python, and all used libraries, (d) a detailed configuration of the DBMS (unless all parameters are left at their defaults), as well as (e) all queries and their assignment into train/test splits.

### DBMS Configuration & Database Tuning

For analyzing query execution times, both the hardware used and the DBMS configuration greatly impact the comparability of LQOs. Hence, we will now analyze the major parameter settings in a systematic way. Table 2 gives an overview of the different parameter settings used in various publications, compared to the default values of PostgreSQL, as well as the suggested settings for the Join Order Benchmark (Krishnan et al., 2017). Note that the configurations for Neo (Krishnan et al., 2017) and HybridQO (Krishnan et al., 2017) are omitted from Table 2, as their code (Neo) and configuration (HybridQO) are not publicly available. A further observation is that only Balsa and LEON published the full DBMS configuration file among their artifacts. We have categorized the parameters into the following groups:

_Join Order:_ The join order is typically forced through libraries such as pg_hint_plan (Krishnan et al., 2017), though PostgreSQL can also be made to follow the explicit order given in the SQL statement by setting join_collapse_limit to 1. The genetic query optimization algorithm (GEQO) of PostgreSQL is used for queries with a large number of joins, by default 12 or more. It can either be disabled by setting geqo_threshold to a value larger than the number of joins in a workload, or disabled completely with the geqo parameter.

_Working Memory:_ The default values for PostgreSQL's memory are small. Given the amount of RAM available today, increasing the working memory and buffer sizes is advisable. Balsa drastically increases the working memory (work_mem) from 4 MB to 4 GB, while Bao and Neo keep the default value, despite the 2 GB proposed by (Krishnan et al., 2017). Similarly, for shared_buffers, Balsa uses a much larger buffer at 32 GB, compared to the 4 GB recommendation that Bao and Neo use. LOGER further increases the shared_buffers value to 64 GB, though their machine also has more RAM available. Note that the amount of work_mem is available to each worker in a parallel query execution; that means, for \(N\) parallel workers, shared_buffers should be at least \(N\times\) work_mem. Furthermore, methods use the default cache size (effective_cache_size) of 4 GB, ignoring the recommendation of (Krishnan et al., 2017) to increase it to 32 GB. We conducted experiments on the effective_cache_size parameter with the full JOB workload on PostgreSQL, first with the default value of 4 GB and afterward with a value of 32 GB. Using the default value, a handful of queries require a planning time of multiple seconds, with a maximum of around 3 seconds. By increasing the parameter, we could completely remove all outliers with a planning time of more than a second and reduce the planning time of all queries to below 100 milliseconds.
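As a concrete illustration, the following sketch (ours; it assumes psycopg2, superuser access, and the "Our Framework" values from Table 2) pins such settings via ALTER SYSTEM, so that every evaluated method starts from the same documented configuration. Note that shared_buffers only takes effect after a server restart.

```python
# Minimal sketch (assuming psycopg2 and superuser rights): persist the
# benchmark's PostgreSQL settings so all LQO runs share one configuration.
import psycopg2

SETTINGS = {
    "work_mem": "4GB",
    "shared_buffers": "32GB",          # requires a server restart
    "temp_buffers": "32GB",
    "effective_cache_size": "32GB",
    "max_worker_processes": "8",
}

conn = psycopg2.connect("dbname=imdb user=postgres")
conn.autocommit = True                 # ALTER SYSTEM cannot run in a transaction
with conn.cursor() as cur:
    for name, value in SETTINGS.items():
        cur.execute(f"ALTER SYSTEM SET {name} = %s", (value,))
    cur.execute("SELECT pg_reload_conf()")
conn.close()
```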
_Parallelization:_ These parameters are responsible for all parallelization efforts and define the number of workers and processes used during query execution. To fully utilize a multi-core system, Balsa increases the number of worker processes (max_worker_processes) to match max_parallel_workers. While increasing the number of parallel workers can speed up query execution, the amount of required compute resources also increases significantly. LOGER and Lero take a different approach, disabling any parallel query execution completely.

Figure 2. Scatter plot of the execution time per number of joins for all queries in JOB.

_Scan Types:_ These parameters directly change the types of scans used by PostgreSQL and significantly alter the tool set available for query execution. Only Balsa and LEON change these values, by disabling both bitmap and tid scans, while neither paper gives any reasoning for doing so.

### Dataset Split

The way the dataset is split into training and test sets has a significant impact on the performance of a trained model. While it is advisable that both sets contain data from a similar distribution, we have to be careful to avoid leaking information from one set to the other. More specifically, the Join Order Benchmark queries are derived from 33 different base queries (or templates), and the full 113 queries are made up of between 2 and 6 variations of each base query (denoted as 1a, 1b, 1c, ...). Variants of the same base query share the same tables and joins but differ in their filter statements. These differences can be different filter values (e.g., production_year \(<\) 2000 vs. production_year \(>\) 2023) or filters applied on other columns (e.g., genre = 'horror' vs. name LIKE '%an%'). Generating queries from templates introduces a strong correlation in the structure of the optimal join plan for some, but not all, queries in the JOB. To measure the effect of potential data leakage, we propose the following sampling techniques to generate dataset splits (see Figure 3 for a visual example of assigning queries to the training and test sets, and the code sketch after this list):

(1) **Leave One Out Sampling** extracts exactly one variant of each base query into the test set. All other variants of the base query are contained in the training set. This split maximizes the amount of information that can potentially be leveraged from the training onto the test set. We expect this split to be the _easiest to learn_.

(2) **Random Sampling** distributes all queries randomly into train and test sets, ignoring any base query or template affiliations. This is a _medium difficulty_ sampling, and it can be applied to any workload, as there is no requirement for the existence of _base query families_.

(3) **Base Query Sampling** keeps all queries of the same base query either in the training or the test set. This ensures that the intra-family similarity of the query structure does not leak from the training set into the test set. We believe this to be the _most difficult_ split, as a model cannot apply the join structure learned from one variant of the same base query to another.
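The following minimal sketch (ours; query identifiers and data layout are illustrative) shows one way to implement the three sampling strategies for a workload organized into base-query families.

```python
# Minimal sketch of the three dataset-split strategies; queries are named
# like JOB queries, e.g., "1a" is variant "a" of base query "1".
import random

def leave_one_out_split(families, seed=0):
    """One random variant of every base query goes into the test set."""
    rng = random.Random(seed)
    test = {rng.choice(variants) for variants in families.values()}
    train = {q for variants in families.values() for q in variants} - test
    return train, test

def random_split(families, test_ratio=0.2, seed=0):
    """Ignore template affiliation and sample queries uniformly."""
    rng = random.Random(seed)
    queries = sorted(q for variants in families.values() for q in variants)
    rng.shuffle(queries)
    cut = int(len(queries) * test_ratio)
    return set(queries[cut:]), set(queries[:cut])

def base_query_split(families, test_ratio=0.2, seed=0):
    """Whole template families go to either train or test, never both."""
    rng = random.Random(seed)
    bases = sorted(families)
    rng.shuffle(bases)
    cut = int(len(bases) * test_ratio)
    test = {q for b in bases[:cut] for q in families[b]}
    train = {q for b in bases[cut:] for q in families[b]}
    return train, test

families = {"1": ["1a", "1b", "1c", "1d"], "2": ["2a", "2b"]}  # toy example
print(base_query_split(families, test_ratio=0.5))
```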
### Measuring Query Executions

As LQOs are all evaluated by the runtime of the queries in a workload, and some LQOs directly predict the execution time for a given physical plan, it is vital that runtime measurements are as consistent as possible. One of the primary reasons for high variance when executing the same query is the buffer and cache state of the DBMS. For example, when the same query is executed twice in succession, the first run generally takes longer than the second one. As buffers and caches switch from cold to hot, runtimes become more consistent. In the ideal scenario, we could execute every query many times to achieve a robust measurement. However, every additional execution after the first one takes time that is not spent on executing other queries, costing valuable compute resources.

To achieve a fair comparison, we executed all queries of the Join Order Benchmark using EXPLAIN ANALYZE 50 times in succession and in order (i.e., 1a, 1a, 1a, ..., 1a, 1b, 1b, ...), measuring the execution time by extracting it from PostgreSQL's EXPLAIN response. This removes the network latency to the database from our measurements. By empirically evaluating the distribution of execution times for the \(k\)-th iteration, we can propose a value of \(k\) that strikes a balance between cost and robustness. Figure 4 shows the normalized difference in query execution time (relative difference to the first executed query) when comparing pairs of the \(k\)-th and \((k+1)\)-th query execution.

**Table 2. Overview of different PostgreSQL configurations (database tuning parameters) used in various papers on LQOs.** Deviations from PostgreSQL's default values are marked in the respective columns. Note that the values for Neo (Li et al., 2018) and HybridQO (Li et al., 2018) are missing from the table, as their configuration parameters are not publicly available.

| PostgreSQL Config Parameter | Default Values | JOB (Han et al., 2017) | Bao | Balsa | LEON | LOGER | Our Framework |
|---|---|---|---|---|---|---|---|
| Amount of RAM used by authors | - | 64 GB | 15 GB | 64 GB | 256 GB | 512 GB | 64 GB |
| **Join Order** | | | | | | | |
| geqo_threshold | 12 | 18 | | 2 or 1,024 | | | |
| geqo | on | | | off | off | | off\({}^{1}\) |
| **Working Memory** | | | | | | | |
| work_mem | 4 MB | 2 GB | | 4 GB | 4 GB | | 4 GB |
| shared_buffers | 128 MB | 4 GB | 4 GB | 32 GB | | 64 GB | 32 GB |
| temp_buffers | 8 MB | | | 32 GB | | | 32 GB |
| effective_cache_size | 4 GB | 32 GB | | | | | 32 GB |
| **Parallelization** | | | | | | | |
| max_parallel_workers | 8 | | | | 1 | 0 | |
| max_parallel_workers_per_gather | 8 | | | | 1 | 0 | |
| max_worker_processes | 2 | | | 8 | | | 8 |
| **Scan Types** | | | | | | | |
| enable_bitmapscan | on | | | off | off | | |
| enable_tidscan | on | | | off | off | | |

\({}^{1}\) GEQO is only turned on for Bao and when PostgreSQL fully controls the query execution.
We observe that the query execution time shifts significantly for the majority of executed queries at \(k=1\), with a mean reduction of 14.6% between the 1st and 2nd query execution, and another 1.03% from the 2nd to the 3rd. From then on, the fluctuations no longer show a trend that would benefit from more executions. We thus see empirically that, for robust measurements of execution times, it is important to execute a query at least twice. If time and costs allow, a third execution further improves the robustness, after which one can safely stop. Other recent publications have tackled this challenge by executing all queries \(n\) times and taking an average, for example (Kumar et al., 2017) with \(n=5\) or LEON (Beng et al., 2017) with \(n=3\). However, we have empirically seen that taking the third execution is 40% faster than executing five runs and more robust than averaging three measurements (which is strongly influenced by the first, outlier-prone measurement).
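A minimal sketch of this measurement protocol (ours, assuming psycopg2; PostgreSQL's JSON-formatted EXPLAIN output exposes the planning and execution times) runs each query three times and keeps the third, hot-cache timing, taken directly from the server's response so that network latency is excluded:

```python
# Minimal sketch: repeat EXPLAIN ANALYZE and keep the third (hot-cache) run.
import json
import psycopg2

def measure_hot(cur, sql, runs=3):
    """Return (planning_ms, execution_ms) of the last of `runs` executions."""
    for _ in range(runs):
        cur.execute(f"EXPLAIN (ANALYZE, FORMAT JSON) {sql}")
        row = cur.fetchone()[0]
        plan = (row if isinstance(row, list) else json.loads(row))[0]
    return plan["Planning Time"], plan["Execution Time"]

conn = psycopg2.connect("dbname=imdb")
with conn.cursor() as cur:
    planning_ms, execution_ms = measure_hot(cur, open("queries/1a.sql").read())
    print(f"planning {planning_ms:.1f} ms, execution {execution_ms:.1f} ms")
conn.close()
```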
## 8. Experiments

In this section, we present our extensive evaluation of LQOs on the Join Order Benchmark. First, we give an overview of the setup and hardware used; then we discuss the different approaches to generating train/test splits. Finally, we show the results of our experiments, including a number of ablation studies.

### General Setup

#### 8.1.1. Software and Hardware

All our experiments were conducted using PostgreSQL version 12.5, measuring the query execution time through EXPLAIN ANALYZE calls and using both execution and planning time. In addition, we also include the inference time for LQOs. Measurements are taken by executing the same query three times and taking the last query execution (hot cache). Our instance of PostgreSQL is configured largely with default parameters in mind, closely following the configuration used by Balsa (Balsa, 2017). In comparison, we re-enabled both bitmap and tid scans and increased effective_cache_size from 4 to 32 GB. The main differences from PostgreSQL's defaults can be seen in Table 2 and primarily include changes to the memory configuration and an increased number of parallel workers. In addition, we disabled the AUTOVACUUM feature, as the query workload is stable and ANALYZE is run once after loading all data into PostgreSQL. We have decided to follow the configuration of the Balsa experiments, as they include memory settings that closely follow the best-practices guide proposed by PostgreSQL (Rosenberg et al., 2019) and the suggestions of Leis et al. (Leis et al., 2019). Furthermore, Balsa is the first method to increase the number of available worker processes from 2 to 8, reflecting typical machines with many CPU cores. We further change the effective_cache_size parameter in line with the best practices of PostgreSQL and re-enable both bitmap and tid scans. For the Join Order Benchmark, the authors of Balsa added two additional indexes, on the subject_id and status_id columns of the complete_cast table, compared to the indexes provided by (Leis et al., 2019). We also include these additional indexes in our experiments. The experiments were run inside Docker containers, using a Tesla T4 GPU, 64 GB of RAM, and 16 CPU cores.

#### 8.1.2. Dataset Split

For our experiments, we generated the train/test splits by uniformly sampling across all queries (Random Sampling), the base queries (Base Query Sampling), or the variants of each base query (Leave One Out Sampling). For the Random splits and Base Query splits, we used an 80-20 ratio between training and test sets. The dataset splits are sampled once and shared across all the evaluated methods. Detailed listings of the training and test sets for all splits can be found in our code repository, along with the hyperparameters of all methods.

#### 8.1.3. Additional Noteworthy Changes

As we evaluate the LQO methods under our unified framework, there are differences from the experiments conducted by the authors of the methods. Hence, direct comparisons to prior results are impossible due to the differences outlined in the previous sections. In addition, Bao was originally trained on 2,500 newly generated queries in the style of the JOB workload. In our experiments, Bao was only trained on the training set of the respective train/test splits and has seen the training queries multiple times. For LEON, we have limited the amount of real time spent on training to twice the time it took Balsa to finish training, i.e., 120 hours. This time budget likely reduces the performance of LEON, but as shown in Section 8.2.2, the inference time heavily dominates its overall runtime, not just the execution time.

Figure 4. Difference in normalized execution time between pairwise, successive query executions. For example, at \(k=1\), we show the difference between the 1st and 2nd query execution.

Figure 3. Overview of different dataset split sampling types for JOB: Leave One Out Sampling (top), Random Sampling (middle), and Base Query Sampling (bottom). For instance, Base Query 1 has 4 variations: 1a, 1b, 1c and 1d.

### Comparison of Current State-of-the-Art Learned Query Optimizers

In this section, we analyze the performance of current state-of-the-art LQOs (namely Neo, Bao, Balsa, and LEON) compared to PostgreSQL as our baseline. We do not include RTOS (Wang et al., 2018), Lero (Wang et al., 2018), LOGER (Chen et al., 2018), and HybridQO (Wang et al., 2018) in our experiments, because they are either unavailable, require disabling parallel query execution, or do not support the full range of our configuration.

#### 8.2.1. End-to-End Performance

For all algorithms, we report a variety of time measurements, defined as follows (their composition is sketched after this list):

(1) **Inference Time:** This measure includes all time that an LQO spends encoding a query, iterating over variations of query plans, gathering cost information to guide further decisions, and finally using an ML model to generate predictions. After the inference time has passed, a given SQL query is ready to be sent to PostgreSQL, with hints on which scan or join types to use and in which order.

(2) **Planning Time:** Once PostgreSQL receives a query, it spends an amount of time planning the query before a final physical plan is generated and sent for execution. For LQOs with an extension running inside PostgreSQL, the inference time is typically reported as part of the planning time.

(3) **Execution Time:** Encompasses the amount of time spent by PostgreSQL to execute the query and gather the result set.

(4) **End-to-end Execution Time:** The combination of the previous three time measurements, capturing how long a method takes to devise a query plan and how much time PostgreSQL spends getting the result from the database. We believe this measurement to be the primary objective for optimization.

We would like to note that the inference, planning, and execution times do not include the network latency between the program sending the query and PostgreSQL. While this is potentially a significant amount of time (particularly for fast queries), the LQOs have no direct impact on it, and optimizing for network latency is beyond the scope of this evaluation.
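A small sketch (ours; field names are illustrative) of how the four measurements compose:

```python
# Minimal sketch: the end-to-end execution time is simply the sum of the
# inference, planning, and execution times (all in milliseconds).
from dataclasses import dataclass

@dataclass
class QueryTiming:
    inference_ms: float   # LQO encoding, plan search, model prediction
    planning_ms: float    # PostgreSQL planner time
    execution_ms: float   # PostgreSQL executor time

    @property
    def end_to_end_ms(self) -> float:
        return self.inference_ms + self.planning_ms + self.execution_ms

t = QueryTiming(inference_ms=120.0, planning_ms=35.0, execution_ms=900.0)
print(t.end_to_end_ms)  # 1055.0
```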
We increase the difficulty iteratively across experiments, starting with the leave one out sampling, then the random sampling, and finally the base query sampling for generating the train/test splits. All queries were executed three times; the planning and execution times were taken from the third execution. Figure 5 presents the performance on JOB across all three sampling methods and their individual train/test splits. In summary, we can observe that **PostgreSQL generally performs best, followed by Bao, then Neo, Balsa, and finally LEON**. However, PostgreSQL does not outperform all methods on all splits by a statistically significant margin. In particular, Bao achieves comparable results on most train/test splits.

For the _leave one out sampling_, which we consider to be the easiest train/test split, PostgreSQL and Bao execute the test queries in just over 31 seconds. However, Bao spends 8.5 seconds longer planning queries, resulting in a 25% slower end-to-end execution time. Bao's larger confidence interval gives a first hint that it has found plans that are generally faster than PostgreSQL's, but they do not speed up the execution time enough to yield an advantage. LEON is the third-fastest method by execution time at 58 seconds, but its inference time is around 9.6 hours, making its use impractical for interactive querying (with more complex queries requiring proportionally more inference time to complete). The overall third-fastest method is thus Neo at 93 seconds, followed by Balsa at 134 seconds. Both methods struggle to close the gap to PostgreSQL, with 286% and 411% slower end-to-end execution times, respectively.

For the _random sampling_, i.e., the medium-difficulty train/test split, PostgreSQL and Bao remain competitive with each other, with Bao even achieving a lower execution time of 25 seconds vs. PostgreSQL's 28 seconds. However, this is not a statistically significant difference. Including the inference and planning times as well, Bao is again at a slight disadvantage. Compared to the leave one out sampling, both Neo and Balsa achieve 2-3x faster end-to-end execution times, reaching results comparable to PostgreSQL on 2 out of the 3 train/test splits using this sampling. LEON struggles with these queries, and two queries time out (26b and 32b) in two separate splits, leading to a drastic increase in its execution time. However, given its large inference time of 3.8 hours on average, this has little impact on its overall ranking.

Finally, let us examine the _base query sampling_, i.e., the most difficult sampling technique. For the first time, Bao only achieves results comparable to PostgreSQL on 1 out of the 3 train/test splits, confirming the increased difficulty. Neo and Balsa struggle particularly with base query split 1: 3 queries of the test set time out for Neo, and 15 queries time out across both train and test sets for Balsa. LEON, however, can largely match PostgreSQL's direct execution time, if the method can overcome its inference time.

In summary, we see the _significant impact of the inference time on the overall end-to-end execution time_. While there are methods that, on some train/test splits, perform comparably to PostgreSQL or even slightly outperform it, these results show the _importance of how queries are split for training_. Furthermore, it is vital that evaluations include the inference time, as it strongly shapes the ranking between methods compared to the execution time alone.

#### 8.2.2. Training Time
Now that we have compared the various methods in their direct query performance, we also take a look at the amount of time needed to train a model. While the definition of training time is sometimes unclear, we intend to take a holistic look with an _end-to-end training time_, that is, including (a) the time spent collecting query results from the DBMS, (b) any time spent training the model, (c) the ongoing evaluation of the current model's performance, and (d) any pre- or postprocessing, initialization, and artifact generation. In short, the _full amount of time spent from starting the training procedure until it terminates_. We make this distinction not to penalize additional logging or more frequent checks on the model performance, but to obtain a fairer overall comparison. For example, one method might have a very quick training period but spend a lot of time querying training data from the DBMS, while another method needs fewer database queries but uses a more complex NN architecture that spends more time on weight updates. The overall amount of time spent also informs how often a model could be retrained within a given time budget.

In Figure 6, we compare the end-to-end training times on the \(x\)-axis against the combined workload runtimes (the sum of the end-to-end execution times of all queries in the workload) on the \(y\)-axis. Each dot represents one train/test split; for example, one orange dot could be the Neo method on random split 2. Since PostgreSQL's optimizer does not require any inherent training and acts as a baseline, its end-to-end training time is set to zero. Among the evaluated methods, Bao requires the least time at around 2 hours, Neo between 20 and 40 hours, Balsa between 40 and 85 hours, and LEON from 110 to 130 hours. Naively, one would expect that with more time spent on the overall training process, the performance would increase, but we observe exactly the opposite behavior: methods that have spent _more time to build and train their model reach inferior results_ compared to methods that finish training more quickly. We explain this discrepancy between methods primarily by the number of plans that have been considered. For example, Neo executed between 4,000 and 8,000 plans in PostgreSQL, and Balsa between 19,000 and 21,000 plans. Even ignoring the quality of either method's executed plans, it is obvious that 2-3x more plans also require more processing time. The authors of Balsa specifically tackled this challenge by allowing all required plans to be executed on multiple DBMS instances in parallel and by timing out long-running queries, which Neo does not. LEON does not fully execute the majority of its generated plans; however, it calls PostgreSQL to ask for cost estimates of up to multiple tens of thousands of subplans, such that predicting a plan just for query 29a (the query with 17 aliased tables, the highest number in all of JOB) takes around 6.5 hours\({}^{4}\).

\({}^{4}\) The authors of LEON include a caching mechanism which stores plans and subplan cost estimates on the hard disk. Our reported measurements take advantage of this caching mechanism, with a cache file of 1.7 GB.

Figure 5. Comparative overview of each method's performance on the _test set_ of various dataset splits on the Join Order Benchmark (JOB). The figure on the left depicts the planning time (darker color) and inference time (lighter color), respectively. Note that Bao runs inside PostgreSQL as an extension, and its inference time is directly added to the planning time. The figure on the right side shows the execution times on the same train/test splits. Please observe that the x-axis of both figures is divided into two segments with different scales, showing outliers on a logarithmic scale.

### Ablation Study: Covariate Shift

One of the challenges for query optimizers in general is their dependency on up-to-date statistics of the database content. In a DBMS, statistics are regularly refreshed, but LQOs do not have the luxury of easily updating trained model weights; their options are either to train a new model from scratch or to fine-tune and continue training, adapting to changes in the underlying database.
To show whether an encoding that represents the content of the database solely by cardinality (such as Bao's) can deal with covariate shift, we conduct the following experiment. We generate a smaller copy of IMDB, referred to as IMDB-50%. As the name implies, we keep 50% of the rows in the title table using Bernoulli sampling, ensuring that the available data is halved but the distribution of values remains comparable to the original version. The other 50% of the rows are dropped using CASCADE to ensure referential integrity. We specifically choose to alter the contents of the title table, since it is the only table in IMDB that is part of all JOB queries. Given the nature of our sampling, we see a reduction of 50% in the number of records in all movie-related tables (title, movie_companies, movie_info, movie_info_idx, movie_keyword, and movie_link), as well as in the cast-related tables (cast_info and complete_cast). Our sampling on title leaves all other tables unaffected. After the changes had been made to IMDB-50%, the internal statistics of PostgreSQL were updated.

Our experiment aims to show that methods like Bao, which only use cardinality in their encoding, suffer a performance degradation when more data is added (simulating covariate shift). We train one Bao model on IMDB and a second Bao model on the reduced-size IMDB-50%, using the same "base query split 1" train/test split. Figure 7 presents the result of evaluating the two Bao models trained on all of IMDB (referred to as Bao-Full) and on the reduced IMDB-50% (referred to as Bao-50). Query 16b is a distinctive outlier, as it timed out in 1 out of 4 Bao-50 models, while the remaining Bao-50 models generated plans that are 19 seconds slower than Bao-Full's. In terms of relative differences, query 31c is 24x slower using Bao-50, at 8.4 seconds compared to Bao-Full's 350 ms. Query 17a is 4.5x slower, at 12.2 seconds compared to Bao-Full's 2.7 seconds. On the other hand, the different cardinality regimes seen by Bao-50 also allow it to improve a few queries over Bao-Full, by a factor of 1.9x for query 7c, 1.6x for 26c, and 1.3x for 10c. These results indicate that _the DBMS updating its statistics (i.e., cardinality estimates) is insufficient to keep up with a newly trained model_. This performance degradation further points to difficulties in generalization, particularly when larger cardinality values have not been seen during the training process and are, hence, out of distribution.
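For concreteness, here is a minimal sketch (ours) of the down-sampling step described above; it assumes psycopg2 and that foreign keys with ON DELETE CASCADE are in place, which the stock JOB schema may not define:

```python
# Minimal sketch: build IMDB-50% by dropping a random half of `title`;
# cascading deletes then halve the dependent movie- and cast-related tables.
import psycopg2

conn = psycopg2.connect("dbname=imdb50")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("SELECT setseed(0.42)")                 # reproducible sample
    cur.execute("DELETE FROM title WHERE random() < 0.5")
    cur.execute("ANALYZE")                              # refresh statistics
conn.close()
```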
### Ablation Study: Bitmap and Tid Scans

We have observed that multiple publications disabled bitmap and tid scans, namely Balsa (Balsa, 2018), LEON (Balsa, 2018), and a recently published analysis (Balsa, 2018), without giving a reason for doing so. This experiment aims to determine whether changing PostgreSQL's tool kit in this way has a significant impact on the performance of individual queries. For the comparison, we use the baseline PostgreSQL performance from the previous experiment in Section 8.2, and we have run the same 113 queries from JOB with bitmap and tid scans disabled. Figure 8 presents the queries where the difference between enabling and disabling bitmap and tid scans exceeds 250 ms. On 4 out of 28 queries, this change has no statistically significant impact. For the other 24 queries, disabling the scans speeds up queries 28a, 7c, and 30a relative to their original execution times by factors of 5.5x, 2.0x, and 1.8x, respectively. On the other hand, queries 30c, 28b, and 15c are slowed down by factors of 2.4x, 1.9x, and 1.5x, respectively.

Figure 8. Comparison of execution times when disabling bitmap and tid scans for PostgreSQL. Error bars indicate the 95% confidence interval. Darker colors indicate statistical significance.

Figure 6. Comparison of the total training time against the combined workload runtime (including inference, planning, and execution time) for the different JOB test sets, where each dot represents a model from a specific train/test split.

Figure 7. Comparison of the execution times between Bao-Full and Bao-50 when running queries against the full IMDB. Error bars indicate the 95% confidence interval. Darker colors indicate statistical significance.

These findings show that _allowing PostgreSQL to use bitmap and tid scans significantly impacts the query performance_, particularly for the query templates 7, 8, 28, and 30. An interesting observation is that the same family that gains the most from disabling said scans (query 28a, with a speedup of 5.5x) also features a large slowdown (query 28b, slowed down by 1.9x).

### Ablation Study: Genetic Query Optimizer

Similar to the disabling of various scan types, there are differences in the use of GEQO, i.e., PostgreSQL's genetic query optimizer, across recent publications. Figure 9 shows the impact of disabling GEQO on the execution time of queries from JOB. All queries with a difference of less than 100 ms are omitted. Differences in execution time that are statistically insignificant are weakly colored, leaving just five queries with a significant difference. Disabling GEQO speeds up query 30a by a factor of 1.6x, while the other four queries are slowed down by factors of 9.9x (24b), 2.2x (26c), 2.1x (28a), and 1.7x (28b). As query 24b executes in just 28 ms, its large slowdown factor is not surprising; even with GEQO disabled, it finishes in 272 ms. While the impact of GEQO is far smaller than that of bitmap and tid scans, there remains a significant impact, particularly on query template 30, which is among the slowest queries of the workload. In summary, these results show that it is _paramount that PostgreSQL operates at full capacity_ (i.e., with GEQO enabled), in particular when the LQO does not replace, but rather enhances or guides, the existing optimizer (for example, through the use of hints).

### Analysis of Query Plan Types

Given that there exists a larger number of bushy compared to left-deep and right-deep plans\({}^{5}\), the question arises whether omitting _bushy plans_ (as, for example, in RTOS, LOGER, and HybridQO) is a reasonable choice. In (Henderson et al., 2017), the slowdown for restricted tree shapes was measured in comparison to the optimal plan. The experiments' outcome shows that _left-deep trees are worse than bushy ones_ but still result in reasonable performance. It is worth noting that those experiments were executed by injecting _true cardinalities_ into the cost model of the query optimizer. Moreover, some constraints on the join method selection according to (Brands et al., 2016) were applied.

\({}^{5}\) Left-deep and right-deep plans are hereafter only referred to as left-deep plans, without loss of generality.

By forcing all combinations, we analyzed all possible plans for JOB queries with \(\leq 5\) joins in the spirit of (Henderson et al., 2017). However, rather than using true cardinalities (which are considered the optimal case), we ran our experiments with the _DBMS's internal cardinality estimator_. Moreover, we allowed all join methods to be used. As a result, _bushy plans perform on average like left-deep plans_. We confirm our results by obtaining a minimum \(p\)-value of 0.285 for a two-sided Mann-Whitney U-test (Minnik et al., 2017)\({}^{6}\) on the means of the execution times. The left-tail performance of bushy trees can be significantly (\(p\)-value \(<0.05\)) better: at the 7th percentile of the combined distribution of execution times, the \(p\)-value of 0.015 for the alternative hypothesis _indicates that bushy trees are superior_. Moreover, values originating from left-deep structures are absent towards the left tail.

\({}^{6}\) The selection of the non-parametric test over the t-test stems from the observed lack of plausibility of a normal distribution across distinct logical and physical plans.
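A minimal sketch (ours, with made-up timings) of this statistical test using SciPy:

```python
# Minimal sketch: two-sided Mann-Whitney U-test comparing execution times
# of bushy vs. left-deep plans (the samples below are illustrative only).
from scipy.stats import mannwhitneyu

bushy_ms = [310.0, 295.0, 402.0, 288.0, 510.0]
left_deep_ms = [330.0, 301.0, 398.0, 415.0, 505.0]

stat, p_value = mannwhitneyu(bushy_ms, left_deep_ms, alternative="two-sided")
print(f"U={stat:.1f}, p={p_value:.3f}")  # p > 0.05: no significant difference
```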
## 9. Conclusion

In this paper, we outline the limitations of current LQO methods and put an emphasis on previously under-reported challenges. We provide a framework that equalizes many of the parameters involved in benchmarking to yield increasingly robust results. We perform an evaluation of current LQO methods on the Join Order Benchmark and show that consistently outperforming PostgreSQL is more difficult than expected, particularly when looking at the query optimization problem as an end-to-end process. We believe that our paper is a first step towards reproducible and consistent benchmark evaluations for LQOs and thus provides important novel insights into LQOs from an ML perspective.

###### Acknowledgements.

The project has received funding from the Swiss National Science Foundation under grant number 1921052. We also thank our colleagues from the University of Konstanz, namely Michael Grossniklaus, Mehmet Aytimur, and Silvan Reiner, and Dennis Gehrig from the Zurich University of Applied Sciences, for valuable discussions.
2302.03034
Growing structure based on viscous actuation of constrained multistable elements
Growing soft materials which follow a 3D path in space are critical to applications such as search and rescue and minimally invasive surgery. Here, we present a concept for a single-input growing multi-stable soft material, based on a constrained straw-like structure. This class of materials is capable of maneuvering and transforming its configuration by elongation while executing multiple turns. This is achieved by sequenced actuation of bi-stable frusta with predefined constraints. Internal viscous flow and variations in the stability threshold of the individual cells enable sequencing and control of the robot's movement so as to follow a desired 3D path as the structure grows. We derive a theoretical description of the shape and dynamics resulting from a particular set of constraints. To validate the model and demonstrate the suggested concept, we present experiments of maneuvering in models of residential and biological environments. In addition to performing complex 3D maneuvers, the tubular structure of these robots may also be used as a conduit to reach inaccessible regions, which is demonstrated experimentally.
Ezra Ben Abu, Yaron Veksler, Shai Elbaz, Anna Zigelman, Amir D. Gat
2023-01-26T10:30:59Z
http://arxiv.org/abs/2302.03034v1
# Growing structure based on viscous actuation of constrained multistable elements

###### Abstract

Growing soft materials which follow a 3D path in space are critical to applications such as search and rescue and minimally invasive surgery. Here, we present a concept for a single-input growing multi-stable soft material, based on a constrained straw-like structure. This class of materials is capable of maneuvering and transforming its configuration by elongation while executing multiple turns. This is achieved by sequenced actuation of bi-stable frusta with predefined constraints. Internal viscous flow and variations in the stability threshold of the individual cells enable sequencing and control of the robot's movement so as to follow a desired 3D path as the structure grows. We derive a theoretical description of the shape and dynamics resulting from a particular set of constraints. To validate the model and demonstrate the suggested concept, we present experiments of maneuvering in models of residential and biological environments. In addition to performing complex 3D maneuvers, the tubular structure of these robots may also be used as a conduit to reach inaccessible regions, which is demonstrated experimentally.

## 1 Introduction

We present a growing and maneuvering multi-stable soft material, based on the sequential activation of constrained bi-stable frusta via viscous flow from a single input. The tubular structure of the robot allows it to transfer materials to unreachable regions via an internal channel. The constraints define the final configuration of the robot, and the viscous flow allows a sequenced actuation of the straw elements, which enables the robot to follow a desired 3D path as the structure grows.

The ability of growing materials to travel along 3D paths in space is fundamental to their utility. Advancing along a 3D path and avoiding obstacles within complex environments is common in applications such as search and rescue missions [11, 7] at natural or man-made disaster sites, where a survivor might be trapped below a pile of porous debris with limited oxygen or water, and minimally invasive surgery [13, 16, 22], such as intravascular catheterization procedures. The inherently large number of degrees of freedom of these robots makes their actuation challenging. Additionally, following complex paths often requires passing through intricate narrow cavities and networks, so these robots have to be slender in order to accomplish this. Slenderness, however, typically limits a robot's ability to move forward and to turn rapidly, due to the friction that develops when pushing it forward. Growing materials, which lengthen from the tip, involve no relative movement of the body with respect to the terrain. For slender growing materials with strong friction, this provides a convenient solution, allowing the robot to maneuver by elongation in complex environments. One example of such robots, able to navigate inside a 3D maze through growing, was proposed by Hawkes et al. [12] in the context of inverted thin-walled vessels. This concept was also extensively studied by others for various applications [6, 26, 23, 9, 24, 10, 25, 19]. As mentioned above, slender growing materials inherently involve an extremely large number of degrees of freedom. Thus, a major challenge in soft robotics is to simplify the robot's actuation, reducing the required control mechanisms.
The development of strategies for efficient actuation and control of growing soft robots is essential to the advancement of the field. For instance, single-input actuation was investigated, e.g., by Mosadegh et al. [18], who used a pneumatic network consisting of small channels in elastomeric materials. Yang et al. [28] proposed the design of a "single-unit buckling actuator," which consists of an elastomeric structure with a "nonbuckling center area" connecting several "buckling pillars." Later on, Jin et al. [15] proposed a design for multi-functional robots that operate with a single pressure input and without the need for electronic components. In particular, they utilized viscous flow and snapping-arch principles, fully integrated on-board, enabling control of the incoming airflow. Another design incorporating the interplay between bi-stability and soft actuators was suggested by Gorissen et al. [8] for a single-input jumping robot, where the snapping of elastomeric spherical caps upon pressurization results in a sudden release of energy, which leads to a rapid jump. Flow-based sequenced actuation of multiple bi-stable elements was studied by Ben Haim et al. [1], who provided a closed-form model for the dynamic control of multiple bi-stable hyperelastic balloons. A design for a growing soft robot with a single actuation input was proposed by Connolly et al. [4], who focused on fiber-reinforced actuators and, for a given trajectory, found the optimal design parameters of an actuator.

Here, we present a growing and maneuvering multi-stable structure based on the sequential activation of bi-stable elements via viscous flow. As the multi-stable structure, a commercially available straw composed of a sequence of conical frusta is a convenient and natural candidate. The bi-stability of the conical frusta, resulting in the multi-stability of straw-shaped structures, was studied by Bende et al. [2], who linked the geometry and internal stress properties to multi-stable functionality in the bending and extension states. Most recently, Breitman et al. [3] investigated the fluid-solid interaction dynamics occurring in a multistable straw filled with a highly viscous fluid. Other multi-stable truss structures with properties similar to those of conical frusta were recently investigated by Hua and coworkers [14], as well as in [27, 29]. The purpose of this study is to explore how viscous flow and a constrained slender multi-stable structure can be leveraged to accomplish controlled growth and maneuvering of a multi-stable growing material using sequential activation of bi-stable elements. Below, we derive a theoretical description of the shape and dynamics resulting from a particular set of constraints. Models of residential and biological environments are used to experimentally demonstrate the robot's ability to perform complicated 3D maneuvers and a variety of additional operations, such as heart structural intervention and fire extinguishing.

## 2 Results

We suggest leveraging internal viscous flow in a single-input multistable growing material to achieve controlled sequenced actuation and growth-based locomotion in a complex 3D environment. As a model for the manipulation and control of the proposed robot, we chose a straw-like structure, see Fig. 1(A), which is a common, fluid-sealed, slender multistable structure.
In order to maneuver in a complex environment, it is essential to steer while elongating, which may be achieved by creating asymmetric constraints in different segments of the straw. Such constraints can be created using various methods; for the current configuration, polypropylene sheets were soldered in the desired regions (see the Methods section), marked by black lines in Fig. 1(A), which hold one side of the straw closed.

### Using viscous effects to minimize the swept area via sequencing

In Fig. 1(A), we show an example of a planar path (marked by a dashed black curve), along with the straw configuration at different times during its growth. The path determines the positions of the constraints, as well as the number of frusta that are stitched together at each location. For the current configuration, each frustum allows a steering angle of \(\approx 16^{\circ}\). Using straws with a different frustum length or a different outer diameter allows various resolutions of the steering angle. We note that before the straw steers in a given direction, the front (closed-frusta region) deviates from the desired path. During the propagation along the desired path, this deviation decreases, since fewer closed frusta are left in the straw.

We define the _swept area_ as all locations which the soft robot occupied during the process of growing to its final state. To follow a 3D path accurately, the swept area should be minimized, ideally to only the final 3D path. In Fig. 1, panels (B1) and (B2), we show the results of planar kinematic simulations (see the code in the SI, as well as the kinematic analysis section) of the constrained straw, where the constraints' locations and the number of constrained frusta were dictated by the desired final configuration. Panels (B1) and (B2) of Fig. 1 compare simulations of the swept area for sequenced actuation (B1) and random actuation (B2). It can be seen that the swept area in Fig. 1(B1) is much smaller than in Fig. 1(B2), demonstrating the importance of sequenced actuation for accurately growing along a predefined path. This result is also verified experimentally in Fig. 1(C1) (viscous-fluid-based sequenced actuation) and (C2) (air pressurization leading to random actuation). More details on the experimental parameters are available in the Methods section below.

### The effect of constraints on the stability threshold

In Fig. 1, panel (E), we show the pressure in the straw, \(P\), vs. the experimentally measured values of the frustum elongation, \(L\), for experiments with and without constraints. All straws contained six frusta, and for the constraint experiments the straw was bent at each frustum by right-left-right-left-right-left, so that, overall, its elongation was forwards. We denote the upward snapping pressures by \(P_{\text{s-b}}^{\text{up}}\) (0.3 atm) and \(P_{\text{s-f}}^{\text{up}}\) (0.22 atm), which are the minimum pressure values needed for opening one frustum, with and without a constraint ("b" represents constrained elongation during bending and "f" represents un-constrained forwards-only elongation), respectively. Similarly, the downward snapping pressures, denoted by \(P_{\text{s-b}}^{\text{down}}\) (\(-0.08\) atm) and \(P_{\text{s-f}}^{\text{down}}\) (\(-0.1\) atm), represent the maximum pressure values needed for closing one frustum, with and without a constraint. In all measurements, the standard deviation of the snapping pressure is below 0.01 atm. It can be seen that the constraints affect the threshold of stability, since \(P_{\text{s-b}}^{\text{up}}>P_{\text{s-f}}^{\text{up}}\) and \(P_{\text{s-b}}^{\text{down}}<P_{\text{s-f}}^{\text{down}}\). This implies that if the applied pressure is in the range \((P_{\text{s-f}}^{\text{up}},P_{\text{s-b}}^{\text{up}})\), then only the unconstrained frusta will open and the constrained ones will remain closed, and if the applied pressure is in the range \((P_{\text{s-b}}^{\text{down}},P_{\text{s-f}}^{\text{down}})\), then only the constrained frusta will close and the unconstrained ones will remain open. On the other hand, if the applied pressure is equal to or greater than \(P_{\text{s-b}}^{\text{up}}\), then the unconstrained frusta will open first, followed by the constrained ones.
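The selective-actuation logic implied by these thresholds can be summarized in a short sketch (ours; the threshold values, in atm, are taken from the measurements above):

```python
# Minimal sketch: which frusta snap for a given applied pressure, based on
# the measured opening/closing thresholds of constrained ("bent") and
# unconstrained ("free") frusta.
P_UP_FREE, P_UP_BENT = 0.22, 0.30        # opening thresholds [atm]
P_DOWN_BENT, P_DOWN_FREE = -0.08, -0.10  # closing thresholds [atm]

def responding_frusta(p_applied):
    if p_applied >= P_UP_BENT:
        return "all frusta open (unconstrained first)"
    if p_applied >= P_UP_FREE:
        return "only unconstrained frusta open"
    if p_applied <= P_DOWN_FREE:
        return "all frusta close"
    if p_applied <= P_DOWN_BENT:
        return "only constrained frusta close"
    return "no snap-through"

print(responding_frusta(0.25))   # only unconstrained frusta open
print(responding_frusta(-0.09))  # only constrained frusta close
```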
Furthermore, the graph in Fig. 1, panel (E), shows that the elongation of the constrained frusta is reduced by a factor of 2 relative to the unconstrained ones. In addition, the ratio between the slopes of the unstable regions, \(k_{\text{s-b}}\) and \(k_{\text{s-f}}\), for the constrained and unconstrained frusta, respectively, is approximately 2.5; specifically, \(k_{\text{s-b}}\approx-27{,}000\) kPa/m and \(k_{\text{s-f}}\approx-11{,}000\) kPa/m. This observation is important for modeling and controlling this system, and in particular for demonstrating that, in the case of negligible viscous effects, the frusta without a constraint open before the frusta with a constraint.

Figure 1: **Demonstration of growth of a multistable structure along with schematic sketches of all possible configurations and mechanical properties.** (A) The growth of a straw-like structure with constraints (marked by continuous black lines). The sequenced opening of the bi-stable elements results in growth along a pre-defined path (marked by a black dashed line). Panels (B1) and (B2) present kinematic simulations for sequenced growth (viscosity-dominated dynamics) and unordered growth (negligible viscosity), respectively, where the pink regions denote the swept area, defined as all regions which the structure passed through during the growth process. Panels (C1) and (C2) present experimental results for actuating the configurations by a viscous internal fluid (silicone oil with a viscosity of 60 Pa·sec) at times \(t=0.2T,\ldots,T\), where \(T=6\) sec, and by an inviscid internal fluid (air) at times \(t=0.2T,\ldots,T\), where \(T=2\) sec, respectively. (D) Illustration of detailed descriptions of all possible element configurations. (E) Experimental measurements of the stiffness of the different states and the stability thresholds. Experimental data showing the internal pressure in the straw, \(P\), vs. the frustum elongation, \(L\), corresponding to retracted-extended snap-through (red markers) and retracted-bent snap-through (blue markers). The different markers indicate different experiments and straws. The frustum length was calculated by measuring the average elongation of 6 connected frusta.

Figure 2: **Growth-based maneuvering in residential and lungs-like structures.** Panel (A) is a residential prototype with three different geometric targets, marked by the red chair. In each series, (A1) is the initial configuration, (A2) and (A3) are intermediate stages, and (A4) is the final configuration. The final length and the duration of the growth process are 296 mm and 8 sec in the upper row of (A), 304 mm and 6 sec in the middle row of (A), and 292 mm and 9 sec in the lower row of (A). Panel (B) is a lungs-like prototype with three different geometric targets, marked by the red balloon. The diameter of all tubes is reduced by approximately 30% at each junction, and the opening angle of each junction is \(130^{\circ}\). In each series, (B1) is the initial configuration, (B2) and (B3) are intermediate stages, and (B4) is the final configuration. The final length and the duration of the growth process are 313 mm and 6 sec in the upper row of (B), 311 mm and 7 sec in the middle row of (B), and 313 mm and 6 sec in the lower row of (B).
### Relating the constraints' positions to the growth kinematics

A straw structure can be viewed as a one-dimensional lattice composed of multi-stable elements. Each element is constrained to a small set of possible states, where the states may be obtained by different geometric transformations. These transformations can generally be divided into three categories: (i) retracted element, (ii) extended element, and (iii) bent element (see Fig. 1(D)). To describe the full kinematics of the entire structure, a local coordinate system is prescribed for each element. These local coordinate systems may be defined according to the Denavit-Hartenberg notation [5], where the local axial direction of the element is \(x\), whereas the \(z\)-axis lies in the cross-section of the element (see Fig. 1(D)). To transform one coordinate system into another, the following transformation matrix is used,

\[{}^{i}T_{j}=\left[\begin{array}{cc}Rot_{3\times 3}&Trans_{3\times 1}\\ 0_{1\times 3}&1\end{array}\right], \tag{1}\]

where \(Rot_{3\times 3}\) is a rotation matrix, \(Trans_{3\times 1}\) is a translation vector, and \(0_{1\times 3}\) is a row vector of zeros. For a serial structure, it is usually straightforward to construct the transformation matrix between two consecutive elements' coordinate systems. Hence, to transform the world coordinate system into an element's coordinate system, one must multiply all previous local transformation matrices:

\[{}^{w}T_{i}={}^{w}T_{0}\cdot{}^{0}T_{1}\cdot{}^{1}T_{2}\cdot\ldots\cdot{}^{i-1}T_{i}. \tag{2}\]

The local transformation from element \((i-1)\) to element \(i\), denoted by \({}^{i-1}T_{i}\), is composed of a translation \(a_{i}\) along \(x_{i}\) (\(Trans_{x_{i}}\left(a_{i}\right)\)) and a bending rotation by an angle \(\theta_{i}\) around an axis \(u_{i}\), where the \(u_{i}\)-axis lies in the cross-section of element \(i\) and may be obtained by rotating \(z_{i}\) by an angle \(\alpha_{i}\) around the \(x_{i}\)-axis. To compute the local transformation matrix for straw elements, we can decompose the bending rotation into three rotations (see Fig. 1(D)).
First, we rotate the coordinate system around \(x_{i}\) by an angle \(\alpha_{i}\) (twist), then rotate by an angle \(\theta_{i}\) around the new \(z_{i}\) (bend), and finally rotate back around the new \(x_{i}\) by an angle \((-\alpha_{i})\), in order to bring the \(z_{i}\)-axis back to its original orientation:

\[{}^{i-1}T_{i}=Trans_{x_{i}}\left(a_{i}\right)Rot_{x_{i}}\left(\alpha_{i}\right)Rot_{z_{i}}\left(\theta_{i}\right)Rot_{x_{i}}\left(-\alpha_{i}\right), \tag{3}\]

where the translation and rotation matrices are defined as

\[Trans_{x_{i}}\left(a_{i}\right)=\left[\begin{array}{ccc|c}1&0&0&a_{i}\\ 0&1&0&0\\ 0&0&1&0\\ \hline 0&0&0&1\end{array}\right] \tag{4}\]

\[Rot_{x_{i}}\left(\alpha_{i}\right)=\left[\begin{array}{ccc|c}1&0&0&0\\ 0&\cos\left(\alpha_{i}\right)&\sin\left(\alpha_{i}\right)&0\\ 0&-\sin\left(\alpha_{i}\right)&\cos\left(\alpha_{i}\right)&0\\ \hline 0&0&0&1\end{array}\right] \tag{5}\]

\[Rot_{z_{i}}\left(\theta_{i}\right)=\left[\begin{array}{ccc|c}\cos\left(\theta_{i}\right)&\sin\left(\theta_{i}\right)&0&0\\ -\sin\left(\theta_{i}\right)&\cos\left(\theta_{i}\right)&0&0\\ 0&0&1&0\\ \hline 0&0&0&1\end{array}\right]. \tag{6}\]

The values of the parameters \(a_{i}\), \(\theta_{i}\), and \(\alpha_{i}\) depend on the geometry of the straw element and its state. The geometric model of a straw element assumes that each element is constructed from a static frustum of length \(l_{static}\) and a dynamic frustum of length \(l_{dyn}\), with outer radius \(r_{out}\) and inner radius \(r_{in}\) (see Fig. 1(D)). Table 1 summarizes the values of the transformation parameters for the different element states. The value of the bending angle \(\Theta\) appearing in Table 1 can be estimated by a 2D analysis of a straw element. Assuming the outer and inner radii of the element are constant, the bending angle can be found using the cosine rule (see Fig. 1(D)):

\[\Theta=\arccos\left(\frac{r_{out}^{2}+r_{in}^{2}-h_{dyn}^{2}}{2\cdot r_{in}\cdot r_{out}}\right), \tag{7}\]

where \(h_{dyn}\) is the side length of the dynamic frustum. Assuming \(h_{dyn}\) remains constant for all element states, its value can be calculated for an element in the extended state (see Fig. 1(D)):

\[h_{dyn}^{2}=(r_{out}-r_{in})^{2}+l_{dyn}^{2}. \tag{8}\]

A kinematic simulation of the entire straw structure was created by combining the forward-kinematics computations for all straw elements, as described in equations (1)-(6), using the parameter values for the different element states as given in Table 1 and equations (7)-(8). This simulation was then utilized to create the swept-area domains for the two scenarios shown in Fig. 1(B1) and Fig. 1(B2). The swept area in Fig. 1(B1) illustrates a sequenced elongation of the elements, where all elements were transformed from the retracted state to their final state. The swept area shown in Fig. 1(B2) was obtained by taking the union of 200 domains found from random-order element activation.

**Table 1: Translation and rotation values for different straw element states.**

| | Retracted | Extended | Bent |
|---|---|---|---|
| \(a_{i}\) - translation in \(\hat{x}\) direction | \(l_{static}-l_{dyn}\) | \(l_{static}+l_{dyn}\) | \(l_{static}\) |
| \(\theta_{i}\) - bending angle | 0 | 0 | \(\Theta\) |
| \(\alpha_{i}\) - bending direction angle (twist) | - | - | \(\in[0,2\pi)\) |
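A minimal NumPy sketch (ours; the static and dynamic frustum lengths below are illustrative, while the radii follow the straw dimensions given in the Methods section) of the forward kinematics in equations (1)-(8): the local transform of equation (3) is built per element and chained as in equation (2) to obtain the tip pose of the grown structure.

```python
# Minimal sketch of the forward kinematics of Eqs. (1)-(8).
import numpy as np

def trans_x(a):
    T = np.eye(4); T[0, 3] = a; return T                      # Eq. (4)

def rot_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    T = np.eye(4); T[1:3, 1:3] = [[c, s], [-s, c]]; return T  # Eq. (5)

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4); T[0:2, 0:2] = [[c, s], [-s, c]]; return T  # Eq. (6)

def local_T(a, theta, alpha):
    """Eq. (3): translate, twist, bend, untwist."""
    return trans_x(a) @ rot_x(alpha) @ rot_z(theta) @ rot_x(-alpha)

def bending_angle(r_out, r_in, l_dyn):
    """Eqs. (7)-(8): cosine rule with a constant dynamic-frustum side length."""
    h2 = (r_out - r_in) ** 2 + l_dyn ** 2
    return np.arccos((r_out**2 + r_in**2 - h2) / (2 * r_in * r_out))

# Geometry in meters: radii from the Methods section; lengths illustrative.
l_static, l_dyn, r_out, r_in = 4e-3, 2e-3, 9.5e-3, 6.5e-3
theta = bending_angle(r_out, r_in, l_dyn)                # ~0.25 rad (~15 deg)
a = {"retracted": l_static - l_dyn, "extended": l_static + l_dyn, "bent": l_static}

states = [("extended", 0.0), ("bent", 0.0), ("bent", np.pi / 2)]  # (state, twist)
T = np.eye(4)                                            # world frame, Eq. (2)
for state, alpha in states:
    T = T @ local_T(a[state], theta if state == "bent" else 0.0, alpha)
print(T[:3, 3])                                          # tip position of the chain
```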
Figure 3: **Maneuvering in complex environments and transferring materials to unreachable regions, via an internal channel.** (A) Maneuvering in a heart-like structure simulating structural heart intervention. (B) Maneuvering in a residential setting and transferring water to a specific location. (C) A part of the experimental setup with an enlarged view of the "end tip," where its cross-sectional illustration is shown in the inset. (D) A view of the general experimental setup, which consists of a pressure controller connected by two channels to the growing material. The first channel controls the pressure in a fluid reservoir and the second channel is used as the working channel. (E) The constraints fabrication process.

### Demonstration of maneuvers and operations in complicated 3D environments

In Figs. 2(A) and 2(B), we demonstrate two possible applications of the suggested concept of a controlled single-input growing material with viscous internal fluid, which maneuvers inside a 3D complex environment (a residential model and a model of a human lungs-like structure). In both cases, we performed several experiments with different goal positions and recorded the straw growth. In the experiments presented below, the number of constrained frusta is in the range \(6-16\), and the average elongation was approximately \(440\%\) of the resting length. In these demonstrations, the growing multi-stable robot was able to follow desired paths which included passing through thin tubes and corridors, junctions, steep corners, and narrow gates, dodging obstacles, overcoming gravity, and changing the plane of motion. In Fig. 2(A), we show locomotion via growth in a residential model for three different final goal locations (marked by a red chair). It can be seen that the number of bends slightly affects the final length, e.g., when the number of bends is \(16\) (with an overall steering angle of \(256^{\circ}\)), the length decreases from \(325\) mm (which is the maximal length of the straw) to \(296\) mm (meaning that in this case the overall steering angle of \(256^{\circ}\) results in an elongation decrease of approximately \(10\%\)). In Fig. 2(B), we demonstrate locomotion by growth in a lung-like model for three final goal locations (marked here by a red balloon). At each junction one tube splits into two different tubes; thus at generation \(n\) (\(n=1,2,3,\ldots\)) there are \(2^{n}\) tubes, so the maneuvering becomes increasingly complex as the multi-stable structure grows and gets closer to the destination. The diameter in each generation is reduced by approximately \(30\%\), which complicates and restricts the maneuvering abilities. In our case, \(n=3\), and at the last generation the inner tubes' diameter is \(150\%\) of the maneuvering robot's diameter. In Fig. 3 we show the ability of the proposed soft robot to maneuver and then to perform various operations, such as structural heart intervention in the heart-like structure (see Fig. 3(A)) and extinguishing a fire in a residential model (see Fig. 3(B)). This figure illustrates that, beyond maneuvering in a 3D complex environment and reaching the goal, the soft robot is capable of transferring various materials such as water or medical equipment. For a detailed experimental setup including the robot's cross-sectional illustration, see Fig. 3(C); note that the working channel is an inner tube attached to the sealed end tip of the straw, allowing the above-mentioned materials to be transferred.
The inner tube is sealed and isolated from the internal pressure inside the reservoir, and is flexible enough to move with the material as it grows.

## 3 Concluding remarks

Natural phenomena as well as many engineering applications involve geometries that are narrow and complex. Maneuvering in such confined and intricate 3D environments is exceptionally challenging for conventional rigid robots. In this paper, we presented a new concept for a single-input growing material that can steer and elongate along a predefined path, as well as perform a variety of operations. In the proposed class of growing materials, we apply physical phenomena (specifically the interaction between viscosity and multi-stability) to achieve controllable locomotion, enabling single-input control. The proposed growing material is a multi-stable structure with constraints located at positions computed from the kinematic analysis of the desired 3D path. We demonstrated the feasibility of the suggested concept for various scenarios. In all cases, the growing multi-stable structure was able to follow the desired path, including through narrow tubes and corridors, junctions, steep corners, and narrow gates, dodging obstacles, overcoming gravity, and changing its plane of motion. Apart from viscous-based sequencing, the growing material can be controlled by utilizing the different threshold pressures for the opening of straight and bent frusta (this concept is related to ideas explored by Peretz et al. [20] and Melancon et al. [17]). For viscous-based sequencing, the fluidic pressure propagation determines the rate and order of opening of the frusta. The frusta are opened or closed, depending on the sign of the pressure, from the inlet toward the closed end of the straw, without distinction between frusta with and without a constraint. In contrast, in the case of negligible viscous effects under positive pressure, we observe that frusta without a constraint open first, and those with a constraint close first when activated with negative pressure. This enables additional modes and geometries of the single-input growing robot.

## 4 Materials and methods

### Research objective and design

This study aims to utilize viscous fluid interacting with multi-stable elastic structures to construct a simple growing material, which is capable of performing maneuvers in complex 3D environments with obstacles and junctions, such as human lungs or natural disaster sites. As a prototype for the elastic multi-stable structure, we used a commercially available straw to which we attached constraints at specific locations. Fabrication of the constraints (see Fig. 3(E)) is accomplished with a standard soldering machine. In order to determine the constraint locations, we use kinematic analysis based on the desired dynamics and final configuration of the robot.

### Fabrication of experimental setup

We used a straw made from Polypropylene, which is a relatively inexpensive and convenient material for production [21]. In addition, it has a relatively high Young's modulus (\(E=1.3\,\mathrm{GPa}\)), which allows fabricating a sufficiently reliable structure. In our experiments we employed a straw with 78 frusta. At atmospheric conditions, the maximal and minimal lengths of the straw are 415 mm and 90 mm, respectively. Furthermore, the outer and the inner diameters of the straw are 19 mm and 13 mm, respectively. Generally, straw-like structures with a wide range of outer diameters (6-200 mm) are commercially available.
Moreover, even smaller straws, with an outer diameter of 2 mm, can be fabricated by using standard methods. The straw was directly attached to a fluid reservoir using a thread which is a part of the straw end. To prevent penetration, the other thread, located at the second end of the straw, was removed and sealed with hot glue. In experiments with viscous fluid, the straw was filled with silicone oil (viscosity of 60 Pa\(\cdot\)s). The direct attachment between the straw and the fluid reservoir allows the pressure losses to be minimized, and therefore increases the efficiency of the system. The fluid reservoir was connected to a pressure controller (ELVEFLOW OB1 MK3+), which in turn was connected to a compressor (CompAir L07). In all experiments, the pressure controller was adjusted to 2.5\(\pm\)0.01 atm and the temperature was kept at room temperature. In Fig. 1, the straw elongation was captured on video at 4K resolution and 240 fps, whereas in Figs. 2(A) and (B) the straw elongation was captured on video at 1080p resolution and 30 fps, until it reached the goal. First, after mapping the path, the locations of the constraints are found with the aid of our theoretical model. The constraints are then marked on the straw, on the same sides of the straw as the desired turns; for example, when a right turn is to be performed, the constraint should be inserted on the right side of the straw. To elongate straight forwards, no constraints are needed. Then, the constraints were inserted in the appropriate zones by spreading Polypropylene sheets between the neighboring frusta and soldering them together. In order to guarantee that the constraints adhere perfectly, the soldering machine temperature was set to 200\({}^{\circ}\)C, which is 25% higher than the melting point of Polypropylene. Inserting a constraint on a single frustum results in a steering angle of approximately 16\({}^{\circ}\); thus, knowing in advance the desired steering angle of the maneuver determines the number of adhered frusta. The lungs-like structure (see Fig. 2(B)) consists of Perspex tubes of length 100 mm with inner diameters of 54, 40, and 30 mm, and special connectors (printed using an SLA 3D printer, Form 3B+) that fit the tubes' diameters. The inner diameter of the inlet connector is 80 mm. The diameter of all tubes is reduced by approximately 30% at each junction and the opening angle of each junction is 130\({}^{\circ}\). The lungs-like structure and the straw are attached to the bottom of the fluid reservoir. The heart-like structure (see Fig. 3(A)) was fabricated by combining four components which were printed with an FDM printer (RAISE3D pro2). The residential model (a model of a small house, see Figs. 2(A) and 3(B)) was created from acrylic sheets using a laser cutting machine (Makeblock Laserbox 40W), and the furniture was printed using an SLA 3D printer (Form 3B+). The outer dimensions of the house are 250\(\times\)300\(\times\)150 mm\({}^{3}\). The height of each floor is 75 mm and the gates' dimensions are 40\(\times\)40 mm\({}^{2}\) and 40\(\times\)70 mm\({}^{2}\). The fire setup was fabricated by using a small metal can filled with paper soaked in IPA, which was then ignited. The pressure reservoir (see Fig. 3(D)) used in all of the maneuver experiments was made from a Perspex tube, with an aluminium cover on top and an SLA-printed cover on the bottom. For the experiments shown in Fig.
2(A) and (B), we used the upper cover with only one pressure inlet, whereas for the experiments shown in Fig. 3(A) we added an additional input inlet (working channel) for performing various operations, such as inserting a wire for structural heart intervention or using the channel as a hydrant for spraying water on the fire. This working channel was sealed and isolated from the internal pressure inside the reservoir.

**Author contributions:** A.D.G. and E.B.A. conceived the research subject. E.B.A. constructed the experimental setup and conducted the experiments. Y.V. performed the theoretical analysis and numerical computations. E.B.A. analyzed the experimental data. E.B.A., Y.V., S.E., A.Z., and A.D.G. wrote the paper.

### Acknowledgements

This work was supported by the Ministry of Energy of Israel.
2308.16304
Quantum Wavefront Shaping with a 48-element Programmable Phase Plate for Electrons
We present a 48-element programmable phase plate for coherent electron waves produced by a combination of photolithography and focused ion beam. This brings the highly successful concept of wavefront shaping from light optics into the realm of electron optics and provides an important new degree of freedom to prepare electron quantum states. The phase plate chip is mounted on an aperture rod placed in the C2 plane of a transmission electron microscope operating in the 100-300 kV range. The phase plate's behavior is characterized by a Gerchberg-Saxton algorithm, showing a phase sensitivity of 0.075 rad/mV at 300 kV, with a phase resolution of approximately $3\cdot10^{-3}\pi$. In addition, we provide a brief overview of possible use cases and support it with both simulated and experimental results.
Chu-Ping Yu, Francisco Vega Ibáñez, Armand Béché, Johan Verbeeck
2023-08-23T12:04:25Z
http://arxiv.org/abs/2308.16304v3
**Quantum Wavefront Shaping with a 48-element Programmable Phase Plate for Electrons**

## Abstract

**We present a 48-element programmable phase plate for coherent electron waves produced by a combination of photolithography and focused ion beam. This brings the highly successful concept of wavefront shaping from light optics into the realm of electron optics and provides an important new degree of freedom to prepare electron quantum states. The phase plate chip is mounted on an aperture rod placed in the C2 plane of a transmission electron microscope operating in the 100-300 keV range. The phase plate's behavior is characterized by a Gerchberg-Saxton algorithm, showing a phase sensitivity of 0.075 rad/mV at 300 keV, with a phase resolution of approximately \(3\cdot 10^{-3}\)\(\pi\).**

###### Contents

* 1 Introduction
* 2 Experimental considerations
  * 2.1 Description of the Electrostatic Phase Plate
  * 2.2 Characterization
* 3 Application Examples
  * 3.1 Designer Electron Waveforms
  * 3.2 Object Sampling with Different Wavefunctions
  * 3.3 Adaptive Optics
  * 3.4 Phase programmed Ptychography
* 4 Conclusion
* A Defocused Images of the Phase Plate
* B Pixel Index

## 1 Introduction

Wavefront shaping, or the spatial and time-dependent control over the phase in coherent waves, has revolutionized many diverse scientific fields, ranging from radio and light astronomy [1, 2], radar [3], acoustics [4, 5, 6], seismology [7], telecommunication [8, 9] and many more [10]. It requires a device that can apply a position-dependent phase change and can be augmented by adding a control loop to obtain adaptive optimization of the wave with respect to some goal function. In optics, this can be realized with so-called spatial light modulators, which can be based on moving arrays of mirrors or on liquid crystal-based setups that change their refractive index when an electric field is applied [11, 12]. Matter waves, as introduced by de Broglie [13], are also amenable to this same concept. Indeed, the working principle of an electron microscope is entirely based on describing the free electrons as coherent quantum waves with wavelengths of the order of picometers. The capability of manipulating these electron waves is an indispensable part of a transmission electron microscope (TEM). The most relevant addition to phase-manipulating devices in recent decades is, without a doubt, the spherical aberration corrector [14, 15, 16], which flattens the phase front of the electron wave distorted by (high-order) geometric aberrations of the microscope lenses and allows the forming of a sharper and more intense probe in scanning probe applications. Removing unwanted phase aberrations has significantly increased the resolution and current density of the scanning transmission electron microscope (STEM), with many benefits in, e.g., spectroscopic applications. Besides canceling geometric aberrations, the ability to arbitrarily shape the electron wavefront is gradually gaining attention with the hope of improving contrast or selectivity in electron microscopy setups. There has been a renewed surge of such phase modulators and their applications in the past few years. In soft material imaging, different phase plates such as Zernike [17, 18], Boersch [19, 20], Zach [21, 22], or Volta [23, 24, 25] have been implemented in the TEM to imprint a constant phase shift to a (central) part of the electron wave, to increase the contrast when imaging phase objects.
Some other designs with relatively higher complexity may modify both the amplitude and phase configuration of the electron wave to create an electron probe of specific shape [26], to increase contrast [27, 28], or to extract specific information from the electron-sample interaction [29, 30], to name a few. Some of these complex modulators even exhibit control over the parameters or magnitude of the modulation. The electrostatic phase plate reported by Verbeeck et al. [31] has demonstrated changes in interference between 4 partial waves by altering their mutual phase relation. Barwick and Batelaan [32] showed that a pulsed laser beam could induce a phase shift in the electron beam and that the contrast of the formed image can be optimized by tuning these laser pulses. Different realizations of using the ponderomotive force to change the phase of an electron beam have appeared [33, 34, 35, 36]. The electrostatic phase plate reported by Tavabi et al. [37] has demonstrated a tuneable azimuthal phase by setting up specific electric field boundary conditions, which was interpreted as adding orbital angular momentum to the electron beam. Here we report on an adaptive electrostatic phase plate based on the proof-of-principle demonstration by Verbeeck et al. [31], but with significantly increased complexity, performance, and practical usefulness. The phase plate consists of 48 openings, or pixels, transparent to an incoming coherent electron wave. The vertical walls of the pixels are made into electrodes so that an electric potential can be established inside, changing the wavelength of that part of the transmitted wave. Since separate voltage sources control each of the 48 pixels, the phase of the entire transmitted coherent electron wave can be programmed at will. This design and the electrostatic nature grant the phase plate several advantages, such as short response time, the ability to realize complex and arbitrary phase configurations, low power dissipation, compactness, low weight, and high stability and repeatability. The experimental part of the paper provides a concise summary of the reported phase plate. The design of the phase plate is described first, as well as the components and mechanism to create a phase shift on an electron wavelet. The manufacturing design choices are briefly discussed in the scope of the challenges faced. The device's optical performance is then evaluated regarding its phase sensitivity and response time. We discuss the applications of the phase plate in the scope of electron microscopy. Using the unique properties of a fast, hysteresis-free programmable phase plate, we demonstrate how novel imaging setups can expand or improve imaging modalities in TEM. We provide simulated examples and early experimental attempts towards electron wave modulation, complex sampling schemes, adaptive optics, and phase-coded ptychography to hint at what phase plates could bring to the electron microscopy community.

## 2 Experimental considerations

### Description of the Electrostatic Phase Plate

The basic working principle of the phase plate is sketched in Figure 1-a. A coherent incoming electron wave is made to interact with an insulating membrane that has several holes. The top and bottom surfaces of the membrane are covered with a ground shield, while the inside of the holes is coated with a conductive layer that can be put to a controlled electrostatic potential (\(V_{1}\) and \(V_{2}\) in the simplified sketch).
The potential surrounding the holes creates a potential landscape for the fast electrons that accelerates the electrons upon entering and decelerates them upon leaving this area. This causes a phase change between the partial waves leaving these holes; one can picture them as coherent Huygens sources that, upon propagation in free space, constitute a phase-programmed wave. The phase shift \(\phi\) obtained is given by the electrostatic Aharonov-Bohm shift: \[\phi=\frac{\pi e}{\lambda E_{0}}\int_{\Gamma}V(\vec{r})dl \tag{1}\] for an electron wave with wavelength \(\lambda\) and energy \(E_{0}\) crossing a region of space with an electrostatic potential \(V(\vec{r})\) along a trajectory \(\Gamma\). In the case of a weak perturbation, the electron's trajectory is not altered by this field, and the phase shift becomes directly related to the projected electrostatic potential. The goal of a pixelated phase plate is to create a potential profile that, in projection, leads to a constant phase shift proportional to the voltage applied to each pixel element. This occurs if the projected potential changes as little as possible over the region of each hole, which can be obtained by choosing a high aspect ratio (height/diameter\(>1\)). From a practical perspective, the AdaptEM WaveCrafter phase plate [38] comprises three main elements shown in Figure 1b-e: a dedicated condenser aperture holder containing the phase plate chip, a 48-channel programmable voltage source, and a remote computer for control and user interface, respectively. The phase plate used in this work is composed of 48 independent active elements, or pixels, arranged in 4 concentric rings and 12 petals (see Figure 1b). Each element consists of a layered structure similar to the one described by Matsumoto and Tonomura for a single phase-shifting element [39]. An aspect ratio of approximately 2 was chosen to avoid lensing, and a total diameter of the active area of 50 \(\mu\)m assures that a modern electron microscope can coherently illuminate the whole device. One considerable advantage of this phase plate design lies in the relatively low voltage (in the mV range) required to induce a phase shift of \(2\pi\). This avoids high electric field breakdown issues in the nanoscale features of the chip and has the benefit that readily available voltage sources, which are simultaneously precise, stable, low power, fast and reliable, can be used.

### Characterization

To experimentally examine the projected potential profile, phase reconstruction based on the Gerchberg-Saxton (GS) algorithm [40] was performed on a set of images of the phase plate, where each pixel is excited with increasing electrostatic potential (48 pixels, 11 voltage levels, 528 images in total). For the characterization, the phase plate is inserted in the sample plane of an FEI Tecnai Osiris S/TEM operating at 200 keV and illuminated with a parallel electron beam. The images are taken from the back focal plane of the objective lens (diffraction mode), while the objective lens is largely defocused so that the detector can capture the near-field diffraction pattern of the phase plate. This experiment aims to characterize the projected potential on the phase plate when varying the phase inside each pixel in a range between 0 and \(2\pi\). A rough estimate of the voltage corresponding to a \(2\pi\) phase shift was first found by assigning a gradually increasing voltage to half of the pixels, chosen randomly and repeatedly; a back-of-the-envelope check of the magnitudes expected from Eq. (1) is sketched below.
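The following minimal sketch evaluates Eq. (1) for a single pixel under two simplifying assumptions that are ours, not the paper's: a constant potential over the full electrode height, and a placeholder height consistent with the stated aspect ratio of about 2:

```python
import numpy as np

h = 6.62607015e-34    # Planck constant [J s]
m = 9.1093837015e-31  # electron rest mass [kg]
e = 1.602176634e-19   # elementary charge [C]
c = 2.99792458e8      # speed of light [m/s]

def wavelength(U):
    # Relativistically corrected de Broglie wavelength for acceleration voltage U [V]
    return h / np.sqrt(2.0 * m * e * U * (1.0 + e * U / (2.0 * m * c ** 2)))

U = 200e3   # acceleration voltage [V], as in the characterization experiment
L = 10e-6   # assumed electrode height [m] (placeholder, aspect ratio ~2)
V = 1e-3    # potential applied to one pixel [V]

# Eq. (1) with a constant potential along the hole: phi = pi e V L / (lambda E0), E0 = e U
lam = wavelength(U)
phi = np.pi * e * V * L / (lam * e * U)
print(f"expected phase per mV: {phi:.3f} rad")
```

With these placeholder numbers the estimate comes out at a few times 0.01 rad per mV, the same order as the sensitivity measured below, which is what such a projected-potential argument can reasonably deliver.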
Theoretically, a \(2\pi\) phase shift should not result in any difference in the diffraction pattern formed by the phase plate. Thus, a visual inspection of the voltage at which the pattern shows the least variation over time is a reasonable estimate of the value at which the pixels yield a \(2\pi\) phase shift. Once this voltage \(V_{2\pi}\) was found, a series of images with 11 different potentials equally spread between 0 and \(V_{2\pi}\) was taken for each pixel. The defocused condition was specifically chosen so that outgoing waves from the electrodes interfered strongly with each other, and the phase difference between separate neighboring wavelets is significantly encoded in the recorded intensity images (see supplementary). This choice of detection plane was preferred, for several reasons, over recording at an in-focus condition where all of the wavelets interfere together. First of all, at the right focus, the transmitted electrons are concentrated in a very small region (less than 1 % of the size of the recorded defocused images), and creating a high enough camera length to sufficiently sample such patterns on a pixelated camera for phase retrieval is not trivial.

Figure 1: Sketch of the working principle of the phase plate (a). Only 2 pixels are drawn. 3D render of the setup (b) and the main components, including the phase plate (c), the voltage sources (d), and the phase controller computer (e). A reference bar of 30 \(\mu\)m is presented in (c).

On top of that, the inversion-invariant nature of the wave intensity in reciprocal space would also make obtaining a unique reconstruction challenging and greatly hinder the retrieval algorithm's convergence. The result of the reconstruction is summarized in Figure 2. The phase response of all pixels, as they were individually excited, is fitted using a linear function, representing the phase sensitivity of that pixel to the applied voltage. A phase sensitivity matrix can be constructed showing the phase sensitivity of pixel \(i\) upon exciting pixel \(j\). The phase sensitivity matrix in Figure 2 shows a strong response on the diagonal, meaning that the excited pixel is the only one showing a significant linear phase shift against the applied voltage. An average phase sensitivity of 0.075 rad/mV is found, which translates to a theoretical phase resolution of approximately 3\(\cdot 10^{-3}\)\(\pi\) according to the smallest step size provided by an ideal 16-bit DAC (maximum 2.5 V, smallest step \(2.5\times 2^{-16}\) V). The error matrix, also shown in Figure 2, indicates response deviation from the expected linear behavior, mainly resulting from imperfections in the phase retrieval process, such as the finite pixel size and non-ideal detector response. These can cause a difference between the recorded intensity and the actual waveform. The error is calculated as the root mean square error of the fitted result, which is found to be at most 3% of 2\(\pi\) (0.19 rad), and on average less than 0.5% of 2\(\pi\) (0.027 rad). Besides the expected response of the phase plate, it is equally important to characterize any non-ideal behavior. The inhomogeneity describes the phase deviation within the pixel area from the ideal constant, homogeneous expectation. We evaluate the standard deviation of the reconstructed phase within each activated pixel and find it to be \(<1.7\%\) of \(2\pi\). The cross-talk refers to the phase response within a pixel region caused by the voltage applied to another pixel.
We estimate this as the maximum linear response of a non-excited pixel as a function of any other excited pixel. The off-diagonal lines found exactly 12 pixels away from the main diagonal in both matrices in Figure 2 indicate that the strongest cross-talk is, unsurprisingly, found between neighboring pixels, due to how the pixels are ordered in the matrix (see supplementary). The cross-talk is measured to be \(<0.012\) rad/mV, which amounts to 15% of the response of the excited pixel. In summary, the inhomogeneity only creates a phase error much less than \(\frac{2\pi}{10}\), which is generally accepted as very good in light optics [41, 42], while the cross-talk is clearly the biggest contributor to a non-ideal response. This behavior could be significantly improved in the next design iteration, where an additional top-ground layer could shield the effects from neighboring pixels and the conductive tracks leading to those pixels.

Figure 2: Phase sensitivity matrix and the corresponding root mean squared error of the linear fitting.

Characterizing the temporal response of the phase plate is also important for applications that rely on rapid switching between different electron probe shapes or phase configurations. Since the phase shift results from the projected potential in the electrodes, the response of the phase plate can be characterized by the time required to build up the potential. With the criterion of phase error \(<\frac{2\pi}{10}\), the response time is measured to be less than 1.3 \(\mu s\) for reaching from 10 % to 90 % of \(V_{2\pi}\) and is entirely dominated by the electronics.

## 3 Application Examples

### Designer Electron Waveforms

To demonstrate the capability and visualize the effects of a freely programmable phase plate, we recorded the far-field diffraction patterns of various phase-modulated electron waves in a TEM (Figure 3). These patterns form rather complex configurations compared to ones formed by commonly used round apertures, even when all phase plate elements are at ground potential. This is due to the amplitude modulation created by the set of holes, which produces highly delocalized tails. Previous theoretical research points out that the proportion of the electrons in these tails is directly related to the fill factor (% of the electron wave not blocked by the material of the phase plate) of the probe-forming aperture [43]. Although improvement has been made on the fill factor (the current design reaches approximately 30%, while the proof-of-concept 2x2 version from 2018 [31] had only 17%), a large proportion of the electrons can still be expected in the tails.

Figure 3: Realization of various electron quantum states. The three rows of figures, from top to bottom, are the phase configurations set on the phase plate, the simulated probe shapes, and the resulting experimental probe images, respectively. Note the excellent agreement between expected and obtained results showing successful arbitrary wavefront shaping.

A close comparison between experimental intensity profiles and simulations is found. From Figure 3, columns (b-e) show a phase shift of \(\pi\) applied to half of the total pixels with different patterns; therefore, the original single intense spot in the diffraction pattern is split into multiple parts due to destructive interference. Double spots (b), quadruple spots (c), and even a duodecuple spot (d) consisting of six \(0-\pi\) pairs are shown. By taking into account the radial distribution of the rings, a checkerboard-like pattern (e) can be created.
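The simulated probe shapes in Figure 3 follow from elementary Fourier optics: the probe intensity is the squared modulus of the Fourier transform of the aperture function carrying the programmed phase. A minimal sketch, with the 48-pixel layout idealized as 4 rings times 12 petals on a coarse grid (the real hole shapes, inter-pixel gaps, and fill factor are not modeled):

```python
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r, az = np.hypot(x, y), np.arctan2(y, x)

# Idealized pixelation: 4 radial rings x 12 azimuthal petals inside an annulus
inside = (r > 20) & (r < 100)
ring = np.clip(((r - 20) / 20).astype(int), 0, 3)
petal = ((az + np.pi) / (2 * np.pi / 12)).astype(int) % 12

def probe(pixel_phase):
    # pixel_phase: (4, 12) array of programmed phases, one entry per pixel
    wave = inside * np.exp(1j * pixel_phase[ring, petal])
    return np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(wave)))) ** 2

# Half the pixels at pi -> split spot (cf. Fig. 3b); azimuthal ramp -> vortex (cf. Fig. 3f)
half_pi = np.zeros((4, 12)); half_pi[:, :6] = np.pi
vortex = np.tile(np.arange(12) * 2 * np.pi / 12, (4, 1))
I_split, I_vortex = probe(half_pi), probe(vortex)
```

A discretized azimuthal ramp as in `vortex` is exactly how the phase plate approximates an orbital-angular-momentum state with its 12 petals.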
These patterns cover a few elements of the 48-dimensional Hadamard basis [44], which defines an orthogonal basis consisting entirely of pixels with either 0 or \(\pi\) phase. Lastly, (f) shows the result of a vortex setup with an orbital angular momentum equal to 1 [45]. This is done by creating a phase ramp from 0 to \(2\pi\) in the azimuthal direction. The vortex can be verified by the signature singularity point at the center of the resulting probe, approximating one member of the Laguerre-Gaussian orthogonal basis set [46]. The phase plate can also create a phase profile imitating geometric optical elements and aberrations. Typically, they can be modeled by a phase shift that follows a Zernike polynomial in the angle with respect to the optical axis [47]. How faithfully the phase plate can recreate such polynomials at different angles has been discussed theoretically in detail by Vega Ibanez et al. [43] and relates to parameters such as the order of aberration, the fill factor, the number of pixels, and the pixel shape. Here, a defocus effect (second-order in angle) is introduced either by the conventional electromagnetic objective lens of the microscope or by the phase plate to demonstrate this concept. The resulting probe shapes are shown in Figure 4. The two rows show good resemblance to each other up to 200 nm defocus. Further defocusing causes a steep phase ramp within the area of the individual pixels, which cannot be faithfully reproduced anymore by the phase plate. For this reason, the phase plate can obviously not replace an actual (round) lens of any significant strength.

### Object Sampling with Different Wavefunctions

Electron microscopy is a process of sampling an unknown material with an electron wave. Once the incident wave interacts with the examined object, the information from the object is imprinted on the wave through changes in amplitude and phase, and through the creation of inelastic scattering signals. When the measurement result of the interaction between the object and a beam with a given electron waveform provides insufficient information about the sample, different waves can be used to interrogate the object.

Figure 4: Defocused probes formed by defocusing the microscope lenses (top row) and the phase plate (bottom row) at 300 keV acceleration voltage and an opening angle of 1 mrad. Note the close similarity of both, showing that the phase plate can mimic the action of a round lens up to 200 nm defocus.

For example, in-line holography [48, 49, 50] is done by recording the intensity of a beam while varying the phase by changing the defocus of the objective lens. STEM essentially describes a process to accumulate information about the material by dense sampling while spatially scanning a localized electron beam. In both cases, multiple measurements while changing the incoming electron wave enrich the acquired information and eliminate ambiguities that can sometimes not be resolved with a measurement process that only uses a beam with a single static waveform. Such multi-waveform sampling schemes rely entirely on the ability to alter the wavefunction of the beam electron states. Even though some form of modulation of the wavefunction is present in any electron microscope (e.g., defocus, beam tilt, beam shift, or aberration correctors), they often rely on electromagnetic elements, which can suffer from slow settling times and hysteresis effects.
For example, in the acquisition of through-focal series images, an update rate in the order of seconds to minutes is typically applied to induce small focal changes in the objective lens [51, 52]. The phase plate presented here can update to an entirely new pattern in a few \(\mu\)s without hysteresis, so that complex sampling schemes can be realized efficiently. For instance, the phase plate can cycle over a few different wavefront settings for each probe position in a STEM recording. Compared to through-focal TEM acquisition, where the focus is changed between recording image frames, we could now update multiple focus levels for each probe position in a STEM scan, providing, e.g., increased depth of field. This dramatically reduces the difficulty of realigning each image, especially in cases of severe sample drift, and also avoids inconsistencies caused by contamination building up on the sample over time. Changing defocus is just one of the possible wavefunctions to sample an object of interest. Being a non-orthogonal change to the beam wavefunction, it could be argued that this is not even an optimal choice of basis. The adaptability and rapid response of the phase plate can be extended to a wide variety of orthogonal basis sets that can be specifically chosen to efficiently encode selected knowledge about the sampled object into the probing electron waves. This concept is widely used in light microscopy and serves as an important cornerstone for techniques such as stimulated emission depletion (STED) microscopy [53, 54, 55] and switching laser mode microscopy (SLAM) [56, 57]. Two or more waveforms sequentially illuminate the sample, and the sharp feature created by the difference between the illuminating waves can be exploited to increase the resolution of the final image. The same concept can now be applied to electron microscopy with a two-fold beneficial effect. Indeed, changing between a probe state with and without orbital angular momentum will slightly improve image resolution due to differential imaging with both probes (super-resolution). But more importantly, this method also cancels the long probe tails arising from the amplitude modulation of the pixel shapes, as these tails are nearly identical for both probe wavefunctions. This is a far more critical effect, as it dramatically increases the practical resolution that can be obtained even when the fill factor of the phase plate is not ideal, and shows a way to significantly outperform the results presented earlier for the single-waveform aberration correction prospects of programmable phase plates for electrons [43]. The result of this differential scheme is demonstrated with a high-angle annular dark field (HAADF) STEM simulation (Figure 5). Electron probes, obtained as the far-field diffraction of three illuminating wave functions (created by a phase plate with zero phase, a phase plate with a vortex phase, and a conventional round aperture), are used to scan a single layer of hexagonal boron nitride, with 200 keV electron beam energy, a spherical aberration \(C_{3}\) of 1.2 mm, and operating at Scherzer defocus, in agreement with a typical uncorrected TEM instrument. The convergence angles of the electron probes are set to 9.5 and 11 mrad for the round aperture and the phase plate, respectively. We select a larger opening semi-angle for the phase plate since its capability to correct aberrations yields an optimal imaging condition at 11 mrad. The subtraction of the vortex image from the plain phase plate image is then presented as the difference image.
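In a simple incoherent-imaging picture (HAADF signal approximated as the object's scattering power convolved with the probe intensity), the differential scheme reduces to subtracting two such images. A schematic sketch, reusing `N`, `probe()`, and `I_vortex` from the previous snippet together with a toy object; this approximation ignores dynamical scattering, which the full simulation does include:

```python
import numpy as np
from scipy.signal import fftconvolve

def haadf_image(obj, probe_intensity):
    # Incoherent-imaging approximation: image = object (*) normalized |probe|^2
    psf = probe_intensity / probe_intensity.sum()
    return fftconvolve(obj, psf, mode="same")

# Toy scattering-power map: vacuum on the left half, a lattice of atoms on the right
obj = np.zeros((N, N))
obj[::32, N // 2::32] = 1.0

I_flat = probe(np.zeros((4, 12)))        # flat-phase phase-plate probe
img_flat = haadf_image(obj, I_flat)
img_vortex = haadf_image(obj, I_vortex)  # vortex-phase probe image
img_diff = img_flat - img_vortex         # delocalized tails largely cancel
```

Because the two probes share nearly identical tails, the subtraction suppresses the false background in the vacuum region while the sharp central-peak contrast survives.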
The simulated images are then juxtaposed to illustrate the effect of the tails, and an intensity profile (orange line) is drawn across each image (at the position of the white dashed lines). Both images from the phase plate have non-zero intensity in the vacuum area (the left half of the simulation box) due to the tails' interaction with the crystal. The profile from the image formed with the round aperture shows much faster decay, as the intensity distribution of an aberrated Airy probe is more concentrated. The difference image demonstrates good cancellation of this false background, and the intensity profile quickly converges to zero, with small fluctuations due to slight differences between the tail configurations of the two probes. This result shows that the phase plate can indeed provide an excellent tail-effect cancellation when alternating between a flat-phase and a vortex-phase probe. The resolution is significantly improved over the non-corrected round aperture at the expense of some signal loss related to the fill factor and the loss of low-frequency sample information. This demonstrates the potential for aberration correction with a device that is significantly smaller (\(<5\) mm), lighter, faster (\(\mu\)s), more energy efficient (\(<5\) W), and requires far less stringent control over the precision of the voltage/current sources as compared to current multipole correctors.

### Adaptive Optics

The fast and hysteresis-free phase programming offered by the electrostatic phase plate opens the attractive possibility of adaptive optics. As a proof of concept, such a setup is realized (Fig. 6). An algorithm repeatedly reshapes the electron probe with the phase plate in order to reach a higher variance in the high-angle annular dark field (HAADF) image, which is taken as a figure of merit that links with 'image sharpness' [58]. The algorithm sequentially adds phase modifications from a list of discretized low-order Zernike polynomials to the latest best-performing phase configuration. Zernike polynomials are chosen since they exhibit close similarity to common aberrations in the electron microscope and form a complete, orthogonal basis. A HAADF image is consequently recorded with every new probe. If the variance is higher in the new image, the current best is replaced with this new variation. Once all the configurations are tested, their magnitudes in terms of phase value are reduced by half for a further refinement step.

Figure 5: Simulated ADF images of various probe shapes (see the insets) and their Fourier transforms. The line profiles (orange lines) are taken at the position of the white dashed line in each image. Note that the intensity profile in the round aperture image is halved for better presentation, and that the black line in the difference image indicates zero, while in other images zero is set at the bottom of the figures.

The process is demonstrated by inserting the phase plate in the C2 aperture of a probe- and image-corrected FEI Titan operating at 300 keV in microprobe mode with a convergence angle of 1 mrad (to minimize the effect of aberrations and partial coherence effects). The HAADF image is taken from a gold cross-grating test sample with a deliberately introduced initial defocus of approximately 1 \(\mu\)m. The result of the correction is shown in Figure 6b. The process converges after 32 iterations with a sharper resulting image, even though 1 \(\mu\)m defocus cannot be entirely compensated by the phase plate due to the steep phase profile.
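The loop just described can be written down compactly. The sketch below assumes hypothetical hardware stubs `set_phase()` and `acquire_haadf()` (these names are ours, not part of any instrument API) and uses the image variance as the figure of merit, as in the text:

```python
import numpy as np

def optimize_probe(set_phase, acquire_haadf, zernike_modes, n_rounds=4, amp=np.pi):
    """Greedy variance maximization over discretized low-order Zernike modes.

    zernike_modes: list of (48,) arrays, each a Zernike polynomial sampled at
    the 48 pixel centers. set_phase/acquire_haadf are hardware stubs.
    """
    best_phase = np.zeros(48)
    set_phase(best_phase)
    best_score = acquire_haadf().var()   # variance as 'image sharpness' proxy
    for _ in range(n_rounds):
        for mode in zernike_modes:
            for sign in (+1.0, -1.0):
                trial = best_phase + sign * amp * mode
                set_phase(np.mod(trial, 2 * np.pi))  # phases are 2*pi periodic
                score = acquire_haadf().var()
                if score > best_score:               # keep the sharper image
                    best_phase, best_score = trial, score
        amp *= 0.5                                   # halve magnitudes and refine
    set_phase(np.mod(best_phase, 2 * np.pi))
    return best_phase, best_score
```

This is only one possible control loop; gradient-free optimizers with better sample efficiency could replace the greedy search without changing the interface.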
The result shows the feasibility of counteracting the lens defocus automatically. The process takes approximately 1 minute, but this time is currently dominated by sub-optimal software handshaking between scan engine control, image readout, and phase plate control, and can be dramatically improved in the future. As an estimate, with the assumption that an update can be made by evaluating a minimum area of 100x100 pixels at 1 \(\mu\)s dwell time (a reasonable dwell time to produce HAADF images with an acceptable noise level), the update rate for the correction scheme would be 1 kHz. This frequency is easily within reach of the phase plate, which currently offers a maximum update rate of 100 kHz, limited by the electronics. This would result in an adaptively optimized image within 10 ms, which would be a small fraction of the time to take, e.g., a full 1024x1024 frame.

Figure 6: Schematic of the adaptive probe correction with the phase plate. The HAADF images before and after the correction are shown below.

Of course, this time depends on the beam current, as enough image quality is required to make good decisions on the next step. Further work is needed to evaluate the best goal function and the most suitable control loop, but the proof of concept demonstrates the scheme's feasibility. This process could bring significant benefits for the automation of microscopy experiments. Automatic data acquisition and feature identification are widely used for life science research and quality control in the semiconductor industry. With them, the analysis of large amounts of samples can be done without operator intervention, and the demonstrated probe correction scheme can be utilized for maintaining the quality of the optical system over a much longer operation time. This iterative optimization process can also be extended to any technique in electron microscopy where a specific quantifiable property is related to the shape or phase of the electron probe. For example, in electron energy loss spectroscopy, the intensity of a specific plasmon peak can be tracked while reshaping the electron probe until the optimal probe shape selectively highlights the corresponding plasmon mode [59].

### Phase programmed Ptychography

Besides shaping a focused electron beam, the phase modulation capability of the phase plate operating under parallel-beam conditions can bring new opportunities for other microscopy applications. For example, coherent diffraction imaging and ptychography can benefit from using the phase plate as a "modulator" or a "diffuser" to break symmetry in the illuminating beam and thus increase the robustness and convergence rate of the reconstruction. The benefit of a modulator has been widely reported and studied in the field of light microscopy [60] and electron microscopy [61, 62, 63]. Among the reported realizations of ptychography in electron microscopy, with or without a modulator, the reconstruction of the complex object relies on repeated sampling at different locations of the object, with the criterion that the illuminating beam partially overlaps with the sampling at a nearby position. This overlap creates the so-called "information redundancy" [64, 65], which eliminates the twin-image artifact [66] that originates from the central symmetry of the illuminating beam. On the other hand, such symmetry can be easily broken by a random phase configuration introduced by the phase plate, instead of by displacement of the beam or the sample.
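A minimal single-position GS loop illustrates why a known random modulator helps: the illumination is treated as known (an idealization of the calibrated phase plate), and only the object is updated. This is a sketch of the principle, not the reconstruction code used for Figure 7:

```python
import numpy as np

def gs_reconstruct(measured_intensity, illumination, n_iter=50):
    """Single-position GS retrieval with a known structured illumination.

    measured_intensity: far-field intensity |FFT(illumination * object)|^2
    illumination: known complex illumination (e.g., a random-phase plate).
    Returns an estimate of the complex object transmission function.
    """
    amp = np.sqrt(measured_intensity)
    obj = np.ones_like(illumination)              # start from a flat object
    lit = np.abs(illumination) > 0                # only illuminated pixels update
    for _ in range(n_iter):
        far = np.fft.fft2(illumination * obj)
        far = amp * np.exp(1j * np.angle(far))    # impose the measured modulus
        exit_wave = np.fft.ifft2(far)
        obj = np.where(lit, exit_wave / (illumination + 1e-12), obj)
    return obj
```

With a centrosymmetric illumination, the object and its twin satisfy the same constraints and the loop stalls; a random pixel phase removes that degeneracy, which is the effect demonstrated next.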
We hereby demonstrate this concept by performing phase reconstruction on simulated diffraction patterns from a target pure phase object (Figure 7a). The diffraction patterns are generated by different illuminating waves, formed with a round aperture, the phase plate set to zero, and one randomly generated phase configuration. The phase reconstruction is again based on the GS algorithm, and the resolved objects are obtained after 50 iterations. The results are shown in Figure 7(b-d). Neither a round aperture nor a zero phase plate could generate a convincing reconstruction result, as the geometry of both apertures is centrosymmetric. However, introducing a random phase configuration increases the reconstruction quality significantly, despite the sample being illuminated at only one beam position. The amplitude modulation of the phase plate inevitably results in missing information about the reconstructed object, which could be filled in by moving the beam or the sample to illuminate the whole region of interest at least once. It should be noted here that the phase plate is placed in front of the sample, and all electrons interacting with the sample are recorded. This means that the limited fill factor does not reduce the electron dose efficiency nor increase beam damage on the sample.

## 4 Conclusion

We report the successful realization of arbitrary wavefront shaping of electrons with a novel 48-pixel programmable electrostatic phase plate. The phase plate is capable of introducing a phase shift of more than \(60\pi\), as well as fine-tuning the phase value with a step size as small as 3\(\cdot\)10\({}^{-3}\)\(\pi\) for 300 keV coherent electron beams. Cross-talk between pixels was shown to be \(<15\%\) and can be improved further with better shielding electrode geometries. This brings modern adaptive light optics concepts into the domain of electron beam instruments. The rapid response of the device allows up to 100 kHz update rates, making it possible to do on-the-fly auto-tuning of differential contrast schemes without a noticeable recording-time penalty for the user. The examples demonstrate the potential for a rich field of emerging applications offered by the phase degree of freedom. Immediate use cases focus on electron microscopy, but other electron beam instruments, such as e-beam lithography or semiconductor inspection tools, could also profit significantly from this realization. From an even broader perspective, the arbitrary preparation of coherent quantum states demonstrated here might be exploited in novel quantum information/computing schemes over a much wider range of electron energies.

Figure 7: Simulated ptychographic phase reconstruction from recorded diffraction patterns with various illuminating beams. (a) The ground truth phase image of the object. (b-d) Reconstruction results from illuminating beams formed by a conventional round aperture, a flat phase plate, and a phase plate with a random phase configuration, respectively. The dark region indicates the opaque part of the aperture. Note the significant improvement in phase reconstruction quality when the incoming beam is phase-randomized. As the object is only illuminated once, reconstruction is only possible in those areas where the amplitude is not zero.

## Acknowledgements

All authors want to thank Gert de Bont for providing the graphics in Figure 1 and Stijn Van den Broek for never-ending support with the focused ion beam instrument.
**Funding information:** This project is the result of a long-term effort involving many different sources of funding: JV acknowledges funding from an ERC proof of concept project DLV-789598 ADAPTEM, as well as a University IOF proof of concept project towards launching the AdaptEM spin-off, and the eBEM project, supported by the European Union's Horizon 2020 research and innovation program FETPROACT-EIC-07-2020: emerging paradigms and communities. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 823717 - ESTEEM3 and via the IMPRESS project from the HORIZON EUROPE framework program for research and innovation under grant agreement no. 101094299. FV, JV, and AB acknowledge funding from G042820N 'Exploring adaptive optics in transmission electron microscopy.' CPY acknowledges funding from a TOP-BOF project from the University of Antwerp.
2307.08967
New estimation of the nuclear de-excitation line emission from the supernova remnant Cassiopeia A
MeV nuclear de-excitation lines serve as a unique tool to study low-energy cosmic rays (CRs), containing both spectral and elemental information of the interacting material. In this paper, we estimated the possible nuclear de-excitation lines from the young supernova remnant Cassiopeia A. Given different CR spectral shapes and interacting materials, we found the predicted fluxes of strong narrow line emissions from the remnant are highly model-dependent, ranging from about $1\times10^{-10}\,{\rm \,cm^{-2}\,s^{-1}}$ to $1\times10^{-6}\, {\rm \,cm^{-2}\,s^{-1}}$ for the 4.44 MeV narrow line and from about $4\times10^{-11}\,{\rm \,cm^{-2}\,s^{-1}}$ to $2\times10^{-7}{\rm \,cm^{-2}\,s^{-1}}$ for the 6.13 MeV narrow line, respectively. Based on the new estimation, we also discussed the detection probability of these line emissions against the MeV diffuse Galactic background under different assumptions of instrument response functions.
Bing Liu, Rui-zhi Yang, Xin-yu He, Felix Aharonian
2023-07-18T04:38:04Z
http://arxiv.org/abs/2307.08967v2
# New estimation of the nuclear de-excitation line emission from the supernova remnant Cassiopeia A

###### Abstract

MeV nuclear de-excitation lines serve as a unique tool to study low-energy cosmic rays (CRs), containing both spectral and elemental information of the interacting material. In this paper, we estimated the possible nuclear de-excitation lines from the young supernova remnant Cassiopeia A. Given different CR spectral shapes and interacting materials, we found the predicted fluxes of strong narrow line emissions from the remnant are highly model-dependent, ranging from about \(1\times 10^{-10}\) cm\({}^{-2}\) s\({}^{-1}\) to \(1\times 10^{-6}\) cm\({}^{-2}\) s\({}^{-1}\) for the 4.44 MeV narrow line and from about \(4\times 10^{-11}\) cm\({}^{-2}\) s\({}^{-1}\) to \(2\times 10^{-7}\) cm\({}^{-2}\) s\({}^{-1}\) for the 6.13 MeV narrow line, respectively. Based on the new estimation, we also discussed the detection probability of these line emissions against the MeV diffuse Galactic background under different assumptions of instrument response functions.

keywords: cosmic rays - gamma-rays: ISM - ISM: individual objects: Cassiopeia A - ISM: supernova remnants

## 1 Introduction

Cosmic rays (CRs) with kinetic energy below 1 \(\mathrm{GeV}/\mathrm{nucleon}\), often referred to as low-energy CRs (LECRs), are most efficient at ionizing and heating gases and play an important role in star formation and astrochemistry (Papadopoulos, 2010; Gabici, 2022). In direct measurements of CR spectra within the solar system, the flux of LECRs is strongly suppressed by solar modulation effects. Recently, the Voyager satellite has measured the LECR spectra beyond the heliopause (Cummings et al., 2016). However, it is not clear that the LECR spectra measured by Voyager are representative of the LECRs elsewhere in the Galaxy. Due to the fast cooling and slow propagation of LECRs, their flux, indirectly estimated via the ionization rate of gases, shows a rather inhomogeneous distribution in the Galactic plane (e.g., Indriolo and McCall, 2012). Supernova remnants (SNRs) are thought to be the most prominent CR accelerators in our Galaxy. The \(\gamma\)-ray observations in the energy range of 0.1 - 10 GeV from AGILE and Fermi-LAT have shown strong evidence that SNRs do accelerate CR protons to high energies (Giuliani et al., 2011; Ackermann et al., 2013). Observations of the large ionization rates from molecular material near SNRs, such as IC 443, W28, and W49B, suggest that SNRs may also accelerate a large population of LECRs (Indriolo et al., 2010; Vaupre et al., 2014; Zhou et al., 2022). However, due to the kinetic energy threshold of the pion-decay process (\(\sim\) 280 MeV), we know little about the injection spectrum of LECRs from SNRs. In addition to the ionization effects, the inelastic collisions between LECRs and interstellar gases can excite heavy nuclei, which then emit MeV \(\gamma\)-ray lines via de-excitation, such as the 4.44 MeV line from \({}^{12}\)C and the 6.13 MeV line from \({}^{16}\)O (e.g., Ramaty et al., 1979; Murphy et al., 2009). Thus, from observation of these line emissions, we may derive unique information about the injection of LECR nuclei from accelerators such as SNRs, with the advantage of excluding the influence of CR electrons (e.g., Benhabiles-Mezhoud et al., 2013; Liu et al., 2021). Cassiopeia A (Cas A, G111.7-02.1) is the remnant of a massive star explosion \(\sim\)340 years ago (Fesen et al., 2006; Krause et al., 2008).
As one of the youngest SNRs in our Galaxy, Cas A has been thoroughly investigated through multiwavelength observations, although many open questions remain under debate. Located about 3.4 kpc away from the solar system (Reed et al., 1995), it is one of the brightest sources in the radio band and shows a significant shell structure with an angular radius of 2.5' (or physical size of 2.5 pc) (Kassim et al., 1995). The synchrotron radiation extends from the infrared (Tuffs et al., 1997) to X-rays of about 100 keV (Grefenstette et al., 2017). Although the origin of the X-ray radiation is still under debate, Laming (2001a,b) argued that non-thermal bremsstrahlung can also explain the observed X-ray flux. Early Fermi-LAT observations reveal a hint of the hadronic origin of the \(\gamma\)-ray emissions (Abdo et al., 2010; Yuan et al., 2013). A TeV signal from Cas A was also detected (Puehlhofer, 1999), and a significant cutoff at several TeV was revealed by MAGIC and VERITAS observations (Ahnen et al., 2017; Abeysekara et al., 2020). Despite the continuing debate about whether Cas A is a PeVatron or not, a pure hadronic or hybrid origin is preferred to the pure leptonic scenario when explaining the GeV-TeV \(\gamma\)-ray emission from Cas A (e.g., Zirakashvili et al., 2014; Ahnen et al., 2017; Zhang and Liu, 2019; Abeysekara et al., 2020). Thus, one would expect possible MeV de-excitation line emissions arising from LECR nuclei accelerated by Cas A interacting with the surrounding medium. In this study, we investigate the potential MeV nuclear line emission from Cas A under different assumptions for the injected CR spectra, which are constrained by recent observations in the GeV-TeV range. Various scenarios of the interacting medium are also considered in the calculations, applying the latest estimates of the chemical abundances of the ejecta and ambient gas. Moreover, the detection capabilities of the line emissions against the continuum background are also discussed with regard to the angular and energy resolutions of the next-generation MeV telescopes.

## 2 CR spectra and the medium composition around Cas A

Before calculating the possible de-excitation \(\gamma\)-ray line emission from Cas A, we need to have a general idea of the spectral shape of the accelerated particles and the composition of the interacting medium. Both factors have huge impacts on the estimation results. Given the angular resolution of the next-generation MeV telescopes (typically \(\gtrsim 2^{\circ}\) in the MeV band) and the distance of Cas A, the line emission from Cas A is very likely to be observed as a point-like source (Liu et al., 2021). Thus, the spatial distributions of the accelerated LECRs, the interacting medium, as well as the resulting line emission will not be considered in this work.

### Spectral distribution of the Cas A accelerated particles

SNRs are widely accepted as one of the main classes of CR sources in our Galaxy, and they are expected to accelerate relativistic particles with spectra close to simple power laws in momentum \(p\) via diffusive shock acceleration (e.g., Bell, 1978; Blandford and Ostriker, 1978). In the "test-particle" limit, the CR production rate is \(q\propto p^{-\chi}\), with momentum index \(\chi\geqslant 2\), the equality holding in the case of strong shocks.
However, when the nonlinear effects are considered, i.e., the feedback of CR energy and pressure on the shock, the accelerated particles may have spectra that show some concavity in momentum space, and the corresponding low-energy flux will be higher than the test-particle predictions (e.g., Amato and Blasi, 2005; Caprioli et al., 2011). The energy loss times of injected protons with kinetic energy \(E\) of 1 MeV and 10 MeV are about \(2\times 10^{3}\) yrs and \(4\times 10^{4}\) yrs, assuming an average medium density of 10 cm\({}^{-3}\). Given the relatively young age of Cas A, the deformation of the CR spectra away from the freshly injected spectra of Cas A can be ignored. Thus, for simplicity, we chose a piece-wise power law in proton momentum \(p(E)\) with an exponential cutoff at \(E_{\rm cut}\) to describe the spectral distribution of CR protons. The injection flux \(F(E)\) is given by \[F(E)=\left\{\begin{array}{ll}N_{0}\left[\frac{p(E)}{p(E_{\rm b})}\right]^{-\alpha_{1}}\exp\left[\frac{-p(E)}{p(E_{\rm cut})}\right],&\mbox{if }E\geq E_{\rm b}\\ N_{0}\left[\frac{p(E)}{p(E_{\rm b})}\right]^{-\alpha_{2}}\exp\left[\frac{-p(E)}{p(E_{\rm cut})}\right],&\mbox{if }E<E_{\rm b}\end{array}\right. \tag{1}\] Here, the cutoff energy \(E_{\rm cut}=10\) TeV, and the break energy \(E_{\rm b}\) is set to 0.2 GeV, below which the protons do not participate in the pion production process and lose their energy mainly via ionization.

### Elemental abundances of the interacting medium

According to multiwavelength observations, the ejecta of Cas A is dominated by oxygen (\(\sim 2.55\) M\({}_{\odot}\)), and the rest is comprised mainly of Ne, Si, S, Ar, and Fe, with a total mass of about 3-3.5 M\({}_{\odot}\). Meanwhile, the composition of the circumstellar medium (CSM) shows an enhancement of N and He relative to the solar abundances, and the mass of the shocked CSM is about 10 M\({}_{\odot}\) (e.g., Chevalier and Kirshner, 1978, 1979; Docenko and Sunyaev, 2010; Hwang and Laming, 2012; Laming and Temim, 2020). The ejecta abundances vary with position in the remnant. For example, the study of the X-ray-emitting ejecta in Cas A using the _Chandra_ 1 Ms observation finds sub-solar ratios (Hwang and Laming, 2012), while the study of "bulk" unshocked ejecta using the IR Spitzer data finds super-solar ratios (Laming and Temim, 2020). To carry out our estimation, despite the very large uncertainties that remain in the exact composition of the ejecta, we refer to the "average" number density ratios (relative to O) summarized in Table 9 of Docenko and Sunyaev (2010) for Ne, Mg, Si, S, Ar, and Fe, and adopt the solar values (relative to O) for C, N, and Ca. In addition, we vary the density ratios of H and He, from 0.01 to 0.05 and from 0.1 to 0.5 respectively, to check the impact of their uncertain densities in the ejecta on the flux of the line emission. For the CSM, following Hwang and Laming (2012), we set the abundance of He to 3 times the solar value, that of N to 15 times the solar value, and solar values for the rest. Meanwhile, we apply the Voyager measurement of local LECRs (see Cummings et al., 2016, Table 3) as a simplified assumption for the abundances of the Cas A accelerated CRs. The elemental compositions assumed for the calculation are summarized in Table 1.
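Returning to the spectral model, a direct transcription of Eq. (1) makes the role of the parameters explicit. The following minimal Python sketch is illustrative only; the normalization \(N_{0}\) is arbitrary here, whereas in the actual calculation it is fixed by the GeV-TeV data, as described in the next section:

```python
import numpy as np

MP = 0.93827  # proton rest mass [GeV]

def p_of_E(E):
    # Proton momentum [GeV/c] for kinetic energy E [GeV]: p = sqrt(E (E + 2 m_p))
    return np.sqrt(E * (E + 2.0 * MP))

def injection_flux(E, N0=1.0, alpha1=2.1, alpha2=3.0, E_b=0.2, E_cut=1e4):
    # Piecewise power law in momentum with an exponential cutoff, Eq. (1);
    # E_b = 0.2 GeV and E_cut = 10 TeV follow the values quoted in the text.
    alpha = np.where(E >= E_b, alpha1, alpha2)
    shape = (p_of_E(E) / p_of_E(E_b)) ** (-alpha)
    return N0 * shape * np.exp(-p_of_E(E) / p_of_E(E_cut))

E = np.logspace(-3, 4, 200)   # kinetic energy [GeV], from 1 MeV to 10 TeV
F = injection_flux(E)         # continuous at E_b by construction
```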
We note that a thorough investigation into the composition of the Cas A accelerated particles requires comprehensive knowledge of the acceleration and escape of the particles, as well as of their diffusion into and mixing with the surrounding medium. Such research is beyond the scope of this work, which only serves as an estimate awaiting valuable information from next-generation MeV \(\gamma\)-ray detectors. ## 3 Possible de-excitation line emission from Cas A For the calculation of the \(\gamma\)-ray line emissions, we applied the same procedure as described in Section 3.1 of Liu et al. (2021), which followed the method developed by Ramaty et al. (1979); Murphy et al. (2009); Benhabiles-Mezhoud et al. (2013), except that we used the newest version (1.96) of the code TALYS (Koning et al., 2008; Koning et al., 2014). Two main channels of the interactions are considered here. One is the _direct_ process, in which the CR protons and \(\alpha\)-particles as projectiles excite heavier elements of the ambient gas and generate narrow \(\gamma\)-ray lines; the other is the _inverse_ process, in which the hydrogen and helium of the ambient gas excite the heavy nuclei of the LECRs and produce \(\gamma\)-ray line emission that is broadened. Given the large difference in the elemental compositions between the ejecta and the CSM, here we consider three cases to study the possible influence of various interacting materials. One case (hereafter case 1) is that the \(\gamma\)-ray emission is generated by the shock-accelerated nuclei interacting only with the ejecta. The second one (case 2) is that the emission is produced by the accelerated nuclei colliding with the CSM. Moreover, a hybrid case (case 3), in which half of the accelerated CRs interact with the ejecta while the other half interact with the CSM, is also calculated. An estimation of the \(\gamma\)-ray lines from the same population of CRs colliding with a gas medium of solar abundance is also made for further comparison. For each case, we tested possible proton spectra with different assumptions of \(\alpha_{1}\) ranging from 2.0 to 2.7 while setting \(\alpha_{2}=\alpha_{1}\), 3.0, and 4.0, respectively. Meanwhile, the GeV-TeV \(\gamma\)-ray spectral data from the recent study of Abeysekara et al. (2020) are used to constrain the overall flux of the accelerated particles, which provides us with a maximum for \(N_{0}\) given a certain density and composition of the interacting medium. Here we applied Eq. (20) of Kafexhiu et al. (2014) to calculate the nuclear enhancement factor \(\epsilon\), then derived the effective density \(\epsilon n_{\rm p}n_{\rm H}\) for the pion-decay process, in which \(n_{\rm p}\) represents the density of the accelerated protons and \(n_{\rm H}\) is the hydrogen density of the interacting medium. Examples of the pion-decay emission derived from the above spectral parameter settings are shown in Fig. 1. Considering the uncertainty of the relative densities of H and He in the ejecta, we varied their number ratios from 0.01 to 0.05 and from 0.1 to 0.5, respectively, for the calculation of case 1. We found that such changes have very little impact (\(\lesssim 3\%\)) on the overall line flux under the premise of oxygen domination. For a more realistic estimation, we also considered the Doppler effect caused by the movement of the ejecta (e.g., Milisavljevic and Fesen, 2013), and adopted a \(\Delta V\) of \(\sim 6000\) km s\({}^{-1}\) from the recent study of Picquenot et al. (2021) for all the elements.
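The magnitude of this broadening is easy to gauge with a back-of-the-envelope sketch (our own, treating \(\Delta V\) directly as the FWHM velocity spread of the line-emitting ejecta):

```python
C_KMS = 299_792.458  # speed of light [km/s]

def doppler_width(E_line_mev, dv_kms=6000.0):
    """FWHM broadening [MeV] of a line at E_line_mev for a velocity spread dv_kms."""
    return E_line_mev * dv_kms / C_KMS

for E in (4.44, 6.13):
    w = doppler_width(E)
    print(f"{E:.2f} MeV line: ~{w:.2f} MeV ({100 * w / E:.0f}%)")
```

This amounts to roughly 2% of the line energy, comparable to the intrinsic recoil broadening discussed in Sect. 4.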
The resulting MeV nuclear de-excitation line emissions for case 1 are exemplified in Fig. 2. As shown in Fig. 2, the line fluxes increase with the spectral index, and the presence of a concavity in momentum space leads to much stronger line emissions. The same trend is also found for cases 2 and 3. However, assuming the same CR spectral shape, the line fluxes of case 2 are much lower than those of case 1, and the morphological difference of the MeV \(\gamma\)-ray emission is also very obvious, as illustrated by the solid lines in Fig. 3. Such contrast reflects the much more abundant heavy nuclei in the oxygen-dominated ejecta compared to those in the CSM. Taking the possible Doppler broadening into account, for case 1 and case 3, the FWHM widths (\(\Delta E\)) of the 4.44 MeV line and the 6.13 MeV line are \(\sim 0.19\) MeV and \(\sim 0.22\) MeV, respectively. The integrated narrow-line fluxes of the 4.44 MeV and 6.13 MeV lines for these three cases are summarized in Table 2, in which the minimum and maximum are obtained when setting \(\alpha_{1}=\alpha_{2}=2.0\) and \(\alpha_{1}=2.7\), \(\alpha_{2}=4.0\), respectively. As shown in Table 2, the integrated line fluxes range over \(\sim(1\times 10^{-10}-1\times 10^{-6})\) cm\({}^{-2}\) s\({}^{-1}\) for the 4.44 MeV narrow line and \(\sim(4\times 10^{-11}-2\times 10^{-7})\) cm\({}^{-2}\) s\({}^{-1}\) for the 6.13 MeV narrow line. Our estimate of the integrated 4.44 MeV line flux assuming \(\alpha_{1}=\alpha_{2}=2.1\) for case 1 is \(\sim 1.4\times 10^{-8}\) cm\({}^{-2}\) s\({}^{-1}\), much lower than the estimate of Summa et al. (2011). Indeed, the target density, the composition of the medium, and the LECR flux can vary dramatically with the distance to the forward shock, so using a uniform density and composition can be quite biased in estimating the MeV line emission. This is also the motivation for considering three cases in the discussion above.
## 4 Discussion and Conclusion

Regardless of the uncertainties from the experimental data and the TALYS simulation data, as shown in Sect. 3, the de-excitation line fluxes resulting from the interaction between the Cas A accelerated CR nuclei and the medium are highly model-dependent: for a given case, the predicted line fluxes vary by two orders of magnitude due to different settings of the spectral indexes; meanwhile, the narrow line fluxes can also differ by two orders of magnitude due to the variation in the elemental compositions of the interacting medium.

\begin{table}
\begin{tabular}{l c c c c}
\hline
 & CR\({}^{\rm a}\) & Solar\({}^{\rm b}\) & CSM\({}^{\rm c}\) & Ejecta\({}^{\rm d}\) \\
 & \(n_{\rm el}/n_{\rm H}\) & \(n_{\rm el}/n_{\rm H}\) & (relative to solar) & \(n_{\rm el}/n_{\rm O}\) \\
\hline
H & 1 & 1 & 1 & 0.01-0.05 \\
He & \(8.140\times 10^{-2}\) & \(8.414\times 10^{-2}\) & 3 & 0.1-0.5 \\
C & \(1.671\times 10^{-3}\) & \(2.455\times 10^{-4}\) & 1 & 0.5 \\
N & \(2.444\times 10^{-4}\) & \(7.244\times 10^{-5}\) & 15 & 0.1 \\
O & \(1.570\times 10^{-3}\) & \(5.370\times 10^{-4}\) & 1 & 1 \\
Ne & \(1.507\times 10^{-4}\) & \(1.122\times 10^{-4}\) & 1 & 0.02 \\
Mg & \(2.264\times 10^{-4}\) & \(3.467\times 10^{-5}\) & 1 & 0.005 \\
Si & \(1.898\times 10^{-4}\) & \(3.388\times 10^{-5}\) & 1 & 0.05 \\
S & \(2.087\times 10^{-5}\) & \(1.445\times 10^{-5}\) & 1 & 0.05 \\
Ar & \(4.554\times 10^{-6}\) & \(3.162\times 10^{-6}\) & 1 & 0.005 \\
Ca & \(1.195\times 10^{-5}\) & \(2.042\times 10^{-6}\) & 1 & 0.004 \\
Fe & \(1.152\times 10^{-4}\) & \(2.884\times 10^{-5}\) & 1 & 0.005 \\
\hline
\end{tabular}
\({}^{\rm a}\) Voyager measurement of local LECRs (Cummings et al., 2016, Table 3). \({}^{\rm b}\) Solar abundances. \({}^{\rm c}\) Abundances relative to the solar values, following Hwang and Laming (2012). \({}^{\rm d}\) Number density ratios relative to O, mainly from Table 9 of Docenko and Sunyaev (2010); the H and He ratios are varied within the quoted ranges to check their impact (Sec. 2.2).
\end{table}
Table 1: The elemental compositions assumed for the calculation.

Figure 1: Examples of the \(\gamma\)-ray emission produced via the pion-decay process from Cas A with various assumptions of \(\alpha_{1}\) and \(\alpha_{2}\). The data points are adopted from the GeV-TeV \(\gamma\)-ray observations of Abeysekara et al. (2020). Details are described in Sec. 2.1.

The continuum MeV \(\gamma\)-rays contributed by the diffuse Galactic emission and by Cas A itself should be taken into account when discussing the detectability of the line emission. Based on the observations of SPI aboard INTEGRAL, Siegert et al. (2022) re-analyzed the diffuse Galactic emission between 0.5 and 8.0 MeV by fitting energy-dependent spatial-template GALPROP models (Vladimirov et al., 2011) within a region of \(\Delta l\times\Delta b=95\degr\times 95\degr\) around the Galactic center. They found that this diffuse background is mainly contributed by inverse-Compton (IC) scattering of CR electrons off the interstellar radiation field, with the bremsstrahlung component possibly accounting for \(\sim 10\%\). We estimate the diffuse MeV emission flux in the direction of Cas A by extrapolating this newest measurement spatially. The possible diffuse background flux in the direction of Cas A within the 1\(\sigma\) uncertainty is shown as the shaded areas in Fig. 4 and Fig. 5, in which the angular resolution of the telescope is assumed to be \(2\degr\) and \(5\degr\), respectively. As for the continuum MeV emission from Cas A itself, which is mainly contributed by bremsstrahlung in the hadronic scenarios, the modeled fluxes are well below or at the same level as the extrapolated diffuse background (e.g., Zhang and Liu, 2019; Abeysekara et al., 2020). Thus, the potential influence of the bremsstrahlung on the detection of the line emissions would be weaker than or similar to that of the diffuse background and, for simplicity, is not considered further.
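The importance of the angular resolution follows from a simple solid-angle argument (our own sketch, assuming an approximately uniform diffuse background across the point-spread function): for a point-like source the signal is fixed, while the collected background grows with the PSF solid angle.

```python
import numpy as np

def cone_solid_angle(theta_deg):
    """Solid angle [sr] within a cone of half-opening angle theta_deg."""
    return 2.0 * np.pi * (1.0 - np.cos(np.deg2rad(theta_deg)))

# Diffuse background entering a 5-degree versus a 2-degree extraction region:
ratio = cone_solid_angle(5.0) / cone_solid_angle(2.0)
print(f"background ratio: {ratio:.1f}")  # ~6.2, while the point-source signal is unchanged
```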
Due to the highly model-dependent calculation of the possible nuclear \(\gamma\)-ray line emission from Cas A, future observations by the next-generation MeV telescopes may help us locate the interacting region(s) and constrain the injected CR spectral shape, which can serve as a diagnostic of particle acceleration and escape in Cas A. To be specific, if the interacting material is dominated by ejecta, strong narrow lines such as the 4.44 MeV and 6.13 MeV lines are more likely to be detected. Moreover, the existence of a concavity in the CR injection spectra (as exemplified by the dash-dot lines in Fig. 4) would make the detection easier for case 1 (black) and case 3 (blue), even if the flux of the background continuum is very high due to the limited angular resolution, while a better angular resolution can increase the possibility of detection for case 2 (red). In general, with the possible diffuse background emission added, the detection of the MeV line emission from the point-like source SNR Cas A will be more likely for instruments with better angular resolutions (\(\lesssim 2.0\degr\)). In addition, we checked the influence of the energy resolution (\(\Delta E/E\), FWHM) of the telescopes on the detectability. The results are shown in Fig. 5, assuming a \(\Delta E/E\) of 2% (solid lines) or 10% (dotted lines), respectively. We found that the energy resolution is crucial for the detection of such line features. However, as calculated in the section above, the narrow de-excitation lines have an intrinsic Doppler broadening of about 2% caused by the recoil of the excited nuclei, and an additional broadening of about 2% due to the movements of the ejecta-dominated medium. Thus, an energy resolution better than this value can hardly improve the detection sensitivity. In conclusion, we estimated the possible MeV line emission from LECRs interacting with the ambient gas in the SNR Cas A under simplified conditions. We found that if the accelerated CRs mainly interact with the ejecta, the line signal from Cas A will be more prominent due to the higher ratio of heavy nuclei therein, and the potential softening of the LECR spectra caused by CR feedback on the shock would further enhance the MeV line emissions. We also found that the diffuse MeV \(\gamma\)-ray emission in the Galactic plane may be the main background in the detection of such line features. Both angular resolution and energy resolution play a significant role in detecting these MeV lines from Cas A. Based on our model-dependent prediction, the integrated narrow line fluxes range from \(\sim 1\times 10^{-10}\) cm\({}^{-2}\) s\({}^{-1}\) to \(1\times 10^{-6}\) cm\({}^{-2}\) s\({}^{-1}\) at 4.44 MeV and from \(\sim 4\times 10^{-11}\) cm\({}^{-2}\) s\({}^{-1}\) to \(2\times 10^{-7}\) cm\({}^{-2}\) s\({}^{-1}\) at 6.13 MeV.
Thus, the next-generation MeV instruments with a line flux sensitivity of about \(10^{-6}\) cm\({}^{-2}\) s\({}^{-1}\), such as e-ASTROGAM, AMEGO, and COSI (de Angelis et al., 2018; McEnery et al., 2019; Tomsick and COSI Collaboration, 2022), may have chances of detecting these unique spectral features in Cas A, but detectors with sensitivities exceeding the capabilities of the projects mentioned above would be more promising for the study of such individual CR sources. The recently proposed large-scale space projects such as MeGaT (Zhang et al. 2023, private communication) and MeVGRO (Peng et al. 2023, private communication) give optimism that probes of the low-energy particles in Cas A could be realized through the detection of prompt nuclear de-excitation line emission.

\begin{table}
\begin{tabular}{l c c c}
\hline
 & Medium & 4.44 MeV & 6.13 MeV \\
\hline
case 1 & Ejecta & (0.09–7.71)\(\times 10^{-7}\) & (0.04–2.10)\(\times 10^{-7}\) \\
case 2 & CSM & (0.10–8.84)\(\times 10^{-9}\) & (0.04–2.66)\(\times 10^{-9}\) \\
case 3 & Ejecta+CSM & (0.05–3.93)\(\times 10^{-7}\) & (0.02–1.06)\(\times 10^{-7}\) \\
\hline
\end{tabular}
\({}^{\rm a}\) The flux ranges (in units of cm\({}^{-2}\) s\({}^{-1}\)) are obtained from different assumptions on the CR spectral indexes (\(\alpha_{1}\) and \(\alpha_{2}\)). Details are described in Sec. 3.
\end{table}
Table 2: Integrated 4.44 MeV and 6.13 MeV line fluxes of the various cases\({}^{\rm a}\).

Figure 2: Comparison of estimated MeV \(\gamma\)-ray differential spectra with different spectral settings of \(\alpha_{1}\) and \(\alpha_{2}\) for case 1, as described in Sec. 3.

Figure 3: Comparison of estimated MeV \(\gamma\)-ray differential spectra of Cas A with \(\alpha_{1}=\alpha_{2}=2.7\) for the different cases described in Sec. 3.

## 5 Acknowledgements Bing Liu acknowledges the support from the NSFC under grant 12103049. Rui-Zhi Yang is supported by the NSFC under grant 12041305 and the National Youth Thousand Talents Program of China. ## 6 Data availability To calculate the emissivities of the de-excitation \(\gamma\)-ray lines, we used the code TALYS (version 1.96, Koning et al. 2008), which can be downloaded from [https://tendl.web.psi.ch/tendl_2019/talys.html](https://tendl.web.psi.ch/tendl_2019/talys.html). For a better match with the experimental data, we modified the deformation files of \({}^{14}\)N, \({}^{20}\)Ne, and \({}^{28}\)Si using the results of Benhabiles-Mezhoud et al. (2011). We also used the production cross sections of the specific lines listed in the compilation of Murphy et al. (2009).
2308.09224
Geometric characterizations for strong minima with applications to nuclear norm minimization problems
In this paper, we introduce several geometric characterizations for strong minima of optimization problems. Applying these results to nuclear norm minimization problems allows us to obtain new necessary and sufficient quantitative conditions for this important property. Our characterizations for strong minima are weaker than the Restricted Injectivity and Nondegenerate Source Condition, which are usually used to identify solution uniqueness of nuclear norm minimization problems. Consequently, we obtain the minimum (tight) bound on the number of measurements for (strong) exact recovery of low-rank matrices.
Jalal Fadili, Tran T. A. Nghia, Duy Nhat Phan
2023-08-18T00:56:58Z
http://arxiv.org/abs/2308.09224v1
Geometric characterizations for strong minima with applications to nuclear norm minimization problems ###### Abstract In this paper, we introduce several geometric characterizations for strong minima of optimization problems. Applying these results to nuclear norm minimization problems allows us to obtain new necessary and sufficient quantitative conditions for this important property. Our characterizations for strong minima are weaker than the Restricted Injectivity and Nondegenerate Source Condition, which are usually used to identify solution uniqueness of nuclear norm minimization problems. Consequently, we obtain the minimum (tight) bound on the number of measurements for (strong) exact recovery of low-rank matrices. Convex optimization; Strong minima; Sharp minima; Second order condition; Nuclear norm minimization; Exact recovery. **Mathematics Subject Classification** 52A41 90C25 49J53 49J52 ## 1 Introduction Strong minima is an important property of a local minimizer of an optimization problem: the difference between the cost value and the optimal value is bounded below by a positive multiple of the squared norm of the difference between the corresponding feasible point and the minimizer. It is an error bound condition with various applications to sensitivity analysis, robustness, and the complexity of algorithms [4, 5, 7, 8, 18, 22, 23, 24, 29, 34, 42, 44]. Finding necessary and sufficient second order conditions for _strong minima_ is a classical research area. For nonlinear programming, the first results in this direction were probably established in [22] under some restrictive conditions. Complete second order characterizations for nonlinear programming under mild conditions such as the Mangasarian-Fromovitz constraint qualification were obtained later in [4, 34]. For constrained (nonpolyhedral) optimization problems with smooth data, necessary and sufficient second order conditions for strong minima are much more involved. They often contain nontrivial "sigma terms", which represent curvatures of certain nonpolyhedral structures in the problem. Another important feature of these conditions is that they are usually formulated as "minimax" conditions, in the sense that the _Lagrange multipliers_ depend on the choice of vector in the critical cone; see, e.g., [7, 8]. Although the aforementioned sigma terms are fully calculated for many seminal classes of optimization problems such as semi-infinite programming, semi-definite programming, and second order cone programming, their calculation is complicated in general. Moreover, checking the (minimax) sufficient second order conditions is quite a hard task numerically. An important problem that motivates our study in this paper is the _nuclear norm minimization problem_ \[\min_{X\in\mathbb{R}^{n_{1}\times n_{2}}}\quad\|X\|_{*}\quad\text{subject to}\quad\Phi X=M_{0}, \tag{1.1}\] where \(\|X\|_{*}\) is the nuclear norm of an \(n_{1}\times n_{2}\) matrix \(X\), \(\Phi:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{m}\) is a linear operator, and \(M_{0}\) is a known vector (observation) in \(\mathbb{R}^{m}\). This problem is considered the tightest convex relaxation of the celebrated NP-hard _affine rank minimization problem_, with various applications in computer vision, collaborative filtering, and data science; see, e.g., [1, 13, 15, 16, 40]. There are several essential reasons to study strong minima of this problem.
First, strong minima of problem (1.1) guarantees the linear convergence of some proximal algorithms for solving problem (1.1) and related problems; see, e.g., [18, 29, 50]. Second, it is also sufficient for solution uniqueness and robustness of problem (1.1) [23, 24]. Solution uniqueness for problem (1.1) is a significant property in recovering the original low-rank solution \(X_{0}\in\mathbb{R}^{n_{1}\times n_{2}}\) from observations \(M_{0}=\Phi X_{0}\). In [15, 16], Candès and Recht introduced a _nondegenerate condition_ sufficient for solution uniqueness of problem (1.1). It plays an important role in their results on finding a small bound on the number of measurements \(m\) such that solving problem (1.1) recovers \(X_{0}\) exactly from the observations \(M_{0}\) under a Gaussian linear operator \(\Phi\). Their condition was recently revealed in [24] to be a complete characterization of the so-called _sharp minima_, introduced independently by Cromme [12] and Polyak [39]. This special property of problem (1.1) at \(X_{0}\) guarantees _robust recovery_ with a linear rate, in the sense that any solution of the following low-rank optimization problem \[\min_{X\in\mathbb{R}^{n_{1}\times n_{2}}}\quad\frac{1}{2}\|\Phi X-M\|^{2}+\mu\|X\|_{*} \tag{1.2}\] converges to \(X_{0}\) at a linear rate as \(\mu\downarrow 0\), provided that \(\|M-M_{0}\|\leq c\mu\) for some constant \(c>0\); see [14]. When strong minima occurs in problem (1.1), [24] shows that the convergence rate is Hölderian with order \(\frac{1}{2}\). It is also worth noting that solution uniqueness of the nuclear norm minimization problem (1.1) can be characterized geometrically via the _descent cone_ [1, 13]. As the descent cone is not necessarily closed, using it to check solution uniqueness numerically is not ideal. Another geometric characterization for solution uniqueness of problem (1.1) was established recently in [28], but the set in their main condition is not closed either. As strong minima is necessary for sharp minima and sufficient for solution uniqueness, the impact of strong minima on exact recovery is an open question. A sufficient second order condition for strong minima of problem (1.1) can be obtained from [18, Theorem 12]. The approach in [18] is to rewrite problem (1.1) as a _composite optimization problem_ and apply the classical results in [7, 8]. Some second order analysis of _spectral functions_, including the nuclear norm, studied in [17, 18, 20, 37, 51] could be helpful in understanding this result, but these second order computations applied to the nuclear norm still look complicated. Most importantly, the sufficient second order condition obtained in [18, Theorem 12] is still in a minimax form, which makes it hard to check. Our main questions throughout the paper are: 1. Is it possible to obtain simple necessary and sufficient conditions for strong minima of problem (1.1)? 2. Can we avoid the minimax form usually present in these kinds of second order sufficient conditions? 3. Is there any efficient way to check strong minima of problem (1.1) numerically? **Our contribution.** To highlight the new ideas and point toward the bigger picture, we study in Section 3 the following composite optimization problem \[\min_{x\in\mathbb{X}}\quad f(x)+g(x), \tag{1.3}\] where \(\mathbb{X}\) is a finite dimensional space, \(f:\mathbb{X}\to\mathbb{R}\) is a twice continuously differentiable function, and \(g:\mathbb{X}\to\overline{\mathbb{R}}\stackrel{{\text{def}}}{{=}}\mathbb{R}\cup\{+\infty\}\) is a proper lower semi-continuous (nonsmooth) function.
It covers problem (1.2) and many modern optimization problems. One of the most popular ways to characterize strong minima of problem (1.3) is via the _second subderivative_ [8, 42, 43]. As the function \(g\) is nonsmooth, the second subderivative of \(g\) is hard to compute in general. To avoid this computation, we assume additionally that the function \(g\) satisfies the classical _quadratic growth condition_ [6, 49]. In the case of convex functions, it is shown in Section 3 that \(\bar{x}\) is a strong solution of problem (1.3) if and only if \(\bar{\sigma}\stackrel{{\text{def}}}{{=}}-\nabla f(\bar{x})\in\partial g(\bar{x})\) and the following _geometric_ condition holds: \[\operatorname{Ker}\nabla^{2}f(\bar{x})\cap T_{(\partial g)^{-1}(\bar{\sigma})}(\bar{x})=\{0\}, \tag{1.4}\] where \(\partial g:\mathbb{X}\rightrightarrows\mathbb{X}^{*}\) is the subdifferential mapping of \(g\), \(\operatorname{Ker}\nabla^{2}f(\bar{x})\) is the nullspace of the Hessian matrix \(\nabla^{2}f(\bar{x})\), and \(T_{(\partial g)^{-1}(\bar{\sigma})}(\bar{x})\) is the Bouligand _contingent cone_ to \((\partial g)^{-1}(\bar{\sigma})\) at \(\bar{x}\). The action of the contingent cone on a first order structure in the above condition tells us that (1.4) is actually a second order condition, but one that is simpler to compute than the second subderivative; see, e.g., our Corollary 4.2 for the case of the nuclear norm. Our results also work when \(g\) is not convex; see Section 3 for a more detailed analysis. Another problem considered in Section 3 is the following convex optimization problem with linear constraints \[\min_{x\in\mathbb{X}}\quad g(x)\quad\text{subject to}\quad\Phi x\in K, \tag{1.5}\] where \(g:\mathbb{X}\to\overline{\mathbb{R}}\) is a continuous (nonsmooth) convex function, \(\Phi:\mathbb{X}\to\mathbb{Y}\) is a linear operator between two finite dimensional spaces, and \(K\) is a closed polyhedral set in \(\mathbb{Y}\). This problem covers the nuclear norm minimization problem (1.1) and a handful of other significant optimization problems [1, 3, 13]. When \(g\) is twice continuously differentiable, characterizations for strong minima of problem (1.5) are simple; see, e.g., [8, Theorem 3.120]. But when \(g\) is not differentiable, characterizing strong minima is much more involved. A standard approach is to rewrite problem (1.5) as a composite problem, which can be represented by a constrained optimization problem with smooth data [7, 8, 35]. To obtain a geometric characterization similar to (1.4) for problem (1.5), we additionally assume that the function \(g\) satisfies both the quadratic growth condition and _second order regularity_. The latter condition was introduced by Bonnans, Cominetti, and Shapiro in [7] to close the gap between necessary and sufficient second order optimality conditions for constrained and composite optimization problems. The extra assumption of the quadratic growth condition on the function \(g\) in this paper allows us to achieve geometric characterizations for strong minima of problem (1.5) in Theorem 3.5. Many vital classes of nonsmooth functions \(g\) satisfy both the quadratic growth condition and second order regularity. To list a few, we have the classes of piecewise linear-quadratic convex functions [43], spectral functions [18, 37], \(\ell_{1}/\ell_{2}\) norms [50], and indicator functions of the set of positive semi-definite matrices [19]. Our results are thus applicable not only to the nuclear norm minimization problem (1.1), but also to various other optimization problems.
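As a computational aside anticipating the experiments of Section 6: problem (1.1) is a conic program that off-the-shelf solvers handle directly. The sketch below is our own illustration, not part of the theory; it assumes the cvxpy package with its bundled SCS solver, encodes \(\Phi\) as a dense matrix acting on the column-major vectorization of \(X\) (matching cvxpy's vec convention), and mirrors the problem sizes of the Gaussian setup used in Section 6.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n1, n2, r, m = 40, 40, 3, 460                                     # sizes as in Section 6
X0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-r ground truth
A = rng.standard_normal((m, n1 * n2))                             # Gaussian stand-in for Phi
M0 = A @ X0.ravel(order="F")                                      # observations M0 = Phi X0

X = cp.Variable((n1, n2))
problem = cp.Problem(cp.Minimize(cp.norm(X, "nuc")),
                     [A @ cp.vec(X) == M0])                       # cp.vec is column-major
problem.solve(solver=cp.SCS)
rel_err = np.linalg.norm(X.value - X0) / np.linalg.norm(X0)
print(f"relative recovery error: {rel_err:.2e}")
```

A small recovery error indicates exact recovery; distinguishing sharp from strong (non-sharp) minima then requires the conditions developed in Sections 4 and 5.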
As the nuclear norm satisfies both the quadratic growth condition and second order regularity, geometric characterizations for strong minima of the low-rank problems (1.1) and (1.2) are foreseen. But our studies of problems (1.1) and (1.2) in Sections 4 and 5 are not just straightforward applications. In Section 4, we derive a simple calculation of the second order structure in (1.4) for the case of the nuclear norm. Furthermore, some quantitative characterizations for strong minima of problem (1.2) are obtained via the so-called Strong Restricted Injectivity and Analysis Strong Source Condition, which inherit the terminologies introduced recently in [24] to characterize strong minima/solution uniqueness of group-sparsity optimization problems. These conditions are weaker than the well-known Restricted Injectivity and Nondegenerate Source Condition used in [15, 16] as sufficient conditions for solution uniqueness of the nuclear norm minimization problem (1.1); see also [25] for the case of the \(\ell_{1}\) norm. Both conditions can be verified numerically. In Section 5, we obtain new characterizations for strong minima of problem (1.1). Our conditions are not in the form of minimax problems. Indeed, Theorem 5.2 shows that \(X_{0}\) is a strong solution of problem (1.1) if and only if there exists a dual certificate \(\overline{Y}\in\operatorname{Im}\Phi^{*}\cap\partial\|X_{0}\|_{*}\) such that \[\operatorname{Ker}\Phi\cap T_{(\partial\|\cdot\|_{*})^{-1}(\overline{Y})}(X_{0})=\{0\},\] which bears some similarity to (1.4). The necessary and sufficient conditions for strong minima obtained in this section reveal some interesting facts about exact recovery for the nuclear norm minimization problem (1.1). For example, one needs at least \(\frac{1}{2}r(r+1)\) measurements \(M_{0}=\Phi X_{0}\) to recover the matrix \(X_{0}\) of rank \(r\) exactly as a strong solution of problem (1.1). This bound on \(m\) is very small, but it is tight in the sense that we can construct infinitely many linear operators \(\Phi:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{\frac{1}{2}r(r+1)}\) such that solving problem (1.1) recovers \(X_{0}\) exactly. Another compelling result in Section 5 shows that the _low-rank representation problem_ [27, 33] always has strong minima when the linear operator \(\Phi\) in (1.1) is any \(q\times n_{1}\) matrix. Finally in this paper, we discuss numerical methods to check strong minima and compare the results with sharp minima and solution uniqueness. For example, over 100 nuclear norm minimization problems with standard Gaussian linear operators \(\Phi\) and 460 measurements \(M_{0}\in\mathbb{R}^{460}\) observed from an original matrix \(X_{0}\in\mathbb{R}^{40\times 40}\) of rank 3, exact recovery occurs in about 80% of the cases, among which about 40% of the problems have sharp minima and the other 40% have strong, non-sharp minima. As the traditional approach to exact recovery in [1, 13, 16] is via sharp minima [24], seeing more unique (strong, non-sharp) solutions in these numerical experiments gives us a more complete picture of exact recovery when the number of measurements \(m\) is not big enough; see our Section 6 for further numerical experiments. ## 2 Preliminaries Throughout this paper, we suppose that \(\mathbb{X}\) is a Euclidean space with norm \(\|\cdot\|\) and \(\mathbb{X}^{*}\) is its dual space endowed with the inner product \(\langle v,x\rangle\) for any \(v\in\mathbb{X}^{*}\) and \(x\in\mathbb{X}\).
\(\mathbb{B}_{r}(\bar{x})\) denotes the closed ball with center \(\bar{x}\in\mathbb{X}\) and radius \(r>0\). Let \(\varphi:\mathbb{X}\to\overline{\mathbb{R}}\stackrel{{\rm def}}{{=}}\mathbb{R}\cup\{+\infty\}\) be a proper extended real-valued function with nonempty domain \(\operatorname{dom}\varphi\stackrel{{\rm def}}{{=}}\{x\in\mathbb{X}|\ \varphi(x)<\infty\}\neq\emptyset\). A point \(\bar{x}\in\operatorname{dom}\varphi\) is called a _strong solution_ (or strong minimizer) of \(\varphi\) if there exist \(c,\varepsilon>0\) such that \[\varphi(x)-\varphi(\bar{x})\geq c\|x-\bar{x}\|^{2}\quad\text{for all}\quad x\in\mathbb{B}_{\varepsilon}(\bar{x}). \tag{2.1}\] In this case, we say that strong minima occurs at \(\bar{x}\). The study of strong minima is usually based on second order theory, where the following subderivative constructions play crucial roles; see, e.g., [5, 8, 35, 42, 43]. **Definition 2.1** (Subderivatives).: _For a function \(\varphi:\mathbb{X}\to\overline{\mathbb{R}}\) and \(\bar{x}\in\operatorname{dom}\varphi\), the subderivative of \(\varphi\) at \(\bar{x}\) is the function \(d\varphi(\bar{x}):\mathbb{X}\to\overline{\mathbb{R}}\) defined by_ \[d\varphi(\bar{x})(w)\stackrel{{\rm def}}{{=}}\liminf_{t\downarrow 0,w^{\prime}\to w}\frac{\varphi(\bar{x}+tw^{\prime})-\varphi(\bar{x})}{t}\quad\text{for}\quad w\in\mathbb{X}. \tag{2.2}\] _The second subderivative of \(\varphi\) at \(\bar{x}\) for \(\bar{v}\in\mathbb{X}^{*}\) is the function \(d^{2}\varphi(\bar{x}|\bar{v}):\mathbb{X}\to\overline{\mathbb{R}}\) defined by_ \[d^{2}\varphi(\bar{x}|\bar{v})(w)\stackrel{{\rm def}}{{=}}\liminf_{t\downarrow 0,w^{\prime}\to w}\frac{\varphi(\bar{x}+tw^{\prime})-\varphi(\bar{x})-t\langle\bar{v},w^{\prime}\rangle}{\frac{1}{2}t^{2}}\quad\text{for}\quad w\in\mathbb{X}. \tag{2.3}\] _The parabolic subderivative of \(\varphi\) at \(\bar{x}\) for \(w\in\operatorname{dom}d\varphi(\bar{x})(\cdot)\) with respect to \(z\in\mathbb{X}\) is defined by_ \[d^{2}\varphi(\bar{x})(w|z)\stackrel{{\rm def}}{{=}}\liminf_{t\downarrow 0,z^{\prime}\to z}\frac{\varphi(\bar{x}+tw+\frac{1}{2}t^{2}z^{\prime})-\varphi(\bar{x})-td\varphi(\bar{x})(w)}{\frac{1}{2}t^{2}}. \tag{2.4}\] Parabolic subderivatives were introduced by Ben-Tal and Zowe in [5] to study strong minima; see also [43, Theorem 13.66]. Second subderivatives date back to the seminal work of Rockafellar [42], with good calculus available [35] for many important classes of functions. It is well known [43, Theorem 13.24] that \(\bar{x}\) is a strong solution of \(\varphi\) if and only if \(0\in\partial\varphi(\bar{x})\) and \[d^{2}\varphi(\bar{x}|0)(w)>0\quad\text{for all}\quad w\neq 0. \tag{2.5}\] Here \(\partial\varphi(\bar{x})\) stands for the Mordukhovich _limiting subdifferential_ of \(\varphi\) at \(\bar{x}\) [36]: \[\partial\varphi(\bar{x})=\left\{v\in\mathbb{X}^{*}|\ \exists(x_{k},v_{k})\stackrel{{\mathbb{X}\times\mathbb{X}^{*}}}{{\longrightarrow}}(\bar{x},v),\,\liminf_{x\to x_{k}}\frac{\varphi(x)-\varphi(x_{k})-\langle v_{k},x-x_{k}\rangle}{\|x-x_{k}\|}\geq 0\right\}. \tag{2.6}\] When \(\varphi\) is a proper l.s.c. convex function, this subdifferential coincides with the subdifferential of classical convex analysis \[\partial\varphi(\bar{x})=\left\{v\in\mathbb{X}^{*}|\ \varphi(x)-\varphi(\bar{x})\geq\langle v,x-\bar{x}\rangle,\ x\in\mathbb{X}\right\}.
\tag{2.7}\] We denote by \(\varphi^{*}:\mathbb{X}^{*}\to\overline{\mathbb{R}}\) the _Fenchel conjugate_ of \(\varphi\): \[\varphi^{*}(v)\stackrel{{\rm def}}{{=}}\sup\{\langle v,x\rangle-\varphi(x)|\ x\in\mathbb{X}\}\quad\text{for}\quad v\in\mathbb{X}^{*}. \tag{2.8}\] Next let us recall some first and second order tangent structures [8, 43] on a nonempty closed set \(K\) of \(\mathbb{X}\) that are widely used in this paper. **Definition 2.2** (Tangent cones).: _Let \(K\) be a closed set of \(\mathbb{X}\). The Bouligand contingent cone to \(K\) at the point \(\bar{x}\in K\) is defined by_ \[T_{K}(\bar{x})\stackrel{{\rm def}}{{=}}\limsup_{t\downarrow 0}\frac{K-\bar{x}}{t}=\left\{w\in\mathbb{X}|\ \exists\,t_{k}\downarrow 0,\ w_{k}\to w,\ \bar{x}+t_{k}w_{k}\in K\right\}. \tag{2.9}\] _The inner and outer second order tangent sets to \(K\) at \(\bar{x}\in K\) in the direction \(w\in\mathbb{X}\) are defined, respectively, by_ \[\begin{split} T_{K}^{i,2}(\bar{x}|w)&\stackrel{{\rm def}}{{=}}\liminf_{t\downarrow 0}\frac{K-\bar{x}-tw}{\frac{1}{2}t^{2}}=\left\{z\in\mathbb{X}|\ \forall\,t_{k}\downarrow 0,\ \exists z_{k}\to z,\ \bar{x}+t_{k}w+\frac{1}{2}t_{k}^{2}z_{k}\in K\right\}\quad\text{and}\\ T_{K}^{2}(\bar{x}|w)&\stackrel{{\rm def}}{{=}}\limsup_{t\downarrow 0}\frac{K-\bar{x}-tw}{\frac{1}{2}t^{2}}=\left\{z\in\mathbb{X}|\ \exists\,t_{k}\downarrow 0,\ z_{k}\to z,\ \bar{x}+t_{k}w+\frac{1}{2}t_{k}^{2}z_{k}\in K\right\}.\end{split} \tag{2.10}\] The contingent cone \(T_{K}(\bar{x})\) is a closed set. It contains all \(w\in\mathbb{X}\) for which there exists a sequence \(t_{k}\downarrow 0\) such that \(\operatorname{dist}\left(\bar{x}+t_{k}w;K\right)=o(t_{k})\), where \(\operatorname{dist}\left(x;K\right)\) denotes the distance from \(x\in\mathbb{X}\) to \(K\): \[\operatorname{dist}\left(x;K\right)=\min\{\|x-u\|\ |\ u\in K\}. \tag{2.11}\] Similarly, the inner second order tangent set to \(K\) at \(\bar{x}\) is \[T_{K}^{i,2}(\bar{x}|w)=\left\{z\in\mathbb{X}|\ \operatorname{dist}\left(\bar{x}+tw+\frac{1}{2}t^{2}z;K\right)=o(t^{2}),\ t\geq 0\right\}. \tag{2.12}\] When \(K\) is convex, it is well-known that \[T_{K}(\bar{x})=\{w\in\mathbb{X}|\ \operatorname{dist}\left(\bar{x}+tw;K\right)=o(t),\ t\geq 0\}.\] Since the function \(\operatorname{dist}\left(\cdot;K\right)\) is convex, \(T_{K}(\bar{x})\) is a convex set. In this case, the inner second order tangent set \(T_{K}^{i,2}(\bar{x}|w)\) is also convex, for the same reason together with formula (2.12). Moreover, the polar of the contingent cone is the _normal cone_ to \(K\) at \(\bar{x}\): \[N_{K}(\bar{x})\stackrel{{\rm def}}{{=}}[T_{K}(\bar{x})]^{-}=\{v\in\mathbb{X}^{*}|\ \langle v,x-\bar{x}\rangle\leq 0\quad\text{for all}\quad x\in K\}. \tag{2.13}\] It is also the subdifferential of the indicator function \(\iota_{K}\) of the set \(K\), defined by \(\iota_{K}(x)=0\) if \(x\in K\) and \(+\infty\) otherwise. The normal cone can be characterized via the _support function_ of \(K\), \[\sigma_{K}(v)\stackrel{{\rm def}}{{=}}\sup\{\langle v,x\rangle|\ x\in K\}\quad\text{for all}\quad v\in\mathbb{X}^{*}, \tag{2.14}\] with \(N_{K}(\bar{x})=\{v\in\mathbb{X}^{*}|\ \sigma_{K}(v)\leq\langle v,\bar{x}\rangle\}\). To characterize strong minima for constrained optimization problems, Bonnans, Cominetti, and Shapiro [7, Definition 3] introduced the following _second order regularity_ condition on \(K\); see also [8, Definition 3.85].
**Definition 2.3** (Second order regularity).: _The set \(K\) is said to be second order regular at \(\bar{x}\in K\) if for any \(w\in T_{K}(\bar{x})\) the outer second order tangent set \(T_{K}^{2}(\bar{x}|w)\) coincides with the inner second order tangent set \(T_{K}^{i,2}(\bar{x}|w)\), and for any sequence \(x_{k}\in K\) of the form \(x_{k}=\bar{x}+t_{k}w+\frac{1}{2}t_{k}^{2}r_{k}\) with \(t_{k}\downarrow 0\) and \(t_{k}r_{k}\to 0\),_ \[\lim_{k\to\infty}\operatorname{dist}\left(r_{k};T_{K}^{2}(\bar{x}|w)\right)=0.\] _The proper l.s.c. function \(\varphi:\mathbb{X}\to\overline{\mathbb{R}}\) is said to be second order regular at \(\bar{x}\in\operatorname{dom}\varphi\) if the epigraph of \(\varphi\) is second order regular at \((\bar{x},\varphi(\bar{x}))\)._ The class of second order regular sets covers many important sets in optimization, such as polyhedral sets, the set of positive semi-definite matrices, and the second order (ice cream) cone; see, e.g., [8]. Piecewise linear-quadratic convex functions are second order regular [7]. Recently, it was proved in [18] that some special spectral functions are also second order regular. When the function \(\varphi:\mathbb{X}\to\overline{\mathbb{R}}\) is l.s.c. convex and second order regular at \(\bar{x}\in\operatorname{dom}\varphi\), we note from [8, Proposition 3.41] that \[T_{\operatorname{epi}\varphi}^{i,2}((\bar{x},\varphi(\bar{x}))|(w,d\varphi(\bar{x})(w)))=\operatorname{epi}d^{2}\varphi(\bar{x})(w|\cdot) \tag{2.15}\] for any \(w\in\operatorname{dom}d\varphi(\bar{x})\). This is a convex set, which implies that \(d^{2}\varphi(\bar{x})(w|\cdot)\) is a convex function. In this case, it is known from [8, Proposition 3.103] that \(\varphi\) is _parabolically regular_ at \(\bar{x}\) in a direction \(w\in\mathbb{X}\) for \(v\in\mathbb{X}^{*}\) in the sense that \[d^{2}\varphi(\bar{x}|v)(w)=-[d^{2}\varphi(\bar{x})(w|\cdot)]^{*}(v), \tag{2.16}\] the right-hand side being the negative of the Fenchel conjugate of the function \(d^{2}\varphi(\bar{x})(w|\cdot)\) at \(v\), provided that the pair \((w,v)\in\mathbb{X}\times\mathbb{X}^{*}\) satisfies the condition \(\langle v,w\rangle=d\varphi(\bar{x})(w)\). Next let us slightly modify [8, Theorem 3.108 and Theorem 3.109], which give necessary and sufficient conditions for strong solutions of the following composite problem \[\min_{x\in\mathbb{X}}\quad g(F(x)), \tag{2.17}\] where \(F:\mathbb{X}\to\mathbb{Y}\) is a twice continuously differentiable mapping and \(g:\mathbb{Y}\to\overline{\mathbb{R}}\) is an l.s.c. proper convex function. Suppose that \(y_{0}=F(x_{0})\in\operatorname{dom}g\) with \(x_{0}\in\mathbb{X}\). Robinson's constraint qualification at \(x_{0}\) for this composite problem reads \[0\in\operatorname{int}\,(y_{0}+\nabla F(x_{0})\mathbb{X}-\operatorname{dom}g); \tag{2.18}\] see, e.g., [7, 8]. The feasible point \(x_{0}\) is called a _stationary point_ of problem (2.17) if there exists a _Lagrange multiplier_ \(\lambda\in\mathbb{Y}^{*}\) such that \[\nabla F(x_{0})^{*}\lambda=0\quad\text{and}\quad\lambda\in\partial g(y_{0}). \tag{2.19}\] **Theorem 2.4** (Second order characterizations for strong solutions of composite problems).: _Suppose that Robinson's constraint qualification (2.18) holds at a stationary point \(x_{0}\) and that the function \(g\) is second order regular at \(y_{0}\).
Then \(x_{0}\) is a strong solution of problem (2.17) if and only if for any nonzero \(w\) in the critical cone_ \[C(x_{0})\stackrel{{\text{def}}}{{=}}\{u\in\mathbb{X}|\ dg(y_{0})(\nabla F(x_{0})u)=0\}, \tag{2.20}\] _there exists a Lagrange multiplier \(\lambda\) satisfying condition (2.19) such that_ \[\langle\lambda,\nabla^{2}F(x_{0})(w,w)\rangle+d^{2}g(y_{0}|\lambda)(\nabla F(x_{0})w)>0. \tag{2.21}\] **Proof.** Let us justify the sufficiency part first. According to [8, Theorem 3.109], \(x_{0}\) is a strong solution of problem (2.17) provided that for any \(w\in C(x_{0})\setminus\{0\}\) there exists a Lagrange multiplier \(\lambda\in\mathbb{Y}^{*}\) satisfying (2.19) such that \[\langle\lambda,\nabla^{2}F(x_{0})(w,w)\rangle-\Psi^{*}(\lambda)>0, \tag{2.22}\] where \(\Psi(\cdot)\stackrel{{\text{def}}}{{=}}d^{2}g(y_{0})(\nabla F(x_{0})w|\cdot)\). Since \(g\) is convex and second order regular at \(y_{0}\), equation (2.15) applied to the function \(g\) tells us that \(d^{2}g(y_{0})(\nabla F(x_{0})w|\cdot)\) is a convex function for any \(w\in C(x_{0})\). Moreover, note that \[\langle\lambda,\nabla F(x_{0})w\rangle=\langle\nabla F(x_{0})^{*}\lambda,w\rangle=0=dg(y_{0})(\nabla F(x_{0})w).\] We obtain from (2.16) that \(d^{2}g(y_{0}|\lambda)(\nabla F(x_{0})w)=-\Psi^{*}(\lambda)\). This ensures the equivalence between (2.22) and (2.21). Thus \(x_{0}\) is a strong solution of problem (2.17) provided that condition (2.21) holds. To prove the necessity part, we note again that the function \(d^{2}g(y_{0})(\nabla F(x_{0})w|\cdot)\) is convex, so there is no gap between the second order necessary and sufficient conditions, i.e., condition (2.22) is also a necessary condition for the strong solution \(x_{0}\); see [7, Theorem 5.2] or [8, Theorem 3.108]. Due to the equivalence of (2.21) and (2.22) above, condition (2.21) is also necessary for strong minima at \(x_{0}\). \(\square\) As described in (2.21), the existence of the Lagrange multiplier depends on the choice of each vector in the critical cone. Under Robinson's constraint qualification (2.18), (2.21) is a minimax condition in the sense that it is equivalent to \[\min_{w\in C(x_{0}),\|w\|=1}\max_{\lambda\in\Lambda(x_{0})}\left[\langle\lambda,\nabla^{2}F(x_{0})(w,w)\rangle+d^{2}g(y_{0}|\lambda)(\nabla F(x_{0})w)\right]>0, \tag{2.23}\] where \(\Lambda(x_{0})\) is the set of all Lagrange multipliers satisfying (2.19). It is hard to check this condition numerically. On the other hand, its maximin version is more desirable, as it means that there may exist a single Lagrange multiplier \(\lambda\) such that inequality (2.21) is valid for all \(w\in C(x_{0})\setminus\{0\}\). However, it is not clear how to close the gap between the minimax and the maximin. For the case of (1.1), we will obtain a maximin-type condition for strong minima in Theorem 5.2. ## 3 Geometric characterizations for strong minima of optimization problems ### Geometric characterizations for strong minima of unconstrained optimization problems In this subsection, we consider the following composite optimization problem \[\min_{x\in\mathbb{X}}\quad\varphi(x)\stackrel{{\text{def}}}{{=}}f(x)+g(x), \tag{3.1}\] where \(f,g:\mathbb{X}\to\overline{\mathbb{R}}\) are proper functions such that \(\operatorname{int}\left(\operatorname{dom}f\right)\cap\operatorname{dom}g\neq\emptyset\), \(f\) is twice continuously differentiable in \(\operatorname{int}\left(\operatorname{dom}f\right)\), and \(g\) is lower semi-continuous.
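Before developing the theory, a brief computational aside (our own illustration, not part of the analysis): when \(g\) is convex with an accessible proximal mapping, problems of the form (3.1) are the natural domain of proximal-gradient schemes, whose linear convergence under strong minima was mentioned in the Introduction. A minimal sketch for the nuclear norm instance (1.2), assuming \(\Phi\) is given as a dense matrix A acting on the row-major vectorization of \(X\), is:

```python
import numpy as np

def svt(Z, tau):
    """Prox of tau*||.||_*: soft-threshold the singular values of Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_gradient_nuclear(A, M, mu, shape, n_iter=500):
    """Proximal gradient for 0.5*||A x - M||^2 + mu*||X||_*,
    with x = X.ravel() and A a dense (m, n1*n2) stand-in for Phi."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    X = np.zeros(shape)
    for _ in range(n_iter):
        grad = (A.T @ (A @ X.ravel() - M)).reshape(shape)  # gradient of the smooth part f
        X = svt(X - step * grad, step * mu)                # prox step on g = mu*||.||_*
    return X
```

Under strong minima of (1.2) at the limit point, schemes of this type are known to converge linearly [18, 29, 50].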
We assume that \(\bar{x}\in\operatorname{int}\left(\operatorname{dom}f\right)\cap\operatorname{dom}g\) is a _stationary point_ of problem (3.1) in the sense that \[0\in\partial\varphi(\bar{x})=\nabla f(\bar{x})+\partial g(\bar{x}),\] thanks to the sum rule for the limiting subdifferential; see, e.g., [36, Proposition 1.107] or [43, Exercise 10.10]. Obviously, \(\bar{x}\) is a stationary point if and only if \(-\nabla f(\bar{x})\in\partial g(\bar{x})\). To characterize strong minima at the stationary point \(\bar{x}\), one of the most typical methods is to use the second subderivative \(d^{2}\varphi(\bar{x}|0)\) appearing in (2.5). As the function \(f\) is twice continuously differentiable at \(\bar{x}\), it is well-known [43, Exercise 13.18] that \[d^{2}\varphi(\bar{x}|0)(w)=\langle\nabla^{2}f(\bar{x})w,w\rangle+d^{2}g(\bar{x}|-\nabla f(\bar{x}))(w)\quad\text{for}\quad w\in\mathbb{X}. \tag{3.2}\] Since \(g\) is possibly nonsmooth in many structured optimization problems, the computation of \(d^{2}g(\bar{x}|-\nabla f(\bar{x}))(w)\) can be quite challenging. In this section, we establish several new necessary and sufficient conditions for strong minima without computing second subderivatives, under the additional assumption that the function \(g\) satisfies the following _quadratic growth condition_ [6, 49]; see also [8, Section 3.5]. **Definition 3.1** (Quadratic growth conditions).: _Let \(g:\mathbb{X}\to\overline{\mathbb{R}}\) be a proper l.s.c. function and \(S\) be a closed subset of \(\mathbb{X}\) with \(\bar{x}\in\operatorname{dom}g\cap S\). We say that \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for some \(\bar{v}\in\partial g(\bar{x})\) with respect to \(S\) if there exist constants \(\varepsilon,\delta>0\) and a modulus \(\kappa>0\) such that_ \[g(x)-g(\bar{x})-\langle\bar{v},x-\bar{x}\rangle\geq\frac{\kappa}{2}[\operatorname{dist}\left(x;S\right)]^{2}\qquad\text{for all}\qquad x\in\mathbb{B}_{\varepsilon}^{\delta}(\bar{x}|\bar{v}) \tag{3.3}\] _with \(\mathbb{B}_{\varepsilon}^{\delta}(\bar{x}|\bar{v})\stackrel{{\text{def}}}{{=}}\left\{x\in\mathbb{B}_{\varepsilon}(\bar{x})|\ g(x)-g(\bar{x})-\langle\bar{v},x-\bar{x}\rangle<\delta\right\}\). The function \(g\) is said to satisfy the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\) if it satisfies this condition at \(\bar{x}\) for \(\bar{v}\in\partial g(\bar{x})\) with respect to_ \[S(\bar{x},\bar{v})\stackrel{{\text{def}}}{{=}}\left\{x\in\mathbb{X}|\ g(x)-\langle\bar{v},x\rangle\leq g(\bar{x})-\langle\bar{v},\bar{x}\rangle\right\}. \tag{3.4}\] _Finally, we say that the function \(g\) satisfies the quadratic growth condition at \(\bar{x}\) if it satisfies this condition at \(\bar{x}\) for every \(\bar{v}\in\partial g(\bar{x})\)._ As the function \(g\) is l.s.c., the set \(S(\bar{x},\bar{v})\) is closed and \(\bar{x}\in S(\bar{x},\bar{v})\). When the quadratic growth condition (3.3) holds, it is clear that \[S(\bar{x},\bar{v})\cap\mathbb{B}_{\varepsilon}(\bar{x})\subset S\cap\mathbb{B}_{\varepsilon}(\bar{x})\quad\text{for some}\quad\varepsilon>0.
\tag{3.5}\] Moreover, for any closed set \(S\) fulfilling (3.5) and any \(x\in\mathbb{B}_{\frac{\varepsilon}{2}}^{\delta}(\bar{x}|\bar{v})\), we find some \(u\in S(\bar{x},\bar{v})\) such that \[\operatorname{dist}\left(x;S(\bar{x},\bar{v})\right)=\|x-u\|\leq\|x-\bar{x}\|<\frac{\varepsilon}{2},\] which implies that \(\|u-\bar{x}\|\leq\|x-\bar{x}\|+\frac{\varepsilon}{2}<\varepsilon\), i.e., \(u\in S(\bar{x},\bar{v})\cap\mathbb{B}_{\varepsilon}(\bar{x})\). It follows from (3.5) that \[\operatorname{dist}\left(x;S(\bar{x},\bar{v})\right)=\|x-u\|\geq\operatorname{dist}\left(x;S(\bar{x},\bar{v})\cap\mathbb{B}_{\varepsilon}(\bar{x})\right)\geq\operatorname{dist}\left(x;S\right)\] for any \(x\in\mathbb{B}_{\frac{\varepsilon}{2}}^{\delta}(\bar{x}|\bar{v})\). Hence, if the function \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\), it also satisfies the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\) w.r.t. any closed set \(S\) fulfilling (3.5). Many necessary and sufficient conditions for the quadratic growth condition have been established in [6, 8] and, under the different name of _weak sharp minima_ of order 2, in [44, 49]. When \(g\) is convex and \(\bar{v}\in\partial g(\bar{x})\), the set \(S(\bar{x},\bar{v})\) coincides with \((\partial g)^{-1}(\bar{v})=\partial g^{*}(\bar{v})\). The quadratic growth condition of \(g\) at \(\bar{x}\) for \(\bar{v}\in\partial g(\bar{x})\) w.r.t. \((\partial g)^{-1}(\bar{v})\) has been studied and connected with the so-called _Łojasiewicz inequality with exponent \(\frac{1}{2}\)_ [9] and the _metric subregularity of the subdifferential_ [2, 21, 52] (even in nonconvex cases). There are broad classes of convex functions satisfying the quadratic growth condition, such as _piecewise linear-quadratic convex_ functions [43, Definition 10.20] and many _convex spectral_ functions [18]; see also [50] for some other ones. When \(g\) is not convex, the quadratic growth condition of \(g\) at \(\bar{x}\) for \(\bar{v}\in\partial g(\bar{x})\) w.r.t. \((\partial g)^{-1}(\bar{v})\) is the same as the quadratic growth condition of \(g\) at \(\bar{x}\) for \(\bar{v}\) provided that \[(\partial g)^{-1}(\bar{v})\cap\mathbb{B}_{\varepsilon}(\bar{x})\subset S(\bar{x},\bar{v})\cap\mathbb{B}_{\varepsilon}(\bar{x})\quad\text{for sufficiently small}\quad\varepsilon>0. \tag{3.6}\] It is necessary for the quadratic growth condition (3.3) at \(\bar{x}\) for \(\bar{v}\) that \(\bar{x}\) be a local minimizer of the function \(g_{\bar{v}}(x)\stackrel{{\text{def}}}{{=}}g(x)-\langle\bar{v},x\rangle\), \(x\in\mathbb{X}\). Condition (3.6) is then similar to the so-called _proper separation of isocost surfaces_ of \(g_{\bar{v}}\) in [53], which is an improvement of the _proper separation of stationary points_ of \(g_{\bar{v}}\) in [32]. By [21, Theorem 3.1], the function \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\in\partial g(\bar{x})\) w.r.t. \((\partial g)^{-1}(\bar{v})\) provided that \(\partial g\) is _metrically subregular_ at \(\bar{x}\) for \(\bar{v}\) in the sense that there exist \(\eta,\ell>0\) such that \[\operatorname{dist}\left(x;(\partial g)^{-1}(\bar{v})\right)\leq\ell\operatorname{dist}\left(\bar{v};\partial g(x)\right)\quad\text{for}\quad x\in\mathbb{B}_{\eta}(\bar{x}).
\tag{3.7}\] This condition is satisfied when \(\partial g\) is a _piecewise polyhedral_ set-valued mapping, i.e., the graph of \(\partial g\), \(\{(x,v)\in\mathbb{X}\times\mathbb{X}^{*}|\ v\in\partial g(x)\}\), is a union of finitely many polyhedral sets; see, e.g., [43, Example 9.57]. Thus the class of (possibly nonconvex) piecewise linear-quadratic functions fulfills (3.7); see also [53] for several sufficient conditions for (3.7) and for some special nonconvex piecewise linear-quadratic regularizers such as the SCAD and MCP penalty functions. Although our theory in this section is applicable to nonconvex functions, we focus our later applications on the low-rank minimization problem (1.2), where \(g\) is the nuclear norm, which also satisfies the quadratic growth condition [50] although the graph of \(\partial g\) is not piecewise polyhedral. The following lemma plays an important role in our analysis. **Lemma 3.2** (Necessary condition for quadratic growth).: _Let \(g:\mathbb{X}\to\overline{\mathbb{R}}\) be a proper l.s.c. function and \(S\) be a closed subset of \(\mathbb{X}\) with \(\bar{x}\in\operatorname{dom}g\cap S\). If \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for some \(\bar{v}\in\partial g(\bar{x})\) w.r.t. \(S\) with some modulus \(\kappa>0\), we have_ \[d^{2}g(\bar{x}|\bar{v})(w)\geq\kappa[\operatorname{dist}\left(w;T_{S}(\bar{x})\right)]^{2}\quad\text{for all}\quad w\in\mathbb{X}. \tag{3.8}\] _Moreover, if \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\), we have_ \[\operatorname{Ker}d^{2}g(\bar{x}|\bar{v})\stackrel{{\text{def}}}{{=}}\left\{w\in\mathbb{X}|\ d^{2}g(\bar{x}|\bar{v})(w)=0\right\}=T_{S(\bar{x},\bar{v})}(\bar{x}). \tag{3.9}\] **Proof.** Suppose that inequality (3.3) holds with some \(\varepsilon,\delta,\kappa>0\). Picking any \(w\in\mathbb{X}\), we only need to verify (3.8) when \(d^{2}g(\bar{x}|\bar{v})(w)<\infty\), i.e., \(w\in\operatorname{dom}d^{2}g(\bar{x}|\bar{v})\). It follows from (2.3) that there exist sequences \(t_{k}\downarrow 0\) and \(w_{k}\to w\) such that \[d^{2}g(\bar{x}|\bar{v})(w)=\lim_{k\to\infty}\frac{g(\bar{x}+t_{k}w_{k})-g(\bar{x})-t_{k}\langle\bar{v},w_{k}\rangle}{\frac{1}{2}t_{k}^{2}}. \tag{3.10}\] Hence, we have
For any \(w\in\operatorname{Ker}d^{2}g(\bar{x}|\bar{v})\), we obtain from (3.8) that \(\operatorname{dist}\left(w;T_{S(\bar{x},\bar{v})}(\bar{x})\right)=0\), which means \(w\in T_{S(\bar{x},\bar{v})}(\bar{x})\). It follows that \(\operatorname{Ker}d^{2}g(\bar{x}|\bar{v})\subset T_{S(\bar{x},\bar{v})}(\bar{x})\). Let us prove the opposite inclusion by picking any \(w\in T_{S(\bar{x},\bar{v})}(\bar{x})\). There exist \(t_{k}\downarrow 0\) and \(w_{k}\to w\) such that \(\bar{x}+t_{k}w_{k}\in S(\bar{x},\bar{v})\), i.e., \[g(\bar{x}+t_{k}w_{k})-g(\bar{x})-t_{k}\langle\bar{v},w_{k}\rangle\leq 0.\] We obtain from (3.8) that \[0\leq\kappa[\operatorname{dist}\left(w;T_{S(\bar{x},\bar{v})}(\bar{x})\right)]^{2}\leq d^{2}g(\bar{x}|\bar{v})(w)\leq\liminf_{k\to\infty}\frac{g(\bar{x}+t_{k}w_{k})-g(\bar{x})-t_{k}\langle\bar{v},w_{k}\rangle}{\frac{1}{2}t_{k}^{2}}\leq 0, \tag{3.12}\] which yields \(w\in\operatorname{Ker}d^{2}g(\bar{x}|\bar{v})\) and verifies \(T_{S(\bar{x},\bar{v})}(\bar{x})\subset\operatorname{Ker}d^{2}g(\bar{x}|\bar{v})\). The proof is complete. Next let us establish the main theorem of this section, which provides geometric characterizations for strong minima of problem (3.1). **Theorem 3.3** (Necessary and sufficient conditions for strong minima).: _Let \(\bar{x}\in\operatorname{int}\left(\operatorname{dom}f\right)\cap\operatorname{dom}g\) be a stationary point of problem (3.1) and set \(\bar{v}\stackrel{{\text{def}}}{{=}}-\nabla f(\bar{x})\). If \(\bar{x}\) is a strong solution of problem (3.1), then_ \[\langle\nabla^{2}f(\bar{x})w,w\rangle>0\qquad\text{for all}\quad w\in T_{S(\bar{x},\bar{v})}(\bar{x})\setminus\{0\}. \tag{3.13}\] _Suppose further that \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\) and that \(\nabla^{2}f(\bar{x})\) is positive semidefinite. Then \(\bar{x}\) is a strong solution of problem (3.1) if and only if_ \[\operatorname{Ker}\nabla^{2}f(\bar{x})\cap T_{S(\bar{x},\bar{v})}(\bar{x})=\{0\}. \tag{3.14}\] **Proof.** As \(f\) is twice continuously differentiable at \(\bar{x}\in\operatorname{int}\,(\operatorname{dom}f)\), we derive from (2.5) and (3.2) that \(\bar{x}\) is a strong solution of \(\varphi\) if and only if there exists some \(\ell>0\) such that \[\langle\nabla^{2}f(\bar{x})w,w\rangle+d^{2}g(\bar{x}|\bar{v})(w)\geq\ell\|w\|^{2}\quad\text{for all}\quad w\in\mathbb{X}. \tag{3.15}\] To justify the first part, suppose that \(\bar{x}\) is a strong solution of \(\varphi\), i.e., (3.15) holds. Pick any \(w\in T_{S(\bar{x},\bar{v})}(\bar{x})\setminus\{0\}\) and find sequences \(t_{k}\downarrow 0\) and \(w_{k}\to w\) such that \(\bar{x}+t_{k}w_{k}\in S(\bar{x},\bar{v})\), which means \[\frac{g(\bar{x}+t_{k}w_{k})-g(\bar{x})-\langle\bar{v},\bar{x}+t_{k}w_{k}-\bar{x}\rangle}{\frac{1}{2}t_{k}^{2}}\leq 0.\] By the definition of \(d^{2}g(\bar{x}|\bar{v})(w)\) in (2.3), we have \(d^{2}g(\bar{x}|\bar{v})(w)\leq 0\). This together with (3.15) verifies (3.13). To verify the second part of the theorem, suppose that the function \(g\) satisfies the quadratic growth condition at \(\bar{x}\) for \(\bar{v}\) with some modulus \(\kappa>0\) and that \(\nabla^{2}f(\bar{x})\) is positive semidefinite. It is obvious that (3.13) implies (3.14). We only need to prove that (3.14) is sufficient for strong minima at \(\bar{x}\). Suppose that condition (3.14) is satisfied.
If condition (3.15) failed, we could find a sequence \(w_{k}\) such that \(\|w_{k}\|=1\) and \[\langle\nabla^{2}f(\bar{x})w_{k},w_{k}\rangle+d^{2}g(\bar{x}|\bar{v})(w_{k})\leq\frac{1}{k}.\] It follows from (3.8) that \[\frac{1}{k}\geq\langle\nabla^{2}f(\bar{x})w_{k},w_{k}\rangle+\kappa[\operatorname{dist}\,(w_{k};T_{S(\bar{x},\bar{v})}(\bar{x}))]^{2}. \tag{3.16}\] By passing to a subsequence (without relabeling), assume that \(w_{k}\to w_{0}\) with \(\|w_{0}\|=1\). It follows that \[0\geq\langle\nabla^{2}f(\bar{x})w_{0},w_{0}\rangle+\kappa[\operatorname{dist}\,(w_{0};T_{S(\bar{x},\bar{v})}(\bar{x}))]^{2}\geq\langle\nabla^{2}f(\bar{x})w_{0},w_{0}\rangle\geq 0.\] Hence, we have \(\langle\nabla^{2}f(\bar{x})w_{0},w_{0}\rangle=0\) and \(\operatorname{dist}\,(w_{0};T_{S(\bar{x},\bar{v})}(\bar{x}))=0\), which means \[w_{0}\in\operatorname{Ker}\nabla^{2}f(\bar{x})\cap T_{S(\bar{x},\bar{v})}(\bar{x}).\] This is a contradiction to (3.14), as \(\|w_{0}\|=1\). Hence, (3.15) holds for some \(\ell>0\) and \(\bar{x}\) is a strong solution of \(\varphi\). The proof is complete. **Corollary 3.4** (Geometric characterization for strong minima of convex problems).: _Let \(f,g:\mathbb{X}\to\overline{\mathbb{R}}\) be proper l.s.c. convex functions and \(\bar{x}\in\operatorname{int}\,(\operatorname{dom}f)\cap\operatorname{dom}g\) be a minimizer of problem (3.1). Suppose that \(f\) is twice continuously differentiable in \(\operatorname{int}\,(\operatorname{dom}f)\) and that \(\partial g\) is metrically subregular at \(\bar{x}\) for \(\bar{v}=-\nabla f(\bar{x})\). Then \(\bar{x}\) is a strong solution of problem (3.1) if and only if_ \[\operatorname{Ker}\nabla^{2}f(\bar{x})\cap T_{\partial g^{*}(\bar{v})}(\bar{x})=\{0\}. \tag{3.17}\] **Proof.** As discussed before (3.7), when \(g\) is a convex function, the metric subregularity of \(\partial g\) at \(\bar{x}\) for \(\bar{v}\) implies the quadratic growth condition (they are indeed equivalent [2, 52]). Since \(f\) is convex, \(\nabla^{2}f(\bar{x})\) is positive semidefinite. By Theorem 3.3, \(\bar{x}\) is a strong solution if and only if (3.14) holds. Since \(g\) is a convex function, we have \(S(\bar{x},\bar{v})=(\partial g)^{-1}(\bar{v})=\partial g^{*}(\bar{v})\). Thus (3.17) is equivalent to (3.14). The proof is complete. Unlike many other necessary and sufficient conditions for strong minima, our geometric characterizations (3.14) and (3.17) do not involve the "curvature" or the "sigma-term" of the function \(g\). We still need to compute the contingent cone \(T_{S(\bar{x},\bar{v})}(\bar{x})\) in (3.14) or \(T_{\partial g^{*}(\bar{v})}(\bar{x})\) in (3.17). In Sections 4 and 5, we consider the case \(g=\|\cdot\|_{*}\), the nuclear norm, and provide a simple calculation of \(T_{\partial g^{*}(\bar{v})}(\bar{x})\). ### Geometric characterization for strong minima of optimization problems with linear constraints In this subsection, we apply the ideas of Theorem 3.3 and Corollary 3.4 to the following convex optimization problem with linear constraints \[\min_{x\in\mathbb{X}}\quad g(x)\quad\text{subject to}\quad\Phi x\in K, \tag{3.18}\] where \(g:\mathbb{X}\to\mathbb{R}\) is a continuous (nonsmooth) convex function with full domain, \(\Phi:\mathbb{X}\to\mathbb{Y}\) is a linear operator between two Euclidean spaces, and \(K\) is a closed convex polyhedral set in \(\mathbb{Y}\). Unlike in problem (3.1), the function \(g\) here needs to satisfy additional properties, namely convexity, the quadratic growth condition, and second order regularity, as stated in the theorem below.
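Before turning to the general statement, it is worth recording how (3.18) captures the nuclear norm problem (1.1); the specialization below is our own orientation note rather than part of the development. Taking \(g=\|\cdot\|_{*}\) and the (polyhedral) singleton \(K=\{M_{0}\}\), the tangent and normal cones to a singleton are

\[T_{K}(\Phi x_{0})=\{0\}\qquad\text{and}\qquad N_{K}(\Phi x_{0})=\mathbb{Y}^{*},\]

so the critical cone (3.21) below reduces to \(C(x_{0})=\{w\in\operatorname{Ker}\Phi|\ dg(x_{0})(w)=0\}\), and the Lagrange multipliers in (3.19) correspond precisely to the dual certificates \(\overline{Y}\in\operatorname{Im}\Phi^{*}\cap\partial\|X_{0}\|_{*}\) appearing in Theorem 5.2 quoted in the Introduction.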
Let us recall that \(x_{0}\) is a stationary solution of problem (3.18) if there exists a Lagrange multiplier \(\lambda\in\mathbb{Y}^{*}\), which serves as a _dual certificate_, such that \[-\Phi^{*}\lambda\in\partial g(x_{0})\quad\text{and}\quad\lambda\in N_{K}(\Phi x_{0}). \tag{3.19}\] The set of Lagrange multipliers is defined by \[\Lambda(x_{0})\stackrel{{\text{\tiny def}}}{{=}}\{\lambda\in N_{K}(\Phi x_{0})|-\Phi^{*}\lambda\in\partial g(x_{0})\}. \tag{3.20}\] The critical cone of this problem at the stationary point \(x_{0}\) is \[C(x_{0})\stackrel{{\text{\tiny def}}}{{=}}\{w\in\mathbb{X}|\;\Phi w\in T_{K}(\Phi x_{0}),dg(x_{0})(w)=0\}. \tag{3.21}\] The point \(x_{0}\) is called a strong solution of problem (3.18) if there exist \(\varepsilon>0\) and \(c>0\) such that \[g(x)-g(x_{0})\geq c\|x-x_{0}\|^{2}\qquad\text{when}\qquad\Phi x\in K\quad\text{and}\quad x\in\mathbb{B}_{\varepsilon}(x_{0}).\] **Theorem 3.5** (Geometric characterization for strong minima of problem (3.18)).: _Let \(x_{0}\) be a stationary point of problem (3.18). Suppose that the convex function \(g\) is second order regular at \(x_{0}\) and satisfies the quadratic growth condition at \(x_{0}\). Then \(x_{0}\) is a strong solution of problem (3.18) if and only if_ \[\left[\bigcap_{\lambda\in\Lambda(x_{0})}T_{\partial g^{*}(-\Phi^{*}\lambda)}(x_{0})\right]\cap C(x_{0})=\{0\}. \tag{3.22}\] **Proof.** Define \(y_{0}\stackrel{{\text{\tiny def}}}{{=}}\Phi x_{0}\), \(\mathbb{L}\stackrel{{\text{\tiny def}}}{{=}}\operatorname{Im}\Phi\), which is a linear subspace of \(\mathbb{Y}\), \(K_{\mathbb{L}}\stackrel{{\text{\tiny def}}}{{=}}K\cap\mathbb{L}\), the mapping \(F:\mathbb{X}\to\mathbb{L}\times\mathbb{X}\) by \(F(x)\stackrel{{\text{\tiny def}}}{{=}}(\Phi x,x)\) for \(x\in\mathbb{X}\), and the function \(G:\mathbb{L}\times\mathbb{X}\to\mathbb{R}\) by \[G(u,x)=\iota_{K_{\mathbb{L}}}(u)+g(x)\quad\text{for}\quad(u,x)\in\mathbb{L}\times\mathbb{X}.\] Hence we can replace \(\mathbb{Y}\) by \(\mathbb{L}\) and \(K\) by \(K_{\mathbb{L}}\) in problem (3.18). Rewrite problem (3.18) as a composite optimization problem (2.17): \[\inf_{x\in\mathbb{X}}\quad G(F(x)). \tag{3.23}\] Observe that \(x_{0}\) is a strong solution of (3.18) if and only if it is a strong solution of (3.23). Robinson's constraint qualification (2.18) for problem (3.23) is \[0\in\operatorname{int}\left(F(x_{0})+\nabla F(x_{0})\mathbb{X}-K_{\mathbb{L}}\times\mathbb{X}\right). \tag{3.24}\] As \(\Phi:\mathbb{X}\to\mathbb{L}\) is a surjective operator, [8, Corollary 2.101] tells us that the above condition is equivalent to the existence of \(w\in\mathbb{X}\) satisfying \[y_{0}+\Phi w\in K_{\mathbb{L}}\quad\text{and}\quad x_{0}+w\in\operatorname{int}\mathbb{X}=\mathbb{X}.\] This condition holds trivially at \(w=0_{\mathbb{X}}\). Thus Robinson's constraint qualification (3.24) holds. The critical cone (2.20) for problem (3.23) is \[\widehat{C}(x_{0})\stackrel{{\text{\tiny def}}}{{=}}\{w\in\mathbb{X}|\;dG(F(x_{0}))(\nabla F(x_{0})w)=0\}=\{w\in\mathbb{X}|\;\Phi w\in T_{K_{\mathbb{L}}}(y_{0}),dg(x_{0})(w)=0\}. \tag{3.25}\] As \(K_{\mathbb{L}}\) is also a polyhedral set in \(\mathbb{L}\), we have \[T_{K_{\mathbb{L}}}(y_{0})=\mathbb{R}_{+}(K_{\mathbb{L}}-y_{0})=\mathbb{R}_{+}(K-y_{0})\cap\mathbb{L}=T_{K}(y_{0})\cap\mathbb{L}. \tag{3.26}\] It follows that the set \(\widehat{C}(x_{0})\) is exactly the critical cone \(C(x_{0})\) defined in (3.21).
By (2.19), the set of Lagrange multipliers of problem (3.23) is \[\begin{array}{ll}\widehat{\Lambda}(x_{0})&\stackrel{{\text{\tiny def}}}{{=}}\{(\lambda,\mu)\in\mathbb{L}^{*}\times\mathbb{X}^{*}|\;\nabla F(x_{0})^{*}(\lambda,\mu)=0,(\lambda,\mu)\in\partial G(F(x_{0}))\}\\ &=\{(\lambda,\mu)\in\mathbb{L}^{*}\times\mathbb{X}^{*}|\;\mu=-\Phi^{*}\lambda,\lambda\in N_{K_{\mathbb{L}}}(y_{0}),\mu\in\partial g(x_{0})\}.\end{array} \tag{3.27}\] Note further that \[\operatorname{epi}G=K_{\mathbb{L}}\times\operatorname{epi}g.\] Since \(K_{\mathbb{L}}\) is polyhedral, it is second order regular [7]. As \(\operatorname{epi}g\) is second order regular, so is \(\operatorname{epi}G\); see, e.g., [8, Proposition 3.89]. By Theorem 2.4, \(x_{0}\) is a strong solution of problem (3.23) if and only if for any \(w\in C(x_{0})\setminus\{0\}\) there exists \((\lambda,\mu)\in\widehat{\Lambda}(x_{0})\) such that \[\langle(\lambda,\mu),\nabla^{2}F(x_{0})(w,w)\rangle+d^{2}G(F(x_{0})|(\lambda,\mu))(\nabla F(x_{0})w)>0. \tag{3.28}\] Observe that \[d^{2}G(F(x_{0})|(\lambda,\mu))(\nabla F(x_{0})w)=d^{2}g(x_{0}|\mu)(w)+d^{2}\iota_{K_{\mathbb{L}}}(y_{0}|\lambda)(\Phi w). \tag{3.29}\] Note from (2.3) that \[d^{2}\iota_{K_{\mathbb{L}}}(y_{0}|\lambda)(\Phi w)=\liminf_{z\to\Phi w,\,t\downarrow 0}\frac{\iota_{K_{\mathbb{L}}}(y_{0}+tz)-\iota_{K_{\mathbb{L}}}(y_{0})-t\langle\lambda,z\rangle}{0.5t^{2}}\geq 0. \tag{3.30}\] By (3.26), we have \[d^{2}\iota_{K_{\mathbb{L}}}(y_{0}|\lambda)(\Phi w)=\liminf_{z\stackrel{{T_{K_{\mathbb{L}}}(y_{0})}}{{\to}}\Phi w,\,t\downarrow 0}\frac{-\langle\lambda,z\rangle}{0.5t}\geq 0. \tag{3.31}\] Since \(w\in C(x_{0})\), we have \(\Phi w\in T_{K_{\mathbb{L}}}(\Phi x_{0})\). As \(\lambda\in N_{K_{\mathbb{L}}}(\Phi x_{0})\) and \(\mu=-\Phi^{*}\lambda\in\partial g(x_{0})\), it follows that \[0=dg(x_{0})(w)\geq\langle\mu,w\rangle=-\langle\Phi^{*}\lambda,w\rangle=-\langle\lambda,\Phi w\rangle\geq 0,\] which implies that \(\langle\lambda,\Phi w\rangle=0\). This together with (3.31) tells us that \(d^{2}\iota_{K_{\mathbb{L}}}(y_{0}|\lambda)(\Phi w)=0\). By (3.29) and the linearity of \(F\) (so that \(\nabla^{2}F(x_{0})=0\)), condition (3.28) is equivalent to \[d^{2}g(x_{0}|-\Phi^{*}\lambda)(w)>0. \tag{3.32}\] Since \(K\) is a polyhedral set, we have \[N_{K_{\mathbb{L}}}(\Phi x_{0})=N_{K\cap\mathbb{L}}(\Phi x_{0})=N_{K}(\Phi x_{0})+N_{\mathbb{L}}(\Phi x_{0})=N_{K}(\Phi x_{0})+\operatorname{Ker}\Phi^{*}.\] Representing \(\lambda=\lambda_{1}+\lambda_{2}\) for some \(\lambda_{1}\in N_{K}(\Phi x_{0})\) and \(\lambda_{2}\in\operatorname{Ker}\Phi^{*}\), it follows that \[-\Phi^{*}\lambda=-\Phi^{*}(\lambda_{1}+\lambda_{2})=-\Phi^{*}\lambda_{1}.\] Hence \(x_{0}\) is a strong solution of problem (3.23) if and only if for any \(w\in C(x_{0})\setminus\{0\}\) there exists some \(\lambda\in\Lambda(x_{0})\) from (3.20) such that (3.32) holds.
Since \(g\) satisfies the quadratic growth condition at \(x_{0}\), we get from Lemma 3.2 that \[\operatorname{Ker}d^{2}g(x_{0}|-\Phi^{*}\lambda)=T_{\partial g^{*}(-\Phi^{*}\lambda)}(x_{0}).\] Hence condition (3.32) means that for any \(w\in C(x_{0})\setminus\{0\}\) there exists \(\lambda\in\Lambda(x_{0})\) such that \[w\notin T_{\partial g^{*}(-\Phi^{*}\lambda)}(x_{0}),\quad\text{i.e.,}\quad w\in\mathbb{X}\setminus T_{\partial g^{*}(-\Phi^{*}\lambda)}(x_{0}).\] This is equivalent to the following inclusion \[C(x_{0})\setminus\{0\}\subset\left[\bigcup_{\lambda\in\Lambda(x_{0})}\left(\mathbb{X}\setminus T_{\partial g^{*}(-\Phi^{*}\lambda)}(x_{0})\right)\right]=\mathbb{X}\setminus\left[\bigcap_{\lambda\in\Lambda(x_{0})}T_{\partial g^{*}(-\Phi^{*}\lambda)}(x_{0})\right],\] which is also equivalent to (3.22). The proof is complete. The approach of using the composite function (3.23) to study problem (3.18) is traditional; see, e.g., [7, 34]. However, by additionally assuming that the function \(g\) satisfies the quadratic growth condition at \(x_{0}\), we are able to obtain the new geometric characterization of strong solutions in (3.22). The main idea in this result is similar to that in Theorem 3.3. Here we require the function \(g\) to satisfy more assumptions, but when applying this result to the nuclear norm minimization problem (5.1), they are all valid, as the nuclear norm is second order regular and also satisfies the quadratic growth condition [18]. ## 4 Characterizations for strong minima of low-rank optimization problems This section is devoted to new characterizations for strong minima of the low-rank optimization problem: \[\min_{X\in\mathbb{R}^{n_{1}\times n_{2}}}\quad h(\Phi X)+\mu\|X\|_{*}, \tag{4.1}\] where \(\Phi:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{m}\) is a linear operator, \(g(X)\stackrel{{\text{\tiny def}}}{{=}}\|X\|_{*}\) is the nuclear norm of \(X\in\mathbb{R}^{n_{1}\times n_{2}}\), \(\mu\) is a positive constant, and \(h:\mathbb{R}^{m}\to\overline{\mathbb{R}}\) satisfies the following standing assumptions [24]: 1. \(h\) is proper convex and twice continuously differentiable in \(\operatorname{int}\left(\operatorname{dom}h\right)\). 2. \(\nabla^{2}h(\Phi X)\) is positive definite for any \(X\in\Phi^{-1}(\operatorname{int}\left(\operatorname{dom}h\right))\). Strongly convex functions with full domain clearly satisfy the above standing assumptions. Another important (non-strongly convex) function with these conditions, widely used in statistical/machine learning, is the _Kullback-Leibler divergence_. Sufficient conditions for strong minima of problem (4.1) can be obtained from [18, Theorem 12]. However, their result still relies on some computation of \(d^{2}\|\cdot\|_{*}\), which is complicated; see, e.g., [20, 51] and the recent paper [37] for the case of symmetric matrices. We will provide some explicit and computable characterizations for strong minima of problem (4.1) based on Corollary 3.4. The calculation of the contingent cone \(T_{\partial g^{*}(\overline{Y})}(\overline{X})\) is rather simple; see our formula (4.10) below. Let us recall a few standard notations for matrices. The space of all matrices \(\mathbb{R}^{n_{1}\times n_{2}}\) (\(n_{1}\leq n_{2}\)) is endowed with the inner product \[\langle X,Y\rangle\stackrel{{\text{\tiny def}}}{{=}}\text{Tr}\,(X^{T}Y)\quad\text{for all}\quad X,Y\in\mathbb{R}^{n_{1}\times n_{2}},\] where \(\text{Tr}\,\) is the _trace operator_.
The Frobenius norm on \(\mathbb{R}^{n_{1}\times n_{2}}\) is \[\|X\|_{F}\stackrel{{\text{\tiny def}}}{{=}}\sqrt{\text{Tr}\,(X^{T}X)}\quad\text{for all}\quad X\in\mathbb{R}^{n_{1}\times n_{2}}.\] The nuclear norm and spectral norm of \(X\in\mathbb{R}^{n_{1}\times n_{2}}\) are defined respectively by \[\|X\|_{*}\stackrel{{\text{\tiny def}}}{{=}}\sum_{i=1}^{n_{1}}\sigma_{i}(X)\quad\text{and}\quad\|X\|\stackrel{{\text{\tiny def}}}{{=}}\sigma_{1}(X),\] where \(\sigma_{1}(X)\geq\sigma_{2}(X)\geq\ldots\geq\sigma_{n_{1}}(X)\geq 0\) are all singular values of \(X\). Suppose that a _full_ Singular Value Decomposition (SVD) of \(\overline{X}\in\mathbb{R}^{n_{1}\times n_{2}}\) is \[\overline{X}=U\begin{pmatrix}\overline{\Sigma}_{r}&0\\ 0&0\end{pmatrix}_{n_{1}\times n_{2}}V^{T}\quad\text{with}\quad\overline{\Sigma}_{r}=\begin{pmatrix}\sigma_{1}(\overline{X})&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sigma_{r}(\overline{X})\end{pmatrix}, \tag{4.2}\] where \(r=\text{rank}\,(\overline{X})\), and \(U\in\mathbb{R}^{n_{1}\times n_{1}}\) and \(V\in\mathbb{R}^{n_{2}\times n_{2}}\) are orthogonal matrices. Let \(\mathcal{O}(\overline{X})\) be the set of all such pairs \((U,V)\) satisfying (4.2). We write \(U=\begin{pmatrix}U_{I}&U_{J}\end{pmatrix}\) and \(V=\begin{pmatrix}V_{I}&V_{K}\end{pmatrix}\), where \(U_{I}\) and \(V_{I}\) are the submatrices of the first \(r\) columns of \(U\) and \(V\), respectively. We get from (4.2) that \(\overline{X}=U_{I}\overline{\Sigma}_{r}V_{I}^{T}\), which is known as a _compact SVD_ of \(\overline{X}\). The following lemma is significant in our paper. The first part is well-known [48, Example 2]. The last part was established in [50, Proposition 10], which can be viewed as a direct consequence of [48, Example 1] via convex analysis and the formula for the normal cone to a level set [41, Corollary 23.7.1]. **Lemma 4.1** (Subdifferential of the nuclear norm).: _The subdifferential of the nuclear norm at \(\overline{X}\in\mathbb{R}^{n_{1}\times n_{2}}\) is computed by_ \[\partial\|\overline{X}\|_{*}=\left\{U\begin{pmatrix}\mathbb{I}_{r}&0\\ 0&W\end{pmatrix}V^{T}|\ \|W\|\leq 1\right\}\quad\text{for any}\quad(U,V)\in\mathcal{O}(\overline{X}). \tag{4.3}\] _Moreover, \(\overline{Y}\in\partial\|\overline{X}\|_{*}\) if and only if \(\|\overline{Y}\|\leq 1\) and_ \[\|\overline{X}\|_{*}=\langle\overline{Y},\overline{X}\rangle. \tag{4.4}\] _Furthermore, for any \(\overline{Y}\in\mathbb{B}\stackrel{{\text{\tiny def}}}{{=}}\{Z\in\mathbb{R}^{n_{1}\times n_{2}}|\ \|Z\|\leq 1\}\), we have_ \[\partial g^{*}(\overline{Y})=N_{\mathbb{B}}(\overline{Y})=\overline{U}\begin{pmatrix}\mathbb{S}_{+}^{p(\overline{Y})}&0\\ 0&0\end{pmatrix}\overline{V}^{T}\quad\text{for any}\quad(\overline{U},\overline{V})\in\mathcal{O}(\overline{Y}), \tag{4.5}\] _where \(\mathbb{S}_{+}^{p}\) is the set of all \(p\times p\) symmetric positive semidefinite matrices and \(p(\overline{Y})\) is defined by_ \[p(\overline{Y})\stackrel{{\text{\tiny def}}}{{=}}\#\{i|\ \sigma_{i}(\overline{Y})=1\}. \tag{4.6}\] Let \(\overline{Y}\in\partial\|\overline{X}\|_{*}\) and \((U,V)\in\mathcal{O}(\overline{X})\). It follows from (4.3) that \(\overline{Y}\) can be represented by \[\overline{Y}=U\begin{pmatrix}\mathbb{I}_{r}&0\\ 0&\overline{W}\end{pmatrix}V^{T} \tag{4.7}\] with some \(\overline{W}\in\mathbb{R}^{(n_{1}-r)\times(n_{2}-r)}\) satisfying \(\|\overline{W}\|\leq 1\). Let \((\hat{U},\widehat{V})\in\mathcal{O}(\overline{W})\) and \(\hat{U}\Sigma\widehat{V}^{T}\) be a full SVD of \(\overline{W}\).
We get from (4.7) that \[\overline{Y}=\overline{U}\begin{pmatrix}\mathbb{I}_{r}&0\\ 0&\Sigma\end{pmatrix}\overline{V}^{T}\quad\text{with}\quad\overline{U}\stackrel{{\text{\tiny def}}}{{=}}(U_{I}\;U_{J}\hat{U})\quad\text{and}\quad\overline{V}\stackrel{{\text{\tiny def}}}{{=}}(V_{I}\;V_{K}\widehat{V}). \tag{4.8}\] Observe that \(\overline{U}^{T}\overline{U}=\mathbb{I}_{n_{1}}\) and \(\overline{V}^{T}\overline{V}=\mathbb{I}_{n_{2}}\). It follows that \((\overline{U},\overline{V})\in\mathcal{O}(\overline{X})\cap\mathcal{O}(\overline{Y})\), which means that \(\overline{X}\) and \(\overline{Y}\) have a _simultaneous ordered singular value decomposition_ [30, 31] with the orthogonal matrix pair \((\overline{U},\overline{V})\) in the sense that \[\overline{X}=\overline{U}(\text{Diag}\,\sigma(\overline{X}))\overline{V}^{T}\qquad\text{and}\qquad\overline{Y}=\overline{U}(\text{Diag}\,\sigma(\overline{Y}))\overline{V}^{T}, \tag{4.9}\] where \(\sigma(\overline{X})\stackrel{{\text{\tiny def}}}{{=}}\big{(}\sigma_{1}(\overline{X}),\ldots,\sigma_{n_{1}}(\overline{X})\big{)}^{T}\) and \(\text{Diag}\,\sigma(\overline{X})\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}\sigma_{1}(\overline{X})&\ldots&0&0&\ldots&0\\ 0&\ddots&0&0&\ldots&0\\ 0&\ldots&\sigma_{n_{1}}(\overline{X})&0&\ldots&0\end{pmatrix}_{n_{1}\times n_{2}}\). The following result establishes a geometric characterization for strong solutions of problem (4.1). According to [50, Proposition 11], the subdifferential of the nuclear norm function satisfies the _metric subregularity_ (3.7) at any \(\overline{X}\) for any \(Y\in\partial\|\overline{X}\|_{*}\). **Corollary 4.2** (Geometric characterization for strong minima of low-rank optimization problems).: _Suppose that \(\overline{X}\in\Phi^{-1}(\operatorname{int}(\operatorname{dom}h))\) is a minimizer of problem (4.1) with \(\overline{Y}\stackrel{{\text{\tiny def}}}{{=}}-\frac{1}{\mu}\Phi^{*}\nabla h(\Phi\overline{X})\in\partial\|\overline{X}\|_{*}\). Let \((\overline{U},\overline{V})\in\mathcal{O}(\overline{X})\cap\mathcal{O}(\overline{Y})\) be as in (4.9) or (4.8). Then we have_ \[T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\left\{\overline{U}\begin{pmatrix}A&B&0\\ B^{T}&C&0\\ 0&0&0\end{pmatrix}\overline{V}^{T}|\;A\in\mathbb{S}^{r},B\in\mathbb{R}^{r\times(p(\overline{Y})-r)},C\in\mathbb{S}_{+}^{p(\overline{Y})-r}\right\}, \tag{4.10}\] _where \(p(\overline{Y})\) is defined in (4.6) and \(\mathbb{S}^{r}\) is the set of all symmetric matrices of size \(r\times r\). Hence \(\overline{X}\) is a strong solution of (4.1) if and only if_ \[\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\{0\}. \tag{4.11}\] _Consequently, \(\overline{X}\) is a strong solution of (4.1) provided that the following Strong Sufficient Condition holds:_ \[\operatorname{Ker}\Phi\cap\overline{U}\begin{pmatrix}\mathbb{S}^{p(\overline{Y})}&0\\ 0&0\end{pmatrix}\overline{V}^{T}=\{0\}. \tag{4.12}\] **Proof.** By Lemma 4.1, we have \[\partial g^{*}(\overline{Y})=N_{\mathbb{B}}(\overline{Y})=\overline{U}\begin{pmatrix}\mathbb{S}_{+}^{p(\overline{Y})}&0\\ 0&0\end{pmatrix}\overline{V}^{T}.\] As \((\overline{U},\overline{V})\in\mathcal{O}(\overline{X})\), we obtain from (4.2) that \[T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\overline{U}\begin{pmatrix}T_{\mathbb{S}_{+}^{p(\overline{Y})}}\begin{pmatrix}\overline{\Sigma}_{r}&0\\ 0&0\end{pmatrix}&0\\ 0&0\end{pmatrix}\overline{V}^{T},\] which is exactly the right-hand side of (4.10) according to the formula for the contingent cone to \(\mathbb{S}_{+}^{p(\overline{Y})}\) in [8, Example 2.65]. Since \(\partial\|\cdot\|_{*}\) is metrically subregular at \(\overline{X}\) for \(\overline{Y}\) by [50, Proposition 11], it follows from Corollary 3.4 that \(\overline{X}\) is a strong solution of problem (4.1) if and only if \[\operatorname{Ker}\left(\Phi^{*}\nabla^{2}h(\Phi\overline{X})\Phi\right)\cap T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\{0\}.\] Since \(\nabla^{2}h(\Phi\overline{X})\succ 0\), we have \(\operatorname{Ker}\left(\Phi^{*}\nabla^{2}h(\Phi\overline{X})\Phi\right)=\operatorname{Ker}\Phi\). The characterization (4.11) for strong minima at \(\overline{X}\) follows from the above condition. Finally, note from (4.10) that \[T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})\subset\overline{U}\begin{pmatrix}\mathbb{S}^{p(\overline{Y})}&0\\ 0&0\end{pmatrix}\overline{V}^{T}.\] Hence Strong Sufficient Condition (4.12) implies strong minima at \(\overline{X}\) by (4.11). **Remark 4.3**.: The geometric characterization (4.11) for strong minima of problem (4.1) is new. A sufficient condition is indeed obtained from [18, Theorem 12], which considers more general optimization problems involving _spectral functions_. However, their result contains a nontrivial _sigma-term_, which is calculated explicitly in the recent papers [17, 37] for the case of symmetric matrices. Our approach is completely different and involves no sigma-terms. Moreover, our condition is a full characterization of strong minima. Another result about strong minima of problem (4.1) was established in [29, Proposition 12], where it plays an important role in proving the local linear convergence of Forward-Backward algorithms solving problem (4.1). That result mainly states that the so-called _Restricted Injectivity_ and _Nondegeneracy Condition_ are sufficient for strong minima; see also [24, Proposition 4.27] for a similar observation. Let us recall these important conditions here; see further discussions about them in Section 5. Let \((U,V)\in\mathcal{O}(\overline{X})\) and define the _model tangent subspace_ \[\mathbb{T}\stackrel{{\text{\tiny def}}}{{=}}\{U_{I}Y^{T}+XV_{I}^{T}|\ X\in\mathbb{R}^{n_{1}\times r},Y\in\mathbb{R}^{n_{2}\times r}\} \tag{4.13}\] of \(\mathbb{R}^{n_{1}\times n_{2}}\) with dimension \(\dim\mathbb{T}=r(n_{1}+n_{2}-r)\); see, e.g., [15, 16]. The Restricted Injectivity condition means \[\operatorname{Ker}\Phi\cap\mathbb{T}=\{0\}. \tag{4.14}\] And the Nondegeneracy Condition holds when \[\overline{Y}=-\frac{1}{\mu}\Phi^{*}\nabla h(\Phi\overline{X})\in\operatorname{ri}\partial\|\overline{X}\|_{*}, \tag{4.15}\] where \(\operatorname{ri}\partial\|\overline{X}\|_{*}\) is the _relative interior_ of \(\partial\|\overline{X}\|_{*}\); see [41]. The validity of Nondegeneracy Condition (4.15) implies that \(\overline{X}\) is an optimal solution of problem (4.1).
Note that \[\operatorname{ri}\partial\|\overline{X}\|_{*}=\left\{U\begin{pmatrix}\mathbb{I}_{r}&0\\ 0&W\end{pmatrix}V^{T}|\ \|W\|<1\right\}\quad\text{with}\quad r=\operatorname{rank}\,(\overline{X}). \tag{4.16}\] Hence, Nondegeneracy Condition (4.15) means that the number of singular values of \(\overline{Y}\) equal to one, \(p(\overline{Y})\) in (4.6), is exactly the rank of \(\overline{X}\). In this case, Restricted Injectivity (4.14) clearly implies the Strong Sufficient Condition (4.12). Hence, the combination of Restricted Injectivity (4.14) and Nondegeneracy Condition (4.15) is stronger than our Strong Sufficient Condition (4.12). The following result gives a complete picture about strong minima when Nondegeneracy Condition (4.15) occurs. **Corollary 4.4** (Strong minima under Nondegeneracy Condition).: _Suppose that \(\overline{X}\in\Phi^{-1}(\operatorname{int}(\operatorname{dom}h))\) and Nondegeneracy Condition (4.15) holds. Then \(\overline{X}\) is a strong solution of problem (4.1) if and only if the following Strict Restricted Injectivity holds:_ \[\operatorname{Ker}\Phi\cap U_{I}\mathbb{S}^{r}V_{I}^{T}=\{0\}, \tag{4.17}\] _where \(U_{I}\overline{\Sigma}V_{I}^{T}\) is a compact SVD of \(\overline{X}\)._ Proof.: As Nondegeneracy Condition (4.15) holds, \(\overline{X}\) is a solution of problem (4.1). In this case, observe from (4.8) and (4.10) that \(T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=U_{I}\mathbb{S}^{r}V_{I}^{T}.\) The equivalence between strong minima at \(\overline{X}\) and (4.17) follows from Corollary 3.4. As the dimension of the subspace \(U_{I}\mathbb{S}^{r}V_{I}^{T}\) is \(\frac{1}{2}r(r+1)\), which is usually small in low-rank optimization problems, it is likely that condition (4.17) holds when Nondegeneracy Condition (4.15) is satisfied. Although the geometric characterization (4.11) looks simple, checking it in high dimensions is non-trivial. But Strong Sufficient Condition (4.12) and Strict Restricted Injectivity (4.17) can be verified easily. Next we establish some quantitative characterizations for strong minima. Before doing so, we obtain some projection formulas onto the subspaces \(\mathbb{T}\) and \(\mathbb{T}^{\perp}\). For any \(X\in\mathbb{R}^{n_{1}\times n_{2}}\), suppose that \(X\) is represented by block matrices as \[X=\begin{pmatrix}U_{I}&U_{J}\end{pmatrix}\begin{pmatrix}A&B\\ C&D\end{pmatrix}\begin{pmatrix}V_{I}&V_{K}\end{pmatrix}^{T}\quad\text{with}\quad(U,V)\in\mathcal{O}(\overline{X}).\] The projections of \(X\) onto \(\mathbb{T}\) and \(\mathbb{T}^{\perp}\) are computed respectively by \[P_{\mathbb{T}}X=\begin{pmatrix}U_{I}&U_{J}\end{pmatrix}\begin{pmatrix}A&B\\ C&0\end{pmatrix}\begin{pmatrix}V_{I}&V_{K}\end{pmatrix}^{T}\quad\text{and}\quad P_{\mathbb{T}^{\perp}}X=U_{J}DV_{K}^{T}. \tag{4.18}\] The following result provides a formula for the critical cone of the nuclear norm at \(\overline{X}\) for \(\overline{Y}\in\partial\|\overline{X}\|_{*}\). **Proposition 4.5** (Critical cone of the nuclear norm).: _Let \(\overline{Y}\in\partial\|\overline{X}\|_{*}\) and \((\overline{U},\overline{V})\in\mathcal{O}(\overline{X})\cap\mathcal{O}(\overline{Y})\) be as in (4.2) and (4.8). Define \(H\stackrel{{\text{\tiny def}}}{{=}}\{k\in\{r+1,\ldots,n_{1}\}|\ \sigma_{k}(\overline{Y})=1\}\)._
Then the critical cone \(\mathcal{C}(\overline{X},\overline{Y})\) of \(\|\cdot\|_{*}\) at \(\overline{X}\) for \(\overline{Y}\) is computed by_ \[\mathcal{C}(\overline{X},\overline{Y})\stackrel{{\text{\tiny def}}}{{=}}\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ d\|\overline{X}\|_{*}(W)=\langle\overline{Y},W\rangle\}=\left\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ P_{\mathbb{T}^{\perp}}W\in\overline{U}_{H}\mathbb{S}_{+}^{|H|}\overline{V}_{H}^{T}\right\}, \tag{4.19}\] _where \(\overline{U}_{H}\) and \(\overline{V}_{H}\) are the submatrices of the columns of \(\overline{U}\) and \(\overline{V}\) indexed by \(H\), respectively._ Proof.: For any \(W\in\mathbb{R}^{n_{1}\times n_{2}}\), it is well-known from convex analysis [41] that \[d\|\overline{X}\|_{*}(W)=\sup_{Y\in\partial\|\overline{X}\|_{*}}\langle Y,W\rangle. \tag{4.20}\] This together with (4.3) and (4.18) implies that \[\begin{array}{ll}d\|\overline{X}\|_{*}(W)&=\sup_{Y\in\partial\|\overline{X}\|_{*}}\langle Y,P_{\mathbb{T}}W+P_{\mathbb{T}^{\perp}}W\rangle=\sup_{Y\in\partial\|\overline{X}\|_{*}}\langle P_{\mathbb{T}}Y,W\rangle+\langle Y,P_{\mathbb{T}^{\perp}}W\rangle\\ &=\langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}\end{array} \tag{4.21}\] with \(E\stackrel{{\text{\tiny def}}}{{=}}U_{I}V_{I}^{T}\). As \(\overline{Y}\in\partial\|\overline{X}\|_{*}\), we have \(W\in\mathcal{C}(\overline{X},\overline{Y})\) if and only if \[\langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}=\langle P_{\mathbb{T}}\overline{Y},W\rangle+\langle P_{\mathbb{T}^{\perp}}\overline{Y},W\rangle=\langle E,W\rangle+\langle P_{\mathbb{T}^{\perp}}\overline{Y},P_{\mathbb{T}^{\perp}}W\rangle,\] which means \(\|P_{\mathbb{T}^{\perp}}W\|_{*}=\langle P_{\mathbb{T}^{\perp}}\overline{Y},P_{\mathbb{T}^{\perp}}W\rangle\). By Lemma 4.1 (note that \(\|P_{\mathbb{T}^{\perp}}\overline{Y}\|\leq 1\)), we have \(P_{\mathbb{T}^{\perp}}W\in\partial g^{*}(P_{\mathbb{T}^{\perp}}\overline{Y})\), or equivalently \(P_{\mathbb{T}^{\perp}}W\in\overline{U}_{H}\mathbb{S}_{+}^{|H|}\overline{V}_{H}^{T}\). The proof is complete. Next, we construct the main result of this section, which contains a quantitative characterization for strong minima. A similar result for group-sparsity minimization problems was recently established in [24, Theorem 5.3]. **Theorem 4.6** (Characterizations for strong minima of low-rank optimization problems).: _Suppose that \(\overline{X}\in\Phi^{-1}(\operatorname{int}(\operatorname{dom}h))\) is a minimizer of problem (4.1) and \(\overline{Y}=-\frac{1}{\mu}\Phi^{*}\nabla h(\Phi\overline{X})\) with decomposition (4.2) and (4.8).
The following are equivalent:_ * **(i)** \(\overline{X}\) _is a strong solution to problem (4.1)._ * **(ii)** \(\operatorname{Ker}\Phi\cap\mathcal{E}\cap\mathcal{C}=\{0\}\) _with_ \[\mathcal{E}\stackrel{{\text{\tiny def}}}{{=}}\left\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ P_{\mathbb{T}}W\in\overline{U}\begin{pmatrix}A&B&0\\ B^{T}&0&0\\ 0&0&0\end{pmatrix}\overline{V}^{T},A\in\mathbb{S}^{r},B\in\mathbb{R}^{r\times(p(\overline{Y})-r)}\right\}, \tag{4.22}\] \[\mathcal{C}\stackrel{{\text{\tiny def}}}{{=}}\left\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ \langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}=0\right\}\qquad\text{with}\qquad E\stackrel{{\text{\tiny def}}}{{=}}U_{I}V_{I}^{T}. \tag{4.23}\] * **(iii)** _Condition (a) together with either (b) or (c) below is satisfied:_ * **(a)** (Strong Restricted Injectivity): \(\operatorname{Ker}\Phi\cap\mathcal{E}\cap\mathbb{T}=\{0\}\)_._ * **(b)** (Strong Nondegenerate Source Condition): _There exists_ \(Y\in\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp}\) _such that_ \(Y=U\begin{pmatrix}\mathbb{I}_{r}&0\\ 0&Z\end{pmatrix}V^{T}\) _and_ \(\|Z\|<1\)_._ * **(c)** (Analysis Strong Source Condition): _The_ Strong Source Coefficient \(\zeta(\overline{X})\)_, which is the optimal value of the following spectral norm optimization problem_ \[\min_{Z\in\mathbb{R}^{(n_{1}-r)\times(n_{2}-r)}}\qquad\|Z\|\qquad\text{subject to}\qquad\mathcal{M}(U_{J}ZV_{K}^{T})=-\mathcal{M}E \tag{4.24}\] _is smaller than_ \(1\)_, where_ \(\mathcal{M}\) _is a linear operator such that_ \(\operatorname{Im}\mathcal{M}^{*}=\operatorname{Ker}\Phi\cap\mathcal{E}\)_._ **Proof.** Let us verify the equivalence between (i) and (ii). By Corollary 4.2, it suffices to show that \[\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\operatorname{Ker}\Phi\cap\mathcal{E}\cap\mathcal{C}. \tag{4.25}\] By Proposition 4.5 and Corollary 4.2, we have \[T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\mathcal{E}\cap\mathcal{C}(\overline{X},\overline{Y}). \tag{4.26}\] As \(\overline{Y}\in\operatorname{Im}\Phi^{*}\), note from (4.19) and (4.21) that \[\operatorname{Ker}\Phi\cap\mathcal{C}(\overline{X},\overline{Y})=\operatorname{Ker}\Phi\cap\mathcal{C}.\] This together with (4.26) verifies (4.25) and also the equivalence between (i) and (ii). Next, let us verify the implication [(ii)\(\Rightarrow\)(iii)]. Suppose that (ii) (or (i)) is satisfied. Note from the projection formula (4.18) that \(\mathcal{E}\cap\mathbb{T}\subset T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})\). It follows from Corollary 4.2 that the Strong Restricted Injectivity holds. Since \(\overline{Y}\in\partial\|\overline{X}\|_{*}\cap\operatorname{Im}\Phi^{*}\), we obtain from (4.20) and (4.21) that \[\langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}=dg(\overline{X})(W)\geq\langle\overline{Y},W\rangle=0\quad\text{for any}\quad W\in\operatorname{Ker}\Phi.\] Condition (ii) means that \[c\stackrel{{\text{\tiny def}}}{{=}}\min\{\langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}|\ W\in\operatorname{Ker}\Phi\cap\mathcal{E},\|W\|_{*}=1\}>0,\] which implies that \[k(W)\stackrel{{\text{\tiny def}}}{{=}}\langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}\geq c\|W\|_{*}\quad\text{for all}\quad W\in\operatorname{Ker}\Phi\cap\mathcal{E}. \tag{4.27}\] As \(\operatorname{Im}\mathcal{M}^{*}=\operatorname{Ker}\Phi\cap\mathcal{E}\), for any \(W\in\operatorname{Ker}\Phi\cap\mathcal{E}\) we write \(W=\mathcal{M}^{*}Y\) and derive from (4.27) that \[(1-c)\|P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}Y\|_{*}\geq\|P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}Y\|_{*}-c\|\mathcal{M}^{*}Y\|_{*}\geq-\langle E,\mathcal{M}^{*}Y\rangle=-\langle\mathcal{M}E,Y\rangle. \tag{4.28}\] Since \(\operatorname{Im}\mathcal{M}^{*}\subset\operatorname{Ker}\Phi\), we have \(\operatorname{Im}\Phi^{*}\subset\operatorname{Ker}\mathcal{M}\) and thus \(\overline{Y}\in\operatorname{Ker}\mathcal{M}\). It follows that \[0=\mathcal{M}\overline{Y}=\mathcal{M}P_{\mathbb{T}}\overline{Y}+\mathcal{M}P_{\mathbb{T}^{\perp}}\overline{Y}=\mathcal{M}E+\mathcal{M}P_{\mathbb{T}^{\perp}}\overline{Y}.\] This together with (4.28) implies that \[(1-c)\|P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}Y\|_{*}\geq\langle\mathcal{M}P_{\mathbb{T}^{\perp}}\overline{Y},Y\rangle=\langle P_{\mathbb{T}^{\perp}}\overline{Y},\mathcal{M}^{*}Y\rangle=\langle P_{\mathbb{T}^{\perp}}\overline{Y},P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}Y\rangle.\] Define \(\mathbb{B}_{*}\stackrel{{\text{\tiny def}}}{{=}}\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ \|W\|_{*}\leq 1\}\), the unit ball with respect to the nuclear norm. We obtain from the latter and the classical minimax theorem [41, Corollary 37.3.2] that \[\begin{array}{rl}1-c&\geq\sup_{W\in\mathbb{B}_{*}}\langle P_{\mathbb{T}^{\perp}}\overline{Y},W\rangle-\iota_{\operatorname{Im}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}}(W)\\ &=\sup_{W\in\mathbb{B}_{*}}\inf_{X\in\operatorname{Ker}\mathcal{M}P_{\mathbb{T}^{\perp}}}\langle P_{\mathbb{T}^{\perp}}\overline{Y}+X,W\rangle\\ &=\inf_{X\in\operatorname{Ker}\mathcal{M}P_{\mathbb{T}^{\perp}}}\sup_{W\in\mathbb{B}_{*}}\langle P_{\mathbb{T}^{\perp}}\overline{Y}+X,W\rangle\\ &=\inf_{X\in\operatorname{Ker}\mathcal{M}P_{\mathbb{T}^{\perp}}}\|P_{\mathbb{T}^{\perp}}\overline{Y}+X\|.\end{array}\] Hence there exists \(X_{0}\in\operatorname{Ker}\mathcal{M}P_{\mathbb{T}^{\perp}}\) such that \(\|P_{\mathbb{T}^{\perp}}\overline{Y}+X_{0}\|<1\). Due to the projection formula (4.18), observe that \[1>\|P_{\mathbb{T}^{\perp}}\overline{Y}+X_{0}\|\geq\|P_{\mathbb{T}^{\perp}}(P_{\mathbb{T}^{\perp}}\overline{Y}+X_{0})\|=\|P_{\mathbb{T}^{\perp}}(\overline{Y}+X_{0})\|.\] Define \(Y_{0}=\overline{Y}+P_{\mathbb{T}^{\perp}}X_{0}\); we have \[\mathcal{M}Y_{0}=\mathcal{M}\overline{Y}+\mathcal{M}P_{\mathbb{T}^{\perp}}X_{0}=0.\] Note that \(\operatorname{Ker}\mathcal{M}=\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp}\), \(\overline{Y}\in\operatorname{Im}\Phi^{*}\subset\operatorname{Ker}\mathcal{M}\), and \(X_{0}\in\operatorname{Ker}\mathcal{M}P_{\mathbb{T}^{\perp}}\). It follows that \(Y_{0}\in\operatorname{Ker}\mathcal{M}=\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp}\). Moreover, observe that \(P_{\mathbb{T}}Y_{0}=P_{\mathbb{T}}\overline{Y}=E\) and \(\|P_{\mathbb{T}^{\perp}}Y_{0}\|<1\). Thus, \(Y_{0}\) satisfies the condition in (b). As \(\operatorname{Ker}\mathcal{M}=\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp}\), conditions (b) and (c) are equivalent, which ensures the implication [(ii)\(\Rightarrow\)(iii)]. It remains to justify the implication [(iii)\(\Rightarrow\)(ii)]. Suppose that the Strong Restricted Injectivity (a) and the Strong Nondegenerate Source Condition (b) hold with some \(Y_{0}\in\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp}\) satisfying the condition in (b). Pick any \(W\in\operatorname{Ker}\Phi\cap\mathcal{E}\cap\mathcal{C}\subset\operatorname{Im}\mathcal{M}^{*}\).
As \(Y_{0}\in\operatorname{Ker}\mathcal{M}\), we have \(\langle Y_{0},W\rangle=0\). It follows that \[\begin{array}{rl}0=\langle E,W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}&=\langle P_{\mathbb{T}}Y_{0},W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}\\ &=\langle Y_{0},W\rangle-\langle P_{\mathbb{T}^{\perp}}Y_{0},W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}\\ &=-\langle P_{\mathbb{T}^{\perp}}Y_{0},P_{\mathbb{T}^{\perp}}W\rangle+\|P_{\mathbb{T}^{\perp}}W\|_{*}\\ &\geq(1-\|P_{\mathbb{T}^{\perp}}Y_{0}\|)\|P_{\mathbb{T}^{\perp}}W\|_{*}.\end{array}\] Since \(\|P_{\mathbb{T}^{\perp}}Y_{0}\|<1\), we have \(P_{\mathbb{T}^{\perp}}W=0\), i.e., \(W\in\mathbb{T}\). This implies that \(W=0\) due to the Strong Restricted Injectivity (a). The proof is complete. **Remark 4.7**.: The Strong Restricted Injectivity (a) means that the linear operator \(\Phi\) is injective on the subspace \(\mathcal{E}\cap\mathbb{T}\). It is similar to the condition of the same name in [24, Theorem 5.3], which is used to characterize unique (strong) solutions of _group-sparsity_ optimization problems. This condition is adapted from the Restricted Injectivity (4.14) in [25]; see also [14, 15, 16] for the case of nuclear norm minimization problems. The Strong Restricted Injectivity is certainly weaker than the Restricted Injectivity. The Strong Nondegenerate Source Condition and Analysis Strong Source Condition also inherit the same terminologies introduced in [24, Theorem 5.3] for group-sparsity optimization problems. In [14, 16, 25, 46, 47], the Nondegenerate Source Condition at \(\overline{X}\) means the existence of a _dual certificate_ \(Y_{0}\in\operatorname{Im}\Phi^{*}\cap\partial\|\overline{X}\|_{*}\) satisfying \(\|P_{\mathbb{T}^{\perp}}Y_{0}\|<1\), which is equivalent to \[\operatorname{Im}\Phi^{*}\cap\operatorname{ri}\partial\|\overline{X}\|_{*}\neq\emptyset. \tag{4.29}\] This condition is weaker than the Nondegeneracy Condition (4.15). In the case of \(\ell_{1}\) optimization, it is well-known that the Restricted Injectivity and Nondegenerate Source Condition together characterize solution uniqueness [25]. For nuclear norm minimization problems, it was shown in [15, 16] that they are sufficient for solution uniqueness of problems (1.1) and (4.1); see also [46, 47] for more general convex optimization problems. It is worth noting that they are not necessary conditions for solution uniqueness; see, e.g., [24, Example 4.15]. One hidden reason is that the nuclear norm is not _polyhedral_. Recently, [24] showed that these two conditions characterize the so-called _sharp minima_ of (5.1), a property lying somewhere between solution uniqueness and strong minima; see our Remark 5.18 for further discussion. Due to [24, Proposition 4.27], they are sufficient for a strong solution of problem (4.1). This fact can also be obtained from Theorem 4.6 by observing that our Strong Nondegenerate Source Condition is weaker than the Nondegenerate Source Condition, since the Strong Nondegenerate Source Condition, which involves the set \(\mathcal{E}\), means \[(\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp})\cap\operatorname{ri}\partial\|\overline{X}\|_{*}\neq\emptyset \tag{4.30}\] due to (4.16).
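The membership and nondegeneracy tests just discussed are directly computable. The following numpy sketch is one possible implementation of the two scalar tests in Lemma 4.1 (condition (4.4)) together with the gauge \(\|P_{\mathbb{T}^{\perp}}Y\|\) from (4.18), whose being smaller than \(1\) is exactly the requirement on a dual certificate in (4.29); the function name and the tolerance handling are our own conventions, not from the paper.

```python
import numpy as np

def dual_certificate_checks(X, Y, tol=1e-9):
    """Test Y in the subdifferential of ||X||_* via (4.4), and return the
    gauge ||P_{T^perp} Y|| from (4.18).  A numerical sketch only: rank and
    membership decisions here are tolerance-dependent."""
    U, s, Vt = np.linalg.svd(X)
    r = int(np.sum(s > tol))                       # numerical rank of X
    UI, VI = U[:, :r], Vt[:r, :].T                 # compact SVD factors of X
    in_subdiff = (np.linalg.norm(Y, 2) <= 1 + tol              # ||Y|| <= 1
                  and abs(np.trace(Y.T @ X) - s.sum()) <= tol)  # <Y,X> = ||X||_*
    # P_{T^perp} Y = (I - U_I U_I^T) Y (I - V_I V_I^T), cf. (4.18)
    PU = np.eye(X.shape[0]) - UI @ UI.T
    PV = np.eye(X.shape[1]) - VI @ VI.T
    return in_subdiff, np.linalg.norm(PU @ Y @ PV, 2)
```

If the second returned value is smaller than \(1\) for some \(Y\in\operatorname{Im}\Phi^{*}\) passing the first test, the Nondegenerate Source Condition (4.29) holds; the next remark describes the analogous finite-dimensional rank tests for the strong conditions.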
**Remark 4.8** (Checking Strong Restricted Injectivity, Strong Sufficient Condition (4.12), and constructing the linear operator \(\mathcal{M}\)).: To check the Strong Restricted Injectivity, observe first that \[\mathcal{E}=\left\{\overline{U}\begin{pmatrix}A&B&0\\ B^{T}&C&D\\ 0&E&F\end{pmatrix}\overline{V}^{T}\in\mathbb{R}^{n_{1}\times n_{2}}|\ A\in\mathbb{S}^{r},B\in\mathbb{R}^{r\times(p-r)}\right\},\] which is a subspace of \(\mathbb{R}^{n_{1}\times n_{2}}\) with dimension \(q\stackrel{{\text{\tiny def}}}{{=}}\frac{r(r+1)}{2}+r(p-r)+(n_{1}-r)(n_{2}-r)\). Moreover, the restriction of \(\mathcal{E}\) to \(\mathbb{T}\), \[\mathcal{E}\cap\mathbb{T}=\left\{\overline{U}\begin{pmatrix}A&B&0\\ B^{T}&0&0\\ 0&0&0\end{pmatrix}\overline{V}^{T}\in\mathbb{R}^{n_{1}\times n_{2}}|\ A\in\mathbb{S}^{r},B\in\mathbb{R}^{r\times(p-r)}\right\}, \tag{4.31}\] is also a subspace of \(\mathbb{R}^{n_{1}\times n_{2}}\), with dimension \(s\stackrel{{\text{\tiny def}}}{{=}}\frac{r(r+1)}{2}+r(p-r)\). The set \(\overline{U}\begin{pmatrix}\mathbb{S}^{p}&0\\ 0&0\end{pmatrix}\overline{V}^{T}\) in Strong Sufficient Condition (4.12) is another subspace of \(\mathbb{R}^{n_{1}\times n_{2}}\), with dimension \(l\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{2}p(p+1)\). Suppose that \(\{W_{1},\ldots,W_{s}\}\) form a basis of \(\mathcal{E}\cap\mathbb{T}\), \(\{W_{1},\ldots,W_{l}\}\) form a basis of \(\overline{U}\begin{pmatrix}\mathbb{S}^{p}&0\\ 0&0\end{pmatrix}\overline{V}^{T}\), and \(\{W_{1},\ldots,W_{q}\}\) form a basis of \(\mathcal{E}\). For any \(W\in\mathcal{E}\), we write \(W=\lambda_{1}W_{1}+\ldots+\lambda_{q}W_{q}\) and obtain that \[\Phi(W)=\lambda_{1}\Phi(W_{1})+\ldots+\lambda_{q}\Phi(W_{q}). \tag{4.32}\] Define \(\Psi\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}\Phi(W_{1})&\ldots&\Phi(W_{q})\end{pmatrix}\) to be an \(m\times q\) matrix and \(\Psi_{s}\), \(\Psi_{l}\) to be the submatrices of the first \(s\) and \(l\) columns of \(\Psi\), respectively. By (4.32), the Strong Restricted Injectivity is equivalent to the condition that \(\operatorname{Ker}\Psi_{s}=\{0\}\), i.e., \(\operatorname{rank}\Psi_{s}=s\). Similarly, the Strong Sufficient Condition (4.12) means \(\operatorname{rank}\Psi_{l}=l\). Next let us discuss how to construct the operator \(\mathcal{M}\). Let \(\widehat{U}\Lambda\widehat{V}^{T}\) be an SVD of \(\Psi\) with \(k=\operatorname{rank}\Psi\). Define \(\widehat{V}_{G}\) to be the \(q\times(q-k)\) submatrix of \(\widehat{V}\), where \(G=\{k+1,\ldots,q\}\). Note that \[\operatorname{Im}\widehat{V}_{G}=\operatorname{Ker}\Psi.\] Determine the linear operator \(\mathcal{M}:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{q-k}\) by \[\mathcal{M}X\stackrel{{\text{\tiny def}}}{{=}}\widehat{V}_{G}^{T}\begin{pmatrix}\langle W_{1},X\rangle,\ldots,\langle W_{q},X\rangle\end{pmatrix}^{T}\quad\text{for all}\quad X\in\mathbb{R}^{n_{1}\times n_{2}}. \tag{4.33}\] It is easy to check that \(\operatorname{Im}\mathcal{M}^{*}=\operatorname{Ker}\Phi\cap\mathcal{E}\). **Remark 4.9** (Checking the Analysis Strong Source Condition without using the linear operator \(\mathcal{M}\)).: To verify the Analysis Strong Source Condition, Theorem 4.6 suggests solving the optimization problem (4.24), which involves the linear operator \(\mathcal{M}\). To avoid the computation of \(\mathcal{M}\), note first that the constraint in (4.24) means \(U_{J}ZV_{K}^{T}+E\in\operatorname{Ker}\mathcal{M}=\operatorname{Im}\Phi^{*}+\mathcal{E}^{\perp}\).
Suppose that \(\mathcal{N}\) is a linear operator with \(\operatorname{Ker}\mathcal{N}=\operatorname{Im}\Phi^{*}\), i.e., \(\operatorname{Im}\mathcal{N}^{*}=\operatorname{Ker}\Phi\). Then the latter condition is equivalent to \[\mathcal{N}(U_{J}ZV_{K}^{T}+E+W)=0\quad\text{for some}\quad W\in\mathcal{E}^{\perp},\] where \(\mathcal{E}^{\perp}\) is computed by \[\mathcal{E}^{\perp}=\left\{U\begin{pmatrix}A&B&C\\ -B^{T}&0&0\\ D&0&0\end{pmatrix}V^{T}|\ A\in\mathbb{V}_{r},B\in\mathbb{R}^{r\times(p-r)},C\in\mathbb{R}^{r\times(n_{2}-p)},D\in\mathbb{R}^{(n_{1}-p)\times r}\right\}, \tag{4.34}\] which is a subspace of \(\mathbb{T}\) with dimension \(\frac{1}{2}r(r-1)+r(n_{1}+n_{2}-p-r)\); here \(\mathbb{V}_{r}\subset\mathbb{R}^{r\times r}\) is the set of all skew-symmetric matrices. Hence problem (4.24) is equivalent to the following one: \[\min_{Z\in\mathbb{R}^{(n_{1}-r)\times(n_{2}-r)},W\in\mathbb{R}^{n_{1}\times n_{2}}}\qquad\|Z\|\quad\text{subject to}\quad\mathcal{N}(U_{J}ZV_{K}^{T}+W)=-\mathcal{N}E\quad\text{and}\quad W\in\mathcal{E}^{\perp}. \tag{4.35}\] The price to pay is that this optimization problem is larger than the one in (4.24), but finding the linear operator \(\mathcal{N}\) is much easier than finding \(\mathcal{M}\). **Corollary 4.10** (Quantitative sufficient condition for strong solutions).: _Suppose that \(\overline{X}\in\Phi^{-1}(\operatorname{int}(\operatorname{dom}h))\) is a minimizer of problem (4.1). Then \(\overline{X}\) is a strong solution provided that the Strong Restricted Injectivity holds and_ \[\gamma(\overline{X})\stackrel{{\text{\tiny def}}}{{=}}\|P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}(\mathcal{M}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*})^{-1}\mathcal{M}E\|<1, \tag{4.36}\] _where \(\mathcal{M}\) is the linear operator satisfying \(\operatorname{Im}\mathcal{M}^{*}=\operatorname{Ker}\Phi\cap\mathcal{E}\)._ **Proof.** Suppose that Strong Restricted Injectivity holds at the minimizer \(\overline{X}\). We consider the following linear equation: \[\mathcal{M}P_{\mathbb{T}^{\perp}}Y=-\mathcal{M}E\quad\text{for}\quad Y\in\mathbb{R}^{n_{1}\times n_{2}}. \tag{4.37}\] As \(\overline{Y}\in\partial\|\overline{X}\|_{*}\cap\operatorname{Im}\Phi^{*}\subset\partial\|\overline{X}\|_{*}\cap\operatorname{Ker}\mathcal{M}\), we have \[0=\mathcal{M}\overline{Y}=\mathcal{M}(P_{\mathbb{T}}\overline{Y}+P_{\mathbb{T}^{\perp}}\overline{Y})=\mathcal{M}E+\mathcal{M}P_{\mathbb{T}^{\perp}}\overline{Y}.\] It follows that \(\overline{Y}\) is a solution to (4.37). Another solution of (4.37) is \[\widehat{Y}\stackrel{{\text{\tiny def}}}{{=}}-(\mathcal{M}P_{\mathbb{T}^{\perp}})^{\dagger}\mathcal{M}E, \tag{4.38}\] where \((\mathcal{M}P_{\mathbb{T}^{\perp}})^{\dagger}\) is the Moore-Penrose generalized inverse of \(\mathcal{M}P_{\mathbb{T}^{\perp}}\). Next we claim that \(\mathcal{M}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}:\operatorname{Im}\mathcal{M}\to\operatorname{Im}\mathcal{M}\) is a bijective mapping. Indeed, suppose that \(\mathcal{M}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}z=0\) for some \(z\in\operatorname{Im}\mathcal{M}\); then \[\|P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}z\|_{F}^{2}=\langle\mathcal{M}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}z,z\rangle=0.\] It follows that \(\mathcal{M}^{*}z\in\mathbb{T}\cap\operatorname{Im}\mathcal{M}^{*}=\{0\}\) by the Strong Restricted Injectivity, which implies \(z\in\operatorname{Ker}\mathcal{M}^{*}\cap\operatorname{Im}\mathcal{M}=\{0\}\). Thus \(\mathcal{M}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}\) is injective on \(\operatorname{Im}\mathcal{M}\). As the operator is self-adjoint, it is bijective.
We obtain that \[\widehat{Y}=-P_{\mathbb{T}^{\perp}}\mathcal{M}^{*}(\mathcal{M}P_{\mathbb{T}^{\perp}}\mathcal{M}^{*})^{-1}\mathcal{M}E\in\mathbb{T}^{\perp}.\] By the decomposition (4.18), we may write \[\widehat{Y}=U\begin{pmatrix}0&0\\ 0&\widehat{Z}\end{pmatrix}V^{T}\quad\text{for some}\quad\widehat{Z}\in\mathbb{R}^{(n_{1}-r)\times(n_{2}-r)}.\] Observe from (4.24) that \[\zeta(\overline{X})\leq\|\widehat{Z}\|=\|\widehat{Y}\|=\gamma(\overline{X}).\] If \(\gamma(\overline{X})<1\), we have \(\zeta(\overline{X})<1\), and \(\overline{X}\) is a strong solution due to Theorem 4.6. \(\square\) ## 5 Characterizations for strong minima of nuclear norm minimization problems In this section, let us consider the nuclear norm minimization problem (1.1): \[\min_{X\in\mathbb{R}^{n_{1}\times n_{2}}}\quad\|X\|_{*}\quad\text{subject to}\quad\Phi X=M_{0}, \tag{5.1}\] where \(\Phi:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{m}\) is a linear operator (\(n_{1}\leq n_{2}\)) and \(M_{0}\in\mathbb{R}^{m}\) is a known observation. This is a particular case of problem (3.18) with \(g(X)=\|X\|_{*}\) for \(X\in\mathbb{R}^{n_{1}\times n_{2}}\) and \(K=\{M_{0}\}\). Note that \(X_{0}\) is a solution of this problem if and only if \(\Phi X_{0}=M_{0}\) and \[0\in\partial\|X_{0}\|_{*}+N_{\Phi^{-1}(M_{0})}(X_{0})=\partial\|X_{0}\|_{*}+\operatorname{Im}\Phi^{*},\] which is equivalent to the existence of a _dual certificate_ \(Y\in\operatorname{Im}\Phi^{*}\cap\partial\|X_{0}\|_{*}\). Define \[\Delta(X_{0})\stackrel{{\text{\tiny def}}}{{=}}\operatorname{Im}\Phi^{*}\cap\partial\|X_{0}\|_{*}\] to be the set of all dual certificates. **Lemma 5.1**.: _Let \(\Omega\) be a nonempty closed subset of \(\mathbb{R}^{n_{1}\times n_{2}}\). Suppose that \(U_{1},U_{2}\in\mathbb{R}^{n_{1}\times n_{1}}\) and \(V_{1},V_{2}\in\mathbb{R}^{n_{2}\times n_{2}}\) are orthogonal matrices satisfying \(U_{1}\Omega V_{1}^{T}\supset U_{2}\Omega V_{2}^{T}\). Then we have_ \[U_{1}\Omega V_{1}^{T}=U_{2}\Omega V_{2}^{T}. \tag{5.2}\] Proof. Define \(U=U_{1}^{T}U_{2}\) and \(V=V_{1}^{T}V_{2}\). It is easy to check that \(U\) and \(V\) are orthogonal matrices and that \[U\Omega V^{T}\subset\Omega. \tag{5.3}\] Define the mapping \(\varphi:\Omega\to\Omega\) by \(\varphi(X)=UXV^{T}\) for all \(X\in\Omega\). Note that \(\varphi\) is an _isometry_ with respect to the spectral metric \(d(X,Y)=\|X-Y\|\) for all \(X,Y\in\Omega\). Indeed, we have \[d(\varphi(X),\varphi(Y))=\|U(X-Y)V^{T}\|=\|X-Y\|\quad\text{for all}\quad X,Y\in\Omega.\] Note further that \(\|\varphi(X)\|=\|X\|\) for all \(X\in\Omega\). It follows from (5.3) that \[\varphi(\Omega\cap\overline{\mathbb{B}}_{k}(0))\subset\Omega\cap\overline{\mathbb{B}}_{k}(0)\quad\text{for any}\quad k\in\mathbb{N}.\] Since \(\Omega\cap\overline{\mathbb{B}}_{k}(0)\) is a compact metric space and \(\varphi\) is an isometry, it is well-known that \[\varphi(\Omega\cap\overline{\mathbb{B}}_{k}(0))=\Omega\cap\overline{\mathbb{B}}_{k}(0).\] Hence we have \[\varphi(\Omega)=\varphi\left(\bigcup_{k=1}^{\infty}(\Omega\cap\overline{\mathbb{B}}_{k}(0))\right)=\bigcup_{k=1}^{\infty}\varphi\left(\Omega\cap\overline{\mathbb{B}}_{k}(0)\right)=\bigcup_{k=1}^{\infty}(\Omega\cap\overline{\mathbb{B}}_{k}(0))=\Omega.\] This verifies (5.2) and completes the proof of the lemma. \(\square\) For any \(Y\in\partial\|X_{0}\|_{*}\), recall from (4.6) that \(p(Y)\) is the number of singular values of \(Y\) that are equal to \(1\). It follows from Lemma 4.1 that \(r\stackrel{{\text{\tiny def}}}{{=}}\operatorname{rank}\left(X_{0}\right)\leq p(Y)\leq n_{1}\).
Define the following constant: \[q(X_{0})\stackrel{{\text{\tiny def}}}{{=}}\min\{p(Y)|\ Y\in\Delta(X_{0})\}. \tag{5.4}\] When \(X_{0}\) is a minimizer of problem (5.1), \(q(X_{0})\) is well-defined and bigger than or equal to \(\operatorname{rank}\left(X_{0}\right)\). The following theorem is the main result of this section. **Theorem 5.2** (Characterizations for strong minima of nuclear norm minimization problems).: _Suppose that \(X_{0}\) is an optimal solution of problem (5.1). The following are equivalent:_ **(i)**_\(X_{0}\) is a strong solution of problem (5.1)._ **(ii)**_The following equality holds:_ \[\bigcap_{Y\in\Delta(X_{0})}\left[\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(Y)}(X_{0})\right]=\{0\}. \tag{5.5}\] **(iii)**_There exists a dual certificate \(\overline{Y}\in\Delta(X_{0})\) such that_ \[\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(\overline{Y})}(X_{0})=\{0\}. \tag{5.6}\] **(iv)**_For any \(Y\in\Delta(X_{0})\) satisfying \(p(Y)=q(X_{0})\), condition (5.6) is satisfied._ **Proof.** Note that the nuclear norm function \(\|\cdot\|_{*}\) is second order regular at \(X_{0}\) [18] and it also satisfies the quadratic growth condition at \(X_{0}\), as \(\partial\|\cdot\|_{*}\) is metrically subregular at \(X_{0}\) for any \(Y\in\partial\|X_{0}\|_{*}\); see [50]. By applying Lemma 4.1 and Theorem 3.5 with \(\mathbb{X}=\mathbb{R}^{n_{1}\times n_{2}}\), \(\mathbb{Y}=\mathbb{R}^{m}\), \(g(\cdot)=\|\cdot\|_{*}\), and \(K=\{M_{0}\}\), we obtain that \(X_{0}\) is a strong solution if and only if \[\left[\bigcap_{Y\in\Delta(X_{0})}T_{N_{\mathbb{B}}(Y)}(X_{0})\right]\cap C(X_{0})=\{0\}, \tag{5.7}\] where \(C(X_{0})\) is the critical cone (3.21) computed by \[C(X_{0})=\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ W\in\operatorname{Ker}\Phi,dg(X_{0})(W)=0\}.\] By Proposition 4.5 and formula (4.10), we note that if \(W\in\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(Y)}(X_{0})\) for some \(Y\in\Delta(X_{0})\), then \(W\in\mathcal{C}(X_{0},Y)\), i.e., \(dg(X_{0})(W)=\langle Y,W\rangle=0\). Thus \(W\in C(X_{0})\) and \[\left[\bigcap_{Y\in\Delta(X_{0})}T_{N_{\mathbb{B}}(Y)}(X_{0})\right]\cap C(X_{0})=\left[\bigcap_{Y\in\Delta(X_{0})}T_{N_{\mathbb{B}}(Y)}(X_{0})\right]\cap\operatorname{Ker}\Phi.\] This together with (5.7) verifies the equivalence between (i) and (ii). The implications [(iii)\(\Rightarrow\)(ii)] and [(iv)\(\Rightarrow\)(ii)] are trivial. To justify the converse implications, we first claim the existence of some dual certificate \(\overline{Y}\in\Delta(X_{0})\) such that \(p(\overline{Y})=q(X_{0})\) and \[\bigcap_{Y\in\Delta(X_{0})}\left[T_{N_{\mathbb{B}}(Y)}(X_{0})\right]=T_{N_{\mathbb{B}}(\overline{Y})}(X_{0}). \tag{5.8}\] We prove this by using a popular version of Zorn's lemma [11, Proposition 5.9] for _partially ordered directed sets_. Consider the following partially ordered set (_poset_) \[\mathcal{P}=\left\{T_{N_{\mathbb{B}}(Y)}(X_{0})|\ Y\in\Delta(X_{0})\right\} \tag{5.9}\] with the partial ordering \(\supset\) between sets in \(\mathcal{P}\). Take into account any _downward chain_ \[T_{N_{\mathbb{B}}(Y_{1})}(X_{0})\supset T_{N_{\mathbb{B}}(Y_{2})}(X_{0})\supset\ldots\supset T_{N_{\mathbb{B}}(Y_{k})}(X_{0})\supset\ldots \tag{5.10}\] for a sequence \(\{Y_{k}\}\subset\Delta(X_{0})\). We can find a subsequence \(\{Y_{k_{l}}\}\) of \(\{Y_{k}\}\) such that the numbers \(p(Y_{k_{l}})\) share a common value \(p\geq r\).
According to the computation (4.10), there exist orthogonal matrices \((U_{k_{l}},V_{k_{l}})\in\mathcal{O}(Y_{k_{l}})\cap\mathcal{O}(X_{0})\) such that \[T_{N_{\mathbb{B}}(Y_{k_{l}})}(X_{0})=U_{k_{l}}\Omega_{p}V_{k_{l}}^{T}\quad\text{with}\quad\Omega_{p}\stackrel{{\text{\tiny def}}}{{=}}\left\{\begin{pmatrix}A&B&0\\ B^{T}&C&0\\ 0&0&0\end{pmatrix}\ |\ A\in\mathbb{S}^{r},B\in\mathbb{R}^{r\times(p-r)},C\in\mathbb{S}_{+}^{p-r}\right\}.\] It follows that \[T_{N_{\mathbb{B}}(Y_{k_{l}})}(X_{0})=U_{k_{l}}\Omega_{p}V_{k_{l}}^{T}\supset U_{k_{l+1}}\Omega_{p}V_{k_{l+1}}^{T}=T_{N_{\mathbb{B}}(Y_{k_{l+1}})}(X_{0}). \tag{5.11}\] Since \(\Omega_{p}\) is closed, we obtain from Lemma 5.1 that \[T_{N_{\mathbb{B}}(Y_{k_{l}})}(X_{0})=T_{N_{\mathbb{B}}(Y_{k_{l+1}})}(X_{0})\quad\text{for all}\quad l=1,2,\ldots.\] Hence the chain (5.10) is bounded below by \(T_{N_{\mathbb{B}}(Y_{k_{1}})}(X_{0})\in\mathcal{P}\). This means that every downward chain of \(\mathcal{P}\) has a minimum in \(\mathcal{P}\). Let us show next that the poset \(\mathcal{P}\) is _directed downward_ with the partial ordering \(\supset\) in the sense that for any two elements \(T_{N_{\mathbb{B}}(Y_{1})}(X_{0})\) and \(T_{N_{\mathbb{B}}(Y_{2})}(X_{0})\) of \(\mathcal{P}\) with \(Y_{1},Y_{2}\in\Delta(X_{0})\), there exists \(Y_{3}\in\Delta(X_{0})\) such that \[T_{N_{\mathbb{B}}(Y_{1})}(X_{0})\supset T_{N_{\mathbb{B}}(Y_{3})}(X_{0})\quad\text{and}\quad T_{N_{\mathbb{B}}(Y_{2})}(X_{0})\supset T_{N_{\mathbb{B}}(Y_{3})}(X_{0}). \tag{5.12}\] Indeed, we choose \(Y_{3}=\frac{1}{2}(Y_{1}+Y_{2})\in\Delta(X_{0})\). For any \(W\in T_{N_{\mathbb{B}}(Y_{3})}(X_{0})\), we obtain from (4.10) that \(0=d^{2}g(X_{0}|Y_{3})(W)\). Hence, there exist \(t_{k}\downarrow 0\) and \(W_{k}\to W\) such that \[\frac{1}{k}\geq\frac{g(X_{0}+t_{k}W_{k})-g(X_{0})-0.5t_{k}\langle Y_{1}+Y_{2},W_{k}\rangle}{0.5t_{k}^{2}}=\frac{g(X_{0}+t_{k}W_{k})-g(X_{0})-t_{k}\langle Y_{1},W_{k}\rangle}{t_{k}^{2}}+\frac{g(X_{0}+t_{k}W_{k})-g(X_{0})-t_{k}\langle Y_{2},W_{k}\rangle}{t_{k}^{2}},\] which implies that \[0\geq d^{2}g(X_{0}|Y_{1})(W)+d^{2}g(X_{0}|Y_{2})(W).\] Since \(d^{2}g(X_{0}|Y_{1})(W),d^{2}g(X_{0}|Y_{2})(W)\geq 0\), we obtain from Lemma 3.2 and Lemma 4.1 that \[W\in\operatorname{Ker}d^{2}g(X_{0}|Y_{1})=T_{N_{\mathbb{B}}(Y_{1})}(X_{0})\quad\text{and}\quad W\in\operatorname{Ker}d^{2}g(X_{0}|Y_{2})=T_{N_{\mathbb{B}}(Y_{2})}(X_{0}).\] This clearly verifies the directed condition (5.12). By [11, Proposition 5.9], the directed downward poset \(\mathcal{P}\) has a minimum, in the sense that there exists \(\overline{Y}\in\Delta(X_{0})\) such that (5.8) is valid. The implication [(ii)\(\Rightarrow\)(iii)] follows from (5.8). Next, let us show that \(p(\overline{Y})=q(X_{0})\) and \[T_{N_{\mathbb{B}}(Y)}(X_{0})=T_{N_{\mathbb{B}}(\overline{Y})}(X_{0}) \tag{5.13}\] for any \(Y\in\Delta(X_{0})\) with \(p(Y)=q(X_{0})\). Indeed, pick any \(Y\) satisfying the latter condition; we obtain from (5.8) that \[T_{N_{\mathbb{B}}(Y)}(X_{0})\supset T_{N_{\mathbb{B}}(\overline{Y})}(X_{0}). \tag{5.14}\] Due to the computation (4.10), the maximum ranks of matrices in \(T_{N_{\mathbb{B}}(\overline{Y})}(X_{0})\) and \(T_{N_{\mathbb{B}}(Y)}(X_{0})\) are \(p(\overline{Y})\) and \(p(Y)\), respectively. It follows that \(q(X_{0})\leq p(\overline{Y})\leq p(Y)=q(X_{0})\), which verifies that \(p(\overline{Y})=q(X_{0})\). Moreover, due to (4.10) and (5.14), we get (5.13) from Lemma 5.1. Combining (5.13) and (5.8) ensures the implication [(ii)\(\Rightarrow\)(iv)]. The proof is complete.
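Condition (5.6) can be screened numerically along the lines of Remark 4.8: since (4.10) places \(T_{N_{\mathbb{B}}(Y)}(X_{0})\) inside the subspace \(\overline{U}\begin{pmatrix}\mathbb{S}^{p}&0\\ 0&0\end{pmatrix}\overline{V}^{T}\) with \(p=p(Y)\), it suffices (though it is not necessary) that \(\Phi\) be injective on that subspace. The following numpy sketch implements this rank test; the function name and the representation of \(\Phi\) as an \(m\times n_{1}n_{2}\) matrix acting on vectorized inputs are our own conventions.

```python
import numpy as np

def strong_sufficient_test(Phi_mat, Ubar, Vbar, p, tol=1e-9):
    """Rank test in the spirit of Remark 4.8: returns True when Ker(Phi)
    meets the subspace Ubar [[S^p, 0], [0, 0]] Vbar^T only at 0, which is
    sufficient for (5.6) since T_{N_B(Y)}(X0) lies inside this subspace.
    Phi_mat : (m, n1*n2) matrix acting on row-major vectorized matrices;
    Ubar, Vbar : orthogonal factors of a simultaneous SVD as in (4.9);
    p : the number p(Y) of unit singular values of the dual certificate Y."""
    images = []
    for i in range(p):                      # elementary symmetric matrices E_{ij}
        for j in range(i, p):
            S = np.zeros((Ubar.shape[0], Vbar.shape[0]))
            S[i, j] = S[j, i] = 1.0
            W = Ubar @ S @ Vbar.T           # basis element of the subspace
            images.append(Phi_mat @ W.reshape(-1))
    Psi_l = np.column_stack(images)         # the matrix Psi_l of Remark 4.8
    # full column rank l = p(p+1)/2 <=> trivial intersection with Ker(Phi)
    return np.linalg.matrix_rank(Psi_l, tol) == p * (p + 1) // 2
```

When the test returns True for some dual certificate \(Y\in\Delta(X_{0})\) at a minimizer \(X_{0}\), Theorem 5.2(iii) yields that \(X_{0}\) is a strong solution; when it returns False, nothing is decided, since the exact condition (5.6) involves the smaller cone (4.10).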
**Remark 5.3**.: A sufficient condition for strong minima of nuclear norm minimization (5.1) can be obtained from [18, Theorem 12]. However, their condition has the format of a minimax problem: for any element in the critical cone, there exists some Lagrange multiplier such that a certain second order sufficient condition is satisfied, i.e., the Lagrange multiplier used in the sufficient condition depends on the choice of the element of the critical cone. This is a typical situation; see (2.23) for instance. In our characterizations for strong minima in parts (iii) and (iv) of the above theorem, the existence of the dual certificate \(\overline{Y}\) is independent of the elements of the critical cone. Condition (iii) is close to the _maximin_ situation. Moreover, we know extra information about these dual certificates from (iv): they should have the minimum number of singular values that are equal to \(1\). Similarly to Theorem 4.6, condition (5.6) is equivalent to the combination of Strong Restricted Injectivity and Strong Nondegenerate Source Condition. Let us recall the model tangent subspace (4.13) at \(X_{0}\) here: \[\mathbb{T}_{0}\stackrel{{\text{\tiny def}}}{{=}}\{U_{0}Y^{T}+XV_{0}^{T}|\ X\in\mathbb{R}^{n_{1}\times r},Y\in\mathbb{R}^{n_{2}\times r}\},\] where \(U_{0}\Sigma_{0}V_{0}^{T}\) is a compact SVD of \(X_{0}\). **Corollary 5.4** (Strong Restricted Injectivity and Strong Nondegenerate Source Condition for strong minima).: _Suppose that \(X_{0}\) is a minimizer of problem (5.1). Then \(X_{0}\) is a strong solution of problem (5.1) if and only if both of the following conditions hold:_ 1. Strong Restricted Injectivity: \(\operatorname{Ker}\Phi\cap\mathcal{E}_{0}\cap\mathbb{T}_{0}=\{0\}\)_, where_ \[\mathcal{E}_{0}=\left\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ P_{\mathbb{T}_{0}}W\in U\begin{pmatrix}A&B&0\\ B^{T}&0&0\\ 0&0&0\end{pmatrix}V^{T},A\in\mathbb{S}^{r},B\in\mathbb{R}^{r\times(q(X_{0})-r)}\right\} \tag{5.15}\] _for some_ \((U,V)\in\mathcal{O}(X_{0})\cap\mathcal{O}(Y_{0})\)_, where_ \(Y_{0}\in\Delta(X_{0})\) _satisfies_ \(p(Y_{0})=q(X_{0})\)_._ 2. Strong Nondegenerate Source Condition: _There exists_ \(Y\in\operatorname{Im}\Phi^{*}+\mathcal{E}_{0}^{\perp}\) _such that_ \(Y\in\operatorname{ri}\partial\|X_{0}\|_{*}\)_._ **Remark 5.5** (Sharp minima vs strong minima).: The set \(\mathcal{E}_{0}\) depends only on \(X_{0}\). Indeed, it follows from (5.8) and (4.10) that \[\mathcal{E}_{0}=\left\{W\in\mathbb{R}^{n_{1}\times n_{2}}|\ P_{\mathbb{T}_{0}}W\in P_{\mathbb{T}_{0}}\left(\bigcap_{Y\in\Delta(X_{0})}T_{N_{\mathbb{B}}(Y)}(X_{0})\right)\right\},\] which is a subspace of \(\mathbb{R}^{n_{1}\times n_{2}}\). As discussed after Theorem 4.6, the Strong Restricted Injectivity is weaker than the Restricted Injectivity \[\operatorname{Ker}\Phi\cap\mathbb{T}_{0}=\{0\}. \tag{5.16}\] The Strong Nondegenerate Source Condition is also weaker than the Nondegenerate Source Condition: \[\operatorname{Im}\Phi^{*}\cap\operatorname{ri}\partial\|X_{0}\|_{*}\neq\emptyset. \tag{5.17}\] This condition means \(q(X_{0})=\operatorname{rank}(X_{0})\). The Nondegenerate Source Condition together with the Restricted Injectivity was used in [15, 16] as a sufficient condition for solution uniqueness of problem (5.1) at \(X_{0}\). These two conditions were recently shown to be equivalent to the stronger property of _sharp minima_ at \(X_{0}\) in [24, Theorem 4.6], in the sense that there exists some \(c>0\) such that \[\|X\|_{*}-\|X_{0}\|_{*}\geq c\|X-X_{0}\|\quad\text{for any $X\in\mathbb{R}^{n_{1}\times n_{2}}$ satisfying}\quad\Phi X=M_{0}. \tag{5.18}\] Our Strong Restricted Injectivity and Strong Nondegenerate Source Condition are characterizations of a weaker property, strong minima of problem (5.1). Of course, they can also serve as sufficient conditions for solution uniqueness at \(X_{0}\); see [28] for some recent characterizations of this property. In order to check Nondegenerate Source Condition (5.17), one has to show that the _Source Coefficient_ \(\rho(X_{0})\), the optimal value of the following optimization problem \[\min_{Z\in\mathbb{T}_{0}^{\perp}}\quad\|Z\|\quad\text{subject to}\quad\mathcal{N}Z=-\mathcal{N}E_{0}\quad\text{with}\quad E_{0}\stackrel{{\text{\tiny def}}}{{=}}U_{0}V_{0}^{T}, \tag{5.19}\] is smaller than \(1\), where \(\mathcal{N}\) is a linear operator satisfying \(\operatorname{Ker}\mathcal{N}=\operatorname{Im}\Phi^{*}\); see, e.g., [24, Remark 4.5]. When the Restricted Injectivity (5.16) holds, an upper bound for \(\rho(X_{0})\) is \[\tau(X_{0})\stackrel{{\text{\tiny def}}}{{=}}\|\mathcal{N}_{\mathbb{T}_{0}^{\perp}}^{*}(\mathcal{N}_{\mathbb{T}_{0}^{\perp}}\mathcal{N}_{\mathbb{T}_{0}^{\perp}}^{*})^{-1}\mathcal{N}E_{0}\|\quad\text{with}\quad\mathcal{N}_{\mathbb{T}_{0}^{\perp}}\stackrel{{\text{\tiny def}}}{{=}}\mathcal{N}P_{\mathbb{T}_{0}^{\perp}}. \tag{5.20}\] Hence the condition \(\tau(X_{0})<1\) is sufficient for sharp minima; see, e.g., [24, Corollary 4.8]. This condition is known as the _Analysis Exact Recovery Condition_ in [38] for the case of \(\ell_{1}\) optimization. Another independent condition also used to check solution uniqueness of problem (5.1) is the so-called _Irrepresentability Criterion_ [15, 16, 45]: \[\mathbf{IC}(X_{0})\stackrel{{\text{\tiny def}}}{{=}}\|\Phi_{\mathbb{T}_{0}^{\perp}}^{*}\Phi_{\mathbb{T}_{0}}\left(\Phi_{\mathbb{T}_{0}}^{*}\Phi_{\mathbb{T}_{0}}\right)^{-1}E_{0}\|<1\quad\text{with}\quad\Phi_{\mathbb{T}_{0}}\stackrel{{\text{\tiny def}}}{{=}}\Phi P_{\mathbb{T}_{0}}\quad\text{and}\quad\Phi_{\mathbb{T}_{0}^{\perp}}\stackrel{{\text{\tiny def}}}{{=}}\Phi P_{\mathbb{T}_{0}^{\perp}}. \tag{5.21}\] Note that \(\mathbf{IC}(X_{0})\geq\rho(X_{0})\). Thus \(\mathbf{IC}(X_{0})<1\) also implies that \(X_{0}\) is a sharp solution of problem (5.1). **Remark 5.6** (Descent cone vs tangent cone).: Sharp minima and strong minima of problem (5.1) are sufficient for solution uniqueness, which is a significant property for exact recovery [1, 13, 16]. An important geometric structure used to study solution uniqueness is the _descent cone_ [13] at \(X_{0}\) defined by \[\mathcal{D}(X_{0})\stackrel{{\text{\tiny def}}}{{=}}\text{cone}\left\{X-X_{0}|\ \|X\|_{*}\leq\|X_{0}\|_{*}\right\}. \tag{5.22}\] Indeed, [13] shows that \(X_{0}\) is a unique solution of problem (5.1) if and only if \[\operatorname{Ker}\Phi\cap\mathcal{D}(X_{0})=\{0\}. \tag{5.23}\] Unlike the tangent cones in (5.5) or (5.6), the descent cone \(\mathcal{D}(X_{0})\) may fail to be closed.
Although the direct connection between the descent cone \(\mathcal{D}(X_{0})\) and the tangent cones \(T_{N_{\mathbf{B}}(Y)}(X_{0})\) is not clear, we claim that \[\operatorname{Ker}\Phi\cap\mathcal{D}(X_{0})\subset\bigcap_{Y\in\Delta(X_{0})}\left[\operatorname{Ker}\Phi\cap T_{N_{\mathbf{B}}(Y)}(X_{0})\right] \tag{5.24}\] when \(X_{0}\) is a minimizer of problem (5.1), where \(\Delta(X_{0})=\operatorname{Im}\Phi^{*}\cap\partial\|X_{0}\|_{*}\) is the set of dual certificates at \(X_{0}\). Indeed, for any \(W\in\operatorname{Ker}\Phi\cap\mathcal{D}(X_{0})\), there exists \(\tau>0\) such that \(\|X_{0}+\tau W\|_{*}\leq\|X_{0}\|_{*}\). As \(\Phi(X_{0}+\tau W)=\Phi X_{0}\), we have \(\|X_{0}+\tau W\|_{*}=\|X_{0}\|_{*}\). Pick any \(Y\in\Delta(X_{0})\). In view of (4.4) and the definition of \(Y\), it follows that \[\|X_{0}+\tau W\|_{*}=\|X_{0}\|_{*}=\langle Y,X_{0}\rangle=\langle Y,X_{0}+\tau W\rangle,\] which implies that \(X_{0}+\tau W\in N_{\mathbf{B}}(Y)\) due to (4.4)-(4.5) and thus \(W\in T_{N_{\mathbf{B}}(Y)}(X_{0})\). This verifies inclusion (5.24). Inclusion (5.24) also tells us that condition (5.5) is sufficient for (5.23). This observation is not a surprise in the sense of Theorem 5.2, as strong minima obviously imply solution uniqueness. But solution uniqueness of problem (5.1) does not imply strong minima; see [24, Example 5.11]. Hence, inclusion (5.24) can be strict. Similarly to Corollary 4.4, the following result reveals the role of Strict Restricted Injectivity in strong minima.

**Corollary 5.7** (Strict Restricted Injectivity for strong minima of problem (5.1)).: _Suppose that \(U_{0}\Sigma_{0}V_{0}^{T}\) is a compact SVD of \(X_{0}\) with \(r=\text{rank}\ (X_{0})\). If \(X_{0}\) is a strong solution, then the following Strict Restricted Injectivity is satisfied:_ \[\operatorname{Ker}\Phi\cap U_{0}\mathbb{S}^{r}V_{0}^{T}=\{0\}. \tag{5.25}\] _This condition is also sufficient for \(X_{0}\) to be a strong solution provided that Nondegenerate Source Condition (5.17) holds at \(X_{0}\)._

**Proof.** Note from (4.10) that \[T_{N_{\rm B}(Y)}(X_{0})\supset U_{0}\mathbb{S}^{r}V_{0}^{T}\quad\text{for any}\quad Y\in\Delta(X_{0}).\] If \(X_{0}\) is a strong solution, combining (5.5) with the latter inclusion verifies (5.25). If Nondegenerate Source Condition (5.17) holds at \(X_{0}\), there exists \(Y_{0}\in\operatorname{Im}\Phi^{*}\cap\operatorname{ri}\partial\|X_{0}\|_{*}\). Hence \(\Delta(X_{0})\neq\emptyset\), i.e., \(X_{0}\) is a solution of problem (5.1). It follows from Lemma 4.1 that \(p(Y_{0})=\operatorname{rank}\,(X_{0})\) and from (4.10) that \[T_{N_{\rm B}(Y_{0})}(X_{0})=U_{0}\mathbb{S}^{r}V_{0}^{T}.\] Hence the validity of (5.25) implies that \(X_{0}\) is a strong solution to problem (5.1) due to the equivalence between (i) and (iii) in Theorem 5.2. \(\square\)

Suppose that \(U\begin{pmatrix}\Sigma_{0}&0\\ 0&0\end{pmatrix}V^{T}\) is a full SVD of \(X_{0}\). The model tangent space \(\mathbb{T}_{0}\) at \(X_{0}\) can be represented as \[\mathbb{T}_{0}=\left\{U\begin{pmatrix}A&B\\ C&0\end{pmatrix}V^{T}|\ A\in\mathbb{R}^{r\times r},B\in\mathbb{R}^{r\times(n_{2}-r)},C\in\mathbb{R}^{(n_{1}-r)\times r}\right\}. \tag{5.26}\] It has dimension \(r(n_{1}+n_{2}-r)\). When Restricted Injectivity (5.16) holds, we have \(m\geq r(n_{1}+n_{2}-r)\). Similarly, as the dimension of \(U_{0}\mathbb{S}^{r}V_{0}^{T}\) is \(\frac{1}{2}r(r+1)\), it is necessary for Strict Restricted Injectivity (5.25) that \(m\geq\frac{1}{2}r(r+1)\). Next we show that this bound \(\frac{1}{2}r(r+1)\) for \(m\) is tight.
**Corollary 5.8** (Minimum bound for strong exact recovery).: _Suppose that \(X_{0}\) is an \(n_{1}\times n_{2}\) matrix with rank \(r\). Then one needs at least \(\frac{1}{2}r(r+1)\) measurements \(m\) of \(M_{0}\) so that solving the nuclear norm minimization problem (5.1) recovers \(X_{0}\) as a strong solution._

_Moreover, there exist infinitely many linear operators \(\Phi:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{\frac{1}{2}r(r+1)}\) such that \(X_{0}\) is a strong solution of problem (5.1)._

**Proof.** Suppose that \(U_{0}\Sigma_{0}V_{0}^{T}\) is a compact SVD of \(X_{0}\). Let \(\{A_{1},\dots,A_{s}\}\) with \(s\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{2}r(r+1)\) be any basis of \(U_{0}\mathbb{S}^{r}V_{0}^{T}\). If \(X_{0}\) is a strong solution of problem (5.1), Strict Restricted Injectivity (5.25) holds by Corollary 5.7. It follows that \(\{\Phi(A_{1}),\dots,\Phi(A_{s})\}\) are linearly independent. Hence, we have \(m\geq s\), which verifies the first part. To justify the second part, we construct the linear operator \(\Phi_{s}:\mathbb{R}^{n_{1}\times n_{2}}\to\mathbb{R}^{s}\) as follows: \[\Phi_{s}(X)\stackrel{{\text{\tiny def}}}{{=}}(\langle A_{k},X\rangle)_{1\leq k\leq s}\in\mathbb{R}^{s}\quad\text{for any}\quad X\in\mathbb{R}^{n_{1}\times n_{2}}. \tag{5.27}\] Note that \(\operatorname{Im}\Phi_{s}^{*}=\operatorname{span}\,\{A_{1},\dots,A_{s}\}=U_{0}\mathbb{S}^{r}V_{0}^{T}\). It follows that \(E_{0}=U_{0}V_{0}^{T}\in\operatorname{Im}\Phi_{s}^{*}\cap\operatorname{ri}\partial\|X_{0}\|_{*}\) is a dual certificate that satisfies Nondegenerate Source Condition (5.17). As \[\operatorname{Ker}\Phi_{s}=(\operatorname{Im}\Phi_{s}^{*})^{\perp}=(U_{0}\mathbb{S}^{r}V_{0}^{T})^{\perp},\] we have \(\operatorname{Ker}\Phi_{s}\cap U_{0}\mathbb{S}^{r}V_{0}^{T}=\{0\}\). Hence Strict Restricted Injectivity (5.25) holds. By Corollary 5.7 again, \(X_{0}\) is a strong solution of problem (5.1). \(\square\)

**Remark 5.9** (Lower bounds on the number of measurements for exact recovery).: In the theory of exact recovery, [13] shows that \(m\geq 3r(n_{1}+n_{2}-r)\) random Gaussian measurements suffice to recover \(X_{0}\) exactly with high probability by solving problem (5.1) from observations \(M_{0}=\Phi X_{0}\); see also [16] for a similar result with a different approach. Also in [13, Propositions 4.5 and 4.6], a lower bound on the number of measurements is discussed for exact recovery in _atomic norm_ minimization via the descent cone (5.22) and Terracini's Lemma [26]. The latter is used to obtain an estimate of the dimension of a subspace component of the descent cone. In the case of the nuclear norm, this lower bound is indeed \(\min\{n_{1}n_{2},(r+1)(n_{1}+n_{2})-r\}\); see also [26, Proposition 12.2]. This bound holds for any linear measurement scheme. Our lower bound \(\frac{1}{2}r(r+1)\) for \(m\) is much smaller and depends only on the rank of \(X_{0}\), but it is achieved only for special \(\Phi\).

**Example 5.10**.: Let us consider the following nuclear norm minimization problem \[\min_{X\in\mathbb{R}^{2\times 2}}\|X\|_{*}\quad\text{subject to}\quad\Phi X\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}X_{11}&0\\ 0&X_{22}\end{pmatrix}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}, \tag{5.28}\] which is a matrix completion problem. Setting \(X_{0}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\), we have \[\partial\|X_{0}\|_{*}=\begin{pmatrix}1&0\\ 0&[-1,1]\end{pmatrix}\quad\text{ and }\quad\mathbb{T}_{0}=\left\{\begin{pmatrix}a&b\\ c&0\end{pmatrix}\mid a,b,c\in\mathbb{R}\right\}.
\tag{5.29}\] Moreover, note that \[\operatorname{Ker}\Phi=\left\{\begin{pmatrix}0&b\\ c&0\end{pmatrix}\mid b,c\in\mathbb{R}\right\}\quad\text{ and }\quad\operatorname{Im}\Phi^{*}=\left\{\begin{pmatrix}a&0\\ 0&d\end{pmatrix}\mid a,d\in\mathbb{R}\right\}. \tag{5.30}\] Note that \(\Delta(X_{0})=\operatorname{Im}\Phi^{*}\cap\partial\|X_{0}\|_{*}=\partial\|X_{0}\|_{*}\). Hence \(X_{0}\) is a solution of problem (5.28). However, \(\operatorname{Ker}\Phi\cap\mathbb{T}_{0}=\operatorname{Ker}\Phi\neq\{0\}\), i.e., Restricted Injectivity (5.16) fails. Thus \(X_{0}\) is not a sharp solution of problem (5.28). Note that \(q(X_{0})=1\) and \(Y_{0}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\in\Delta(X_{0})\) with \(p(Y_{0})=1\). We have from (4.10) that \[T_{N_{\mathbb{B}}(Y_{0})}(X_{0})=\left\{\begin{pmatrix}a&0\\ 0&d\end{pmatrix}\mid a,d\in\mathbb{R}\right\}.\] It is clear that \(\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(Y_{0})}(X_{0})=\{0\}\). This shows that \(X_{0}\) is a strong solution by Theorem 5.2. \(\square\)

**Example 5.11** (Checking strong minima numerically).: Let us consider the following matrix completion problem \[\min_{X\in\mathbb{R}^{3\times 3}}\quad\|X\|_{*}\quad\text{subject to}\quad\operatorname{P}_{\Omega}(X)=M_{0}\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}4&2&4\\ 2&1&0\\ 4&0&0\end{pmatrix}, \tag{5.31}\] where \(\operatorname{P}_{\Omega}\) is the projection mapping defined by \[\operatorname{P}_{\Omega}(X)\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}X_{11}&X_{12}&X_{13}\\ X_{21}&X_{22}&0\\ X_{31}&0&0\end{pmatrix}.\] Define \[X_{0}\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}4&2&4\\ 2&1&2\\ 4&2&4\end{pmatrix}\quad\text{ and }\quad U=V\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{3}\begin{pmatrix}2&-2&1\\ 1&2&2\\ 2&1&-2\end{pmatrix}.\] Note that \(U,V\) are orthogonal matrices, \(P_{\Omega}(X_{0})=M_{0}\), \(X_{0}=U\begin{pmatrix}9&0&0\\ 0&0&0\\ 0&0&0\end{pmatrix}V^{T}\) is an SVD of \(X_{0}\), and \(\operatorname{rank}\,(X_{0})=1\). To check whether \(X_{0}\) is a sharp solution of problem (5.31), we compute the constant \(\tau(E_{0})\) in (5.20) or the Source Coefficient \(\rho(E_{0})\) from problem (5.19). In this case the linear operator \(\mathcal{N}\) in (5.19) is chosen as \(P_{\Omega^{\perp}}\), since \(\operatorname{Ker}P_{\Omega^{\perp}}=\operatorname{Im}P_{\Omega}\). With some linear algebra, we calculate \(\tau(E_{0})=1.2>1\) with \(E_{0}=1/9X_{0}\). Moreover, by using the cvx package to solve the spectral norm optimization (5.19), the Source Coefficient \(\rho(E_{0})\) is exactly \(1\), which gives us a solution \(Z_{0}\). Thus \(X_{0}\) is not a sharp solution of problem (5.31). However, note further that \(Y_{0}=Z_{0}+E_{0}\in\operatorname{Ker}P_{\Omega^{\perp}}=\operatorname{Im}P_{\Omega}\) is a dual certificate, which is computed as \[Y_{0}=\begin{pmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{pmatrix}.\] Let us check the condition \[\operatorname{Ker}\Phi\cap T_{N_{\mathbf{B}}(Y_{0})}(X_{0})=\{0\} \tag{5.32}\] in (5.6).
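The cvx computation above is easy to reproduce; the following is a minimal cvxpy sketch of problem (5.19) for this example (our own code, not the original experiments), parameterizing \(Z\in\mathbb{T}_{0}^{\perp}\) through the trailing singular vectors:

```python
import cvxpy as cp
import numpy as np

# Data of Example 5.11; here U = V, and E0 = X0 / 9 = u1 u1^T.
U = np.array([[2, -2, 1], [1, 2, 2], [2, 1, -2]]) / 3.0
E0 = np.outer(U[:, 0], U[:, 0])
unobserved = [(1, 2), (2, 1), (2, 2)]      # complement of Omega (0-indexed)

# Z ranges over T0^perp: Z = U1 D V1^T with U1 = V1 = last two columns of U.
D = cp.Variable((2, 2))
Z = U[:, 1:] @ D @ U[:, 1:].T
constraints = [Z[i, j] == -E0[i, j] for (i, j) in unobserved]

prob = cp.Problem(cp.Minimize(cp.norm(Z, 2)), constraints)  # spectral norm
prob.solve()
print(prob.value)       # approximately 1, so sharpness cannot be certified
print(Z.value + E0)     # an optimal Z yields a dual certificate Y0 = Z + E0
```

We now verify condition (5.32) directly.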
It follows from (4.3) that \[Y_{0}=UU^{T}Y_{0}VV^{T}=U\begin{pmatrix}1&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}V^{T}.\] The SVD of the submatrix \(\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\) is simply \[\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] According to (4.8), \((\overline{U},\overline{V})\in\mathcal{O}(X_{0})\cap\mathcal{O}(Y_{0})\) with \[\overline{U}\stackrel{{\text{\tiny def}}}{{=}}\frac{1}{3}\begin{pmatrix}2&1&-2\\ 1&2&2\\ 2&-2&1\end{pmatrix}\quad\text{and}\quad\overline{V}\stackrel{{\text{\tiny def}}}{{=}}V.\] By formula (4.10), \[T_{N_{\mathbf{B}}(Y_{0})}(X_{0})=\left\{\overline{U}\begin{pmatrix}A&B\\ B^{T}&C\end{pmatrix}\overline{V}^{T}|\ A\in\mathbb{R}^{1\times 1},B\in\mathbb{R}^{1\times 2},C\in\mathsf{S}_{+}^{2}\right\}.\] It follows that \[\mathcal{E}=\left\{\overline{U}\begin{pmatrix}A&B\\ B^{T}&C\end{pmatrix}\overline{V}^{T}|\ A\in\mathbb{R}^{1\times 1},B\in\mathbb{R}^{1\times 2},C\in\mathbb{R}^{2\times 2}\right\}\text{ and }\mathcal{E}^{\perp}=\left\{\overline{U}\begin{pmatrix}0&B\\ -B^{T}&0\end{pmatrix}\overline{V}^{T}|\ B\in\mathbb{R}^{1\times 2}\right\}.\] To verify (5.32), we compute \(\zeta(E_{0})\), the optimal value of problem (4.35): \[\min_{Z\in\mathbb{R}^{2\times 2},W\in\mathbb{R}^{3\times 3}}\quad\|Z\|\quad\text{subject to}\quad P_{\Omega^{\perp}}(\overline{U}_{J}Z\overline{V}_{J}^{T}+W)=-P_{\Omega^{\perp}}(E_{0})\quad\text{and}\quad W\in\mathcal{E}^{\perp}.\] This is a convex optimization problem. By using cvx to solve it, we obtain \(\zeta(E_{0})=1/6\). As \(\zeta(E_{0})<1\), \(X_{0}\) is a strong solution of problem (5.31). The idea of checking strong minima numerically via Theorem 5.2, as in the above example, will be taken up again in Section 6 for nuclear norm minimization problems of larger size. In Corollary 5.12 below, we show that a large class of nuclear norm minimization problems satisfies both Strict Restricted Injectivity (5.25) and Nondegenerate Source Condition (5.17), but not Restricted Injectivity (5.16). Now let us consider a special case of problem (5.1): \[\min_{X\in\mathbb{R}^{n_{1}\times n_{2}}}\quad\|X\|_{*}\quad\text{subject to}\quad LX=M_{0}, \tag{5.33}\] where \(L\) and \(M_{0}\) are known \(q\times n_{1}\) and \(q\times n_{2}\) matrices, respectively. This is usually referred to as the low-rank representation problem [33]. It is well known that the optimal solution to problem (5.33) is unique and given by \(L^{\dagger}M_{0}\), where \(L^{\dagger}\) is the Moore-Penrose inverse of \(L\). In the following result, we sharpen this statement by showing that this unique solution is indeed a strong solution, but not necessarily a sharp one. Indeed, in this class, Strict Restricted Injectivity (5.25) and Nondegenerate Source Condition (5.17) are satisfied, but Restricted Injectivity (5.16) is not.

**Corollary 5.12** (Strong minima of low-rank representation problems).: _Let \(L\) be a \(q\times n_{1}\) matrix. If the linear system \(LX=M_{0}\) is consistent, then Strict Restricted Injectivity (5.25) and Nondegenerate Source Condition (5.17) hold at \(X_{0}\stackrel{{\text{\tiny def}}}{{=}}L^{\dagger}M_{0}\) in problem (5.33)._

_Consequently, \(X_{0}\) is the strong solution of the low-rank representation problem (5.33)._

**Proof.** Suppose that \(U\Sigma V^{T}\) is a compact SVD of the matrix \(L\). Thus \(L^{\dagger}=V\Sigma^{-1}U^{T}\) and \(L^{\dagger}M_{0}=V\Sigma^{-1}U^{T}M_{0}\).
Let \(U_{0}\Sigma_{0}V_{0}^{T}\) be a compact SVD of \(\Sigma^{-1}U^{T}M_{0}\) with \(\Sigma_{0}\in\mathbb{R}^{r\times r}\). Note that \[(VU_{0})^{T}VU_{0}=U_{0}^{T}V^{T}VU_{0}=U_{0}^{T}U_{0}=\mathbb{I}.\] It follows that \(VU_{0}\Sigma_{0}V_{0}^{T}\) is a compact SVD of \(X_{0}\). By Lemma 4.1, we have \(E_{0}\stackrel{{\text{\tiny def}}}{{=}}VU_{0}V_{0}^{T}\in\operatorname{ri}\partial\|X_{0}\|_{*}\). Observe further that \[E_{0}=VU_{0}V_{0}^{T}=V\Sigma U^{T}U\Sigma^{-1}U_{0}V_{0}^{T}=L^{T}U\Sigma^{-1}U_{0}V_{0}^{T}=\Phi^{*}(U\Sigma^{-1}U_{0}V_{0}^{T}),\] which implies that \(E_{0}\in\operatorname{Im}\Phi^{*}\cap\operatorname{ri}\partial\|X_{0}\|_{*}\). This verifies Nondegenerate Source Condition (5.17) and shows that \(X_{0}\) is an optimal solution of problem (5.33). Next, let us check Strict Restricted Injectivity (5.25). For any \(W\in\operatorname{Ker}\Phi\cap(VU_{0})\mathsf{S}^{r}V_{0}^{T}\) with \(r=\operatorname{rank}\left(X_{0}\right)\), we find some \(A\in\mathsf{S}^{r}\) such that \(W=VU_{0}AV_{0}^{T}\). We have \[0=\Phi(W)=LW=U\Sigma V^{T}VU_{0}AV_{0}^{T}=U\Sigma U_{0}AV_{0}^{T}.\] It follows that \[0=U_{0}^{T}\Sigma^{-1}U^{T}(U\Sigma U_{0}AV_{0}^{T})V_{0}=U_{0}^{T}\Sigma^{-1}\Sigma U_{0}A=U_{0}^{T}U_{0}A=A,\] which also implies that \(W=0\). This verifies Strict Restricted Injectivity (5.25). Consequently, according to Corollary 5.7, \(X_{0}\) is the strong solution of problem (5.33). \(\square\)

In the following simple example, we show that the unique solution to (5.33) may not be a sharp solution in the sense of (5.18).

**Example 5.13** (Unique solutions of low-rank representation problems are not sharp).: Consider the following optimization problem \[\min_{X\in\mathbb{R}^{2\times 2}}\qquad\|X\|_{*}\qquad\text{ subject to}\qquad\begin{pmatrix}1&1\end{pmatrix}X=\begin{pmatrix}1&0\end{pmatrix}. \tag{5.34}\] As \(L=\begin{pmatrix}1&1\end{pmatrix}\) and \(M_{0}=\begin{pmatrix}1&0\end{pmatrix}\), the unique solution to problem (5.34) is \[X_{0}=L^{\dagger}M_{0}=\begin{pmatrix}0.5\\ 0.5\end{pmatrix}\begin{pmatrix}1&0\end{pmatrix}=\begin{pmatrix}0.5&0\\ 0.5&0\end{pmatrix}.\] Pick any \(X_{\varepsilon}\stackrel{{\text{\tiny def}}}{{=}}\begin{pmatrix}0.5+\varepsilon&0\\ 0.5-\varepsilon&0\end{pmatrix}\) with \(\varepsilon>0\) and note that \(X_{\varepsilon}\) satisfies the linear constraint in (5.34). We have \[\|X_{\varepsilon}\|_{*}=\sqrt{(0.5+\varepsilon)^{2}+(0.5-\varepsilon)^{2}}=\sqrt{0.5+2\varepsilon^{2}}.\] It follows that \[\frac{\|X_{\varepsilon}\|_{*}-\|X_{0}\|_{*}}{\|X_{\varepsilon}-X_{0}\|_{F}}=\frac{\sqrt{0.5+2\varepsilon^{2}}-\sqrt{0.5}}{\sqrt{2\varepsilon^{2}}}=\frac{\sqrt{2}\varepsilon}{\sqrt{0.5+2\varepsilon^{2}}+\sqrt{0.5}},\] which tends to \(0\) as \(\varepsilon\downarrow 0\). This shows that \(X_{0}\) is not a sharp solution in the sense of (5.18). The hidden reason is that Restricted Injectivity (5.16) is not satisfied in this problem.
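This vanishing ratio is easy to confirm numerically; a small numpy sketch (our own check, not part of the original experiments):

```python
import numpy as np

nuc = lambda X: np.linalg.norm(X, "nuc")      # nuclear norm
X0 = np.array([[0.5, 0.0], [0.5, 0.0]])
for eps in (1e-1, 1e-2, 1e-3):
    Xe = np.array([[0.5 + eps, 0.0], [0.5 - eps, 0.0]])
    ratio = (nuc(Xe) - nuc(X0)) / np.linalg.norm(Xe - X0, "fro")
    print(f"eps={eps:g}  ratio={ratio:.3e}")  # the ratio decays like eps
```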
A closely related problem to (5.33) is \[\min_{X\in\mathbb{R}^{n_{1}\times n_{2}}}\quad h(LX)+\mu\|X\|_{*}, \tag{5.35}\] where the function \(h:\mathbb{R}^{q\times n_{2}}\to[0,\infty]\) satisfies the standing assumptions in Section 4 with open domain and \(\mu\) is a positive number. This is a particular case of (4.1). Next we also show that strong minima occur in this problem.

**Corollary 5.14**.: _Problem (5.35) has a unique and strong solution._

**Proof.** It is easy to see that problem (5.35) has at least one optimal solution \(\overline{X}\). Let \(U\Sigma V^{T}\) be a compact SVD of \(L\) and define \[\overline{Y}\stackrel{{\text{\tiny def}}}{{=}}-\frac{1}{\mu}L^{T}\nabla h(L\overline{X})=-\frac{1}{\mu}V\Sigma U^{T}\nabla h(L\overline{X})\in\partial\|\overline{X}\|_{*}.\] Let \(U_{1}\Sigma_{1}V_{1}^{T}\) be a compact SVD of \(-\frac{1}{\mu}\Sigma U^{T}\nabla h(L\overline{X})\) with \(\Sigma_{1}\in\mathbb{R}^{p\times p}\). As \((VU_{1})^{T}(VU_{1})=\mathbb{I}\), it follows that \(VU_{1}\Sigma_{1}V_{1}^{T}\) is a compact SVD of \(\overline{Y}\). By (4.5), we can find \(\overline{A}\in\mathbb{S}_{+}^{p}\) such that \(\overline{X}=VU_{1}\overline{A}V_{1}^{T}\), which means \[\overline{A}=U_{1}^{T}V^{T}\overline{X}V_{1}.\] Next let us estimate \(T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})\). For any \(W\in T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})\), we find sequences \(t_{k}\downarrow 0\) and \(W_{k}\to W\) satisfying \(\overline{X}+t_{k}W_{k}\in N_{\mathbb{B}}(\overline{Y})\subset VU_{1}\mathbb{S}_{+}^{p}V_{1}^{T}\) by Lemma 4.1 again. It follows that \[W_{k}\in\frac{1}{t_{k}}VU_{1}(\mathbb{S}_{+}^{p}-\overline{A})V_{1}^{T}\subset VU_{1}\mathbb{S}^{p}V_{1}^{T},\] which implies that \(T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})\subset VU_{1}\mathbb{S}^{p}V_{1}^{T}\). We claim next that \(\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})=\{0\}\). Indeed, take any \(W\in T_{N_{\mathbb{B}}(\overline{Y})}(\overline{X})\) with \(\Phi(W)=0\); we find \(B\in\mathbb{S}^{p}\) such that \(W=VU_{1}BV_{1}^{T}\) and \[0=\Phi(W)=LW=U\Sigma V^{T}VU_{1}BV_{1}^{T}=U\Sigma U_{1}BV_{1}^{T}.\] Hence we have \[U_{1}BV_{1}^{T}=\Sigma^{-1}U^{T}\left(U\Sigma U_{1}BV_{1}^{T}\right)=0.\] This implies that \(W=0\) and verifies the claim. By Corollary 4.2, \(\overline{X}\) is the strong solution of problem (5.35). \(\square\)

## 6 Numerical Experiments

In this section, we perform numerical experiments to demonstrate strong minima, sharp minima, and solution uniqueness for the nuclear norm minimization problem (1.1). The experiments were conducted for different matrix ranks \(r\) and numbers of measurements \(m\) of \(M_{0}\). Throughout the section, we also discuss how to use our conditions to check strong minima for problem (1.1).

### Experiment 1

In the first experiment, we generate \(X_{0}\), an \(n\times n\) matrix of rank \(r\), by sampling two factors \(W\in\mathbb{R}^{n\times r}\) and \(H\in\mathbb{R}^{n\times r}\) with independent and identically distributed (i.i.d.) random entries and setting \(X_{0}=WH^{*}\). We vectorize problem (1.1) in the following form: \[\min_{X\in\mathbb{R}^{n\times n}}\quad\|X\|_{*},\quad\text{subject to}\quad\Phi\;\text{vec}(X)=\Phi\;\text{vec}(X_{0}), \tag{6.1}\] where \(\Phi\in\mathbb{R}^{m\times n^{2}}\) is drawn from the standard Gaussian ensemble, i.e., its entries are i.i.d. from a zero-mean unit-variance Gaussian distribution. We declare \(X_{0}\) to be recovered (a solution to (6.1)) if \(\|X_{\text{opt}}-X_{0}\|_{F}/\|X_{0}\|_{F}<10^{-3}\), as proposed in [15]. To check sharp minima, one verifies Restricted Injectivity (5.16) and computes \(\tau(X_{0})\) in (5.20) or the Source Coefficient \(\rho(X_{0})\) in (5.19); see [24] or Remark 5.5. To verify strong minima at \(X_{0}\), we compute the Strong Source Coefficient \(\zeta(X_{0})\) from (4.24) or (4.35). Specifically, let \(U_{0}\begin{pmatrix}\Sigma_{0}&0\\ 0&0\end{pmatrix}V_{0}^{T}\) be a full SVD of \(X_{0}\).
We denote by \(u_{i}\) and \(v_{j}\) the \(i\)th and \(j\)th columns of \(U_{0}\) and \(V_{0}\), respectively, for \(1\leq i,j\leq n\). Note from the formula of the model tangent space \(\mathbb{T}_{0}\) in (5.26) that \(\mathcal{B}=\{u_{i}v_{j}^{T}:(i,j)\notin[n-r+1,n]\times[n-r+1,n]\}\) forms a basis of \(\mathbb{T}_{0}\). Thus, Restricted Injectivity holds if \(\text{rank}\,\Phi B=r(2n-r)\), where \(B\) is the matrix whose columns are the vectorizations of the elements of \(\mathcal{B}\). To compute \(\tau(X_{0})\), let \(U\Sigma V^{T}\) be a full SVD of \(\Phi\) and let \(V_{G}\) be the matrix whose columns are the last \(n^{2}-m\) columns of \(V\). We then solve the following vectorized problem for an optimal solution \(Z^{*}\) by using the cvxpy package and compute \(\tau(X_{0})=\|Z^{*}\|\): \[\min_{Z\in\mathbb{T}_{0}^{\perp}}\|Z\|_{F}\quad\text{subject to}\quad N\text{vec}(Z)=-N\text{vec}(E_{0}), \tag{6.2}\] where \(N=V_{G}^{T}\) and \(\mathbb{T}_{0}^{\perp}\) is given by \[\mathbb{T}_{0}^{\perp}=\left\{U_{0}\begin{pmatrix}0&0\\ 0&D\end{pmatrix}V_{0}^{T}|\;D\in\mathbb{R}^{(n-r)\times(n-r)}\right\}. \tag{6.3}\] To calculate \(\rho(X_{0})\), we solve the following vectorized version of problem (5.19) for the optimal value \(\rho(X_{0})\) by using the cvxpy package: \[\min_{Z\in\mathbb{T}_{0}^{\perp}}\|Z\|\quad\text{subject to}\quad N\text{vec}(Z)=-N\text{vec}(E_{0}), \tag{6.4}\] where \(N\) and \(\mathbb{T}_{0}^{\perp}\) are determined as in (6.2) and (6.3). \(X_{0}\) is a sharp solution of problem (6.1) if either \(\tau(X_{0})\) or \(\rho(X_{0})\) is smaller than \(1\); see Remark 5.5. Due to possible (small) errors in computation, we classify sharp minima if \(X_{0}\) is recovered and either \(\tau(X_{0})<0.99\) or \(\rho(X_{0})<0.95\).
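For concreteness, one random instance of the \(\rho(X_{0})\) computation in (6.4) can be set up as follows (a sketch; the sizes and seed are arbitrary and the code is ours, not the original experiment script):

```python
import cvxpy as cp
import numpy as np

n, r, m = 10, 2, 60
rng = np.random.default_rng(0)
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((n, r)).T  # rank-r ground truth
Phi = rng.standard_normal((m, n * n))                             # Gaussian measurements

U0, _, V0t = np.linalg.svd(X0)
V0 = V0t.T
E0 = U0[:, :r] @ V0[:, :r].T

# Rows of N span (Im Phi^*)^perp: the last n^2 - m right singular vectors of Phi.
N = np.linalg.svd(Phi, full_matrices=True)[2][m:, :]

# Z ranges over T0^perp as in (6.3): Z = U0[:, r:] D V0[:, r:]^T.
D = cp.Variable((n - r, n - r))
Z = U0[:, r:] @ D @ V0[:, r:].T
constraints = []
for row in N:                                   # enforce N vec(Z) = -N vec(E0)
    Nk = row.reshape((n, n), order="F")         # column-major, matching vec()
    constraints.append(cp.sum(cp.multiply(Nk, Z)) == -np.sum(Nk * E0))

rho = cp.Problem(cp.Minimize(cp.norm(Z, 2)), constraints).solve()
print("rho(X0) =", rho)                         # sharp minima certified when < 1
```

Minimizing \(\|Z\|_{F}\) instead and taking the spectral norm of the minimizer gives \(\tau(X_{0})\) as in (6.2).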
To classify strong minima, we consider the cases when \(X_{0}\) is recovered, \(\tau(X_{0})>0.99\), and \(0.95<\rho(X_{0})<1.05\). Let \(Z_{0}\) be an optimal solution of problem (6.4), expressed in the following form: \[Z_{0}=U_{0}\begin{pmatrix}0&0\\ 0&D_{0}\end{pmatrix}V_{0}^{T}\quad\text{and}\quad Y_{0}=U_{0}\begin{pmatrix}\mathbb{I}&0\\ 0&D_{0}\end{pmatrix}V_{0}^{T} \tag{6.5}\] with some \(D_{0}\in\mathbb{R}^{(n-r)\times(n-r)}\). Note that \(Y_{0}\) is a dual certificate of \(X_{0}\). According to Theorem 5.2, \(X_{0}\) is a strong solution provided that \[\operatorname{Ker}\Phi\cap T_{N_{\mathbb{B}}(Y_{0})}(X_{0})=\{0\}. \tag{6.6}\] By Theorem 4.6, this condition holds when Restricted Injectivity is satisfied and the Strong Source Coefficient \(\zeta(X_{0})\), the optimal value of problem (4.24) or (4.35), is smaller than \(1\). Let \(\widehat{U}\widehat{\Sigma}\widehat{V}^{T}\) be an SVD of \(D_{0}\). We write \(U_{0}=[U_{I}\ U_{J}]\) and \(V_{0}=[V_{I}\ V_{J}]\), where \(U_{I}\) and \(V_{I}\) are the first \(r\) columns of \(U_{0}\) and \(V_{0}\), respectively. Defining \(\overline{U}=[U_{I}\ U_{J}\widehat{U}]\) and \(\overline{V}=[V_{I}\ V_{J}\widehat{V}]\), it follows from (4.8) that \((\overline{U},\overline{V})\in\mathcal{O}(X_{0})\cap\mathcal{O}(Y_{0})\). To compute \(\zeta(X_{0})\), we solve the vectorized version of problem (4.35): \[\min_{Z\in\mathbb{T}_{0}^{\perp},W\in\mathcal{E}^{\perp}}\|Z\|\quad\text{subject to}\quad N\mathrm{vec}(Z+E_{0}+W)=0, \tag{6.7}\] where \(\mathbb{T}_{0}^{\perp}\) is determined in (6.3), \(E_{0}=U_{I}V_{I}^{T}\), and \(\mathcal{E}^{\perp}\) is taken from (4.34): \[\mathcal{E}^{\perp}=\left\{\overline{U}\begin{pmatrix}A&B&C\\ -B^{T}&0&0\\ D&0&0\end{pmatrix}\overline{V}^{T}|\ A\in\Psi_{r},B\in\mathbb{R}^{r\times(p-r)},C\in\mathbb{R}^{r\times(n-p)},D\in\mathbb{R}^{(n-p)\times r}\right\}. \tag{6.8}\] We classify strong (non-sharp) minima if \(\zeta(X_{0})<0.95\). To illustrate the occurrence of strong minima, sharp minima, and solution uniqueness in problem (6.1), we plot in Figure 1 the proportion of each situation with respect to the number of measurements. For fixed \(n=40\) and each \(2\leq r\leq 7\), at every number of measurements \(m\) we study \(100\) random cases and record the percentage of cases that are recovered, sharply recovered, and strongly (not sharply) recovered in black, blue, and red curves, respectively. Observe that the percentage of cases where \(X_{0}\) is a strong (not sharp) solution peaks at approximately \(40\%\) and exceeds the proportion of sharp minima when the number of measurements is not large enough. This phenomenon occurs at different numbers of measurements for different ranks, indicating a significant number of cases with strong (not sharp) solutions. Additionally, higher ranks require more measurements to achieve the highest percentage of cases with strong (not sharp) solutions. We also plot the average values of \(\tau(X_{0})\), \(\mathbf{IC}(X_{0})\) (Irrepresentability Criterion (5.21)), \(\rho(X_{0})\), and \(\zeta(X_{0})\) for each number of measurements in Figure 2 for different ranks. It seems that using \(\rho(X_{0})\) to check sharp minima certifies more cases than using \(\tau(X_{0})\). Moreover, \(\zeta(X_{0})\) is significantly smaller than both \(\tau(X_{0})\) and \(\rho(X_{0})\), while \(\mathbf{IC}(X_{0})\) is slightly greater than \(\tau(X_{0})\).

Figure 1: Proportions of cases for which \(X_{0}\) is a solution, sharp solution, and strong (not sharp) solution with respect to the number of measurements.

Figure 2: Evolution of the average value of \(\tau(E_{0})\), Source Coefficient \(\rho(E_{0})\), and Strong Source Coefficient \(\zeta(E_{0})\) with respect to the number of measurements.

### Experiment 2

In the second experiment, we study the following matrix completion problem \[\min_{X\in\mathbb{R}^{n\times n}}\quad\|X\|_{*}\quad\text{subject to}\quad X_{ij}=(X_{0})_{ij},\quad(i,j)\in\Omega, \tag{6.9}\] following a process similar to the first experiment. We again generate \(X_{0}\), an \(n\times n\) matrix of rank \(r\), by sampling two factors \(W\in\mathbb{R}^{n\times r}\) and \(H\in\mathbb{R}^{n\times r}\) with i.i.d. random entries and setting \(X_{0}=WH^{*}\). However, this time we sample an indexed subset \(\Omega\) of \(m\) entries uniformly at random from \([n]\times[n]\). The cvxpy package is also used to solve problem (6.9), with an optimal solution \(X_{\text{opt}}\). \(X_{0}\) is said to be recovered if \(\|X_{\text{opt}}-X_{0}\|_{F}/\|X_{0}\|_{F}<10^{-3}\). To classify sharp minima, we check Restricted Injectivity (5.16) (as done in the first experiment), compute \(\tau(X_{0})\) in (5.20) or the Source Coefficient \(\rho(X_{0})\) in (5.19), and require \(\tau(X_{0})\leq 0.99\) or \(\rho(X_{0})\leq 0.95\).
Specifically, to calculate \(\tau(E_{0})\), we proceed as follows. As in the first experiment, denote by \(u_{i}\) and \(v_{j}\) the \(i\)th and \(j\)th columns of \(U_{0}\) and \(V_{0}\), where \(U_{0}DV_{0}^{T}\) is a full SVD of \(X_{0}\). We define \(B_{ij}=P_{\Omega}(u_{i}v_{j}^{T})\) for all \((i,j)\in\Gamma\stackrel{{\text{\tiny def}}}{{=}}\{(i,j)\in[n]\times[n]|\ (i,j)\notin[n-r+1,n]\times[n-r+1,n]\}\), where \(P_{\Omega}\) is the projection mapping defined by \(P_{\Omega}(X)_{ij}=X_{ij}\) if \((i,j)\in\Omega\) and \(0\) otherwise. Then we solve the following linear system for \(\alpha_{ij}\): \[\sum_{(i,j)\in\Gamma}\alpha_{ij}P_{\Gamma}\left(U_{0}^{T}B_{ij}V_{0}\right)=\begin{pmatrix}I_{r}&0\\ 0&0\end{pmatrix}.\] It is not difficult to obtain from (5.20) that \[\tau(E_{0})=\|Y-E_{0}\|\,,\] where \(Y=\sum_{(i,j)\in\Gamma}\alpha_{ij}B_{ij}\) and \(E_{0}=U_{0}\begin{pmatrix}I_{r}&0\\ 0&0\end{pmatrix}V_{0}^{T}\). To compute \(\rho(X_{0})\), we transform problem (5.19) to the case of matrix completion as follows: \[\min_{Z\in\mathbb{T}_{0}^{\perp}}\|Z\|\quad\text{subject to}\quad Z_{ij}=-(E_{0})_{ij},\quad(i,j)\notin\Omega, \tag{6.10}\] and solve it using the cvxpy package to obtain the optimal value \(\rho(X_{0})\), where \(\mathbb{T}_{0}^{\perp}\) is determined in (6.3). Similarly to (6.5), we let \(Z_{0}\) be an optimal solution of problem (6.10) and set \(Y_{0}=Z_{0}+E_{0}\), a dual certificate of \(X_{0}\). To check whether \(X_{0}\) is a strong solution, we only need to verify (6.6) with \(\operatorname{Ker}\Phi=\operatorname{Ker}P_{\Omega}\) by Theorem 5.2. According to Theorem 4.6, the latter holds under Restricted Injectivity and \(\zeta(X_{0})<1\), where \(\zeta(X_{0})\) is the optimal value of the following problem, a version of (4.35) for matrix completion: \[\min_{Z\in\mathbb{T}_{0}^{\perp},W\in\mathcal{E}^{\perp}}\|Z\|\quad\text{subject to}\quad(Z+E_{0}+W)_{ij}=0,\quad(i,j)\notin\Omega. \tag{6.11}\] Here \(\mathbb{T}_{0}^{\perp}\) and \(\mathcal{E}^{\perp}\) are defined in (6.3) and (6.8). We classify a case as strong minima (strong recovery) if \(X_{0}\) is recovered and \(\zeta(X_{0})\leq 0.95\), but \(\tau(X_{0})>0.99\) and \(0.95<\rho(X_{0})<1.05\). In Figure 3, we plot the proportion of cases in which \(X_{0}\) is a unique solution, sharp solution, and strong (not sharp) solution in relation to the number of measurements. Additionally, Figure 4 displays the curves of the average values of \(\tau(X_{0})\), \(\rho(X_{0})\), and \(\zeta(X_{0})\) with respect to the number of measurements. Based on Figure 3, we can see that the highest percentage of cases where \(X_{0}\) is a strong solution (but not a sharp one) is just around \(15\%\). This is much smaller than the \(40\%\) in Experiment 1. A possible reason is the special structure of the linear operator \(\Phi=P_{\Omega}\) in (6.9), whose matrix representation is such that each row contains only one entry equal to \(1\) while the remaining entries are \(0\); additionally, these entries of \(1\) must be in distinct columns. Similar to Experiment 1, we found that higher ranks require more measurements to achieve the highest percentage of cases with strong (not sharp) solutions. As shown in Figure 4, the average values of \(\tau(X_{0})\), \(\rho(X_{0})\), and \(\zeta(X_{0})\) change depending on the number of measurements.
The difference between the curves of \(\tau(X_{0})\) and \(\rho(X_{0})\) is less noticeable than in Experiment 1, but the curve of \(\zeta(X_{0})\) is still significantly lower than both \(\tau(X_{0})\) and \(\rho(X_{0})\).

Figure 3: Proportions of cases for which \(X_{0}\) is a solution, sharp solution, and strong (not sharp) solution with respect to the number of measurements.

Figure 4: Evolution of the average value of \(\tau(E_{0})\), Source Coefficient \(\rho(E_{0})\), and Strong Source Coefficient \(\zeta(E_{0})\) with respect to the number of measurements.
2305.10951
Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation
The performance of automatic speech recognition (ASR) systems has advanced substantially in recent years, particularly for languages for which a large amount of transcribed speech is available. Unfortunately, for low-resource languages, such as minority languages, regional languages or dialects, ASR performance generally remains much lower. In this study, we investigate whether data augmentation techniques could help improve low-resource ASR performance, focusing on four typologically diverse minority languages or language variants (West Germanic: Gronings, West-Frisian; Malayo-Polynesian: Besemah, Nasal). For all four languages, we examine the use of self-training, where an ASR system trained with the available human-transcribed data is used to generate transcriptions, which are then combined with the original data to train a new ASR system. For Gronings, for which there was a pre-existing text-to-speech (TTS) system available, we also examined the use of TTS to generate ASR training data from text-only sources. We find that using a self-training approach consistently yields improved performance (a relative WER reduction up to 20.5% compared to using an ASR system trained on 24 minutes of manually transcribed speech). The performance gain from TTS augmentation for Gronings was even stronger (up to 25.5% relative reduction in WER compared to a system based on 24 minutes of manually transcribed speech). In sum, our results show the benefit of using self-training or (if possible) TTS-generated data as an efficient solution to overcome the limitations of data availability for resource-scarce languages in order to improve ASR performance.
Martijn Bartelds, Nay San, Bradley McDonnell, Dan Jurafsky, Martijn Wieling
2023-05-18T13:20:38Z
http://arxiv.org/abs/2305.10951v2
Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation ###### Abstract The performance of automatic speech recognition (ASR) systems has advanced substantially in recent years, particularly for languages for which a large amount of transcribed speech is available. Unfortunately, for low-resource languages, such as minority languages, regional languages or dialects, ASR performance generally remains much lower. In this study, we investigate whether data augmentation techniques could help improve low-resource ASR performance, focusing on four typologically diverse minority languages or language variants (West Germanic: Gronings, West-Frisian; Malayo-Polynesian: Besemah, Nasal). For all four languages, we examine the use of self-training, where an ASR system trained with the available human-transcribed data is used to generate transcriptions, which are then combined with the original data to train a new ASR system. For Gronings, for which there was a pre-existing text-to-speech (TTS) system available, we also examined the use of TTS to generate ASR training data from text-only sources. We find that using a self-training approach consistently yields improved performance (a relative WER reduction up to 20.5% compared to using an ASR system trained on 24 minutes of manually transcribed speech). The performance gain from TTS augmentation for Gronings was even stronger (up to 25.5% relative reduction in WER compared to a system based on 24 minutes of manually transcribed speech). In sum, our results show the benefit of using self-training or (if possible) TTS-generated data as an efficient solution to overcome the limitations of data availability for resource-scarce languages in order to improve ASR performance. ## 1 Introduction Self-supervised learning (SSL) enables speech representation learning without the need for (manually) labeled data. Although this approach is very effective, pre-training an SSL model is costly. This cost (e.g., training time, resources, and memory) increases with the number of languages added to the model. Furthermore, transferring information across languages, or extending a pre-trained model to new data or to a different domain, is computationally expensive, and catastrophic forgetting may occur [1]. To alleviate this, SSL models are therefore often fine-tuned on the target task with target domain data. For the task of automatic speech recognition (ASR), fine-tuning approaches generally require less data, but training ASR systems that perform well for languages with very little data remains challenging. This leads to (digitally) underrepresented communities and domains, such as minority languages, regional languages and dialects, not profiting sufficiently from the most recent technological advancements. Recent studies explored fine-tuning of pre-trained self-supervised models for ASR using speech from low-resource languages, and difficulties of modeling resource-scarce languages and dialects were acknowledged in previous work [1]. It remains an open question to what extent model performance is dependent on the amount of fine-tuning data and the type of language, when the total amount of available data for a language is limited. Having a better understanding of how limited training data affects model performance paves the way for creating meaningful speech technology for a wider range of languages.
In this paper, we fine-tune pre-trained SSL models for ASR using varying amounts of data from four typologically diverse minority languages or language variants: Gronings, West-Frisian, Besemah and Nasal, which have a limited amount of data available. We specifically investigate whether data augmentation approaches can be used to generate additional training data to improve the performance of these models, particularly when very few resources are available. By using data from (ongoing) language documentation projects, we evaluate a real-world use of our experimental setup. Previous work describes the benefits of data augmentation by adopting a self-training approach, which generates labels (i.e. transcriptions) for unlabeled speech (e.g., Xu et al., 2020, 2021; Kahn et al., 2020; Zhang et al., 2021; Berrebi et al., 2022; Khurana et al., 2022; Lugosch et al., 2022). Various self-training methods have been proposed, including iterative approaches, decoding with an external (text-based) language model, or filtering approaches that improve the quality of the generated labels. However, limited conclusions can be drawn from these works on the effectiveness of self-training in a very low-resource, real-world setting, as these studies either use datasets with more than 10 hours of data (which may not be available for very small languages), only considered modeling English, or reported average performance over a set of languages that strongly varied in terms of training data size. We therefore complement this work by investigating the benefits of self-training for four typologically different, truly low-resource languages. To this end, we use a standard self-training approach to evaluate the potential benefit of a simple system in a real-world setup, which nevertheless yields substantial performance improvements (relative word-error-rate (WER) reductions up to 20.5%). In addition to self-training, several studies (e.g., Rosenberg et al., 2019; Du and Yu, 2020; Rossenbach et al., 2020a) reported on augmenting the training data with synthetic speech generated using a text-to-speech (TTS) system. For this reason, we also examine whether this approach is useful in our low-resource setup. We recognize that not all very low-resource languages may have sufficient amounts of data available for TTS development, and we therefore only generate synthetic training examples for Gronings, the one of the four low-resource languages in our dataset that has an existing TTS system available. We show the benefit (i.e. up to 25.5% relative reduction in WER) of augmenting the training data by using an existing TTS system, and analyze the effect of adding different amounts of synthetic speech on the model performance. Our datasets, code, and newly trained models are publicly available.1 Footnote 1: [https://github.com/Bartelds/asr-augmentation](https://github.com/Bartelds/asr-augmentation) ## 2 Data As indicated, we use transcribed speech from Gronings, West-Frisian, Besemah, and Nasal. For the latter two minority languages, only four hours of manually transcribed speech data are available. For all language varieties, we therefore limit the amount of manually transcribed speech data to four hours. We divide each dataset into 80% for training, 10% for development and 10% for testing. The development and test sets therefore each include approximately 24 minutes of speech, and the training set contains approximately 3.2 hours of transcribed speech.
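For illustration, such a partition can be produced deterministically as follows (a minimal sketch; the utterance-ID handling and seed are our own assumptions, not taken from the released datasets):

```python
import random

def split_dataset(utterance_ids, seed=4892):
    """80/10/10 train/dev/test split over utterance IDs (sketch)."""
    ids = sorted(utterance_ids)            # sort first so the split is reproducible
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_dev = int(0.1 * len(ids))
    train = ids[:n_train]                  # ~3.2 hours of transcribed speech
    dev = ids[n_train:n_train + n_dev]     # ~24 minutes
    test = ids[n_train + n_dev:]           # ~24 minutes
    return train, dev, test
```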
In line with Wei et al. (2022), we allow for speaker overlap between the sets due to the limited number of speakers per language variant, as they found that this had limited effects on the performance of ASR models. All data have been anonymized by assigning recordings a random identifier, and no other meta-information that could be used for identifying the speakers was collected or extracted. We obtained consent from the communities to publicly release the datasets for Gronings, Besemah, and Nasal. The West-Frisian data can be obtained by emailing the authors (ISLRN: 340-994-352-616-4). ### Gronings and West-Frisian Gronings is a Low-Saxon language variant that is spoken in the province of Groningen, which is located in the northern part of the Netherlands. Within this language variant, there is regional lexical, grammatical and acoustic variation. We use data from an ongoing language documentation project that aims to record the speech of all variants of Gronings. To date, read-aloud speech from three speakers has been recorded (two female speakers and one male speaker) for three different variants, namely Hogelanders, Oldambters, and Westerkwartiers. This data, consisting of almost 14 hours of transcribed speech data, is included in this study. From these 14 hours, four hours of manually transcribed speech were extracted for training, development and testing. The remaining data was partly used for generating additional training data. The 2,130 transcribed recordings in this dataset, consisting of book texts and the corresponding read-aloud recordings, have an average duration of 6.8 seconds (SD: 4.9). We normalized the transcriptions by excluding all characters that do not occur in the Gronings alphabet.2 In addition, we also include transcribed speech data from three different speakers (two female speakers and one male speaker), yielding a total of 19 minutes of speech data. This data was extracted from the publicly available dataset provided by San et al. (2021). These recordings have a mean duration of 3.5 seconds (SD: 1.3). We only use this subset of data for out-of-domain testing. West-Frisian is the second official language of the Netherlands and is spoken in the province of Friesland, which is also located in the northern part of the Netherlands. For this study, we extracted four (out of eight) hours of transcribed speech data from the FAME! ASR corpus (Yilmaz et al., 2017) that contains radio and television speech from Dutch-Frisian bilinguals. The extracted dataset includes 4,919 transcribed speech samples from 277 speakers (68 female speakers, 199 male speakers, and 10 unknown) with an average duration of 2.9 seconds (SD: 0.7). We removed all characters from the transcripts that are not part of the West-Frisian alphabet (Yilmaz et al., 2016). ### Besemah and Nasal Besemah and Nasal are two Austronesian languages that are spoken in southern Sumatra, Indonesia. For both languages, approximately 45 hours of informal conversation data were collected through fieldwork. For each language, four hours of conversational data have been transcribed, which are used in this study. For Besemah, there are 7,835 transcribed utterances from 46 speakers (30 female speakers and 16 male speakers) with an average sample length of 1.8 seconds (SD: 0.3). The Nasal dataset contains 7,672 transcribed utterances from 40 speakers (15 female speakers and 25 male speakers) with an average duration of 3.9 seconds (SD: 0.3). We normalized all transcriptions to the working orthographies developed for Besemah and Nasal as part of ongoing collaborative language documentation projects.
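The character-level normalization applied to all four datasets can be sketched as follows; the alphabet string here is illustrative only, as the actual character inventories follow the Gronings and West-Frisian alphabets and the Besemah and Nasal working orthographies:

```python
import re
import unicodedata

# Illustrative (incomplete) character inventory; the real one differs per language.
GRONINGS_ALPHABET = "abcdefghijklmnopqrstuvwxyzàäéèëïöü' "

def normalize_transcription(text: str, alphabet: str = GRONINGS_ALPHABET) -> str:
    """Lowercase, unicode-normalize, and drop characters outside the alphabet."""
    text = unicodedata.normalize("NFC", text.lower())
    return re.sub(f"[^{re.escape(alphabet)}]", "", text)
```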
## 3 Methods We fine-tune the pre-trained multilingual XLS-R model with 317 million parameters on different amounts of training data from the four languages in our dataset (Babu et al., 2021). Note that we chose the smallest publicly available pre-trained XLS-R model to minimize the computational requirements needed for (reproducing) this study. XLS-R is pre-trained on approximately 436,000 hours of speech in 128 different languages. This data was collected from a variety of sources, including parliamentary speech (372,000 hours in 23 European languages), read speech from Multilingual Librispeech (44,000 hours in eight European languages) and Common Voice (7,000 hours in 60 languages), speech from YouTube from the VoxLingua107 corpus (6,600 hours in 107 languages), and conversational telephone speech from the BABEL corpus (approximately 1,000 hours in 17 African and Asian languages). The majority of the training data is from Indo-European languages (87%), and the language that is most represented is English (roughly 70,000 hours). While the model does include a small portion of West-Frisian data (i.e. 15 hours), this is not the case for Gronings, Besemah, and Nasal. The architecture and pre-training objective of XLS-R are similar to those of wav2vec 2.0 (Baevski et al., 2020). The model is trained as a single end-to-end system, and consists of a convolutional encoder, a quantizer, and a 24-layer Transformer model. Speech representations are learned through a contrastive task that is applied to the quantized encoder representations. After pre-training, the model can be fine-tuned for speech recognition using transcribed speech. A linear projection is added on top of the Transformer network to predict characters from the transcriptions using connectionist temporal classification (CTC; Graves et al., 2006). We include a multilingual model in our study, because previous work showed that multilingual pre-training transfers well to low-resource languages (e.g., Bartelds and Wieling, 2022; Khurana et al., 2022). We experimented with fine-tuning other models (for example the Dutch wav2vec 2.0 model included by Bartelds and Wieling, 2022), but preliminary results showed that XLS-R was superior. The hyperparameters of our fine-tuning experiments follow those reported in Baevski et al. (2020) for comparable data sizes, except for the learning rate, which we tune on the basis of the development data by evaluating the following range: \([5\mathrm{e}-4,1\mathrm{e}-4,5\mathrm{e}-5,1\mathrm{e}-5]\). In addition, we reduce the batch size and use gradient accumulation to make sure our experiments run on limited compute hardware (i.e. a single Nvidia 40 GB A100 GPU). We evaluate the fine-tuned models in terms of word error rate (WER), a commonly used evaluation metric based on the number of substitutions, deletions, and insertions between two transcripts, and report performance on the test set using the fine-tuned model checkpoint that has the lowest WER on the validation set.
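For reference, WER can be computed with a standard dynamic program over word-level edit operations; a minimal sketch (the example strings are only illustrative):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """(substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = word-level edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[-1][-1] / len(ref)

print(word_error_rate("dit is n veurbeeld", "dit is t veurbeeld"))  # 0.25
```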
Additionally, we investigate whether it is beneficial to further pre-train the XLS-R model using limited data and computational hardware before fine-tuning the model for ASR. As pre-training is computationally expensive, we only evaluate this approach on Gronings, for which we perform the broadest range of experiments. Specifically, we pre-train on the four hours of Gronings training data with the test set samples removed for 100,000 steps and use a learning rate of \(1\mathrm{e}{-5}\), which was selected after briefly experimenting with a range of learning rates that we evaluated on the validation set. Similar to the fine-tuning experiments, we use gradient accumulation and a small batch size. The total computational budget for this study is about 390 hours on a 40 GB A100 GPU (160 fine-tuning runs of roughly 2 hours each, and pre-training runs of roughly 70 hours). We perform all experiments using the HuggingFace Transformers library, version 4.24.0 (Wolf et al., 2020). ## 4 Experimental Setup For each of the languages, we use varying amounts of training data for fine-tuning the multilingual XLS-R model. Additionally, for Gronings, we also fine-tune the XLS-R model that is further pre-trained on Gronings. For all experiments, we start from the full training dataset of 192 minutes (80% of four hours), and divide this set repeatedly into smaller subsets until reaching roughly 20 minutes (50% of each split). Consequently, we have training sets of 192, 96, 48 and 24 minutes, respectively. In the self-training approach, we fine-tune the pre-trained XLS-R models on one of the subsets of data (i.e. 24, 48, or 96 minutes) as the initial step. We regard this model as the teacher model, which is then used to transcribe the remaining portion of speech data from the full training data (i.e. without the labels). The resulting automatically transcribed data, in conjunction with the original labeled data, is subsequently used to fine-tune a second model, referred to as the student model, which ideally outperforms the teacher model. This approach is shown in Figure 1. For example, we fine-tune an XLS-R teacher model on 24 minutes of manually transcribed speech data and use this model to label the remaining 168 minutes of speech data contained in the full training set. The combined data (e.g., 24 minutes of natural speech with correct labels and 168 minutes of automatically transcribed speech obtained through self-training) are subsequently used to fine-tune a new student model. We apply this procedure to each of the three training splits to investigate in which cases self-training may be beneficial in a low-resource setting. Our decoding procedure does not use an external language model (LM) due to the limited availability of text-based training materials for all languages, and also to ensure a fair comparison between languages. This is supported by previous work that found no improvement in speech recognition performance when limited amounts of textual data are available for LM training (San et al., 2023). Note that in addition to the self-training approach, preliminary experiments were conducted with other data augmentation techniques (following Sriram et al., 2022). Specifically, we experimented with adding noise to the speech signal, raising or lowering the pitch of the speaker, and simulating far-field speech. These techniques, however, did not improve the speech recognition performance, and we discarded them from our experimental setup to limit the number of comparisons.
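The pseudo-labeling step of this self-training procedure can be sketched as follows (the checkpoint path is hypothetical; greedy CTC decoding is used, consistent with decoding without an external LM):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical path to a teacher model fine-tuned on the 24-minute subset.
CKPT = "checkpoints/xlsr-gronings-teacher-24min"
processor = Wav2Vec2Processor.from_pretrained(CKPT)
model = Wav2Vec2ForCTC.from_pretrained(CKPT).eval()

def pseudo_label(wav_path: str) -> str:
    """Transcribe one unlabeled recording with the teacher model."""
    wave, sr = torchaudio.load(wav_path)
    wave = torchaudio.functional.resample(wave, sr, 16_000).mean(dim=0)  # mono, 16 kHz
    inputs = processor(wave.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
    return processor.batch_decode(ids)[0]
```

The resulting (audio, pseudo-transcription) pairs are then simply concatenated with the manually labeled subset before fine-tuning the student model.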
### Additional Generated Training Data For Gronings, we investigate the effect of using additional generated training data obtained through self-training or via a TTS system. This additional training data is generated on the basis of the remaining manually transcribed speech data we have available for Gronings. Specifically, from this data we only use the audio recordings combined with the associated automatically generated transcriptions in the self-training procedure, while we only use the transcriptions of these recordings together with the associated synthetic speech generated using the TTS system during the synthetic speech procedure (explained below). We did not use the speech data in combination with the associated manually generated transcriptions for training, since we are interested in the performance of the two aforementioned data augmentation techniques. Note that for these experiments, we only use the smallest subset of manually transcribed speech training data (i.e. 24 minutes) to investigate the added benefit of generating a relatively large amount of additional fine-tuning data. Inspired by Xu et al. (2020), we conduct three iterations of self-training to incrementally improve the quality of the generated transcriptions. Specifically, we fine-tune an XLS-R teacher model on the 24-minute subset of Gronings as the first step. This model is then used to transcribe the remaining unlabeled portion of the original training data (i.e. 168 minutes). The combined data is then used to fine-tune a student model. We use the new student model to transcribe another set of 168 minutes of unlabeled speech, and add this data to our training data, which now contains 24 minutes of original data and two times 168 minutes (i.e. 336 minutes) of data that was transcribed through self-training. We then fine-tune another student model using the new training data (i.e. 24 + 336 minutes) and use it to transcribe an additional set of 336 minutes of unlabeled data to examine the effects of substantially increasing the training data. Finally, we also add these data to our training data and fine-tune a final student model on the complete amount of training data (i.e. 24 + 336 + 336 minutes). Each of these student models is then evaluated on the test set. ### Synthetic speech In addition to transcribing unlabeled speech through self-training, we generate synthetic speech samples on the basis of the original transcriptions using an existing TTS system that was trained on about two hours of read speech from a single female speaker of the Hogelanders variant of Gronings. This system uses the FastSpeech 2 architecture (Ren et al., 2020), and was previously developed for (pending) integration into the online language documentation project on Gronings.3 We use this existing TTS system to generate synthetic training data using the transcripts of the same sets of recordings that were used for the self-training experiments explained above. To align with the self-training models, we fine-tune three XLS-R models using different amounts of training data. The first model is fine-tuned using the 24-minute subset of manually transcribed speech supplemented with synthetic speech generated using the transcripts that correspond to the remaining 168 minutes of manually transcribed training data. The second model is fine-tuned on the same subset augmented with the second set of 168 minutes of additional TTS-generated recordings (i.e. based on the transcriptions of the second set of 168 minutes of training data also used in the self-training experiment described above). We then augment the training data once more by adding synthetic speech samples using the transcripts from the final set of additional training data (i.e. 336 minutes), and fine-tune the XLS-R model on the complete amount of training data. This approach is visualized in Figure 2. Footnote 3: [https://woordwaark.nl](https://woordwaark.nl)
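Generating the synthetic training examples then amounts to running the transcripts through the TTS system; a minimal sketch, assuming the FastSpeech 2 checkpoint can be loaded through the Coqui TTS wrapper (the paths and file layout are our own assumptions, not the actual Woordwaark system):

```python
from TTS.api import TTS  # Coqui TTS wrapper; assumes the model is in this format

# Hypothetical local checkpoint of the Gronings FastSpeech 2 system.
tts = TTS(model_path="gronings_fastspeech2/model.pth",
          config_path="gronings_fastspeech2/config.json")

# transcripts.txt: one transcription per line (from the held-out training portions).
with open("transcripts.txt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        tts.tts_to_file(text=line.strip(), file_path=f"tts_augmented/utt_{i:05d}.wav")
```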
## 5 Results We show the word error rates (WERs) for Gronings, West-Frisian, Besemah, and Nasal in Figure 3. The WERs for the development set are presented in Appendix A. For each of the languages, we observe a clear performance increase (i.e. lower WERs) when the amount of manually transcribed training data becomes larger. The WERs decrease between 30.1% and 53.3% when we use the complete set of training data (i.e. 192 minutes of manually transcribed speech data) instead of the 24-minute subset. Importantly, Figure 3 also shows that self-training is beneficial for each of the languages. Student models improve over their teacher models in almost all cases. The improvement is particularly strong when the teacher model was based on a very small amount of data (i.e. 24 minutes) and ranges between 6.3% and 13.9%.

Figure 1: Visualization of the self-training approach where a teacher model is fine-tuned on manually transcribed data and subsequently used to transcribe unlabeled speech. A student model is then fine-tuned on the combined datasets.

Figure 2: Visualization of the TTS-based approach, where synthetic speech is generated by an existing TTS model (trained on a separate two-hour single-speaker dataset), and new models are subsequently trained on both manually transcribed speech and synthetic speech.

Figure 3: WERs for the test sets of Gronings, West-Frisian, Besemah, and Nasal using varying amounts of training data. Hatched bars indicate when additional training data generated by self-training (ST) was used.

### Further Pre-Training In Figure 4, we show the fine-tuning results for varying amounts of training data (similar to those shown in Figure 3) based on an XLS-R model that was further pre-trained on Gronings. For comparison, this figure also shows the performance of the original fine-tuned models for Gronings. Pre-training generally results in a small increase in performance (up to a 9.3% improvement) when only manually transcribed speech data was used to fine-tune the model. Additionally, when a model was fine-tuned on data obtained using self-training, the performance gains were minimal (up to 1.7% improvement).

Figure 4: WERs for the test set of Gronings using an XLS-R model that was further pre-trained on Gronings (CPT: bars with vertical lines). Hatched bars indicate when additional training data generated by self-training (ST) was used. For comparison, the results using the model without further pre-training are shown as well (bars without vertical lines).

### Additional Generated Training Data The effect of using additional augmented training data on ASR model performance is visualized in Figure 5(a). To better evaluate these results, we also added the self-training results shown in Figure 3(a) to this figure. Our results for self-training show that increasing the amount of automatically generated fine-tuning data is beneficial, albeit to a lesser extent than the benefit of using the first set of 168 minutes of speech with automatically generated transcriptions. Nevertheless, the performance of the model fine-tuned using 24 minutes of manually transcribed speech data plus 672 minutes of speech data with automatically generated transcriptions yields a relative WER reduction of 20.5% compared to the corresponding teacher model.
Nevertheless, the performance of the model fine-tuned using 24 minutes of manually transcribed speech data plus 168 minutes of synthetic speech data generated using the TTS system is almost identical to the performance of a model fine-tuned using 96 minutes of manually transcribed speech data. Footnote 4: Robinson et al. (2022) show that synthetic speech from a high-resource language TTS system may be used to generate additional training data for a low-resource language. We experimented with an existing Dutch TTS system to generate synthetic speech for Gronings, but this did not lead to improvements in performance. ### Out-of-domain results The results presented in Figure 5(a) might overestimate the model performance, as the speaker whose data was used for training the available TTS system was also included in the Gronings test set. We therefore also report the fine-tuned model performance on an out-of-domain test set, which does not include any of the speakers that are included in the training data. The results are shown in Figure 5(b). While the performance on the out-of-domain data is clearly worse compared to the original test set, the pattern of the results for the self-training approach remains similar (with a relative WER improvement of up to 16.0%). Furthermore, the benefit of augmenting the training data using a TTS system is still present, but it is less pronounced than before (with a WER improvement of up to 25.5%). Nevertheless, both data augmentation techniques still offer a substantial improvement in WER when the availability of manually transcribed training data is limited. ## 6 Discussion and Conclusion We investigated whether data augmentation techniques are beneficial to improve the performance of ASR systems for four typologically different languages with a limited amount of real-world training data available. We evaluated the performance of XLS-R models fine-tuned using varying amounts of training data, showing that the model performance generally improves (i.e. resulting in lower WERs) when (more, in the case of self-training) augmented training data is used. The greatest performance gains across the four languages were observed when the amount of manually transcribed data used for fine-tuning was increased. Nevertheless, we also observed substantial increases in model performance by augmenting very limited amounts of training data through self-training. For Gronings, we found that fine-tuning a model on additional data obtained through iterative self-training performed almost as well as a model fine-tuned on double the amount of manually transcribed speech data. Figure 4: WERs for the test set of Gronings using an XLS-R model that was further pre-trained on Gronings (CPT: bars with vertical lines). Hatched bars indicate when additional training data generated by self-training (ST) was used. For comparison, the results using the model without further pre-training are shown as well (bars without vertical lines). Importantly, self-training only requires collecting additional unlabeled speech data, which is typically much easier to obtain than transcribed speech, making it a valuable approach for low-resource languages. Moreover, using an existing TTS system for generating additional synthetic training data was likewise shown to be beneficial.
We observed that the benefit of augmenting the training data via the TTS system yielded larger performance gains (even on par with a model fine-tuned on four times the minimum amount of manually transcribed speech data we considered) than using the iterative self-training procedure. However, in contrast to self-training, no beneficial effect was present when increasing the amount of generated data. This pattern held true irrespective of using the general test set for evaluation or an out-of-domain test set instead. While not many minority languages have a suitable TTS system available, generating speech data using such a system is very easy as it only requires written text. Of course, our results also show that when the material is available to train a TTS system (i.e. using audio recordings and associated transcriptions) it is likely better to use these resources directly for training the ASR system. While we showed the benefit of iterative self-training when a very small amount of training data is available, the benefit of supplying more and more self-trained training data was diminishing. Our result extends the findings for English by Xu et al. (2020) to a new set of languages and language variants. It is possible that the transcriptions generated by a specific teacher model in the self-training approach contain useful information, but that this is negated to a large extent by the generated errors of the model. As teacher models fine-tuned on larger amounts of manually transcribed training data are expected to yield higher quality transcriptions (as shown in, e.g., San et al., 2022), the effect of generating more data might be more beneficial in these cases. However, this should be investigated in future work. When using the TTS system for augmenting our training data, we did not see a benefit of increasing the amount of generated synthetic speech. As the additional training data represents data from a single speaker (as the TTS system was trained on the basis of data from a single speaker), the model might have been overfitting to that specific speaker. Future work, therefore, needs to investigate alternatives (or additions) to using a TTS system for generating additional training data. For example, by investigating whether model performance can be improved using speaker adaptation methods or cross-lingual voice conversion (e.g., Rossenbach et al., 2020; Baas and Kamper, 2022). We found only minor performance gains when we fine-tuned the XLS-R model that was further pretrained on Gronings (using all training and development data). Specifically, self-training appeared to have greater performance gains than continuing pre-training (CPT), and combining CPT and self-training only marginally improved results. Given the large computational cost of CPT as opposed to the two data augmentation methods, it is clear that CPT is not cost-effective. Figure 5: WERs for the regular test set and out-of-domain test set of Gronings when additional training data generated by self-training (ST) or a text-to-speech system (TTS) was used. It may be that CPT only yields appreciable performance gains once a sufficient amount of unlabeled audio can be obtained (e.g. 200 hours of Ainu: Nowakowski et al., 2023). However, obtaining such a large amount of data for minority languages or language variants such as Gronings, Besemah, and Nasal is unlikely. It is therefore important to further investigate how a limited amount of target language data can be used effectively for self-supervised pre-training.
For example, Paraskevopoulos et al. (2023) reported that using an additional 70-hour out-of-domain corpus alongside a 12-hour target corpus was crucial in improving performance. Given that similar-language regularization approaches have been effective for neural machine translation (e.g. Neubig and Hu, 2018), it may be possible that this strategy could also be beneficial for further pre-training in speech (e.g., using a 70-hour Indonesian speech corpus alongside the target four-hour Besemah corpus). In conclusion, our results show that data-augmentation techniques may serve as a cost-effective way to improve ASR performance for low-resource languages and variants. While the performance of the four systems is not comparable to systems developed for high-resource languages, these systems may serve as a starting point for these language varieties. We hope our experiments help further the development of more inclusive speech technology for low-resource languages. ## Limitations While we show a clear benefit of data augmentation when the amount of available training data is limited, the performance gain seems to be lower when a larger quantity of manually transcribed speech data is available. Whether data augmentation is always beneficial is an open question. We did not measure the effect of sociolinguistic variables on the performance of the models. A risk might be that especially for the models for Gronings, which were developed on the basis of speech data from only a few speakers, results might be negatively affected by differences in language background (such as speaking a different variety of Gronings, or being from a different social group). We likewise did not measure the effect of non-linguistic variation (e.g., use of different microphones) on the performance of the models. While Bartelds et al. (2022) showed that wav2vec 2.0 representations are relatively unaffected by non-linguistic variation, we aim to further explore this in future work. Finally, we evaluated the effect of training data size and data augmentation on four different minority languages or language variants, each using a single test set. Of course, using a different test set might have affected the results. However, given that the pattern of results was similar across a range of language varieties we do not expect this difference to be large. ## Ethics Statement Our paper evaluated various methods that could make developing automatic speech recognition systems more viable for languages where paired audio and transcriptions are difficult to obtain. In our experiments, we only used already publicly available data (West-Frisian) or data for which we have obtained informed consent for public release from the data custodians (Gronings, Besemah, Nasal). To make our findings as relevant as possible for other language projects, we minimized the amount of computing time used. ## Acknowledgements The authors thank the Center for Information Technology of the University of Groningen for their support and for providing early access to the Habrok high performance computing cluster. We also thank the community members of the four languages, and the three anonymous reviewers for their insightful feedback.
2305.01684
The maximum accretion rate of a protoplanet: how fast can runaway be?
The hunt is on for dozens of protoplanets hypothesised to reside in protoplanetary discs with imaged gaps. How bright these planets are, and what they will grow to become, depend on their accretion rates, which may be in the runaway regime. Using 3D global simulations we calculate maximum gas accretion rates for planet masses $M_{\rm p}$ from 1$\,M_{\oplus}$ to $10\,M_{\rm J}$. When the planet is small enough that its sphere of influence is fully embedded in the disc, with a Bondi radius $r_{\rm Bondi}$ smaller than the disc's scale height $H_{\rm p}$ -- such planets have thermal mass parameters $q_{\rm th} \equiv (M_{\rm p}/M_{\star}) / (H_{\rm p}/R_{\rm p})^3 \lesssim 0.3$, for host stellar mass $M_{\star}$ and orbital radius $R_{\rm p}$ -- the maximum accretion rate follows a Bondi scaling, with $\max \dot{M}_{\rm p} \propto \rho_{\rm g} M_{\rm p}^2 / (H_{\rm p}/R_{\rm p})^3$ for ambient disc density $\rho_{\rm g}$. For more massive planets with $0.3 \lesssim q_{\rm th} \lesssim 10$, the Hill sphere replaces the Bondi sphere as the gravitational sphere of influence, and $\max \dot{M}_{\rm p} \propto \rho_{\rm g} M_{\rm p}^1$, with no dependence on $H_{\rm p}/R_{\rm p}$. In the strongly superthermal limit when $q_{\rm th} \gtrsim 10$, the Hill sphere pops well out of the disc, and $\max \dot{M}_{\rm p} \propto \rho_{\rm g} M_{\rm p}^{2/3} (H_{\rm p}/R_{\rm p})^1$. Applied to the two confirmed protoplanets PDS 70b and c, our numerically calibrated maximum accretion rates imply their Jupiter-like masses may increase by up to a factor of $\sim$2 before their parent disc dissipates.
Nick Choksi, Eugene Chiang, Jeffrey Fung, Zhaohuan Zhu
2023-05-02T18:00:04Z
http://arxiv.org/abs/2305.01684v2
# The maximum accretion rate of a protoplanet: how fast can runaway be? ###### Abstract The hunt is on for dozens of protoplanets hypothesised to reside in protoplanetary discs with imaged gaps. How bright these planets are, and what they will grow to become, depend on their accretion rates, which may be in the runaway regime. Using 3D global simulations we calculate maximum gas accretion rates for planet masses \(M_{\rm p}\) from 1 \(M_{\oplus}\) to 10 \(M_{\rm J}\). When the planet is small enough that its sphere of influence is fully embedded in the disc, with a Bondi radius \(r_{\rm Bondi}\) smaller than the disc's scale height \(H_{\rm p}\) -- such planets have thermal mass parameters \(q_{\rm th}\equiv(M_{\rm p}/M_{\star})/(H_{\rm p}/R_{\rm p})^{3}\lesssim 0.3\), for host stellar mass \(M_{\star}\) and orbital radius \(R_{\rm p}\) -- the maximum accretion rate follows a Bondi scaling, with \(\max\dot{M}_{\rm p}\propto\rho_{\rm g}M_{\rm p}^{2}/(H_{\rm p}/R_{\rm p})^{3}\) for ambient disc density \(\rho_{\rm g}\). For more massive planets with \(0.3\lesssim q_{\rm th}\lesssim 10\), the Hill sphere replaces the Bondi sphere as the gravitational sphere of influence, and \(\max\dot{M}_{\rm p}\propto\rho_{\rm g}M_{\rm p}^{1}\), with no dependence on \(H_{\rm p}/R_{\rm p}\). In the strongly superthermal limit when \(q_{\rm th}\gtrsim 10\), the Hill sphere pops well out of the disc, and \(\max\dot{M}_{\rm p}\propto\rho_{\rm g}M_{\rm p}^{2/3}(H_{\rm p}/R_{\rm p})^{1}\). Applied to the two confirmed protoplanets PDS 70b and c, our numerically calibrated maximum accretion rates imply their Jupiter-like masses may increase by up to a factor of \(\sim\)2 before their parent disc dissipates. keywords: planets and satellites: formation - planets and satellites: general - planets and satellites: fundamental parameters - protoplanetary discs - planet-disc interactions ## 1 Introduction The Atacama Large Millimeter Array (ALMA) is imaging circumstellar discs at high angular resolution and finding annular gaps in dust (ALMA Partnership et al., 2015; Huang et al., 2018; Cieza et al., 2019) and gas (Isella et al., 2016; Fedele et al., 2017; Favre et al., 2019; Zhang et al., 2021). A popular interpretation is that these gaps are opened by embedded planets and the density waves they excite (Goldreich and Tremaine, 1980; Goodman and Rafikov, 2001; Kanagawa et al., 2016; Zhang et al., 2018; Dong and Fung, 2017; Bae et al., 2017). Velocity-resolved channel maps of gas emission lines also reveal non-Keplerian gas motions that could be stirred by planets (Teague et al., 2018, 2019; Pinte et al., 2020, 2023). Dozens of potential planets have been identified; see Table 1 for a compilation. Efforts to confirm their presence by direct imaging are accelerating (Cugno et al., 2019; Zurlo et al., 2020; Asensio-Torres et al., 2021; Jorquera et al., 2021; Facchini et al., 2021; Huelamo et al., 2022; Currie et al., 2022; Follette et al., 2022; Cugno et al., 2023), but so far only the protoplanets PDS 70b and c have been captured in their own light (Haffert et al., 2019; Wang et al., 2020, 2021; Zhou et al., 2021). Prospects for direct imaging depend critically on accretion luminosities. The planet masses \(M_{\rm p}\) inferred from fitting disc substructures are usually \(\gtrsim 10\,M_{\oplus}\)(Zhang et al., 2018; also our Table 1), large enough that the planets may have acquired massive gas envelopes (e.g. Piso and Youdin, 2014). 
The self-gravity of these envelopes can lead to "runaway" accretion whereby the mass doubling time of a planet \(M_{\rm p}/\dot{M}_{\rm p}\) decreases with increasing \(M_{\rm p}\)(e.g. Pollack et al., 1996). Runaway can be thermodynamic, brought about by large envelope luminosities and short cooling times in quasi-hydrostatic equilibrium, or hydrodynamic, characterized by flows that accelerate to planetary free-fall velocities (Mizuno et al., 1978; Ginzburg and Chiang, 2019). The outcome of runaway is commonly presumed to be Jupiter-sized gas giants, though how this process unfolds and in particular how it ends remain uncertain. What are the relevant planet accretion rates, and how do they depend on planet mass and disc parameters? Numerical simulations have provided data and fitting formulae in various patches of parameter space (e.g. Tanigawa and Watanabe, 2002; D'Angelo et al., 2003; Machida et al., 2010; Bethune and Rafikov, 2019), but we are not aware of an analytic or unifying theory. To the usual problems associated with accretion -- how material cools and how it sheds angular momentum -- we need to add, for a protoplanet orbiting a star, how gas moves in their combined potential, including rotational forces, in 3D. Lambrechts et al. (2019) point out that what several large-scale disc-planet simulations report as mass accretion rates are actually only upper limits, as permanent accretion of mass depends on smaller-scale physics (e.g. cooling of the planetary interior) which simulations typically do not resolve. In trying to understand from first principles how protoplanets accrete, Ginzburg and Chiang (2019) started with the simplest model, that runaway accretion takes the form of Bondi accretion from a uniform medium with no angular momentum (see, e.g., the textbook by Frank et al., 2002). The assumption of uniform background density would be justified if the planet were fully embedded in the disc, i.e. if its gravitational radius of influence, measured by the Bondi radius \(r_{\rm Bondi}\), were smaller than the local circumstellar disc height \(H_{\rm p}\). The ratio of the two lengths is the thermal mass parameter \[q_{\rm th} \equiv \frac{r_{\rm Bondi}}{H_{\rm p}} \tag{1}\] \[= \frac{M_{\rm p}}{M_{\star}(H_{\rm p}/R_{\rm p})^{3}}\,,\] where \(r_{\rm Bondi}=GM_{\rm p}/c_{\rm s}^{2}\), \(G\) is the gravitational constant, \(M_{\rm p}\) is the planet mass, \(M_{\star}\) is the host stellar mass, \(H_{\rm p}=c_{\rm s}/\Omega_{\rm p}\), and \(\Omega_{\rm p}\) is the planet's Keplerian frequency at orbital radius \(R_{\rm p}\). On the one hand, roughly half of hypothesised gap-opening planets have \(q_{\rm th}\lesssim 1\) (see Table 1), motivating a Bondi-like accretion rate that scales as \(\dot{M}_{\rm p}\propto M_{\rm p}^{2}\). On the other hand, the spherically symmetric Bondi solution ignores the meridional flow patterns seen in 3D simulations (Szulagyi et al., 2014; Fung et al., 2015; Ormel et al., 2015). More massive "superthermal" planets with \(q_{\rm th}\gtrsim 1\) sample more of the disc's vertical density gradient. Stellar tidal forces also enter; these pace accreting material down to the planet's Hill sphere, which in the superthermal regime now lies inside the Bondi radius. As with subthermal planets, there seems no consensus for how the superthermal accretion rate scales with input parameters. A simple argument based on the Hill sphere and Keplerian shear yields an accretion rate \(\dot{M}_{\rm p}\propto M_{\rm p}^{2/3}\)(e.g. 
Rosenthal et al., 2020, their equation 7, and references therein). But many studies (e.g. Mordasini et al., 2015; Lee, 2019; Lambrechts et al., 2019) adopt the empirical scaling \(\dot{M}_{\rm p}\propto M_{\rm p}^{4/3}\) by Tanigawa and Watanabe (2002) from their 2D numerical simulations. The two options lie on opposite sides of the \(\dot{M}_{\rm p}\propto M_{\rm p}^{1}\) scaling which divides power-law growth from super-exponential runaway growth. Our goal here is to help clear up what seems like a longstanding confusion. We utilize 3D isothermal numerical simulations of planet-disc interactions, similar to those used by others, to decide how the protoplanet accretion rate \(\dot{M}_{\rm p}\) depends on planet mass \(M_{\rm p}\), local disc gas density \(\rho_{\rm g}\), and disc aspect ratio \(H_{\rm p}/R_{\rm p}\), starting in the sub-thermal regime (\(\sim\)1 \(M_{\oplus}\)) and working our way systematically to the superthermal limit (\(\sim\)10 \(M_{\rm J}\)). Actually our findings will be restricted to \(\max\dot{M}_{\rm p}\), as we track only how much mass potentially accretes upon entering a planet's gravitational sphere of influence, not how much actually accretes (see also Lambrechts et al., 2019). Section 2 details our numerical methods. Section 3 reports max \(\dot{M}_{\rm p}\) and how its dependence on input parameters can be understood and reproduced using simple arguments. Section 4 summarises, discusses how our work makes sense of previous numerical studies, and connects to observations. ## 2 Simulation Setup Most of our simulations are performed with the Eulerian hydrodynamics code Athena++ (Stone et al., 2020), outfitted with a second-order van Leer time integrator (integrator = vl2), a second-order piecewise linear spatial reconstruction of the fluid variables (xorder = 2), and the Harten-Lax-van Leer-Einfeldt Riemann solver (--flux hlle). For some regions of parameter space, we check our results against published simulations by Fung et al. (2019) that used the Lagrangian-remap, GPU code PEnGUIn (Fung et al., 2015). The setup of our Athena++ simulations is described below, with differences between PEnGUIn and Athena++ highlighted. ### Equations solved Athena++ solves the 3D Euler equations: \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho{\bf v})=0 \tag{2}\] \[\frac{\partial\left(\rho{\bf v}\right)}{\partial t}+\nabla\cdot(\rho{\bf v}\otimes{\bf v})=-\nabla P-\rho\nabla\Phi \tag{3}\] where \(\rho\), \({\bf v}\), and \(P\) are the gas density, velocity, and pressure, and \(\Phi\) is the gravitational potential. We use an isothermal equation of state \[P=\rho c_{\rm s}^{\,2} \tag{4}\] with constant sound speed \(c_{\rm s}\). In the hydrodynamic runaway phase of giant planet formation, the planet's atmosphere cools rapidly and so the isothermal approximation seems appropriate, at least on Bondi sphere scales (Piso and Youdin, 2014; Lee and Chiang, 2015; Ginzburg and Chiang, 2019). Simulations are performed in the frame rotating at the planet's orbital angular frequency \(\Omega_{\rm p}=1\), using spherical coordinates (\(R\), \(\Theta\), \(\Psi\)) centred on the star, where \(R\) is radius, and \(\Theta\) and \(\Psi\) are the polar and azimuthal angles, respectively. In this frame the planet is fixed at (\(R_{\rm p}\), \(\Theta_{\rm p}\), \(\Psi_{\rm p}\)) = (1, \(\pi/2\), \(\pi\)).
The gravitational potential is the sum of the potentials due to the star of mass \(M_{\star}\) and the planet of mass \(M_{\rm p}\), plus the indirect potential arising from our star-centred grid: \[\Phi = -\frac{GM_{\star}}{R}-\frac{GM_{\rm p}}{\sqrt{R^{2}+R_{\rm p}^{2}-2RR_{\rm p}\sin\Theta\cos(\Psi-\Psi_{\rm p})}}\times f_{\rm soft} \tag{5}\] \[+ \frac{GM_{\rm p}R\sin\Theta\cos(\Psi-\Psi_{\rm p})}{R_{\rm p}^{2}}\] where \(G\) is the gravitational constant. When the distance from the planet \(d=\sqrt{R^{2}+R_{\rm p}^{2}-2RR_{\rm p}\sin\Theta\cos(\Psi-\Psi_{\rm p})}\) exceeds \(r_{\rm soft}\), we set \(f_{\rm soft}=1\). Closer to the planet, the potential is softened (\(f_{\rm soft}<1\)) according to \[f_{\rm soft}=\left(\frac{d}{r_{\rm soft}}\right)^{4}-2\left(\frac{d}{r_{\rm soft}}\right)^{3}+2\left(\frac{d}{r_{\rm soft}}\right)\qquad\quad{\rm if}\;\;d<r_{\rm soft}\,. \tag{6}\] We set \(r_{\rm soft}\) to three times the smallest cell size. The PEnGUIn simulations use a different softening prescription given by equation 11 of Fung et al. (2019). A subset of our Athena++ runs simulate planetary accretion using sink cells. Gas densities inside cells for which \(d<r_{\rm sink}\) are depleted at a rate \[\frac{\partial\rho}{\partial t}=-\frac{\rho}{\tau_{\rm sink}} \tag{7}\] where \(r_{\rm sink}=\min(r_{\rm Bondi},r_{\rm Hill})/10\), \(r_{\rm Bondi}=GM_{\rm p}/c_{\rm s}^{2}\), \(r_{\rm Hill}=3^{-1/3}\left(M_{\rm p}/M_{\star}\right)^{1/3}R_{\rm p}\), and \(\tau_{\rm sink}=r_{\rm sink}/c_{\rm s}\). At our fiducial resolution, \(r_{\rm sink}\simeq 2r_{\rm soft}=0.1r_{\rm Bondi}\) for subthermal runs. For superthermal runs, \(r_{\rm sink}\simeq r_{\rm soft}=0.1r_{\rm Hill}\). The mass removed is not added to the planet; for typical parameters of non-self-gravitating discs, the mass removed over the simulation duration is \(\ll M_{\rm p}\). In Appendix B we test the sensitivity of our results to \(r_{\rm sink}\). ### Initial and boundary conditions In the Athena++ runs, the planet mass is initially zero and is ramped up to its final mass \(M_{\rm p,final}\) over one orbital period \(2\pi/\Omega_{\rm p}\): \[M_{\rm p}(t) = M_{\rm p,final}\sin^{2}\left[\frac{t}{2\pi\Omega_{\rm p}^{-1}}\times\frac{\pi}{2}\right]\ \ \ {\rm for}\ \ t<2\pi\Omega_{\rm p}^{-1} \tag{8}\] \[= M_{\rm p,final}\ \ \ {\rm otherwise}.\] For subthermal runs, the Athena++ simulation domain spans \(\Psi=\Psi_{\rm p}-10r_{\rm Bondi}/R_{\rm p}\) to \(\Psi_{\rm p}+10r_{\rm Bondi}/R_{\rm p}\) in azimuth, and \(\Theta=\pi/2\) to \(\pi/2-3H_{\rm p}/R_{\rm p}\) in polar angle. Only the upper half of the disc at \(\Theta<\pi/2\) is simulated; the flow is assumed symmetric about the midplane, with boundary conditions there as appropriate (e.g., \(v_{\Theta}=0\) at \(\Theta=\pi/2\)). Runs with smaller \(q_{\rm th}\) are especially computationally costly, so for \(q_{\rm th}\leq 0.05\) we limit the upper boundary to \(\Theta=\pi/2-30r_{\rm Bondi}/R_{\rm p}\). At all boundaries except for the midplane the flow is fixed to its initial conditions. For subthermal runs in PEnGUIn, the simulation domain and boundary conditions are the same as in Athena++, except in PEnGUIn the full 2\(\pi\) in azimuth is simulated with periodic boundary conditions, and a reflecting boundary condition is used for the \(\Theta\)-boundary above the midplane. For superthermal runs where \(q_{\rm th}\geq 1\), both Athena++ and PEnGUIn use radial domains that span \(\pm 10H_{\rm p}\) around the planet and azimuthal domains that cover \(2\pi\).
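The softening, sink, and mass-ramp prescriptions of equations 6-8 translate directly into code. Below is a minimal NumPy sketch in the paper's code units (with \(\Omega_{\rm p}=1\) by default); it is an illustration of the prescriptions, not the simulation source itself.

```python
import numpy as np

def f_soft(d, r_soft):
    """Softening factor of equation 6: unity for d >= r_soft, and a smooth
    quartic blend closer to the planet (f_soft(0) = 0, f_soft(r_soft) = 1)."""
    x = np.asarray(d, dtype=float) / r_soft
    return np.where(x < 1.0, x**4 - 2.0 * x**3 + 2.0 * x, 1.0)

def sink_step(rho, dt, r_sink, c_s):
    """One forward-Euler application of the sink prescription (equation 7),
    d(rho)/dt = -rho / tau_sink with tau_sink = r_sink / c_s;
    valid for dt << tau_sink."""
    return rho * (1.0 - dt * c_s / r_sink)

def planet_mass(t, M_final, Omega_p=1.0):
    """Mass ramp of equation 8: a sin^2 taper over one orbital period."""
    T_orb = 2.0 * np.pi / Omega_p
    return M_final * np.sin(0.5 * np.pi * t / T_orb)**2 if t < T_orb else M_final
```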
Wave-killing zones in Athena++ damp reflections near the radial boundaries: \[\frac{\partial X}{\partial t}=-\left(\frac{X-X(t=0)}{\tau}\right)K(R)\] \[K(R)=1-\sin^{2}\left[\frac{\pi}{2}\left(\frac{R-R_{1}}{R_{\rm Kill,1}-R_{1}}\right)\right]\qquad{\rm for}\ \ R_{1}<R<R_{\rm Kill,1}\,,\] where \(X\) stands for the fluid variables, \(\tau\) is a damping timescale, and an analogous kernel operates near the outer radial boundary. Because we simulate only half the disc and assume symmetry about the midplane, mass flow rates reported in this paper are \(2\times\) those simulated. ### Subthermal limit Figure 2 shows the meridional velocity field (in the \(r_{\rm cyl}-z\) plane) around a subthermal planet in a simulation without any sink cells. Velocities have been averaged over azimuth \(\phi\), and time-averaged from \(t=10\Omega_{\rm p}^{-1}\) to \(15\Omega_{\rm p}^{-1}\). In agreement with other studies that do not use sink cells (Tanigawa et al., 2012; Fung et al., 2015; Szulagyi et al., 2016; Bethune and Rafikov, 2019), gas flows in along the planet's poles, from \(\theta\simeq 60^{\circ}\) to \(\theta=0\) (blue arrows with \(v_{r}<0\)). Figure 3 shows velocity and density along \(\theta=0\) for a few subthermal models. For \(q_{\rm th}=0.05-0.2\), and independently of \(H_{\rm p}/R_{\rm p}\), infalling gas achieves Mach 1 at \(z\simeq 0.35r_{\rm Bondi}\) (Fig. 3a), at which point \(\rho=8\rho_{0}\) (Fig. 3b). Since these simulations do not include sink cells, gas eventually exits through the midplane (red arrows in Fig. 2). The top panel of Figure 4 plots the time-averaged inflow rates \(\dot{M}_{\rm p,in}(r)\) and outflow rates \(\dot{M}_{\rm p,out}(r)\) (solid and dashed lines, respectively) from the Bondi radius to inside of the sonic point for runs with various \(q_{\rm th}\) and \(H_{\rm p}/R_{\rm p}\). Regions at \(r\gtrsim 0.2r_{\rm Bondi}\) are in a near-steady state, with inflow and outflow rates matching to within 15%, and both nearly constant with \(r\). At \(r\lesssim 0.2r_{\rm Bondi}\), flow rates rise with decreasing \(r\), implying by continuity that the density field here changes with time -- a consequence of the slight mismatch between inflow and outflow rates. Since this mismatch is less physical than numerical, we focus on the more steady region at \(r\gtrsim 0.2r_{\rm Bondi}\) which offers a well-defined \(\dot{M}_{\rm p,in}\) for every simulation. This inflow rate increases with \(q_{\rm th}\) and \(H_{\rm p}/R_{\rm p}\), spanning two orders of magnitude across our parameter space. The bottom panel of Fig. 4 plots the same data in units of \[\dot{M}_{\rm Bondi} \equiv r_{\rm Bondi}^{2}\rho_{0}c_{\rm s}\] \[=q_{\rm th}^{2}\left(\frac{H_{\rm p}}{R_{\rm p}}\right)^{3}\rho_{0}R_{\rm p}^{3}\Omega_{\rm p}. \tag{13}\] So normalised, the time-averaged inflow rates for \(q_{\rm th}\leq 0.2\) and \(0.2<r/r_{\rm Bondi}<1\) in sink-less Athena++ and PEnGUIn runs collapse to \[\dot{M}_{\rm p,in}\simeq 3.5\dot{M}_{\rm Bondi}. \tag{14}\] ### Superthermal limit As \(q_{\rm th}\) increases above 1, \(r_{\rm Bondi}\) becomes larger than the planet's Hill radius: \[r_{\rm Hill} =\left(\frac{q}{3}\right)^{1/3}R_{\rm p}\] \[=\left(\frac{1}{3}\right)^{1/3}q_{\rm th}^{1/3}H_{\rm p}\] \[=\left(\frac{1}{3}\right)^{1/3}q_{\rm th}^{-2/3}r_{\rm Bondi}. \tag{15}\] When \(r_{\rm Hill}<r_{\rm Bondi}\), stellar tidal forces are more important than thermal pressure in limiting how much gas can be gravitationally bound to the planet.
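The characteristic lengths and the Bondi-rate normalisation of equations 1, 13, and 15 can be gathered in a small helper. This is an illustrative sketch in the paper's code units (\(G=M_{\star}=R_{\rm p}=\Omega_{\rm p}=1\) and \(\rho_{0}=1\) by default), not a prescription from the paper itself.

```python
import numpy as np

def characteristic_scales(M_p, M_star, aspect, rho_0=1.0, R_p=1.0, Omega_p=1.0):
    """Thermal mass, spheres of influence, and the Bondi-rate normalisation
    (equations 1, 13, and 15)."""
    q = M_p / M_star
    q_th = q / aspect**3                      # equation 1
    H_p = aspect * R_p
    r_bondi = q_th * H_p                      # r_Bondi = G M_p / c_s^2
    r_hill = (q / 3.0)**(1.0 / 3.0) * R_p     # equation 15
    mdot_bondi = q_th**2 * aspect**3 * rho_0 * Omega_p * R_p**3   # equation 13
    return q_th, r_bondi, r_hill, mdot_bondi

# Note the two radii cross (r_Hill = r_Bondi) at q_th = 3**(-0.5) ~ 0.58,
# bracketing the empirical boundary q_th ~ 0.3 where the inflow scaling changes.
```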
Figure 1: Time evolution of the inflow rate (solid curve) and outflow rate (dashed) evaluated at \(r=r_{\rm Bondi}\) for our \(q_{\rm th}=0.1,H_{\rm p}/R_{\rm p}=0.035\) Athena++ simulation without a sink cell. The planet mass \(M_{\rm p}\) is ramped up from 0 at \(t=0\) to its final value at \(t=2\pi\Omega_{\rm p}^{-1}\) (vertical line). Beyond this time, the simulation is in a quasi-steady state where outflow nearly balances inflow. In reality the difference between inflow and outflow — i.e. the true net accretion rate — depends on the circumplanetary physics of cooling and viscosity which our simulations do not capture. Thus our paper focuses on just the inflow rate as an upper limit on the true accretion rate. Figure 2: Flow around a subthermal planet, located at \((r_{\rm cyl},\ z)=(0,0)\), with \(q_{\rm th}=0.05\) and \(H_{\rm p}/R_{\rm p}=0.035\), from an Athena++ simulation without using sink cells. Data are time-averaged from \(t=(10-15)\,\Omega_{\rm p}^{-1}\). Inflows (planet-centred radial velocity \(v_{r}<0\)) are tagged blue and outflows are tagged red. The length of each arrow scales as the meridional gas velocity \(\sqrt{v_{r}^{2}+v_{r_{\rm cyl}}^{2}}\), averaged over azimuth \(\phi\), with the longest arrow having a magnitude of \(6.5c_{\rm s}\). The black curve marks the Bondi radius \(r=r_{\rm Bondi}\). Gas flows in along the planet's poles and, because the simulation does not include sink cells, exits through the midplane. Figures 5 and 6 show that for \(q_{\rm th}\geq 1\) there is a well-defined \(\dot{M}_{\rm p,in}\) for \(0.4\lesssim r/r_{\rm Hill}\lesssim 1\), motivating a Hill scaling for \(\dot{M}_{\rm p,in}\) for superthermal planets by analogy with our earlier Bondi scaling for subthermal planets. We start at \(1\leq q_{\rm th}\leq 3\), in the "3D" regime where the Hill sphere is still embedded in the circumstellar disc (\(r_{\rm Hill}<H_{\rm p}\)). Here the Hill sphere presents a cross-sectional area of \(\sim\!\!r_{\rm Hill}^{2}\) to gas shearing toward it at speed \(\sim\!\Omega_{\rm p}r_{\rm Hill}\). The inflow rate then scales as \[\dot{M}_{\rm Hill,\,3D} \equiv r_{\rm Hill}^{2}\times\Omega_{\rm p}r_{\rm Hill}\times\rho_{0} \tag{16}\] \[= \frac{q_{\rm th}}{3}\left(\frac{H_{\rm p}}{R_{\rm p}}\right)^{3}\rho_{0}R_{\rm p}^{3}\Omega_{\rm p}\,,\] a weaker dependence on planet mass than \(\dot{M}_{\rm Bondi}\propto q_{\rm th}^{2}\). The bottom panel of Fig. 5 confirms the expected scaling, showing that for \(0.4\leq r/r_{\rm Hill}\leq 1\) and \(1\leq q_{\rm th}\leq 3\), our data from sink-less Athena++ and PEnGUIn simulations collapse to \[\dot{M}_{\rm p,in}\simeq 4\dot{M}_{\rm Hill,\,3D}\,. \tag{17}\] When \(q_{\rm th}\gtrsim 10\), the Hill sphere "pops out" of the circumstellar disc (\(r_{\rm Hill}>H_{\rm p}\)), as illustrated in Figure 7. The density near the Hill sphere's pole is so low that the inflow comes mostly from the midplane; accretion is now more 2D. Midplane gas presents a cross-sectional area to the Hill sphere of \(\sim\!r_{\rm Hill}H_{\rm p}\) and flows in at a rate \[\dot{M}_{\rm Hill,\,2D} \equiv r_{\rm Hill}H_{\rm p}\times\Omega_{\rm p}r_{\rm Hill}\times\rho_{0} \tag{18}\] \[= \left(\frac{q_{\rm th}}{3}\right)^{2/3}\left(\frac{H_{\rm p}}{R_{\rm p}}\right)^{3}\rho_{0}R_{\rm p}^{3}\Omega_{\rm p}\,,\] which scales even more weakly with planet mass than \(\dot{M}_{\rm Hill,\,3D}\). The bottom panel of Fig.
6 shows that for \(0.4\leq r/r_{\rm Hill}\leq 1\) and \(q_{\rm th}\geq 10\), our data from sink-less Athena++ simulations collapse to \[\dot{M}_{\rm p,in}\simeq 9\dot{M}_{\rm Hill,\,2D}\,. \tag{19}\] We find that for larger \(q_{\rm th}\) the outflow rate \(\dot{M}_{\rm p,out}\) equilibrates more slowly than \(\dot{M}_{\rm p,in}\). The data for Fig. 6 were taken when \(\dot{M}_{\rm p,in}\) had equilibrated but \(\dot{M}_{\rm p,out}\) had not. We have checked for \(q_{\rm th}=10\) and \(H_{\rm p}/R_{\rm p}=0.095\) that when the simulation is extended to \(1000\,\Omega_{\rm p}^{-1}\), outflow grows to match inflow, as expected for sink-less runs. Figure 4: _Top:_ Time-averaged mass inflow rates \(\dot{M}_{\rm p,in}\) (solid lines) across planet-centred spheres of radius \(r\) for subthermal planets, using simulations without sink cells. Coloured lines show Athena++ results for different input parameters, time-averaged from \(t=(10-15)\,\Omega_{\rm p}^{-1}\). The dotted line is the inflow rate for a PEnGUIn simulation with \(q_{\rm th}=0.1\) and \(H_{\rm p}/R_{\rm p}=0.035\), time-averaged from \(t=(20-21)\times 2\pi\Omega_{\rm p}^{-1}\). We focus on the most steady region at \(r\gtrsim 0.2r_{\rm Bondi}\) where each simulation converges to a value of \(\dot{M}_{\rm p,in}\) that is nearly constant with \(r\), and interpret this inflow rate as an upper limit to the planet's accretion rate. Since these simulations do not include sink cells to permanently accrete gas, outflow rates (dashed lines) balance inflow rates. _Bottom_: Same as top, but showing only the inflow rates \(\dot{M}_{\rm p,in}\) normalised by the Bondi rate \(\dot{M}_{\rm Bondi}=r_{\rm Bondi}^{2}\rho_{0}c_{\rm s}\). Figure 3: Time-averaged inflow velocity \(-v_{x}\) and density \(\rho\) along the planet-centred \(\theta=0\) polar streamline, for \(q_{\rm th}\leq 0.2\), as measured with sink-less Athena++ simulations. In all cases, the inflow becomes supersonic at \(z\simeq 0.35r_{\rm Bondi}\), at which point \(\rho\simeq 8\rho_{0}\). Figure 8 plots gas streamlines in the disc midplane around a \(q_{\rm th}=1\) planet. Most of the material that crosses the Hill sphere is sourced by a subset of horseshoe orbits flowing in from either side of the planet's orbit (see also fig. 4 of Lubow et al., 1999; fig. 3 of Tanigawa and Watanabe, 2002). Since the simulation does not include sink cells, nearly all of the inflowing gas also exits the Hill sphere, so that \(\dot{M}_{\rm p,out}\approx\dot{M}_{\rm p,in}\). ### Gaps The inflow rates in Figs. 4-6 were time-averaged between \(t=(10-15)\,\Omega_{\rm p}^{-1}\), before the planets have cleared gaps around themselves. Since the planet is fed by co-orbital material (Fig. 8), we expect that inflow rates should scale in proportion to the surface density in the gap, a.k.a. the gap depth. To test this, we extend the runtime of our \(q_{\rm th}=10\), \(H_{\rm p}/R_{\rm p}=0.095\) simulation to \(1000\,\Omega_{\rm p}^{-1}\) which allows gaps to develop more fully. The left panel of Figure 9 shows the gap carved by the planet at the end of this extended simulation. We compute the average surface density in the gap \(\Sigma_{\rm g}\) by summing the mass in all cells in an annulus with \(R_{\rm p}-r_{\rm Hill}<R<R_{\rm p}+r_{\rm Hill}\), excluding those in the circumplanetary region with \(\Psi_{\rm p}-2r_{\rm Hill}/R_{\rm p}<\Psi<\Psi_{\rm p}+2r_{\rm Hill}/R_{\rm p}\), and dividing by the surface area of the excised annulus.
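This gap average reduces to a masked, area-weighted mean. A sketch of one possible implementation, assuming 2D midplane arrays on a uniform grid in \(R\) and \(\Psi\):

```python
import numpy as np

def gap_surface_density(Sigma, R, Psi, R_p, Psi_p, r_hill):
    """Gap-averaged surface density Sigma_g: cells in the annulus
    |R - R_p| < r_Hill, excluding the circumplanetary wedge
    |Psi - Psi_p| < 2 r_Hill / R_p.  Sigma, R, Psi are 2D midplane arrays;
    the planet sits at Psi_p = pi, away from the azimuthal seam."""
    sel = (np.abs(R - R_p) < r_hill) & (np.abs(Psi - Psi_p) >= 2.0 * r_hill / R_p)
    # Mass / area is an area-weighted mean; with cell areas dA = R dR dPsi and
    # uniform spacings, the weights reduce to R itself.
    return np.average(Sigma[sel], weights=R[sel])
```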
The right panel of Fig. 9 shows that the decline of \(\Sigma_{\rm g}\) over the simulation duration (solid blue curve) is roughly paralleled by the decline in \(\dot{M}_{\rm p,in}\) through the Hill sphere (solid black curve), and that \(\dot{M}_{\rm p,in}\) re-normalised by the gap depth can describe the actual inflow rate to within a factor of 2 (dashed black curve). This result also agrees with Fung et al. (2019, their fig. 21) who showed that the average surface density in the circumplanetary region (i.e., the region we excised to compute \(\Sigma_{\rm g}\)) scales in proportion to \(\Sigma_{\rm g}\). Thus we expect that equations 14, 17, and 19 for planet inflow rates can still be used in the presence of gaps, with \(\rho_{0}\) in those equations set equal to the midplane density averaged over the annular gap, excluding the region nearest the planet.2 Footnote 2: This procedure sidesteps having to specify disc viscosity as it is encoded in the gap depth (e.g. Duffell and MacFadyen, 2013; Fung et al., 2014; Kanagawa et al., 2015). Our simulations do not include an explicit viscosity. Including one would presumably lead to accretion of circumplanetary material onto the planet, reducing \(\dot{M}_{\rm p,out}\) but leaving \(\dot{M}_{\rm p,in}\) unchanged. Figure 5: _Top:_ Mass inflow rates \(\dot{M}_{\rm p,in}\) (solid lines) across planet-centred spheres of radius \(r\) for marginally superthermal planets with \(1\leq q_{\rm th}\leq 3\), using simulations without sink cells. Coloured lines show Athena++ results for different input parameters, time-averaged from \(t=(10-15)\,\Omega_{\rm p}^{-1}\). The dotted line is the inflow rate for a PEnGUIn simulation with \(q_{\rm th}=1\) and \(H_{\rm p}/R_{\rm p}=0.035\), time-averaged from \(t=(20-21)\times 2\pi\Omega_{\rm p}^{-1}\). Just as subthermal runs have a nearly constant \(\dot{M}_{\rm p,in}\) for \(0.2\leq r/r_{\rm Bondi}\lesssim 1\) (Fig. 4), superthermal runs have a nearly constant \(\dot{M}_{\rm p,in}\) between \(0.4\leq r/r_{\rm Hill}\lesssim 1\) that we interpret as an upper limit to the planet's accretion rate. Since these simulations do not use sink cells to permanently accrete gas, outflow rates (dashed lines) balance inflow rates. _Bottom:_ Same as top, but now showing only the inflow rates \(\dot{M}_{\rm p,in}\) normalised by \(\dot{M}_{\rm Hill,3D}=r_{\rm Hill}^{2}\times\Omega_{\rm p}r_{\rm Hill}\times\rho_{0}\) (equation 16). Figure 6: _Top:_ Same as the top panel of Figure 5 but for \(q_{\rm th}\geq 10\). Inflow rates remain nearly constant between \(0.4\leq r/r_{\rm Hill}\lesssim 1\). At the times these data were taken (between \(10\) and \(15\,\Omega_{\rm p}^{-1}\)), \(\dot{M}_{\rm p,in}\) has equilibrated but \(\dot{M}_{\rm p,out}\) has not. We have checked in one case (\(q_{\rm th}=10\), \(H_{\rm p}/R_{\rm p}=0.095\)) that over longer runtimes \(\dot{M}_{\rm p,out}\) grows to balance \(\dot{M}_{\rm p,in}\). _Bottom:_ Same as top, but now showing only the inflow rates \(\dot{M}_{\rm p,in}\), normalised by \(\dot{M}_{\rm Hill,2D}=r_{\rm Hill}H_{\rm p}\times\Omega_{\rm p}r_{\rm Hill}\times\rho_{0}\) (equation 18). ### Sink cell runs Figure 10 plots \(\dot{M}_{\rm p,in}\) vs. \(r\) from Athena++ simulations that use sink cells near the planet. Like their sink-less counterparts, these runs show a well-defined \(\dot{M}_{\rm p,in}\) for \(0.1\lesssim r/\min(r_{\rm Bondi},r_{\rm Hill})\lesssim 1\) across all three subthermal, marginally superthermal, and superthermal regimes. Fig.
10 also shows that \(\dot{M}_{\rm p,in}\) simulated with sink cells follows the same scalings with \(q_{\rm th}\) and \(H_{\rm p}/R_{\rm p}\) that we identified from runs without sink cells (equations 13, 16, 18). Overall magnitudes for \(\dot{M}_{\rm p,in}\) are also similar, with the largest difference in the subthermal limit where \(\dot{M}_{\rm p,in}\) is 3\(\times\) higher with sink cells than without. This higher inflow rate is within 15% of the classic Bondi accretion rate onto a point mass from spherically symmetric, isothermal gas: \(\dot{M}_{\rm p}=4.48\pi G^{2}M_{\rm p}^{2}\rho_{0}/c_{\rm s}^{3}\) (table 1 of Bondi, 1952). The flow field around a subthermal planetary sink (Figure 11) is nearly spherically symmetric and lacks the midplane outflow of non-sink simulations (Fig. 2). Figure 8: Gas streamlines and density in the disc midplane at \(t=10\,\Omega_{\rm p}^{-1}\) from a sink-less Athena++ simulation with \(q_{\rm th}=1\) and \(H_{\rm p}/R_{\rm p}=0.035\). Data are in Cartesian coordinates centred on the planet, where \(x\) points away from the star and \(y\) points along the planet's orbit. The Hill sphere (black circle) has gas fed into it by streamlines colored black; many of these streamlines are on horseshoe orbits (top and bottom), while others are circulating (sides). The rate at which these streamlines carry mass into the sphere defines \(\dot{M}_{\rm p,in}(r_{\rm Hill})\), and the rate at which they carry mass out defines \(\dot{M}_{\rm p,out}(r_{\rm Hill})\). Since the simulation shown here does not include sink cells, \(\dot{M}_{\rm p,out}\simeq\dot{M}_{\rm p,in}\). In analogous simulations that do use sink cells (section 3.4), \(\dot{M}_{\rm p,out}\ll\dot{M}_{\rm p,in}\), while \(\dot{M}_{\rm p,in}\) remains within a factor of 3 of its value derived without sink cells. For this figure we highlight \(r_{\rm Hill}\) as the boundary across which we measure mass fluxes; in Figs. 4, 5, 6, and 10, we vary the measurement boundary by a factor of 10, and also consider \(r_{\rm Bondi}\) as an alternative reference boundary. Figure 7: Meridional slices of the density field around superthermal planets, taken at \(t=10\Omega_{\rm p}^{-1}\) and azimuthally averaged, from sink-less Athena++ runs with \(H_{\rm p}/R_{\rm p}=0.035\) and \(q_{\rm th}\) increasing from top to bottom. The planet is at the origin \((r_{\rm cyl},\,z)=(0,0)\). ## 4 Summary and Discussion Using global, isothermal, 3D hydrodynamic simulations, we measured the maximum accretion rate of a planet embedded in a gaseous circumstellar disc. This upper bound is given by \(\dot{M}_{\rm p,in}\), the rate at which gas enters the planet's gravitational sphere of influence, which is the smaller of the planet's Bondi and Hill spheres.
We would like to know how much of the inflowing gas becomes permanently bound, but this cannot be determined without knowing how the gas sheds angular momentum, or stays cool against adiabatic compression or shock heating; this physics is not captured in our inviscid, isothermal simulations. The upper limit we have established is relevant for protoplanets of at least several Earth masses with self-gravitating gas envelopes, accreting in the hydrodynamic runaway or post-runaway regimes (e.g. Ginzburg and Chiang, 2019a,b). Figure 12 summarises our results. The planet's thermal mass parameter \(q_{\rm th}\) controls the geometry and magnitude of inflow according to: \[\frac{\dot{M}_{\rm p,in}}{\rho_{\rm g}\Omega_{\rm p}R_{\rm p}^{3}}\simeq C_{1}\,q_{\rm th}^{2}\left(\frac{H_{\rm p}}{R_{\rm p}}\right)^{3}\qquad q_{\rm th}\lesssim 0.3\,, \tag{20}\] \[\frac{\dot{M}_{\rm p,in}}{\rho_{\rm g}\Omega_{\rm p}R_{\rm p}^{3}}\simeq C_{2}\,q_{\rm th}\left(\frac{H_{\rm p}}{R_{\rm p}}\right)^{3}\qquad 0.3\lesssim q_{\rm th}\lesssim 10\,, \tag{21}\] \[\frac{\dot{M}_{\rm p,in}}{\rho_{\rm g}\Omega_{\rm p}R_{\rm p}^{3}}\simeq C_{3}\,q_{\rm th}^{2/3}\left(\frac{H_{\rm p}}{R_{\rm p}}\right)^{3}\qquad q_{\rm th}\gtrsim 10\,, \tag{22}\] where \(q_{\rm th}\equiv\left(M_{\rm p}/M_{\star}\right)\left(H_{\rm p}/R_{\rm p}\right)^{-3}\), \(M_{\rm p}\) and \(M_{\star}\) are the planet and star masses, and \(\rho_{\rm g}\), \(H_{\rm p}/R_{\rm p}\), and \(\Omega_{\rm p}\) are the ambient midplane gas density, disc aspect ratio, and Keplerian angular frequency at the planet's orbital radius \(R_{\rm p}\). When we model the planet with sink cells, then the constants \(\{C_{1},\,C_{2},\,C_{3}\}=\{12,2,9/3^{2/3}\}\); otherwise \(\{C_{1},\,C_{2},\,C_{3}\}=\{3.5,4/3,9/3^{2/3}\}\). All of these constants, including the \(q_{\rm th}\) boundary values separating the three regimes, are calibrated from simulations. For subthermal planets with \(q_{\rm th}\lesssim 0.3\), gas flows in at a Bondi-like rate, increasing as the square of the planet mass. Superthermal inflow rates scale more weakly with planet mass because stellar tides restrict the planet's reach for \(q_{\rm th}\gtrsim 0.3\), and because the Hill sphere pops well out of the disc for \(q_{\rm th}\gtrsim 10\). Whereas the (minimum) mass doubling time \(M_{\rm p}/\dot{M}_{\rm p,in}\) at fixed \(\rho_{\rm g}\) decreases with planet mass in the strongly subthermal regime (i.e. growth is potentially super-exponentially fast), the doubling time increases with planet mass in the strongly superthermal regime (power-law growth). This last result should help to limit the masses to which planets can grow (e.g. Rosenthal et al., 2020). In equations 20-22, \(\rho_{\rm g}\) is the disc density outside the planet's immediate sphere of influence but still within the planet's horseshoe co-orbital region. This density is lowered as the planet opens a gap about its orbit. We have checked that the planet's inflow rate simply scales in proportion to the gap surface density, which follows its own scalings with \(M_{\rm p}/M_{\star}\), \(H_{\rm p}/R_{\rm p}\), and dimensionless viscosity \(\alpha\) (e.g. Duffell and MacFadyen, 2013; Fung et al., 2014; Kanagawa et al., 2015). These gap scalings can be combined with the scalings we have established in this paper to determine how inflow rates scale in the net. For example, for subthermal planets that open deep gaps (which they can if \(\alpha\) is small enough), \(\rho_{\rm g}\propto M_{\rm p}^{-2}\), and therefore \(\dot{M}_{\rm p,in}\propto\rho_{\rm g}q_{\rm th}^{2}\propto M_{\rm p}^{0}\).
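Equations 20-22 translate directly into a piecewise rate function. The sketch below uses the calibrated constants quoted above and accepts inputs in any consistent unit system; it is an illustration, not code from the paper.

```python
def max_mdot_in(q_th, aspect, rho_g, Omega_p, R_p, sink=True):
    """Maximum inflow rate of equations 20-22, in units of
    rho_g * Omega_p * R_p**3.  `sink=True` selects the sink-cell constants
    {C1, C2, C3} = {12, 2, 9/3^(2/3)}; otherwise {3.5, 4/3, 9/3^(2/3)}."""
    C1, C2, C3 = (12.0, 2.0, 9.0 / 3.0**(2.0 / 3.0)) if sink else \
                 (3.5, 4.0 / 3.0, 9.0 / 3.0**(2.0 / 3.0))
    if q_th < 0.3:                    # subthermal: Bondi-like, Mdot ~ M_p^2
        f = C1 * q_th**2
    elif q_th < 10.0:                 # marginally superthermal: Hill, 3D, Mdot ~ M_p
        f = C2 * q_th
    else:                             # strongly superthermal: Hill, 2D, Mdot ~ M_p^(2/3)
        f = C3 * q_th**(2.0 / 3.0)
    return f * aspect**3 * rho_g * Omega_p * R_p**3

# Consistency check from section 3.4: in the subthermal limit the sink-cell
# rate equals C1 * r_Bondi^2 * rho_g * c_s with C1 = 12, within ~15% of the
# classic isothermal Bondi coefficient 4.48*pi ~ 14.1 (Bondi 1952).
```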
### Comparison with other simulations For the most part our results confirm or can be reconciled with previous calculations. We found that inflow rates scale with the smaller of the Bondi and Hill spheres. In their study of orbital migration, Masset et al. (2006) determined that the smaller of the two regions also matters for the torque exerted by the disc, and that the width of the horseshoe zone changes its dependence on planet mass at \(q_{\rm th}\approx 0.5\) (see their fig. 9), similar to where we found a break in the inflow scaling. In the subthermal \(q_{\rm th}\lesssim 0.3\) regime, the 3D, isothermal, sink-cell simulations of D'Angelo et al. (2003) and Machida et al. (2010) (compiled in fig. 1 of Tanigawa and Tanaka, 2016) appear consistent with a Bondi accretion rate scaling, \(\dot{M}_{\rm p,in}\propto M_{\rm p}^{2}\), as we found. When our respective subthermal inflow rates are scaled to the same disc parameters (\(H_{\rm p}/R_{\rm p}=0.05\), \(R_{\rm p}=5.2\) au, and an unperturbed background disc density of \(\rho_{\rm g}=1.4\times 10^{-11}{\rm g/cm^{3}}\)), their rates are about an order of magnitude lower than what our equation 20 predicts using \(C_{1}=12\). Bethune and Rafikov (2019) studied planets with \(0.5\leq q_{\rm th}\leq 4\) in the marginally superthermal regime using 3D sink-less, isothermal, and inviscid simulations. Their simulations do not use a softened potential and instead model the planet's core as an impermeable surface. They report some permanent accretion of gas because of dissipation in standing shocks near this core. Encouragingly, their net mass accretion rate \(\dot{M}_{\rm p}=\dot{M}_{\rm p,in}-\dot{M}_{\rm p,out}\) grows linearly with \(M_{\rm p}\) and is independent of \(H_{\rm p}/R_{\rm p}\), matching the scalings in our equation 21 for \(\dot{M}_{\rm p,in}\) (see their fig. 12 and equation 13; they do not give the breakdown of inflow vs. outflow). Their net rate is \(15\times\) lower than our sink-less inflow rate, possibly because only a narrow set of polar streamlines intersects the core and permanently accretes via shocks (see the cyan curve in their fig. 2 marking the width of the shocked region). As in our sink-less runs, most of the material entering their simulated Hill spheres exits through the midplane. Tanigawa and Watanabe (2002) also considered the marginally superthermal regime. For \(0.5<q_{\rm th}<6\), they found a steeper \(M_{\rm p}^{4/3}\) scaling for the accretion rates onto their planetary sink cells.3 But this result is based on 2D (vertically integrated) simulations, in a regime where accretion is actually more 3D (Bethune and Rafikov, 2019, and our section 3.2). We expect better agreement between 2D and 3D simulations when \(r_{\rm Hill}\gtrsim H_{\rm p}\) (\(q_{\rm th}\gtrsim 10\)). The self-gravitating gas clumps modeled in 2D as sink cells by Zhu et al. (2012) fall into this fully superthermal limit, and have accretion rates which match equation 22 in magnitude and scaling (see their equation 15). ### Connecting to observations We use our results for \(\dot{M}_{\rm p,in}\) to place lower bounds on the growth timescales for observed or suspected protoplanets embedded in circumstellar gas discs. Table 1 updates the compilation of Choksi & Chiang (2022) of such planets, listing their possible masses \(M_{\rm p}\) and, where optically thin \(C^{18}\)O data are available, ambient gas surface densities \(\Sigma_{\rm g}\) (for details, see the caption to Table 1, Appendix A, and Choksi & Chiang 2022).
From \(\Sigma_{\rm g}\) we compute \(\rho_{\rm g}=\Sigma_{\rm g}/\Big{(}\sqrt{2\pi}H_{\rm p}\Big{)}\) (assuming the disc is isothermal and in hydrostatic equilibrium) and from there a planet's minimum mass-doubling timescale \(\min\left(t_{\rm double}\right)=M_{\rm p}/\dot{M}_{\rm p,in}\) (column 10 of Table 1) using equations 20-22 with the larger coefficients from our sink-cell simulations. Figure 13 compares \(\min\left(t_{\rm double}\right)\) to system ages \(t_{\rm age}\). A doubling time shorter than the system age is unlikely as it would require catching the protoplanet during a short-lived episode of fast growth. We would expect instead \(t_{\rm double}\sim t_{\rm age}\), or \(t_{\rm double}>t_{\rm age}\) if the protoplanet has largely finished forming. The protoplanets PDS 70b and c have \(\min\left(t_{\rm double}\right)\sim t_{\rm age}\); since \(t_{\rm age}\) is comparable to the gas disc's total lifetime, these objects are either undergoing their last or nearly last doublings, or have completed their assembly. Unlike the other entries in Table 1, PDS 70b and c are detected at a variety of wavelengths, have astrometry consistent with orbital motion about their host star, and reside in a large disc cavity. There are no confirmed detections among the other putative planets, only a suspicion of existence based on the observed annular disc gaps they are supposed to have opened (e.g. Zhang et al. 2018). Fig. 13 shows that for many of these systems, \(\min\left(t_{\rm double}\right)<t_{\rm age}\), sometimes by up to 4 orders of magnitude. There are a number of ways the actual doubling times \(t_{\rm double}\) can exceed our minimum estimates:4 (i) Most obviously in the context of the present work, \(\dot{M}_{\rm p}<\dot{M}_{\rm p,in}\); the barriers to permanent accretion of mass from angular momentum and energy may be formidable. Lambrechts et al. (2019) point out that cooling of the protoplanet's gas envelope may severely limit \(\dot{M}_{\rm p}\) (but see Ginzburg & Chiang 2019a for a simple argument for why cooling is fast once envelope self-gravity becomes important, and also Kurokawa & Tanigawa 2018). Circumplanetary discs are commonly invoked to remove excess angular momentum, but the mechanism of transport is unknown -- it is not even clear any disc accretes or decretes. Moreover, \(\dot{M}_{\rm p,in}\) itself may be smaller than we have calculated, if the inflowing material is adiabatic and subsonic (Cimerman et al. 2017; Fung et al. 2019; Moldenhauer et al. 2021, 2022); (ii) Disc gaps may be spatially under-resolved and thus surface densities \(\Sigma_{\rm g}\) and midplane densities \(\rho_{\rm g}\) overestimated; (iii) The non-PDS 70 planets may have masses toward the lower ends of their ranges in Table 1, closer to \(10\,M_{\oplus}\), as would be the case if disc viscosities were low. Lower planet masses would imply longer mass doubling times at subthermal (Bondi) inflow rates. Footnote 4: An alternate hypothesis is that the gaps do not actually host planets, but are instead caused by local variations in dust grain properties (e.g. Birnstiel et al. 2015; Hu et al. 2019) or fluid instabilities (e.g. Suriano et al. 2018; Cui & Bai 2021). We plan to leverage our simulations to model the spatial distribution of inflowing material and thereby compute spectral energy distributions. Our preliminary calculations show that much of the accretion power can be re-processed into the mid or far-infrared by circumplanetary dust (see also fig. 6 of Choksi & Chiang 2022).
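The doubling-time estimate is a thin wrapper on top of the rate function sketched earlier. The parameter values below are illustrative placeholders, not Table 1 entries.

```python
import numpy as np

G = 6.674e-8                                  # cgs
M_sun, M_jup = 1.989e33, 1.898e30             # g
au, Myr = 1.496e13, 3.156e13                  # cm, s

def min_doubling_time(M_p, M_star, R_p, aspect, Sigma_g):
    """min(t_double) = M_p / max(Mdot_p,in), with rho_g = Sigma_g/(sqrt(2 pi) H_p)
    and the sink-cell coefficients of equations 20-22 (cgs in, seconds out;
    reuses max_mdot_in from the earlier sketch)."""
    H_p = aspect * R_p
    rho_g = Sigma_g / (np.sqrt(2.0 * np.pi) * H_p)
    Omega_p = np.sqrt(G * M_star / R_p**3)
    q_th = (M_p / M_star) / aspect**3
    return M_p / max_mdot_in(q_th, aspect, rho_g, Omega_p, R_p, sink=True)

# A Jupiter-mass planet at 20 au with H_p/R_p = 0.08 and Sigma_g = 0.1 g cm^-2
# (q_th ~ 1.9, the Hill-3D regime) gives min(t_double) of a few times 1e5 yr.
t_double_Myr = min_doubling_time(M_jup, M_sun, 20.0 * au, 0.08, 0.1) / Myr
```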
The protoplanet in HD 163296-G5 (Table 1) will be targeted by the James Webb Space Telescope later this year (Cugno et al. 2023). Figure 9: Effects of gap opening on inflow rates, demonstrated using our sink-less simulation with \(q_{\rm th}=10\) and \(H_{\rm p}/R_{\rm p}=0.095\). The left panel shows a snapshot of the gas surface density \(\Sigma\) in the disc midplane at \(t=1000\,\Omega_{\rm p}^{-1}\). Data are in Cartesian coordinates centred on the star and the planet is at \((X,Y)=(-1,0)\). The colour scale is capped at \(\Sigma_{0}\), the initial surface density at the planet’s position. We compute a spatially averaged surface density \(\Sigma_{\rm g}\) between \(R_{\rm p}-r_{\rm Hill}<R<R_{\rm p}+r_{\rm Hill}\), excluding the circumplanetary region \(\Psi_{\rm p}-2r_{\rm Hill}/R_{\rm p}<\Psi<\Psi_{\rm p}+2r_{\rm Hill}/R_{\rm p}\). The right panel shows that \(\Sigma_{\rm g}\) decreases as the simulation progresses (solid blue curve read using the right-hand axis) and that the inflow rate through the Hill sphere \(\dot{M}_{\rm p,in}\) (solid black curve, left-hand axis) tracks this decline, as expected because the planet is fed by material in the gap (and not from the overdense spirals seen in the left panel; see also Fig. 8). The inflow rate re-normalised by \(\Sigma_{0}/\Sigma_{\rm g}\) is more constant with time (dashed black curve, left-hand axis). ## Acknowledgements We thank Chris White for his many hours spent debugging our simulations, and Andrea Antoni and Philipp Kempski for getting us started with Athena++. We also thank Aliza Beverage and Isaac Malsky for help with figures, and William Bethune, Yi-Xian Chen, Eve Lee, and Hidekazu Tanaka for feedback on a draft manuscript. The anonymous referee provided a thoughtful report that led to substantial improvements in this paper. Simulations were run on the Savio cluster provided by the Berkeley Research Computing program at the University of California, Berkeley, supported by the UC Berkeley Chancellor, Vice Chancellor for Research, and Chief Information Officer. Financial support was provided by NSF AST grant 2205500, and an NSF Graduate Research Fellowship awarded to NC. ## Data Availability Data and codes are available upon request to the authors.
2302.12139
Automated Extraction of Fine-Grained Standardized Product Information from Unstructured Multilingual Web Data
Extracting structured information from unstructured data is one of the key challenges in modern information retrieval applications, including e-commerce. Here, we demonstrate how recent advances in machine learning, combined with a recently published multilingual data set with standardized fine-grained product category information, enable robust product attribute extraction in challenging transfer learning settings. Our models can reliably predict product attributes across online shops, languages, or both. Furthermore, we show that our models can be used to match product taxonomies between online retailers.
Alexander Flick, Sebastian Jäger, Ivana Trajanovska, Felix Biessmann
2023-02-23T16:26:11Z
http://arxiv.org/abs/2302.12139v1
# Automated Extraction of Fine-Grained Standardized Product Information from Unstructured Multilingual Web Data ###### Abstract Extracting structured information from unstructured data is one of the key challenges in modern information retrieval applications, including e-commerce. Here, we demonstrate how recent advances in machine learning, combined with a recently published multilingual data set with standardized fine-grained product category information, enable robust product attribute extraction in challenging transfer learning settings. Our models can reliably predict product attributes across online shops, languages, or both. Furthermore, we show that our models can be used to match product taxonomies between online retailers. Keywords: product information extraction, e-commerce ## 1 Introduction Recent research achievements in the field of machine learning (ML) [1, 13] have the potential to improve automated information extraction in applications such as e-commerce. However, the translation of these ML innovations into real-world application scenarios is impeded by the lack of publicly available data sets. Here we demonstrate that recent advances in ML can be translated into automated information extraction applications when leveraging carefully curated data. To better assess the contribution of this study, we first highlight some relevant data sets and methods that aim at the automated extraction of structured data in the field of e-commerce. Public E-commerce Data Sets. We summarize publicly available e-commerce data sets used for the automated extraction of product information in Table 1. To leverage the potential of ML, large and diverse data sets that follow a fine-grained product taxonomy are favorable. A common and detailed taxonomy is the Global Product Classification (GPC) standard, which "classifies products by grouping them into categories based on their essential properties as well as their relationships to other products" [4]. For example, multiple _Bricks_ (shirts and shorts) can belong to the same _Family_ (clothing) but are different _Classes_ (upper and lower body wear)3. Footnote 3: See the GPC Browser for more examples: [https://gpc-browser.gs1.org/](https://gpc-browser.gs1.org/) #### 1.0.1 Multilingual Fine-Grained Product Classification There are few recent studies investigating automated extraction of standardized product information in text corpora. Brinkmann et al. [1] study how hierarchical product classification benefits from domain-specific language modeling. They report an improvement of 0.012 weighted F1 score by using schema.org product4 annotations for pretraining. Peeters et al. [12] study cross-language learning for entity matching and demonstrate that multilingual transformers outperform single-language models (German BERT) by 0.143 F1 when trained on a single language (German) and tested on multiple (German and English). Furthermore, using additional training data for the second language (English) improves the performance by another 0.038 weighted F1. Footnote 4: Website: [https://schema.org/Product](https://schema.org/Product) These studies highlight the potential of modern ML methods for automated product attribute extraction. In this work, we show that transfer learning helps to extract structured information (product category) from unstructured data (product name and description) and to find reliable taxonomy mappings. ## 2 Experiments We evaluate three transfer learning scenarios for product classification:
**Language Transfer:** training on data of one language, test on other language data \begin{table} \begin{tabular}{l|c c c c c c} & regularly & \multicolumn{3}{c}{multi-} & & \\ & updated & lingual & shop & family & GPC & size \\ \hline Farfetch product meta data [9] & ✗ & ✗ & ✗ & ✗ & 400K \\ Product details on Flipkart [3] & ✗ & ✗ & ✗ & ✓ & ✗ & 20K \\ Amazon browse node classification [2] & ✗ & ✗ & ✗ & ✓ & ✗ & 3M \\ Amazon product-question answering [16] & ✗ & ✗ & ✗ & ✓ & ✗ & 17.3GB \\ Rakuten data challenge [10] & ✗ & ✗ & ✗ & ✓ & ✗ & 1M \\ MAVE [18] & ✗ & ✗ & ✗ & ✓ & ✗ & 2.2M \\ Innerwear from victoria’s secret \& co [15] & ✗ & ✓ & ✗ & ✗ & 600K \\ WDC-MWPD [19] & ✗ & ✗ & ✓ & ✗ & ✓ & 16K \\ WDC-25 gold standard [14] & ✗ & ✗ & ✓ & ✓ & ✓ & 24K \\ GreenDB [7] & ✓ & ✓ & ✓ & ✓ & ✓ & \(>\)576K \\ \end{tabular} \end{table} Table 1: Comparison of e-commerce data sets used for product attribute extraction and classification. Column _GPC_ means whether or not the data set follows the GPC taxonomy. 2. **Shop Transfer:** training on data of one shop, test on other shop data 3. **Language and Shop Transfer:** training on data of one shop and one language, test on data of different shops and languages Furthermore, we study whether ML methods can be used to find reliable taxonomy mappings. For this, we apply a model trained for a _target taxonomy_ to data that uses a _source taxonomy_. For each source category, the majority of predicted target categories defines the mapping from source to target taxonomy (a short sketch of this majority-vote mapping is given below). Data Sets. In our experiments, we use two data sets, the GreenDB [6] and the Farfetch data set [9]. The GreenDB5 is a multilingual data set covering 5 European shops with about 576k unique products of the 37 most important product categories following the GPC taxonomy. It covers categories from the GPC segments Clothing, Footwear, Personal Accessories, Home Appliances, Audio Visual/Photography, and Computing. A recent publication [8] presents the GreenDB's high quality and usefulness for information extraction tasks. The Farfetch data set has about 400k unique products from a single shop. It does not follow a public taxonomy and covers only fashion products. Footnote 5: We use GreenDB version 0.2.2 available at [https://zenodo.org/record/7225336](https://zenodo.org/record/7225336) ML Model. The experiment implementation is based on autogluon's [17] TextPredictor and uses _mDeBERTaV3_ [5] as the backbone model. For training, we use the GreenDB and apply Cleanlab [11] to find and remove misclassified products (211 were found). Our models use the product's name and description to predict their product category. \(model_{baseline}\) is trained on the entire GreenDB (all shops), \(model_{ZaDE}\) on the German, \(model_{ZaFR}\) on the French, and \(model_{ZaALL}\) on the German, French, and English Zalando products contained in the GreenDB. Online Demo. To demonstrate the transfer capabilities, we published an online demo available at [https://product-classification.demo.calgo-lab.de](https://product-classification.demo.calgo-lab.de). As shown in Figure 1, it automatically downloads the HTML of a given URL, extracts the products' name and description, and uses \(model_{baseline}\) to predict its GPC category. ## 3 Results The baseline performance (\(model_{baseline}\)) shows a strong 0.99 weighted F1 score on a GreenDB test set. Transfer Tasks. \(model_{ZaDE}\) demonstrates language transfer when it is applied to other languages of the same shop. It achieves weighted F1 scores of 0.898 for English and 0.873 for French.
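The majority-vote taxonomy mapping referenced above can be sketched in a few lines of Python. In this illustration, `predict_gpc_category` and the tuple layout of `products` are assumptions standing in for the trained TextPredictor and the data loading code; they are not part of the released implementation:

```python
from collections import Counter

def map_taxonomies(products, predict_gpc_category):
    """Map each source-taxonomy category to a target (GPC) category.

    products: iterable of (source_category, name, description) tuples.
    predict_gpc_category: black-box classifier returning one GPC label,
    e.g. a wrapper around a trained autogluon TextPredictor.
    """
    votes = {}  # source category -> Counter of predicted GPC categories
    for source_category, name, description in products:
        predicted = predict_gpc_category(name, description)
        votes.setdefault(source_category, Counter())[predicted] += 1
    # The majority of predicted target categories defines the mapping.
    return {src: c.most_common(1)[0][0] for src, c in votes.items()}
```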
Applying \(model_{ZaFR}\) and \(model_{ZaDE}\) on other shops demonstrates shop transfer with weighted F1 scores from 0.648 to 0.836. If the model is fine-tuned on multi-lingual data (\(model_{ZaALL}\)), almost all shops benefit; see Table 2 for details. The language and shop transfer is even more challenging and performs worse for all shops. Transferring across data sets, i.e., applying \(model_{baseline}\) to Farfetch data, achieves a 0.924 weighted F1 score. Taxonomy Matching. Using \(model_{baseline}\) to map products' categories from Farfetch to GreenDB (GPC taxonomy) results in 41 out of 46 (\(>\)89%) correctly mapped categories. ## 4 Conclusion We demonstrate that combining rich multilingual data sets and modern ML methods enables fine-grained standardized product information extraction from unstructured data. We investigate several transfer learning settings when training and testing on data from different shops and languages, even in zero-shot scenarios where no data from another shop and language was available in the training data. \begin{table} \begin{tabular}{l c|c c|c c} & \multirow{2}{*}{Model} & \multicolumn{2}{c|}{FR} & \multicolumn{2}{c}{DE} \\ & & Asos & H\&M & Otto & Amazon \\ \hline \multirow{3}{*}{Shop Transfer} & \(model_{ZaFR}\) & 0.836 & 0.678 & - & - \\ & \(model_{ZaDE}\) & - & - & 0.777 & 0.648 \\ & \(model_{ZaALL}\) & 0.842 & 0.717 & 0.762 & 0.739 \\ \hline \multirow{3}{*}{Shop \& Language Transfer} & \(model_{ZaFR}\) & - & - & 0.614 & 0.449 \\ & \(model_{ZaDE}\) & 0.795 & 0.666 & - & - \\ \end{tabular} \end{table} Table 2: Weighted F1 scores for shop transfer experiments. Scores from 0.648 to 0.836 demonstrate robust shop transfer. Shop transfer profits from additional data in other languages. Figure 1: Online demo overview. Automated extraction of schema.org information (product name and description) from HTML, used for product classification. **Acknowledgements** This research was supported by the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety based on a decision of the German Bundestag.
2306.08496
On positive fixed points of operator of Hammerstein type with degenerate kernel and Gibbs Measures
From \cite{re} it is known that ``translation-invariant Gibbs measures'' of the model with an uncountable set of spin values can be described by positive fixed points of a nonlinear integral operator of Hammerstein type. In \cite{enh2015, MSSX} the main results concern positive fixed points of the operator of Hammerstein type with degenerate kernels, but the existence of Gibbs measures corresponding to the found fixed points for the constructed kernels was not resolved. This paper continues the investigations of \cite{enh2015} and \cite{MSSX}. We construct new degenerate kernels of the Hammerstein operator by taking into account problems in the theory of Gibbs measures, i.e. each positive fixed point of the operator gives a translation-invariant Gibbs measure.
I. M. Mavlonov, Kh. N. Khushvaktov, G. P. Arzikulov, F. H. Haydarov
2023-06-14T13:24:51Z
http://arxiv.org/abs/2306.08496v1
# On positive fixed points of operator of Hammerstein type with degenerate kernel and Gibbs measures ###### Abstract. From [9] it is known that "translation-invariant Gibbs measures" of the model with an uncountable set of spin values can be described by positive fixed points of a nonlinear integral operator of Hammerstein type. In [4, 5] the main results concern positive fixed points of the operator of Hammerstein type with degenerate kernels, but the existence of Gibbs measures corresponding to the found fixed points for the constructed kernels was not resolved. This paper continues the investigations of [4] and [5]. We construct new degenerate kernels of the Hammerstein operator by taking into account problems in the theory of Gibbs measures, i.e. each positive fixed point of the operator gives a translation-invariant Gibbs measure. **Mathematics Subject Classifications (2010).** 82B05, 82B20 (primary); 60K35 (secondary) **Key words.** Cayley tree, spin values, translation-invariant Gibbs measure, positive fixed point, Hammerstein operator. ## 1. Introduction The Hammerstein equation covers a large variety of areas and is of much interest to a wide audience due to the fact that it has applications in numerous areas. Several problems that arise in differential equations (ordinary and partial), for instance, elliptic boundary value problems whose linear parts possess a Green's function, can be transformed into Hammerstein integral equations. Equations of the Hammerstein type play a crucial role in the theory of optimal control systems and in automation and network theory (see e.g., Dolezale [12]). There are some works devoted to fixed points of the Hammerstein operator on cones. One can find the main results on the existence and multiplicity of fixed points of Hammerstein equations in, e.g., [2, 3, 13]. On the other hand, new results on the uniqueness of fixed points of Hammerstein equations in cones are needed. For instance, in recent years, increasing attention has been given to models with _uncountably_ many spin values on a Cayley tree. In [7], [9] a Hamiltonian with an _uncountable_ set of spin values (with the set \([0,1]\) of spin values) on a Cayley tree \(\Gamma^{k}\) was considered, and it was shown that the existence of a translation-invariant splitting Gibbs measure of the Hamiltonian is equivalent to the existence of a positive fixed point of a nonlinear integral operator of Hammerstein type. In [9] for \(k=1\) (when the Cayley tree becomes a one-dimensional lattice \(\mathbb{Z}\)) it is shown that the integral equation has a unique solution, implying that there is a unique Gibbs measure. For general \(k\geq 2\), a sufficient condition is found under which a periodic Gibbs measure is unique (see [14]). On the other hand, on the Cayley trees \(\Gamma_{k}\) of order \(k\geq 2\), the existence of phase transitions has been proven, see [11, 15]. We note that all of these papers are devoted to models with nearest-neighbor interactions. Also, in [7, 8, 10] the splitting Gibbs measures for four competing interactions (external field, nearest neighbor, second neighbors and triples of neighbors) of models on \(\Gamma_{2}\) are described, and it is shown that periodic Gibbs measures for the Hamiltonians with four competing interactions are either _translation-invariant_ or _periodic with period two_. However, theorems on fixed points of the Hammerstein operator on a cone do not by themselves settle the existence of Gibbs measures corresponding to the constructed kernels; this gap motivates the present paper.
## 2. Preliminaries Let \(\Gamma^{k}=(V,L)\) be the Cayley tree of order \(k\geq 1\), i.e., an infinite tree such that exactly \(k+1\) edges issue from each vertex, where \(V\) is the set of vertices and \(L\) is the set of edges. Fix a root \(x^{0}\in V\); for \(n\in\mathbb{N}\) let \(V_{n}\) be the set of vertices at distance at most \(n\) from \(x^{0}\), \(W_{n}=V_{n}\setminus V_{n-1}\), and let \(S(x)\) denote the set of direct successors of a vertex \(x\). We consider the model with nearest-neighbor interactions and with spin values in \([0,1]\), given by the Hamiltonian \[H(\sigma)=-J\sum_{\langle x,y\rangle\in L}\xi_{\sigma(x)\sigma(y)}, \tag{2.1}\] where \(J\in\mathbb{R}\setminus\{0\}\), \(\langle x,y\rangle\) denotes nearest neighbors, and \(\xi:(t,u)\in[0,1]^{2}\mapsto\xi_{tu}\in\mathbb{R}\) is a given bounded measurable function. A configuration is a function \(\sigma:V\to[0,1]\); we write \(\Omega=[0,1]^{V}\) for the set of all configurations, \(\Omega_{\Lambda}=[0,1]^{\Lambda}\) for configurations on \(\Lambda\subset V\), \(\mathcal{B}\) for the \(\sigma\)-algebra generated by the cylinder subsets of \(\Omega\), and \(\mathcal{B}_{\Lambda}\) for the sub-\(\sigma\)-algebra of events depending only on the spins in \(\Lambda\). Given a collection of real numbers \(h_{t,x}\), \(t\in[0,1]\), \(x\in V\), define for \(n=1,2,\ldots\) the probability distributions \[\mu^{(n)}(\sigma_{n})=Z_{n}^{-1}\exp\Big{(}-\beta H(\sigma_{n})+\sum_{x\in W_{n}}h_{\sigma(x),x}\Big{)}, \tag{2.2}\] where \(\sigma_{n}\in\Omega_{V_{n}}\), \(\beta=1/T\) is the inverse temperature, and \(Z_{n}\) is the normalizing factor. If \(\mu\) is a measure on \(\mathcal{B}\), the projection of \(\mu\) on \(\mathcal{B}_{\Lambda}\) is defined by \[\left[\pi_{\Lambda}(\mu)\right](B)=\mu\left\{\sigma\in\Omega:\sigma_{\Lambda}\in B\right\}=\mu(\bar{\sigma}|_{\Lambda}=\sigma_{\Lambda}:\sigma_{\Lambda}\in B),\quad B\in\mathcal{B}_{\Lambda}.\] The following theorem is known: **Theorem 2.1**.: _[_1_]_ **(Kolmogorov Extension Theorem)** _For each \(t\) in the arbitrary index set \(T\), let \(\Omega_{t}\) be a complete, separable metric space, with \(\mathcal{F}_{t}\) the class of Borel sets (the \(\sigma\)-field generated by the open sets)._ _Assume that for each finite nonempty subset \(v\) of \(T\), we are given a probability measure \(P_{v}\) on \(\mathcal{F}_{v}\). Assume the \(P_{v}\) are consistent, that is, \(\pi_{u}\left(P_{v}\right)=P_{u}\) for each nonempty \(u\subset v\)._ _Then there is a unique probability measure \(P\) on \(\mathcal{F}=\prod_{t\in T}\mathcal{F}_{t}\) such that \(\pi_{v}(P)=P_{v}\) for all \(v\)._ The probability distributions \(\mu^{(n)}\) are compatible if for any \(n\geq 1\) and \(\sigma_{n-1}\in\Omega_{V_{n-1}}\): \[\pi_{V_{n-1}}\left(\mu^{(n)}\right)=\mu^{(n-1)}. \tag{2.4}\] Then, by the Kolmogorov extension theorem, there exists a unique measure \(\mu\) on \(\Omega_{V}\) such that, for any \(n\) and \(\sigma_{n}\in\Omega_{V_{n}}\), \(\mu\left(\left\{\sigma\Big{|}_{V_{n}}=\sigma_{n}\right\}\right)=\mu^{(n)}(\sigma_{n})\). The measure \(\mu\) is called a _splitting Gibbs measure_ corresponding to the Hamiltonian (2.1) and the function \(x\mapsto h_{x}\), \(x\neq x^{0}\). **Proposition 2.2**.: _[_9_]_ _The probability distributions \(\mu^{(n)}(\sigma_{n})\), \(n=1,2,\ldots\), in (2.2) are compatible iff for any \(x\in V\setminus\{x^{0}\}\) the following equation holds:_ \[f(t,x)=\prod_{y\in S(x)}\frac{\int_{0}^{1}\exp(J\beta\xi_{tu})f(u,y)du}{\int_{0}^{1}\exp(J\beta\xi_{0u})f(u,y)du}.
\tag{2.5}\] _Here, and below, \(f(t,x)=\exp(h_{t,x}-h_{0,x}),\ t\in[0,1]\), and \(du=\lambda(du)\) is the Lebesgue measure._ Note that the analysis of solutions to (2.5) is not easy; it is difficult to give a full description for a given potential function \(\xi_{t,u}\). Let \(\xi_{tu}\) be a continuous function. We put \[C^{+}[0,1]=\{f\in C[0,1]:f(x)\geq 0\},\ \ C_{0}^{+}[0,1]=C^{+}[0,1]\setminus\{\theta\equiv 0\}.\] Define the operator \(R_{k}:C_{0}^{+}[0,1]\to C_{0}^{+}[0,1]\) by \[(R_{k}f)(t)=\left(\frac{\int_{0}^{1}K(t,u)f(u)du}{\int_{0}^{1}K(0,u)f(u)du}\right)^{k},\ \ k\in\mathbb{N},\] where \(K(t,u)=\exp(J\beta\xi_{tu}),f(t)>0,t,u\in[0,1]\). We will study equation (2.5) in the class of translation-invariant functions \(f(t,x)\), i.e. \(f(t,x)=f(t)\in C[0,1]\) for any \(x\in V\); then it can be written as \[(R_{k}f)(t)=f(t). \tag{2.6}\] Note that equation (2.6) is not linear for any \(k\geq 1\). For every \(k\in\mathbb{N}\) we consider an integral operator \(H_{k}\) acting in the cone \(C^{+}[0,1]\), i.e., \[(H_{k}f)(t)=\int_{0}^{1}K(t,u)f^{k}(u)du,\ \ k\in\mathbb{N}.\] The operator \(H_{k}\) is called the Hammerstein integral operator of order \(k\). Clearly, if \(k\geq 2\) then \(H_{k}\) is a nonlinear operator. **Lemma 2.3**.: _[_14_]_ _Let \(k\geq 2\). The equation_ \[R_{k}f=f,\ \ f\in C_{0}^{+}[0,1] \tag{2.7}\] _has a nontrivial positive solution iff the Hammerstein operator has a positive eigenvalue, i.e. the Hammerstein equation_ \[H_{k}f=\lambda f,\ \ f\in C^{+}[0,1] \tag{2.8}\] _has a nonzero positive solution for some \(\lambda>0\)._ It is easy to check that if the number \(\lambda_{0}>0\) is an eigenvalue of the operator \(H_{k}\), then an arbitrary positive number is an eigenvalue of the operator \(H_{k}\) (see Theorem 3.7 [14]), where \(k\geq 2\). Consequently, we obtain **Lemma 2.4**.: _Let \(k\geq 2\). The equation (2.7) has a nontrivial positive solution iff the Hammerstein operator \(H_{k}\) has a nontrivial positive fixed point, moreover \(N^{+}_{fix}(R_{k})=N^{+}_{fix}(H_{k})\), where \(N^{+}_{fix}(T)\) is the number of nontrivial positive fixed points of the operator \(T\)._ ## 3. Hammerstein's operator \(H_{3}\) with degenerate kernel Let \(\varphi_{1}(t),\ \varphi_{2}(t)\) and \(\psi_{1}(t),\ \psi_{2}(t)\) be positive functions from \(C_{0}^{+}[0,1]\).
Suppose that \(\varphi_{1}(t)>0,\ \ \psi_{1}(t)>0.\) We consider the Hammerstein operator \(H_{3}\): \[(H_{3}f)(t)=\int\limits_{0}^{1}(\varphi_{1}(t)\psi_{1}(u)+\varphi_{2}(t)\psi_{2}(u))f^{3}(u)du\] and a cubic operator \(P\) on \(\mathbb{R}^{2}\) by the rule \[P(x,y)=(\alpha_{11}x^{3}+3\alpha_{12}x^{2}y+3\alpha_{21}xy^{2}+\alpha_{22}y^{3},\ \ \beta_{11}x^{3}+3\beta_{12}x^{2}y+3\beta_{21}xy^{2}+\beta_{22}y^{3}).\] Here \[\alpha_{11} =\int\limits_{0}^{1}\psi_{1}(u)\varphi_{1}^{3}(u)du>0,\ \ \alpha_{12}=\int\limits_{0}^{1}\psi_{1}(u)\varphi_{1}^{2}(u)\varphi_{2}(u)du>0,\] \[\alpha_{21} =\int\limits_{0}^{1}\psi_{1}(u)\varphi_{1}(u)\varphi_{2}^{2}(u)du>0,\ \ \alpha_{22}=\int\limits_{0}^{1}\psi_{1}(u)\varphi_{2}^{3}(u)du>0;\] \[\beta_{11} =\int\limits_{0}^{1}\psi_{2}(u)\varphi_{1}^{3}(u)du>0,\ \ \beta_{12}=\int\limits_{0}^{1}\psi_{2}(u)\varphi_{1}^{2}(u)\varphi_{2}(u)du>0,\] \[\beta_{21} =\int\limits_{0}^{1}\psi_{2}(u)\varphi_{1}(u)\varphi_{2}^{2}(u)du>0,\ \ \beta_{22}=\int\limits_{0}^{1}\psi_{2}(u)\varphi_{2}^{3}(u)du>0.\] **Lemma 3.1**.: _The Hammerstein operator \(H_{3}\) has a nontrivial positive fixed point iff the cubic operator \(P\) has a nontrivial positive fixed point, moreover \(N^{+}_{fix}(H_{3})=N^{+}_{fix}(P)\)._ Proof.: \((a)\) Put \[\mathbb{R}_{2}^{+} =\{(x,y)\in\mathbb{R}^{2}:\ x\geq 0,y\geq 0\},\] \[\mathbb{R}_{2}^{>} =\{(x,y)\in\mathbb{R}^{2}:\ x>0,y>0\}.\] Suppose the Hammerstein operator \(H_{3}\) has a nontrivial positive fixed point \(f(t)\in C_{0}^{+}[0,1]\). Let \[c_{1}=\int\limits_{0}^{1}\psi_{1}(u)f^{3}(u)du \tag{3.1}\] and \[c_{2}=\int\limits_{0}^{1}\psi_{2}(u)f^{3}(u)du. \tag{3.2}\] Clearly, \(c_{1}>0,\ \ c_{2}>0\), i.e. \((c_{1},c_{2})\in\mathbb{R}_{2}^{>}\). Then for the function \(f(t)\) the equality \[f(t)=c_{1}\varphi_{1}(t)+c_{2}\varphi_{2}(t) \tag{3.3}\] holds. Consequently, for the parameters \(c_{1},c_{2}\) from the equalities (3.1) and (3.2) we obtain the two identities: \[c_{1}=\alpha_{11}c_{1}^{3}+3\alpha_{12}c_{1}^{2}c_{2}+3\alpha_{21}c_{1}c_{2}^{2}+\alpha_{22}c_{2}^{3},\] \[c_{2}=\beta_{11}c_{1}^{3}+3\beta_{12}c_{1}^{2}c_{2}+3\beta_{21}c_{1}c_{2}^{2}+\beta_{22}c_{2}^{3}.\] Therefore, the point \((c_{1},c_{2})\) is a fixed point of the cubic operator \(P.\) \((b)\) Assume that the point \((x_{0},y_{0})\) is a nontrivial positive fixed point of the cubic operator \(P,\) i.e. \((x_{0},y_{0})\in\mathbb{R}_{2}^{+}\setminus\{\theta\}\) and the numbers \(x_{0},y_{0}\) satisfy the following equalities \[\alpha_{11}x_{0}^{3}+3\alpha_{12}x_{0}^{2}y_{0}+3\alpha_{21}x_{0}y_{0}^{2}+\alpha_{22}y_{0}^{3}=x_{0},\] \[\beta_{11}x_{0}^{3}+3\beta_{12}x_{0}^{2}y_{0}+3\beta_{21}x_{0}y_{0}^{2}+\beta_{22}y_{0}^{3}=y_{0}.\] Similarly, we can prove that the function \(f_{0}(t)=x_{0}\varphi_{1}(t)+y_{0}\varphi_{2}(t)\) is a fixed point of the Hammerstein operator \(H_{3}\) and \(f_{0}(t)\in C_{0}^{+}[0,1]\). This completes the proof. ## 4. Positive fixed points of cubic operators in the cone \(\mathbb{R}_{2}^{+}\) We define a cubic operator (CO) \(\mathcal{C}\) on the cone of the space \(\mathbb{R}^{2}\) by the rule \[\mathcal{C}(x,y)=(a_{11}x^{3}+3a_{12}x^{2}y+3a_{21}xy^{2}+a_{22}y^{3},\ \ b_{11}x^{3}+3b_{12}x^{2}y+3b_{21}xy^{2}+b_{22}y^{3}).\] Clearly, an arbitrary nontrivial positive fixed point of the (CO) \(\mathcal{C}\) is strictly positive. We denote by \(N_{fix}^{>}(\mathcal{C})\) (respectively, \(N_{fix}^{+}(\mathcal{C})\)) the number of fixed points of the (CO) \(\mathcal{C}\) belonging to \(\mathbb{R}_{2}^{>}\) (respectively, to \(\mathbb{R}_{2}^{+}\setminus\{\theta\}\)).
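To make the reduction in Lemma 3.1 concrete, the coefficients \(\alpha_{ij},\beta_{ij}\) can be evaluated by numerical quadrature and a fixed point of the resulting cubic operator located numerically. The following Python sketch is a numerical illustration only: the choices of \(\varphi_{i},\psi_{i}\) are arbitrary positive functions on \([0,1]\), and the generic root finder is assumed to be started from a suitable positive guess.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Illustrative positive functions phi_1, phi_2, psi_1, psi_2 on [0, 1].
phi = [lambda t: 1.0 + t, lambda t: 1.0 + t ** 2]
psi = [lambda u: 1.0, lambda u: np.exp(u)]

def coeffs(psi_k):
    """Coefficients (a11, a12, a21, a22) of one component of the cubic operator."""
    a11 = quad(lambda u: psi_k(u) * phi[0](u) ** 3, 0, 1)[0]
    a12 = quad(lambda u: psi_k(u) * phi[0](u) ** 2 * phi[1](u), 0, 1)[0]
    a21 = quad(lambda u: psi_k(u) * phi[0](u) * phi[1](u) ** 2, 0, 1)[0]
    a22 = quad(lambda u: psi_k(u) * phi[1](u) ** 3, 0, 1)[0]
    return a11, a12, a21, a22

alpha, beta = coeffs(psi[0]), coeffs(psi[1])

def P(c):
    """The cubic operator P applied to (x, y) = (c[0], c[1])."""
    x, y = c
    a11, a12, a21, a22 = alpha
    b11, b12, b21, b22 = beta
    return np.array([a11*x**3 + 3*a12*x**2*y + 3*a21*x*y**2 + a22*y**3,
                     b11*x**3 + 3*b12*x**2*y + 3*b21*x*y**2 + b22*y**3])

# When the solver converges to a positive solution of P(c) = c, Lemma 3.1
# gives the fixed point f(t) = c1*phi_1(t) + c2*phi_2(t) of H_3.
c1, c2 = fsolve(lambda c: P(c) - c, x0=[0.3, 0.3])
```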
**Lemma 4.1**.: _[_5_]_ _i) If the point \(\omega=(x_{0},y_{0})\in\mathbb{R}_{2}^{+}\) is a fixed point of the (CO) \(\mathcal{C}\), then \(\omega\in\mathbb{R}_{2}^{>}\) and \(\xi_{0}=\frac{y_{0}}{x_{0}}\) is a root of the algebraic equation_ \[a_{22}\xi^{4}+(3a_{21}-b_{22})\xi^{3}+(3a_{12}-3b_{21})\xi^{2}+(a_{11}-3b_{12})\xi-b_{11}=0. \tag{4.1}\] _ii) If the positive number \(\xi_{0}\) is a root of the algebraic equation (4.1), then the point \(\omega_{0}=(x_{0},\xi_{0}x_{0})\in\mathbb{R}_{2}^{>}\) is a fixed point of the (CO) \(\mathcal{C}\), where_ \[x_{0}=\frac{1}{(a_{11}+3a_{12}\xi_{0}+3a_{21}\xi_{0}^{2}+a_{22}\xi_{0}^{3})^{1/2}}.\] We put \[\mu_{0}=a_{22},\ \mu_{1}=3a_{21}-b_{22},\ \mu_{2}=a_{12}-b_{21},\ \mu_{3}=a_{11}-3b_{12},\ \mu_{4}=-b_{11}\] and we define the polynomial \(P_{4}(\xi)\) of order four by \[P_{4}(\xi)=\mu_{0}\xi^{4}+\mu_{1}\xi^{3}+3\mu_{2}\xi^{2}+\mu_{3}\xi+\mu_{4}.\] The following result follows from Lemma 4.1. **Proposition 4.2**.: _The number of positive fixed points of the (CO) \(\mathcal{C}\) is equal to the number of positive roots of the polynomial \(P_{4}(\xi)\)._ Indeed, if \(\xi_{0}\) is a positive root of the polynomial \(P_{4}(\xi)\), then by Lemma 4.1 the corresponding positive fixed point of the (CO) \(\mathcal{C}\) is \[\omega_{0}=\left(\frac{1}{(a_{11}+3a_{12}\xi_{0}+3a_{21}\xi_{0}^{2}+a_{22}\xi_{0}^{3})^{1/2}},\frac{\xi_{0}}{(a_{11}+3a_{12}\xi_{0}+3a_{21}\xi_{0}^{2}+a_{22}\xi_{0}^{3})^{1/2}}\right),\] so distinct positive roots of the polynomial \(P_{4}(\xi)\) give rise to exactly as many distinct positive fixed points of the (CO) \(\mathcal{C}\). **Lemma 4.3**.: _[_5_]_ _The cubic operator (CO) \(\mathcal{C}\) has at least one and at most three positive fixed points, i.e. \(1\leq N_{fix}^{>}(\mathcal{C})\leq 3\)._
## 5. Non-uniqueness of positive fixed points of the Hammerstein operator \(H_{3}\) We define two continuous positive functions \(\zeta_{1}(t)\) and \(\zeta_{2}(t)\) on \([0,1]\) \[\zeta_{1}(u)=\left\{\begin{array}{l}\frac{1}{2}+\sin 2\pi u,\ \ \ \mbox{if}\ \ u\in[0,\frac{1}{2}]\\ \frac{1}{2},\ \ \ \mbox{if}\ \ u\in[\frac{1}{2},1]\end{array}\right.,\] and \[\zeta_{2}(u)=\left\{\begin{array}{l}\frac{1}{2},\ \ \ \mbox{if}\ \ u\in[0,\frac{1}{2}]\\ \frac{1}{2}-\sin 2\pi u,\ \ \ \mbox{if}\ \ u\in[\frac{1}{2},1].\end{array}\right.\] For positive numbers \(a,b\) we define continuous positive functions \(F_{1}(t;a,b)\) and \(F_{2}(t;a,b)\) on \([0,1]\) \[F_{1}(t;a,b)=\left\{\begin{array}{l}a\cos\pi t+b,\ \ \ \mbox{if}\ \ t\in[0,\frac{1}{2}]\\ b,\ \ \ \mbox{if}\ \ t\in[\frac{1}{2},1]\end{array}\right.,\] \[F_{2}(t;a,b)=\left\{\begin{array}{ll}b,\quad\text{if}\ \ t\in[0,\frac{1}{2}]\\ -a\cos\pi t+b,\quad\text{if}\ \ t\in[\frac{1}{2},1].\end{array}\right.\] For positive numbers \(a,b\) we denote \[\tilde{K}(t,u;a,b)=\zeta_{1}(u)F_{1}(t;a,b)+\zeta_{2}(u)F_{2}(t;a,b),\ \ t,u\in[0,1].\] **Theorem 5.1**.: _Let \(\tilde{K}(t,u;a,b)\) be the kernel of the operator \(H_{3}\). Then:_ \((i)\) _if \(a<\frac{35(44+15\pi)}{318}b\), then there exists a unique positive fixed point of the operator_ \(H_{3}\)_, namely_ \[f(t)=\frac{1}{\sqrt{\frac{177a}{35\pi}+\frac{(44+15\pi)b}{6\pi}}}\left(\zeta_{1}(t)+\zeta_{2}(t)\right);\] \((ii)\) _if \(a=\frac{35(44+15\pi)}{318}b\), then there exist exactly two positive fixed points of the operator_ \(H_{3}\)_;_ \((iii)\) _if \(a>\frac{35(44+15\pi)}{318}b\), then there exist exactly three positive fixed points of the operator_ \(H_{3}\)_._ Proof.: \((i)\) First, we find the coefficients of the polynomial \(P_{4}(\xi)\). \[a_{11}=\int\limits_{0}^{1}F_{1}(u;a,b)\zeta_{1}^{3}(u)du=\int\limits_{0}^{\frac{1}{2}}(a\cos\pi u+b)\left(\frac{1}{2}+\sin 2\pi u\right)^{3}du+\] \[+\int\limits_{\frac{1}{2}}^{1}b\left(\frac{1}{2}\right)^{3}du=\frac{527}{280\pi}a+\frac{17}{12\pi}b+\frac{b}{2}.\] Analogously we get \[a_{12}=\int\limits_{0}^{1}F_{1}(u;a,b)\zeta_{1}^{2}(u)\zeta_{2}(u)du=\frac{29a}{40\pi}+\frac{3b}{4\pi}+\frac{b}{4},\ \ a_{21}=\int\limits_{0}^{1}F_{1}(u;a,b)\zeta_{1}(u)\zeta_{2}^{2}(u)du=\frac{7a}{24\pi}+\frac{3b}{4\pi}+\frac{b}{4},\] \[a_{22}=\int\limits_{0}^{1}F_{1}(u;a,b)\zeta_{2}^{3}(u)du=\frac{a}{8\pi}+\frac{17b}{12\pi}+\frac{b}{2}.\] After a short calculation we get \(b_{11}=a_{22},b_{12}=a_{21},b_{21}=a_{12},b_{22}=a_{11}\). Consequently (the \(\xi^{2}\) term vanishes because \(\mu_{2}=a_{12}-b_{21}=0\)), we have \[P_{4}(\xi)=a_{22}\xi^{4}+(3a_{21}-a_{11})\xi^{3}+(a_{11}-3a_{21})\xi-a_{22}.\] To find the roots of the polynomial \(P_{4}(\xi)\), we solve \[a_{22}\xi^{4}+(3a_{21}-a_{11})\xi^{3}+(a_{11}-3a_{21})\xi-a_{22}=0,\] which factors as \[(\xi-1)(\xi+1)(a_{22}\xi^{2}+(3a_{21}-a_{11})\xi+a_{22})=0,\] so \(\xi_{1}=-1\), \(\xi_{2}=1\), and the remaining roots satisfy \(a_{22}\xi^{2}+(3a_{21}-a_{11})\xi+a_{22}=0\). This quadratic equation has no real roots when \(D=(3a_{21}-a_{11})^{2}-4a_{22}^{2}<0\), i.e. when \(a<\frac{35(44+15\pi)}{318}b\). So the polynomial \(P_{4}(\xi)\) has a unique positive root, namely \(\xi=1\). From Proposition 4.2 and (3.3), the fixed point of the Hammerstein operator \(H_{3}\) is \[f(t)=\frac{1}{\sqrt{\frac{177a}{35\pi}+\frac{(44+15\pi)b}{6\pi}}}\left(\zeta_{1}(t)+\zeta_{2}(t)\right).\] \((ii)\) We saw above that the roots of the polynomial \(P_{4}(\xi)\) are \(\xi_{1}=-1\) and \(\xi_{2}=1\), together with the roots of the quadratic equation \(a_{22}\xi^{2}+(3a_{21}-a_{11})\xi+a_{22}=0\).
Consider \(D\), the discriminant of this quadratic equation: \[D=(3a_{21}-a_{11})^{2}-4a_{22}^{2}=(3a_{21}-a_{11}-2a_{22})(3a_{21}-a_{11}+2a_{22}).\] Substituting the coefficients into the expression \(3a_{21}-a_{11}-2a_{22}\) shows that this expression is always negative. If \(3a_{21}-a_{11}+2a_{22}=0\), i.e. \(a=\frac{35(44+15\pi)}{318}b\), then \(D=0\). It follows that the quadratic equation \(a_{22}\xi^{2}+(3a_{21}-a_{11})\xi+a_{22}=0\) has one (double) root. This root is positive, because \(3a_{21}-a_{11}=-2a_{22}\) and \(a_{22}\) is positive, so \(3a_{21}-a_{11}\) is negative. Then the operator \(H_{3}\) has two positive fixed points. \((iii)\) The proof is as above: the roots of the polynomial \(P_{4}(\xi)\) are \(\xi_{1}=-1\) and \(\xi_{2}=1\), together with the roots of the quadratic equation \(a_{22}\xi^{2}+(3a_{21}-a_{11})\xi+a_{22}=0\). If \(D>0\) then \((3a_{21}-a_{11}-2a_{22})(3a_{21}-a_{11}+2a_{22})>0\). Obviously \(3a_{21}-a_{11}-2a_{22}\) is negative, so \(3a_{21}-a_{11}+2a_{22}\) must also be negative, i.e. \(3a_{21}-a_{11}+2a_{22}<0\). Substituting the coefficients into this inequality, we see that \(a>\frac{35(44+15\pi)}{318}b\). Now we consider the expression \(3a_{21}-a_{11}\). Since \(3a_{21}-a_{11}+2a_{22}<0\) and \(a_{22}>0\), we have \(3a_{21}-a_{11}<0\). Since \(3a_{21}-a_{11}<0\) and \(D>0\), it follows that both roots of the quadratic equation \(a_{22}\xi^{2}+(3a_{21}-a_{11})\xi+a_{22}=0\) are positive. So the operator \(H_{3}\) has three positive fixed points. ## Acknowledgements The work was supported by the fundamental project (number: F-FA-2021-425) of the Ministry of Innovative Development of the Republic of Uzbekistan. ## Statements and Declarations **Conflict of interest statement:** On behalf of all authors, the corresponding author states that there is no conflict of interest. ## Data availability statements The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
2302.04983
CREDENCE: Counterfactual Explanations for Document Ranking
Towards better explainability in the field of information retrieval, we present CREDENCE, an interactive tool capable of generating counterfactual explanations for document rankers. Embracing the unique properties of the ranking problem, we present counterfactual explanations in terms of document perturbations, query perturbations, and even other documents. Additionally, users may build and test their own perturbations, and extract insights about their query, documents, and ranker.
Joel Rorseth, Parke Godfrey, Lukasz Golab, Mehdi Kargar, Divesh Srivastava, Jaroslaw Szlichta
2023-02-10T00:01:00Z
http://arxiv.org/abs/2302.04983v1
# CREDENCE: Counterfactual Explanations for Document Ranking ###### Abstract Towards better explainability in the field of information retrieval, we present CREDENCE, an interactive tool capable of generating counterfactual explanations for document rankers. Embracing the unique properties of the ranking problem, we present counterfactual explanations in terms of document perturbations, query perturbations, and even other documents. Additionally, users may build and test their own perturbations, and extract insights about their query, documents, and ranker. ## I Introduction With the rise of deep learning (DL), significant advances have been made by the data science community, though often at the cost of increased model complexity. For many modern DL models, the underlying decision-making process is nearly unintelligible for data scientists and users [1]. With the growing adoption of data science in critical domains, such as medicine and law, _explainability_ has become a priority in many deployment scenarios. In critical applications, explanations build trust between models and their users, and enable auditing that works to ensure regulation adherence, mitigation of bias, and sufficient justification. In recent years, researchers have developed a variety of solutions that support _explainable artificial intelligence (XAI)_, which combat the increasing complexity that renders DL models uninterpretable. Among different types of _local_ explanations, which aim to rationalize individual predictions (decisions), _counterfactual_ explanations [2][3] have emerged as a popular and pragmatic explanation format to impart behavioral insight. Generally, counterfactual explanation methods identify sets of minimal changes to the features of an input, such that a change is observed in a model's prediction. In the field of information retrieval (IR), complex DL models have been employed for a variety of tasks, most notably for document ranking. Naturally, the decision-making and ranking logic behind document ranking models (rankers) is often unclear to their users [4]. Explainability for document rankers has been limited to derivatives of _saliency_ explanations [5][6][7][8], which attempt to approximate the relative importance of model features (e.g., query or document terms). To the best of our knowledge, counterfactual explanations have not yet been adapted for document rankers. To fill this gap, we demonstrate CREDENCE, the first tool for _CREating DocumEnt raNking explanations CountErfactually_.1 Footnote 1: A video is available at [https://vimeo.com/762787210](https://vimeo.com/762787210). The tool is available at [http://lg-research-1.uwaterloo.ca:8091/credence](http://lg-research-1.uwaterloo.ca:8091/credence). Our interactive tool produces several types of counterfactual explanations, which collectively expose the decision-making logic behind a ranking model: 1. **Counterfactual Documents.** Explore minimal perturbations to a given document that lower its rank (towards the bottom of the ranking) beyond some threshold. 2. **Counterfactual Queries.** Explore minimal perturbations to a search query that raise the rank of a given document (towards the top of the ranking). 3. **Instance-Based Counterfactual Documents.** For a given relevant document, discover similar documents that were deemed non-relevant. 4. **Build-Your-Own Counterfactual Documents.** Interactively edit a given ranked document, then compare the resulting ranking against the original.
## II System Description ### _Preliminaries_ In the document ranking problem, a user poses a search query \(q\) to a ranking model \(M\). Given a set of indexed documents \(D\) (i.e., the corpus), the ranking model \(M\) is tasked with producing a ranking (i.e., an ordered list of documents) \(\mathbf{D}^{M}\) such that, when treated like a set, \(\mathbf{D}^{M}\subseteq D\). Naturally, \(q\), \(D\), and \(M\) jointly contextualize the definition of \(\mathbf{D}^{M}\). In practice, it is often the case that \(|\mathbf{D}^{M}|\ll|D|\), since many rankers need only to identify and rank the top-\(k\)_relevant_ documents (i.e., \(|\mathbf{D}^{M}|=k\) for some parameter \(k\)). Let \(R(q,d,D,M)\) denote the ranking function representing a ranking model \(M\). \(R\) returns the rank \(r\in[1,|D|]\) assigned by \(M\), corresponding to the predicted relevance of a document \(d\in D\) to a search query \(q\). \(R\) and \(M\) are defined generally, such that the ranker (e.g., a machine learning model) is considered a _black box_. However, we assume that \(R\) assesses rank using only the body of each document. In future work, we plan to explain ranking models that support richer sets of features (e.g., user preferences). ### _System Architecture_ CREDENCE is an interactive web application built with the React framework, along with other JavaScript libraries, such as Material UI to render user interface components. The backend is implemented in Python 3.9.14, and ultimately runs as an ASGI web server (via Uvicorn). Our server takes the form of a REST API, built using the FastAPI framework, which exposes endpoints to retrieve all data displayed in the web application. Both applications are hosted on a server running Ubuntu 22.04, with an AMD Opteron 6348 Processor, 128 GB of DDR3 RAM, and GeForce RTX 2080 Ti GPU. The architecture of CREDENCE is illustrated in Figure 1. To facilitate all retrieval functionality, we create a Lucene index using the Pyserini library [9], which is a Python interface for the Anserini retrieval toolkit [10]. Although any compatible ranker could be used to rank documents in our index, we utilize the monoT5 neural ranker from the PyGaggle library.2 We implement several counterfactual algorithms, each repeatedly querying the ranker and index to develop understanding of the relationships between documents, search queries, and their rankings. Also, we offer a topic modeling module, allowing users to browse clusters of terms found in selected documents, for the purpose of discovering important terms that may influence relevance. Topic modeling capabilities are enabled through the scikit-learn implementation of the Latent Dirichlet Allocation (LDA) model [11]. Using the FastAPI framework, we expose REST endpoints to perform ranking, generate counterfactual explanations, and discover topics. Footnote 2: [http://pygaggle.ai](http://pygaggle.ai). ### _Counterfactual Document Explanations_ To generate counterfactual explanations in terms of a selected _document_ without corrupting its grammar, we consider removing _sentences_. An explanation identifies a minimal subset of sentences in a given instance document whose removal lowers the rank of the document beyond \(k\). Intuitively, in any query-based retrieval setting, the removal of search query terms from a document is likely to lower document rank, at least more than non-query terms. 
Building on this intuition, we propose an algorithm that calculates an importance score for each sentence in the instance document \(d\), equal to the number of sentence terms that appear in the search query \(q\). The algorithm then iterates through explanations in sorted order. Candidate documents are first sorted by perturbation size (i.e., number of removed sentences) in increasing order, then by their importance score (i.e., the sum of importance scores across removed sentences) in decreasing order. In each iteration, the perturbed document is reranked, then added to a final explanation set \(P\) if deemed non-relevant. This process continues until \(|P|=n\), where \(n\) is a maximum number of desired explanations. This method guarantees explanation _minimality_, as all perturbations with \(j\) removals must be evaluated before those with \(j+1\) (a short sketch of this search is given below). ### _Counterfactual Query Explanations_ To generate counterfactual explanations in terms of a _search query_, we append terms from the instance document to the query, which intuitively increases the document's relevance with every addition. Although other terms and other types of perturbation could be used, they are likely to identify relevance-raising search query perturbations at a much slower pace. In our specific formulation, a valid explanation identifies a minimal set of terms that, when appended to the query, raises the rank of a selected document beyond some threshold. Once more, we propose an iterative algorithm to identify \(n\) valid explanations quickly. Our algorithm builds a set of candidate terms from the instance document, excluding terms that already appear in the search query, and aims to evaluate terms in order of their importance to the document. Although other importance measures could be used, we choose to score each candidate term using TF-IDF, which scores terms based on their frequency in, and exclusivity to, the instance document \(d\) (among the set of ranked documents \(\mathbf{D}^{M}\)). All combinations of candidate terms are then iterated, first in increasing order of perturbation size (i.e., the number of appended terms), then in decreasing order of their TF-IDF scores (summed over constituent terms). As with our algorithm for counterfactual document explanations, iterating first by perturbation size guarantees explanation minimality. ### _Instance-Based Counterfactual Explanations_ To enable users to prioritize the _plausibility_ of their counterfactual explanations, we implement two instance-based (document) counterfactual algorithms, which output actual documents from the corpus rather than arbitrary perturbations. In our formulation, a valid explanation for a relevant document identifies a non-relevant document with a high degree of similarity. Here, _relevance_ is dictated by \(k\). The instance-based algorithm is a specialization of our regular document counterfactual algorithm. To find a non-relevant document \(d^{\prime}\) that is similar to the instance document \(d\), we implement two variations of the same counterfactual algorithm, each employing different notions of similarity and different document sampling techniques. In the first method, we train a Doc2Vec embedding model [12]. In the second method, we build numeric vector representations of each corpus document using their BM25 scores, though any similar collection statistic (e.g., TF-IDF scores) would suffice. In either case, with numeric document vectors in hand, we calculate similarity using a cosine similarity formula.
In the first method, we simply return the \(n\) most similar documents. However, in the second method, we sample \(s\) non-relevant documents (ranked \(k+1\) and below), ideally where \(n\ll s\), then return the \(n\) documents with the highest similarity.
Fig. 1: The architecture behind the CREDENCE system.
## III Demonstration Plan In this demonstration, conference participants will generate minimal counterfactual document and search query explanations, instance-based document explanations, and build their own document explanations. Together, these components enable diverse explainability for individual ranking predictions. ### _Counterfactual Document and Query Explanations_ On the _Explanations_ page, the user is prompted to select a supported corpus, type an arbitrary query, and select a value of \(k\). Once the Rank button has been clicked, a ranking of the top-\(k\) documents appears beneath in a table. By clicking individual documents in the table, the user spawns a new _Generate Explanation_ pane to the right, from which four types of counterfactual explanation may be generated. In the following example, we demonstrate and explain the motivation behind the generation of these explanations. Consider a scenario where a user is investigating a fake news (misleading information) article that has ranked 3/10 in their search for "covid outbreak", while exploring the _COVID-19 Articles_ corpus. Seeking document counterfactual explanations, the user selects the _Sentence Removal_ type, requests one explanation, then clicks GENERATE. As illustrated in Figure 2, the resulting explanation renders the original body of the document, crossing out sentences that the counterfactual perturbation has removed. In this case, removing both sentences mentioning _covid_ and _outbreak_ lowers the document rank sufficiently to render it non-relevant (i.e., its rank of 11 surpasses \(k=10\)). Our algorithm, which scores sentences by the number of query terms present, assigns both the first and last sentence a score of two. Thus, they are heavily prioritized while exploring perturbations, until their combination (score of four) is discovered to be a valid counterfactual. Using this explanation, the user has quickly learned why this fake news article has ranked among the top-\(k\). Seeking to discover terms that distinguish it from others, the user now wonders which search queries would raise the rank of this fake news article even higher. With this motivation, the user selects the _Query Augmentation_ explanation type, which generates a _search-query_ counterfactual explanation. Without changing their query, the user selects this new explanation type, and requests seven explanations with a threshold of two. A table of queries appears, seen in Figure 3. In this case, the user learns that the ranker would bestow the fake news article a rank of 2/10 for the augmented query "covid outbreak 5G", and 1/10 for "covid outbreak 5G microchip". In our algorithm, these distinguishing terms (e.g., _5G_ and _microchip_) are assigned high (TF-IDF) scores, since they do not appear in the other nine relevant documents, and therefore increase the priority of query augmentations that contain them. By highlighting these terms, these explanations yield insight into the relevance of the document within the corpus. Moreover, the user may continue reformulating their own search query, perhaps using these insights to discover other fake news articles.
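The sentence-removal search behind these document explanations (Section II-C) can be summarized in a short sketch. Here `rank` is an assumed stand-in for the black-box ranking function \(R\) applied after substituting the perturbed document into the index; this illustrates the iteration order rather than reproducing the tool's actual implementation:

```python
from itertools import combinations

def counterfactual_documents(query, sentences, rank, k, n):
    """Find up to n minimal sentence-removal counterfactuals.

    rank(text) is a black-box stand-in for R(q, d, D, M); a perturbation
    is a valid explanation if the perturbed document's rank exceeds k.
    """
    query_terms = set(query.lower().split())
    # Importance score: number of query terms occurring in each sentence.
    score = [sum(w in query_terms for w in s.lower().split()) for s in sentences]

    explanations = []
    # Iterate by perturbation size first, which guarantees minimality:
    # all perturbations with j removals are evaluated before j + 1.
    for size in range(1, len(sentences) + 1):
        candidates = sorted(combinations(range(len(sentences)), size),
                            key=lambda idx: -sum(score[i] for i in idx))
        for removed in candidates:
            kept = " ".join(s for i, s in enumerate(sentences) if i not in removed)
            if rank(kept) > k:  # re-rank; non-relevant => valid counterfactual
                explanations.append(removed)
                if len(explanations) == n:
                    return explanations
    return explanations
```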
### _Instance-Based Counterfactual Explanations_ On the _Explanations_ page, two instance-based counterfactual methods are available in the _Explanation Type_ dropdown: _Cosine Sampled_ and _Doc2Vec Nearest_. The cosine sampled explanation requires a number of samples, which controls the number of documents for which the cosine similarity is calculated. In either case, each resulting explanation is a single document, whose body is rendered beneath the prompt. By evaluating the similarities and differences between a selected document and counterfactual instance, a user may gain insight into the behavior of a ranker. Continuing our example for the query "covid outbreak", the user selects the _Doc2Vec Nearest_ type from the dropdown. Upon clicking GENERATE, a valid counterfactual document instance is rendered beneath the prompt, stating its numeric similarity to the document being explained. The document presented in the user's output (Figure 4) is 75% similar to the fake news article being explained, despite not being ranked among the original top-10. The inconsistency between a document and its counterfactual instance inherently delineates a decision boundary respected by the ranker. Upon closer inspection of Figure 4, the user will notice that the instance document is a near copy of the original fake news article, but likely ranked lower due to the absence of the terms _covid_ and _outbreak_. By exploring these instance-based explanations, the user may discover other fake news articles that were absent from the original ranking, while deriving insights about the relevance of the original fake news article.
Fig. 3: Seven counterfactual query explanations augmenting the original query “covid outbreak”.
Fig. 4: A valid counterfactual document instance.
Moreover, presenting actual instances bypasses the issues of finding perturbations that maintain grammar or meaning. In the next subsection, we present one further alternative to this perturbation issue: allow the user to build perturbations interactively. ### _Build-Your-Own Counterfactual Documents_ On the _Builder_ page, users may build their own counterfactual document perturbation, then test its counterfactual validity against the other ranked documents. The user is prompted to select a supported corpus, type an arbitrary search query, and select a value of \(k\). Upon clicking the RANK button, a ranking of the top-\(k\) documents is obtained from the ranking model, and displayed inside a table. Upon clicking a document in the table, the document body is loaded into an interactive text field, allowing the user to compose arbitrary edits. The BROWSE TOPICS button can be clicked to spawn a modal, allowing the user to generate and explore topics found across all \(k\) documents. After finalizing document edits, clicking the RE-RANK button obtains a new ranking from the ranking model. Behind the scenes, the edited document is substituted for the original, then re-ranked alongside the other top \(k+1\) documents. The new ranking of \(k+1\) documents is displayed in another table, with coloured arrows to indicate whether the rank of each document has been raised, lowered, or left unchanged. The originally hidden document with rank \(k+1\) is given an orange _plus_ icon to distinguish itself. In our running example, the user poses the usual "covid outbreak" query for \(k=10\), then receives a familiar ranking of top-10 documents. Clicking the fake news article at rank 3, they create several counterfactual perturbations of their own.
As illustrated in Figure 5, the user chooses to replace _covid_ and _covid-19_ occurrences with an alternative term _flu_, and refactor the term _outbreak_ in favour of _the flu_. After re-ranking, the green check mark confirms the counterfactual validity of the perturbation, since its rank has been lowered from 3 to 11 (i.e., \(k+1\)). Using this interactive explanation format, the user tested their own plausible perturbations, receiving valuable relevance insights that transcend simple lexical manipulations. In this example, the user quickly learned how to edit this fake news document, so as to ensure it is not deemed relevant to their query.
2303.15219
Knowing the Distance: Understanding the Gap Between Synthetic and Real Data For Face Parsing
The use of synthetic data for training computer vision algorithms has become increasingly popular due to its cost-effectiveness, scalability, and ability to provide accurate multi-modality labels. Although recent studies have demonstrated impressive results when training networks solely on synthetic data, there remains a performance gap between synthetic and real data that is commonly attributed to lack of photorealism. The aim of this study is to investigate the gap in greater detail for the face parsing task. We differentiate between three types of gaps: distribution gap, label gap, and photorealism gap. Our findings show that the distribution gap is the largest contributor to the performance gap, accounting for over 50% of the gap. By addressing this gap and accounting for the labels gap, we demonstrate that a model trained on synthetic data achieves comparable results to one trained on a similar amount of real data. This suggests that synthetic data is a viable alternative to real data, especially when real data is limited or difficult to obtain. Our study highlights the importance of content diversity in synthetic datasets and challenges the notion that the photorealism gap is the most critical factor affecting the performance of computer vision models trained on synthetic data.
Eli Friedman, Assaf Lehr, Alexey Gruzdev, Vladimir Loginov, Max Kogan, Moran Rubin, Orly Zvitia
2023-03-27T13:59:26Z
http://arxiv.org/abs/2303.15219v1
# Knowing the Distance: Understanding the Gap Between Synthetic and Real Data For Face Parsing ###### Abstract The use of synthetic data for training computer vision algorithms has become increasingly popular due to its cost-effectiveness, scalability, and ability to provide accurate multi-modality labels. Although recent studies have demonstrated impressive results when training networks solely on synthetic data, there remains a performance gap between synthetic and real data that is commonly attributed to lack of photorealism. The aim of this study is to investigate the gap in greater detail for the face parsing task. We differentiate between three types of gaps: distribution gap, label gap, and photorealism gap. Our findings show that the distribution gap is the largest contributor to the performance gap, accounting for over 50% of the gap. By addressing this gap and accounting for the labels gap, we demonstrate that a model trained on synthetic data achieves comparable results to one trained on a similar amount of real data. This suggests that synthetic data is a viable alternative to real data, especially when real data is limited or difficult to obtain. Our study highlights the importance of content diversity in synthetic datasets and challenges the notion that the photorealism gap is the most critical factor affecting the performance of computer vision models trained on synthetic data. ## 1 Introduction Two components are required to achieve successful results in computer vision tasks: an appropriate model and the right data. While significant efforts have been made in recent years to optimize models for solving complex computer vision tasks, there is also a growing focus on optimizing the data itself [1, 2]. Synthetic data provides a promising direction for data-centric optimization of computer vision models. Recent works that use synthetic data for computer vision tasks show impressive performance [3, 4, 5]. Yet in order to further improve models trained on synthetic data, it is helpful to understand the potential gaps between synthetic and real data. We break down these differences into three types. * **Distribution Gap** - This gap accounts for differences in the distribution of the content between two datasets. These differences can manifest in multiple ways. To name a few, there can be differences in object frequencies (one dataset may have a gender bias, age bias, or ethnicity bias), object scale (faces closer to the camera in one dataset than the other), and the absence of certain elements in the training data that may be present in the testing scenario (e.g., hats, earrings and other accessories). Using datasets with different distributions for training and testing can lead to a variance deficiency in the training distribution relative to the test distribution, which can negatively impact the performance of the model. While this problem may also occur whenever using two **real** datasets, if one is using synthetic data, then the distribution gap is addressable, either by adapting the generation parameters to align more closely with the real data, or by creating additional 3D assets that the dataset lacks. * **Label Gap** - This gap arises due to inconsistencies in labeling conventions for the same semantics. For instance, two datasets might have different conventions about where the nose ends and the skin begins. The label gap may similarly occur between two real datasets, if they are labeled based on different conventions.
This leads to an evaluation challenge as a model trained on a dataset with labeling-instructions-set A may show significant error rates when tested on data tagged with instructions-set B, even if it has performed perfectly on a test set with the training dataset's conventions. See Figure 1 for an example of the label gap between the real and our synthetic labels. * **Photorealism Gap** - This accounts for any image level visual differences between real and synthetic data, such as image noise, color variations, texture differences, or other discrepancies. The photorealism gap is a specific type of visual domain gap, which can also occur between two real datasets due to factors like differences in camera sensors or lighting conditions. See Figure 2 for an example of a visual domain gap between real datasets. We refer to the visual domain gap between synthetic and real data as the photorealism gap, which occurs when a synthetic image lacks the realism of a photograph. This paper investigates the gaps between synthetic and real data in the context of face parsing, which involves segmenting an image into distinct regions corresponding to different facial areas. Specifically, we use synthetic data to train a model for this task and compare its performance to models trained on real-world data. Face parsing is a challenging task, as faces can vary in appearance, pose, coloring, and images can vary significantly in lighting, occlusions and accessories. Collecting a large dataset is difficult due to privacy concerns and labeling efforts. Additionally, the initial collected dataset may not cover all necessary test scenarios, and we may need multiple rounds of data collection and annotation to achieve optimal results. This iterative process of improving a dataset is called **data-centric iterations**. However, when using synthetic data, we can avoid the need for iterative data collection by generating multiple controlled datasets that quickly converge to the required data distribution. With 3D simulated data, controlling distribution gaps such as object frequency and variety, occlusions, and camera viewpoint can be relatively easy with pre-existing 3D assets. It is also possible to create new assets on demand to reduce content gaps. The photorealism gap due to texture differences or camera noise can also be mitigated although it might be more challenging to accurately identify and simulate the test domain.
Figure 1: **Label Gap** - A comparison of the synthetic labels and the CelebAMask labels. The left column shows the synthetic ground truth labels overlaid on synthetic images. The middle column shows accurate label predictions from our model which was trained on synthetic data and then applied to real images. The right column shows the ground truth labels. The highlighted regions show the differences in labeling conventions that are most noticeable—in the area of the nose, lips, and neck.
Figure 2: **Visual Differences** - Consider the visual domain gap between the CelebAMask dataset examples (left) and the Helen dataset images [6] (right), which differ in lighting quality and color spectrum. This can occur between real datasets. When we discuss visual differences between synthetic and real datasets, we refer to it as the **photorealism gap**.
We train a model using synthetic data and test it on the challenging CelebAMask dataset. We show that synthetically simulated face data offers a potential solution to the shortage of labeled data for face parsing tasks.
The paper is structured as follows: Section 2 reviews prior research in the field. Section 3 outlines our method and training, while Section 4 presents our results. In Section 5, we interpret these results, and in Section 6, we suggest directions for future work. ### Contributions The contributions of this paper are as follows: * We provide a framework for understanding the performance gap between real and synthetic data. * We provide evidence that the distribution gap, rather than the photorealism gap, makes up the largest portion of the performance gap. * We demonstrate that a model trained purely on synthetic data can achieve comparable results to real data for face parsing tasks * We demonstrate the advantage of using the accurate synthetic labels over human-annotated labels for dense segments like hair ## 2 Related Work In this section we cover prior work related to our current research: real and synthetic face parsing datasets, label adaptation and domain adaptation. ### Face Parsing Models Face parsing is the process of segmenting a person's face into various sections. There are several model architectures and training methods available for face parsing, including pretraining on large image-language datasets [7], using transformers [7; 8], and graph methods [9]. However, our focus is on data design rather than model architecture, hence, we follow [3] and use a simple UNet [10], which works well in practice. ### Face Parsing Datasets **Real Datasets** There are several publicly available datasets for face parsing, including Helen [6], which contains 2,330 images, LaPa [11], with 22,176 images, and CelebAMask [12], which contains 30,000 high quality images collected from the larger CelebA dataset [13]. The F\({}_{1}\) score is the most standard metric for evaluating face parsing models since it takes into account the variation in sizes between different labels and weights each class equally. The F\({}_{1}\) score is calculated as: \[F_{1}=\frac{1}{\#\text{labels}}\sum_{label}2\frac{\text{precision}_{label}\cdot\text{recall}_{label}}{\text{precision}_{label}+\text{recall}_{label}}\] (a short computational sketch of this metric appears at the end of this section). **Synthetic Face Datasets** The task of collecting face data for various face-related tasks, including face recognition, face parsing, and face landmark detection, poses a significant challenge due to potential privacy concerns. To overcome this challenge, researchers have explored the use of synthetic face data generated through 2D generative models [14], or simulated 3D techniques [3]. Zhang et al. [14] use the StyleGAN network [15] to generate a privacy-friendly synthetic dataset for face parsing. Wood et al. [3] use a 3DMM and a graphics rendering engine to generate a large dataset of 100,000 simulated synthetic face images for landmark detection and face parsing. They demonstrate that after label adaptation, they achieve competitive results to real data on the LaPa [11] and Helen* datasets [6; 8]. These techniques have proven effective in enhancing the performance of deep learning models, as they provide a diverse and easily controllable dataset without privacy concerns. ### Label Adaptation Consider a model learning on one dataset, but being evaluated on a test set where the annotators were given very different instructions for labelling. Even if the model performs well on images labelled using the training data's conventions, the score on the test data may not accurately reflect the success of the model since the labels on the datasets differ.
**Synthetic Face Datasets** The task of collecting face data for various face-related tasks, including face recognition, face parsing, and face landmark detection, poses a significant challenge due to potential privacy concerns. To overcome this challenge, researchers have explored the use of synthetic face data generated through 2D generative models [14] or simulated 3D techniques [3]. Zhang et al. [14] use the StyleGAN network [15] to generate a privacy-friendly synthetic dataset for face parsing. Wood et al. [3] use a 3DMM and a graphics rendering engine to generate a large dataset of 100,000 simulated synthetic face images for landmark detection and face parsing. They demonstrate that, after label adaptation, they achieve results competitive with real data on the LaPa [11] and Helen* [6; 8] datasets. These techniques have proven effective in enhancing the performance of deep learning models, as they provide a diverse and easily controllable dataset without privacy concerns.

### Label Adaptation

Consider a model trained on one dataset but evaluated on a test set whose annotators were given very different labelling instructions. Even if the model performs well on images labelled using the training data's conventions, the score on the test data may not accurately reflect the success of the model, since the labels on the two datasets differ. Label adaptation [3] is a technique used to compare and evaluate models trained on different datasets that may have different labeling conventions for the same semantic classes. Label adaptation adjusts the predicted labels from a model to align with the labeling conventions of another dataset. In our context, we train a model on synthetic data and then use label adaptation to evaluate how well our model works on a real-world dataset with different labeling conventions. The label adaptation process involves two steps. First, the trained model is run in inference mode on the real-world dataset to obtain labels following the synthetic dataset's convention. Second, another model is trained to translate these synthetic-convention predictions to the labels used in the real-world dataset.

### Domain Adaptation

In machine learning, domain adaptation aims to enhance the performance of a model trained on one domain when it is used on a different domain. Various domain adaptation frameworks handle dataset differences by focusing on variations in image distributions, known as covariate shift [16; 17]. Other domain adaptation works focus on solving the photorealism gap and develop techniques that adapt image appearances to more closely resemble real images [18; 19; 20]. However, they do not distinguish between different types of distribution divergences.

Perhaps the simplest technique for adapting a model trained on one domain to another is fine-tuning. This process involves taking a pre-trained model that has been trained on synthetic data and then fine-tuning it on a small amount of real data. The idea is to adjust the weights of the model to better fit the distribution of the real data. The advantage of fine-tuning is that it requires very little additional real data and can be done relatively quickly [21]. Fine-tuning with real data can potentially overcome all the aforementioned gaps. However, the amount of data needed may vary depending on the severity of the gaps, and in some cases it might be difficult to obtain enough data.

When designing synthetic datasets for face parsing, it is crucial to consider the distribution of the data. For example, each image contains a face, and a decision must be made about the ethnicity, gender, and age of the identity, along with their hairstyle, hair color, and eye color. The face needs to be positioned, and the parameters for the camera need to be chosen. There are many free variables, and it is not obvious which ones to choose. Kar et al. [5] establish the importance of matching the distribution of content in synthetic datasets to that of real datasets, and introduce a novel technique for achieving this. They are able to effectively match the distributions of the two datasets by backpropagating the Maximum Mean Discrepancy loss through the rendering engine to the probabilistic graph that generates the scenes. Their work showed that aligning content distributions can make synthetic data useful for real-world tasks. We build upon this idea and present further evidence in support of it. While it is true that an automatic method such as that of Kar et al. [5] for setting the parameters could potentially improve results, implementing such a method can be complex and time-consuming. Therefore, we opted to manually and iteratively choose the parameters for our study. This works reasonably well in achieving our objectives.
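As a rough illustration of the two-step label adaptation procedure described in Section 2.3, consider the following PyTorch-style sketch; the model objects and the data loader are hypothetical placeholders, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def train_label_adaptation(seg_model, adapt_model, real_loader, epochs=1, lr=1e-4):
    """Two-step label adaptation sketch.

    seg_model: frozen segmentation model trained on synthetic data.
    adapt_model: network mapping class probabilities to real-convention labels.
    real_loader: hypothetical DataLoader yielding (image, real_label) pairs.
    """
    seg_model.eval()
    optimizer = torch.optim.Adam(adapt_model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, real_labels in real_loader:
            # Step 1: inference with the frozen model, producing predictions
            # in the synthetic dataset's labeling convention.
            with torch.no_grad():
                probs = seg_model(images).softmax(dim=1)
            # Step 2: translate those predictions into the real dataset's
            # labeling convention and train against the real annotations.
            loss = F.cross_entropy(adapt_model(probs), real_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return adapt_model
```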
## 3 Method

### Datasets

We measured our results on the CelebAMask dataset [12], a challenging face parsing dataset that contains 30,000 annotated images: 24,183 for training, 2,993 for validation, and 2,824 for testing. We chose this dataset as it contains the largest number of images and the faces are pre-aligned in the image. The CelebAMask dataset contains 19 categories, but we measure the F\({}_{1}\) score averaged over all the categories excluding earrings, clothes, and necklace, for a total of 16 categories. We exclude these categories since the CelebAMask dataset overlays them on top of the other facial labels, and we focused on measuring our ability to segment facial areas only. We resized the CelebAMask images using bilinear interpolation from 1024x1024 to 512x512, which is the size at which the dataset was originally labeled [12].

To investigate the differences between synthetic and real data on a face parsing task, we generated data using Datagen's face generation platform [21] and created a synthetic training dataset of widely varied face images. We generate each image at 512x512 resolution. The face-generation platform uses a physically-based rendering engine that renders 2D images from 3D models. It also produces additional 3D data, such as key points, normal maps, depth maps, and segmentation maps.

**Generator Parameters:** We began by generating a dataset of 22,488 images. We uniformly sampled age, ethnicity, and gender from a database of tens of thousands of distinct identities, while using the default hair and eye parameters for each identity. It is worth noting that this uniform sampling approach could potentially lead to less accurate results in case of imbalanced representation of certain attributes. For instance, our synthetic dataset contains 50% females, whereas in CelebAMask, only 37% of the images contain females, as measured by the CelebA annotations [13]. As a result, it is possible that using CelebAMask as a test dataset may not accurately reflect a potential real-world use case.

We sample the camera position, location, and field of view so that the faces occupy 200-300 pixels in the image and the angle of the face is distributed according to a truncated Gaussian distribution centered on \(0^{\circ}\) and limited to \(\pm 90^{\circ}\). This means most of the faces are front facing, as in the CelebAMask dataset, but with some extreme poses up to \(90^{\circ}\).

The CelebAMask dataset lacks a beard label, and instead estimates the jaw line to mark the skin label. To maintain consistency with this labeling convention, we render each bearded identity twice--once with the beard and once without--and use the image with the beard paired with the segmentation from the render without it. This approach enables us to incorporate beards in our images while only segmenting the face underneath. See the first column in Table 1 for the distribution of the initial dataset. We chose 22.7% of the faces to have no expression, and gave the rest a randomly chosen expression from the following: fear, anger, contempt, happiness, disgust, surprise, and sadness.

### Iterative Approach

Following our initial training, we iteratively improved our model by performing these steps:

1. _Generate a synthetic dataset._ We chose parameters to try to maximize variance while attempting to stay true to the distribution of the real data.
2. _Train a UNet model on the synthetic dataset._ It is critical to aggressively apply augmentations to the synthetic data.
3. _Run inference on real data._
4.
_Analyze systematic errors._ We look for patterns in the error cases to understand what data is missing that is causing the model to make mistakes. Once we know what is missing, we can generate new data to compensate for the gap in the original dataset and repeat the procedure.

Throughout our error analysis steps we found multiple error patterns, which we fixed by generating adequate datasets. For example, we noticed the absence of hats, eyeliner, and earrings in our initial dataset. In addition, real-world images contained occlusions not present in our data at first, as people often put their hands over their faces or hold objects in front of them. See Figure 3 for some examples. The promise of rendered synthetic data is that closing these types of gaps and fixing edge cases is easier than collecting and annotating new images. This of course depends on the availability of relevant 3D assets to close the distribution gap. In our case, we generated new images containing the missing accessories and makeup. We also added images with occlusions by placing 3D objects randomly in the scene in front of the face, or moving the person's hands to occasionally cover the face. See Figure 4 for some images of the variance added to the dataset.

Figure 3: **Real samples** - Some examples from the CelebA dataset that include hats, makeup, earrings, and occlusions. These are more challenging cases for the model.

Figure 4: **Synthetic samples** - Examples of images added to the synthetic dataset to better align the distribution to that of the CelebAMask dataset. Accessories include hats, makeup, earrings, and occlusions caused by hands and objects.

To make it easier to compare the performance of the new dataset with additional variance to the previous version, we replaced old images in the dataset with the new ones. This way, we kept the same number of images in the dataset. Table 1 shows the data distribution of the two datasets.

### Training

**Model:** Since our focus is on optimizing the training data, rather than on optimizing the model, we follow Wood et al. [3] and use a simple UNet [10] with a ResNet-18 backbone [22; 23]. The input to the network is a 512x512 RGB image and the output is a 16-channel segmentation map covering the face regions plus the hat category. We trained all models until convergence using the Adam optimizer with a fixed learning rate of \(10^{-4}\), \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\), and a batch size of 32.

**Augmentations:** The process of rendering 3D images to create synthetic data assumes a perfect camera model, resulting in noise-free images. However, in the real world, cameras capture images with varying degrees of noise and lighting conditions, making augmentations critical for achieving optimal results when training with synthetic data. To this end, we employ a diverse range of augmentations, elaborated in Appendix A, to improve the generalization capabilities of the model. When training our reference model on the CelebAMask dataset, we applied the same augmentations and found that they were also helpful for improving the results on the real data.

To further enhance the robustness of our approach, we use Stable Diffusion [24] to generate 100,000 background images. During training, we use the rendered alpha mask of the face to composite these images with the face images. By doing so, the model can learn to ignore various background objects, resulting in improved performance and greater robustness.
Since the alpha masks are not available for the real data, we were only able to apply the randomized backgrounds to the synthetic data.

**Label Adaptation:** Our label adaptation model shares the same UNet architecture as the segmentation model, except that the input layer takes the probabilities from the segmentation model as input rather than an RGB image. In order to speed up training, we initialize the weights of the label adaptation model using the frozen segmentation model's weights, except for the input layer, which is initialized randomly. By using the label adaptation model, we can compare the performance of our synthetically trained model to a model trained solely on real data. We conduct experiments to determine the influence of dataset size on training the label adaptation model and how much real data is required for optimal results. See the label adaptation architecture in Figure 5. The label adaptation network was trained using the same optimization parameters as the segmentation model, except that the batch size was reduced to 16. For inference, we chose the model checkpoint that maximized the F\({}_{1}\) score on the validation set.
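The weight-initialization step described above can be sketched as follows; this is our illustration, and the `input_conv` attribute is a hypothetical stand-in for the first layer of whatever UNet implementation is used:

```python
import copy
import torch.nn as nn

def build_adaptation_model(seg_model, num_classes=16):
    """Initialize the label adaptation network from the segmentation model.

    All weights are copied from seg_model; only the input layer is replaced
    (and therefore randomly initialized) so that it accepts the
    num_classes-channel probability maps instead of a 3-channel RGB image.
    """
    adapt_model = copy.deepcopy(seg_model)
    old = adapt_model.input_conv  # hypothetical name of the first convolution
    adapt_model.input_conv = nn.Conv2d(
        in_channels=num_classes,
        out_channels=old.out_channels,
        kernel_size=old.kernel_size,
        stride=old.stride,
        padding=old.padding,
        bias=old.bias is not None,
    )
    return adapt_model
```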
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Variance Type & \% of Initial Dataset & \% of Synthetic Dataset + Variance \\
\hline
Background Images & 100,000 & 100,000 \\
\hline
Daytime & 72.0\% & 72.0\% \\
Evening & 14.2\% & 14.2\% \\
Night & 13.8\% & 13.8\% \\
Earrings & 0.0\% & 8.8\% \\
Beard & 9.0\% & 9.0\% \\
Makeup & 0.0\% & 8.4\% \\
Hat & 0.0\% & 16.02\% \\
Glasses & 18.6\% & 18.6\% \\
Extreme Pose & 13.9\% & 13.9\% \\
Occlusions & 0.0\% & 17.4\% \\
Closed Eyes & 4.8\% & 4.8\% \\
Mouth Open & 4.7\% & 4.7\% \\
\hline
Total & 22,488 Images & 22,488 Images \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Synthetic dataset breakdown between the initial synthetic dataset and the dataset after adding additional variance to better align the distributions. Each column shows the fraction of the synthetic dataset that contains each type of content.

**Fine-tuning:** To better understand the impact of real data on correcting the photorealism gap, we conduct experiments where we fine-tune the model using different amounts of real data. By doing so, we aim to gain insights into how much real data is needed to overcome any remaining variance or photorealism gap. We began with a model trained using synthetic data, and then trained for a few more epochs on a dataset of real images from the CelebAMask training set. We use the same hyperparameters as in the initial training (described above) and chose the model checkpoint that maximized the F\({}_{1}\) score on the validation set.

## 4 Results

In order to evaluate the relative influence of the different gaps, we run the following experiments.

### Understanding the Distribution Gap

We compare the results of our initial dataset to the one after our iterative improvements. See Table 1 for a comparison of the two datasets. Table 2 shows that the additional variance added to the dataset increases the F\({}_{1}\) score by 5.6 percentage points to 86.3%. It should be noted that a portion of this improvement is attributed to the hat category, as our initial dataset lacked any instances of hats but was evaluated on the hat category. Nevertheless, it is still significant that the addition of even a limited amount of new content leads to a notable increase in the score. It is also worth noting that this improvement is observed despite the known label gap.

### Understanding the Label Gap

To ensure a fair comparison between the model trained on synthetic data and one trained on real data, we also apply label adaptation. Table 2 shows the results of training with the additional variance and then adapting the labels using the full CelebAMask dataset. An additional 3.8 percentage point gain is achieved when accurately accounting for the difference in label conventions.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Dataset & F\({}_{1}\) score (\%) & Improvement (\%) \\
\hline
Initial Synthetic Dataset & 80.7 & \\
Synthetic With Variance & 86.3 & +5.6 \\
Synthetic With Variance + Label Adaptation & 90.1 & +9.4 \\
\hline
CelebAMask & 91.2 & \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Results of different training datasets. We compare training with our initial synthetic dataset, our synthetic dataset with additional variance, and with label adaptation. We compare to the reference CelebAMask dataset.

Figure 5: **Label Adaptation:** A segmentation model that was trained on synthetic data (blue) is frozen and applied to real RGB images. This network outputs a 16-channel image containing segmentation probabilities in the convention of the synthetic data, which are input to the label adaptation network (green). The label adaptation network is trained to correct these and outputs segmentation labels in the convention of the real data.

We also experiment with varying the amount of real images used in the label adaptation process. Figure 6 demonstrates that label adaptation using even small amounts of real data can improve the training results on pure synthetic data. When 200 images are used, the F\({}_{1}\) score increases from 86.3% to 87.3%. What is particularly interesting about the graph is what is shown on the right-hand side. By using the entire CelebAMask training set to refine the labels, we ensure that they are fully aligned with their real-world counterparts, allowing us to accurately measure how well the model trained on synthetic data performs on real data. The result is that synthetic data performs on a comparable level to the real data, with only a 1 percentage point difference. This finding underscores the potential of synthetic data as a reliable and effective alternative to real-world data in applications where the real and synthetic labels are well aligned. It should be noted that this improvement in F\({}_{1}\) score may not be solely due to the adaptation of labels, as the label adaptation process also corrects some errors in the model's predictions.

### Fine-tuning with Real Data

In order to overcome the photorealism gap, we fine-tune our synthetic network with varying amounts of real samples. As illustrated in Figure 6, fine-tuning on real data consistently outperforms label adaptation. As previously mentioned, exposing the network to real data allows it to overcome all three types of gaps. The question that then arises is: what accounts for this difference? It cannot be attributed to the content of the data, since in our experiment both fine-tuning and label adaptation were trained on the same datasets. Instead, we hypothesize that the network's direct exposure to RGB images during fine-tuning allows it to adapt to any visual discrepancies between the synthetic and real data, such as variations in texture, image noise, and lighting.
In contrast, the label adaptation process only exposes the network to label probabilities, which may not provide sufficient information for the network to fully adjust to such visual differences. This highlights the relatively minor impact the photorealism gap has on the results, as it only accounts for, on average, a 1.6 percentage point increase above label adaptation.

Another interesting observation is that fine-tuning a synthetic model with real data (blue curve) consistently outperforms training a model from scratch using only the same amount of real data used for fine-tuning (orange curve). This emphasizes the value of synthetic data even when real data is available, as it enables using significantly smaller amounts of real data for training.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Experiment & CelebAMask Dataset & F\({}_{1}\) score (\%) \\
\hline
Train from scratch & 200 Minimal-variance Images & 75.9 \\
Train from scratch & 200 Randomly Chosen Images & 85.1 \\
\hline
Train on Synthetic Data (no label adaptation) & 0 Images & 86.3 \\
\hline
Fine-tune synthetic model & 200 Minimal-variance Images & 87.8 \\
Fine-tune synthetic model & 200 Randomly Chosen Images & 89.4 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: We compare two real datasets: one without any additional variance, and one randomly selected. We train and fine-tune a model trained on synthetic data using these datasets. The randomly chosen dataset shows improved results both for training and fine-tuning, thus showing the importance of additional variance. For reference, we show the results of the synthetic dataset that was used for fine-tuning.

Figure 6: We compare the F\({}_{1}\) scores of different models: a model trained on synthetic data and then fine-tuned on varying amounts of real images (blue), a model trained on synthetic data with a label adaptation model trained on varying amounts of real images (green), a model trained solely on varying amounts of real images (orange), and a model trained only on synthetic data without label adaptation (dashed red). The same information is presented in table format in Appendix B.

In order to investigate the role of variance vs. photorealism in the performance gap, we conducted an additional experiment. We selected two small real image datasets, each containing 200 images from the CelebAMask training set. One dataset consisted of 200 randomly sampled images, while the other contained 200 images that excluded all additional variance such as hats, glasses, and earrings. The purpose of the second dataset is to expose the network to photorealistic images without exposing it to any variance. We fine-tuned our final synthetic network (after iterative improvements) using these two datasets and also used them to train two models from scratch. Table 3 presents the results. The fine-tuning was performed on a network trained on synthetic data that achieved an F\({}_{1}\) score of 86.3%. Fine-tuning this network on the content-limited dataset increased the F\({}_{1}\) score to 87.8%, whereas fine-tuning it on the randomly chosen dataset increased the score to 89.4%. These results are another indication that closing the content gap is more important than closing the photorealism gap and the label gap combined. The content-limited dataset increases the score by 1.4 percentage points; it contains photorealistic images labelled using the real-data conventions, but does not contain any additional variance.
The randomly chosen dataset, on the other hand, increases the score by 3.1 percentage points; it contains high variance in the content, in addition to photorealistic data labelled using the real-data conventions. This suggests that closing the distribution gap leads to as big an improvement as closing the photorealism gap and the label gap combined. The significant difference between training from scratch on the minimal-variance dataset and fine-tuning is partly due to the extra variability in the synthetic dataset, which includes items like hats that are not present in the minimal-variance dataset.

### Comparison to Other Synthetic Datasets

We also compare our synthetic dataset to the Face Synthetics dataset of Wood et al. [3]. Their dataset contains 100,000 synthetically generated faces and includes a wide variety of assets including clothes, hats, glasses, and masks. While we have access to their dataset, we did not have access to their data generator. This posed a challenge, since we could not correct the beard or mask segmentation as we did with our data. We trained on the full Face Synthetics dataset using the same augmentations as for our dataset (see Appendix A) and achieved an F\({}_{1}\) score of 83%. We also trained using a subset of the full dataset that contained only images without a beard or mask and achieved an F\({}_{1}\) score of 83.7%.

### Label Accuracy on Dense Hair Segments

Synthetic labels are precisely aligned with the pixels in the corresponding images, resulting in a level of accuracy that may surpass what a human can achieve when labeling real data. This is especially important in finely detailed areas, such as hair. When using synthetic data, the hair is labelled even with subpixel accuracy--if, for example, 30% of a pixel contains hair, then it will be labeled as hair with probability 30%. When a model is trained over a full dataset of these statistical labels, it ends up learning to correctly predict the probability that a pixel contains hair. In Figure 7, we plot the output probabilities of the network and blend the colors together, weighted by their respective probabilities. The rightmost column shows the network output from a model trained on real data. We can observe that, due to the human labelling in the right column, the network outputs are imprecise and cover larger regions than the actual hair. In contrast, the middle column displays the outputs of the synthetically trained network, which align more closely with the strands of the person's hair. Not every individual strand is captured; however, finer detail is achieved than what the human labels capture. These precise labels may be beneficial when accurate segmentation of hair is desired (e.g., background separation or hair-dye try-on).

Figure 7: The left column shows the RGB image, the center column visualizes the probability distribution of the network output trained on synthetic data, and the right column shows the probability outputs of the network trained on real data. It is worth noting that in areas where hair is present, the probabilities can be interpreted as the proportion of the pixel that is occupied by hair, thus providing subpixel accuracy for hair detection.
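The probability-weighted color blending used to produce the visualizations in Figure 7 can be sketched as follows; the palette and array shapes are our assumptions:

```python
import numpy as np

def blend_probability_map(probs, palette):
    """Visualize soft segmentation outputs by probability-weighted blending.

    probs: (H, W, C) array of per-pixel class probabilities (summing to 1).
    palette: (C, 3) array of RGB colors, one per class (our assumption).
    Returns an (H, W, 3) uint8 image in which soft boundaries, such as
    individual hair strands, appear as partial color mixtures.
    """
    blended = probs.astype(np.float32) @ palette.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```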
## 5 Discussion

Our study shows that synthetic data performs almost as well as real data, with a difference of only 1 percentage point once differences in label conventions are accounted for. This finding is significant because it suggests that synthetic data can be a useful replacement for real data. Furthermore, we were able to achieve this result using only 22k images, which is similar in size to the real dataset but required significantly less effort in manual collection and no annotation effort. We also demonstrate that the distribution gap makes up the largest portion of the performance gap.

At the start of our study, we discovered a significant difference in performance between a model trained on our initial dataset and one trained on real data. Specifically, there was a gap of 10.5 percentage points between the F\({}_{1}\) scores on real data for the two models, with the model trained on our initial dataset achieving a score of 80.7% and the model trained on real data achieving a score of 91.2%. We identified the distribution gap as the primary contributor to this performance gap. Adding more content to align the distributions closed 53% of this gap, increasing the F\({}_{1}\) score by 5.6 percentage points. The label gap likely accounted for another 36% of this gap, or 3.8 percentage points in F\({}_{1}\) score. It is worth noting that the label adaptation model could potentially correct some segmentation errors in addition to addressing the label gap, so this number might be slightly lower. The remaining 10% of the gap, or 1.1 percentage points in F\({}_{1}\) score, can be attributed to both the photorealism gap and additional differences in the distribution that we assume were not perfectly addressed.

We observe that increasing the size of the fine-tuning dataset leads to an increase in the score until we reach the point where fine-tuning with the full real dataset and training from scratch both yield equivalent results. Our results also demonstrate that regardless of the amount of real data used, fine-tuning a model trained on synthetic data consistently yields better results than training solely on the equivalent amount of real data. In addition, we show that fine-tuning with a small amount of real data--as little as 1% of the total dataset, or 200 images--can be helpful in improving the performance of a synthetically trained model.

To understand the performance gaps, we tested both label adaptation and fine-tuning. However, in a production-oriented system with a fixed set of real, annotated images, we recommend performing fine-tuning over label adaptation. Fine-tuning is simpler and more effective in handling all three gaps.

## 6 Future Work

Our study has demonstrated that aligning the distribution and labels between synthetic and real data can lead to results comparable to training purely on real data. However, we believe that the remaining distribution and photorealism gaps can be further closed by adding more variance to our dataset. It would also be interesting to explore the impact of training with a significantly larger dataset. In addition, future work could focus on developing a better metric to precisely measure the size of the different gaps - the distribution gap, the photorealism gap, and the label gap. This would enable a more precise evaluation of the effectiveness of different techniques for closing these gaps. Moreover, while we manually aligned the distribution of the synthetic dataset with the real data, there is still scope for further automation of this process through data-centric iteration to close the distribution gap.
2310.13988
GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4
This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically for the quality estimation setting without the need for human reference translations. Based on the power of large language models (LLM), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark error quality spans. Compared to previous works, our method has language-agnostic prompts, thus avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods due to its dependence on the proprietary, black-box GPT model.
Tom Kocmi, Christian Federmann
2023-10-21T12:30:33Z
http://arxiv.org/abs/2310.13988v1
# GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4

###### Abstract

This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically for the quality estimation setting without the need for human reference translations. Based on the power of large language models (LLMs), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark error quality spans. Compared to previous works, our method has language-agnostic prompts, thus avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods due to its dependence on the proprietary, black-box GPT model.

## 1 Introduction

GEMBA-MQM builds on the recent finding that large language models (LLMs) can be prompted to assess the quality of machine translation (Kocmi and Federmann, 2023). The earlier work of Kocmi and Federmann (2023) (GEMBA-DA) adopted a straightforward methodology of assigning a single score value to each segment without specifying the scale in detail. Employing a zero-shot approach, their technique showed unparalleled accuracy in assessment, surpassing all other non-LLM metrics on the WMT22 metrics test set (Freitag et al., 2022).

Next, Lu et al. (2023) (EAPrompt) investigated prompting LLMs to assess individual error classes from the Multidimensional Quality Metrics (MQM) framework (Freitag et al., 2021), where each error can be classified into various error classes (such as accuracy, fluency, style, terminology, etc.) and sub-classes (accuracy > mistranslation), and is marked with its severity (critical, major, minor). Segment scores are computed by aggregating errors, each weighted by its respective severity coefficient (25, 5, 1). While their approach employed few-shot prompting with a chain-of-thought strategy (Wei et al., 2022), our GEMBA-MQM approach differs in two aspects: 1) we streamline the process using only single-step prompting, and 2) our prompts are universally applicable across languages, avoiding the need for manual prompt preparation for each language pair.

Another notable effort by Fernandes et al. (2023) paralleled the EAPrompt approach, also marking MQM error spans. In contrast, their approach used a PaLM-2 model, pooling MQM annotations to sample few-shot examples for the prompt. Their fine-tuning experiments did not improve system-level performance for the top-tier models.
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Metric & Acc. & Meta \\
\hline
GEMBA-MQM & \(96.5\%\) (1) & 0.802 (3) \\
**XCOMET-Ensemble** & \(95.2\%\) (1) & 0.825 (1) \\
**docWMT22CometDA** & \(93.7\%\) (2) & 0.768 (9) \\
docWMT22CometKiwiDA & \(93.7\%\) (2) & 0.767 (9) \\
XCOMET-QE-Ensemble & \(93.5\%\) (2) & 0.808 (2) \\
**COMET** & \(93.5\%\) (2) & 0.779 (6) \\
**MetricX-23** & \(93.4\%\) (3) & 0.808 (2) \\
CometKiwi & \(93.2\%\) (3) & 0.782 (5) \\
**Calibri-COMET22** & \(93.1\%\) (3) & 0.767 (10) \\
**BLEURT-20** & \(93.0\%\) (4) & 0.776 (7) \\
**MaTESe** & \(92.8\%\) (4) & 0.782 (5) \\
**mre-score-labse-regular** & \(92.7\%\) (4) & 0.743 (13) \\
mbr-metricx-qe & \(92.5\%\) (4) & 0.788 (4) \\
KG-BERTScore & \(92.5\%\) (5) & 0.774 (7) \\
MetricX-23-QE & \(92.0\%\) (5) & 0.800 (3) \\
**BERTscore** & \(90.2\%\) (7) & 0.742 (13) \\
MS-COMET-QE-22 & \(90.1\%\) (8) & 0.744 (12) \\
embed\_llama & \(87.3\%\) (10) & 0.701 (16) \\
f200spBLEU & \(86.8\%\) (11) & 0.704 (15) \\
BLEU & \(85.9\%\) (12) & 0.696 (16) \\
chrF & \(85.2\%\) (12) & 0.694 (17) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Preliminary results of the WMT 2023 Metrics Shared task. The first column shows the system-level accuracy, and the second column is the Metrics 2023 meta evaluation. Metrics with gray background need human references. The table does not contain the worst-performing, non-standard metrics due to space reasons.

(System) You are an annotator for the quality of machine translation. Your task is to identify errors and assess the quality of the translation.

(user) {source_language} source:\n```{source_segment}```\n{target_language} translation:\n```{target_segment}```\n\nBased on the source segment and machine translation surrounded with triple backticks, identify error types in the translation and classify them. The categories of errors are: accuracy (addition, mistranslation, omission, untranslated text), fluency (character encoding, grammar, inconsistency, punctuation, register, spelling), locale convention (currency, date, name, telephone, or time format), style (awkward), terminology (inappropriate for context, inconsistent use), non-translation, other, or no-error.\nEach error is classified as one of three categories: critical, major, and minor. Critical errors inhibit comprehension of the text. Major errors disrupt the flow, but what the text is trying to say is still understandable. Minor errors are technically errors, but do not disrupt the flow or hinder comprehension.

(assistant) {observed error classes}

## 2 Description

Our technique adopts few-shot learning with the GPT-4 model (OpenAI, 2023), prompting the model to mark quality error spans using the MQM framework. The underlying prompt template is modeled on guidelines for human annotators and shown in Figure 1. In contrast to other methods, we use three pre-determined examples (see Appendix A), allowing the method to be used with any language pair and avoiding the need to create language-pair-specific MQM few-shot examples. This was the original limitation that prevented Fernandes et al. (2023) from evaluating AutoMQM beyond two language pairs. Our decision was not driven by a desire to enhance performance -- since domain- and language-specific prompts typically boost it (Moslem et al., 2023) -- but rather to ensure our method can be evaluated across any language pair.
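For illustration, a segment-level score can be derived from the marked error spans using the severity weights (25, 5, 1) mentioned in the introduction; the following sketch is ours, and the parsing of the GPT response into (category, severity) pairs is assumed to happen elsewhere:

```python
SEVERITY_WEIGHTS = {"critical": 25, "major": 5, "minor": 1}

def mqm_segment_score(errors):
    """Aggregate MQM errors into a single (negative) segment-level score.

    errors: list of (category, severity) tuples parsed from the model's
    response, e.g. [("accuracy/mistranslation", "major"), ...]. An empty
    list (no-error) yields a score of 0.
    """
    return -sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)

# One major and two minor errors give a score of -7.
print(mqm_segment_score([("accuracy/mistranslation", "major"),
                         ("fluency/punctuation", "minor"),
                         ("style/awkward", "minor")]))
```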
## 3 Experiments

To measure the performance of the GEMBA-MQM metric, we follow the methodology and use the test data provided by the WMT22 Metrics shared task (Freitag et al., 2022), which hosts an annual evaluation of automatic metrics, benchmarking them against human gold labels.

We compare our method against the best-performing reference-based metrics of WMT22: MetricX XXL (non-public metric), COMET-22 (Rei et al., 2022), UniTE (Wan et al., 2022), BLEURT-20 (Pu et al., 2021), and COMET-20 (Rei et al., 2020). In addition, we also compare against "classic" string-based metrics BLEU (Papineni et al., 2002) and ChrF (Popović, 2015). Lastly, we compare against reference-less metrics of WMT22: CometKiwi (Rei et al., 2022), UniTE-src (Wan et al., 2022), COMET-QE (Rei et al., 2021), and MS-COMET-QE-22 (Kocmi et al., 2022). We contrast our work with other LLM-based evaluation methods such as GEMBA-DA (Kocmi and Federmann, 2023) and EAPrompt (Lu et al., 2023), conducting experiments using two GPT models: GPT-3.5-Turbo and the more powerful GPT-4 (OpenAI, 2023).

### Test set

The main evaluation of our work has been done on MQM22 (Freitag et al., 2022) and internal Microsoft data. Furthermore, a few days before the camera-ready deadline, the organizers of Metrics 2023 (Freitag et al., 2023) released results on the blind test set, showing performance on unseen data.

The MQM22 test set contains human judgments for three translation directions: English into German, English into Russian, and Chinese into English. The test set contains a total of 54 machine translation system outputs or human translations, totaling 106k segments. Translation systems are mainly from participants of the WMT22 General MT shared task (Kocmi et al., 2022). The source segments and human reference translations for each language pair contain around 2,000 sentences from four different text domains: news, social, conversational, and e-commerce. The gold standard for scoring translation quality is based on human MQM ratings, annotated by professionals who mark individual errors in each translation, as described in Freitag et al. (2021).

Figure 1: The general prompt for GEMBA-MQM omits the gray part, which performed subpar on internal data (we include it in GEMBA-locale-MQM). The "(user)" and "(assistant)" sections are repeated for each few-shot example.

The MQM23 test set is the blind set for this year's WMT Metrics shared task, prepared in the same way as MQM22, but with unseen data for all participants, making it the most reliable evaluation as neither participants nor the LLM could overfit to those data. The main difference from last year's iteration is the replacement of English into Russian with Hebrew into English. Also, some domains have been updated; see Kocmi et al. (2023).

Additionally, we evaluated GEMBA-MQM on a large internal test set, an extended version of the data set described by Kocmi et al. (2021). This test set contains human scores collected with source-based Direct Assessment (DA, Graham et al., 2013) and its variant DA+SQM (Kocmi et al., 2022). The test set contains 15 high-resource languages paired with English; specifically: Arabic, Czech, Dutch, French, German, Hindi, Italian, Japanese, Korean, Polish, Portuguese, Russian, Simplified Chinese, Spanish, and Turkish.
### Evaluation methods

The main use case of automatic metrics is system ranking: when comparing a baseline to a new model, when claiming state-of-the-art results, when comparing different model architectures in ablation studies, or when deciding whether to deploy a new model to production. Therefore, we focus on a method that specifically measures this target: system-level pairwise accuracy (Kocmi et al., 2021). The pairwise accuracy is defined as the number of system pairs ranked correctly by the metric with respect to the human ranking, divided by the total number of system pair comparisons. Formally:

\[\text{Accuracy}=\frac{|\,\text{sign}(\text{metric}\,\Delta)==\text{sign}(\text{human}\,\Delta)\,|}{|\,\text{all system pairs}\,|}\]

We reproduced all scores reported in the WMT22 Metrics shared task findings paper using the official WMT22 script.1 Reported scores match Table 11 of the WMT22 metrics findings paper (Freitag et al., 2022).

Footnote 1: [https://github.com/google-research/mt-metrics-eval](https://github.com/google-research/mt-metrics-eval)
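For reference, the pairwise accuracy defined above can be computed with a short sketch such as the following (ours, not the official WMT script):

```python
from itertools import combinations

def pairwise_accuracy(metric_scores, human_scores):
    """System-level pairwise accuracy (Kocmi et al., 2021).

    metric_scores, human_scores: dicts mapping system name -> score.
    Counts the fraction of system pairs for which the metric's ordering
    agrees in sign with the human ordering (ties are handled coarsely).
    """
    agree, total = 0, 0
    for sys_a, sys_b in combinations(sorted(metric_scores), 2):
        metric_delta = metric_scores[sys_a] - metric_scores[sys_b]
        human_delta = human_scores[sys_a] - human_scores[sys_b]
        total += 1
        agree += (metric_delta > 0) == (human_delta > 0)
    return agree / total
```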
Furthermore, the organizers of the Metrics shared task 2023 defined a new meta-evaluation metric based on four different scenarios, each contributing to the final score with a weight of 0.25:

* system-level pairwise accuracy;
* system-level Pearson correlation;
* segment-level Accuracy-t (Deutsch et al., 2023); and
* segment-level Pearson correlation.

The motivation is to measure metrics in the most general usage scenarios (for example, for segment-level filtering) and not just for system ranking. However, we question the decision to use Pearson correlation, especially on the system level. As Mathur et al. (2020) showed, Pearson correlation used for metric evaluation is sensitive when applied to small sample sizes (in MQM23, the sample size is as little as 12 systems); it is heavily affected by outliers (Osborne and Overbay, 2004; Ma et al., 2019), which need to be removed before running the evaluation; and it measures linear correlation with the gold MQM data, which are not necessarily linear to start with (especially the discrete segment-level scores, with error weights of 0.1, 1, 5, 25). Although it is desirable to have an automatic metric that correlates highly with human annotation behaviour and that is useful for segment-level evaluation, more research is needed regarding the proper way of testing these properties.

## 4 Results

In this section, we discuss the results observed on three different test sets: 1) MQM test data from WMT, 2) internal test data from Microsoft, and 3) a subset of the internal test data used to measure the impact of the MQM locale convention class.

### Results on MQM Test Data from WMT

The results on the blind set MQM23 in Table 1 show that GEMBA-MQM outperforms all other techniques on the three evaluated languages in the system ranking scenario. Furthermore, when evaluated in the meta-evaluation scenario, it achieves the third cluster rank.

In addition to the official results, we also test on the MQM22 test data and show results in Table 2. The main conclusion is that all GEMBA-MQM variants outperform traditional metrics (such as COMET or MetricX XXL). When focusing on the quality estimation task, we can see that the GEMBA-locale-MQM-Turbo method slightly outperforms EAPrompt, which is the closest similar technique. However, we can also see that our final technique GEMBA-MQM performs noticeably worse than the GEMBA-locale-MQM metric, even though the only difference is the removal of the locale convention error class. We believe this to be caused by the test set. We discuss our decision to remove the locale convention error class in Section 4.3.

### Results on Internal Test Data

Table 3 shows that GEMBA-MQM-Turbo outperforms almost all other metrics, losing only to CometKiwi-22. This shows some limitations of GPT-based evaluation on blind test sets. Due to access limitations, we do not have results for GPT-4, which we assume should outperform the GPT-3.5-Turbo model. We leave this experiment for future work.

### Removal of Locale Convention

When investigating the performance of GEMBA-locale-MQM on a subset of internal data (Czech and German), we observed a critical problem in this prompt regarding the "locale convention" error class: GPT assigned this class to errors not related to translation. It flagged Czech sentences with a locale convention error whenever the currency Euro was mentioned, even when the translation was fine; see the example in Table 4. We assume that it was using this error class to mark parts not standard for a given language, but more investigation would be needed to draw any deeper conclusions.

The evaluation on internal test data in Table 3 showed gains of 1.7% accuracy. However, when evaluating over 15 languages, we observed a small degradation of 0.2%. For MQM22 in Table 2, the degradation is even bigger. When we look at the distribution of the error classes over the fifteen highest-resource languages in Table 5, we observe that 32% of all errors for GEMBA-locale-MQM are marked as a locale convention, suggesting a misuse of this error class by GPT. Therefore, instead of explaining this class in the prompt, we removed it. This resulted in about half of the original locale errors being reassigned to other error classes, while the other half was not marked. In conclusion, we decided to remove this class as it is not aligned with what we expected to measure and with how GPT appears to be using the classes. Thus, we force GPT to classify those errors using other error categories. Given the different behaviour on internal and external test data, this deserves more investigation in future work.

\begin{table}
\begin{tabular}{l l}
Source & Vstupné do památky činí 16,50 Eur. \\
Hypothesis & Admission to the monument is 16.50 Euros. \\
GPT annot. & locale convention/currency: “euros” \\
\end{tabular}
\end{table}
Table 4: An example of the wrongly assigned error class “locale convention” as marked by GEMBA-locale-MQM. The translation is correct; however, we assume that the GPT model might not have liked the use of Euros in a Czech text because Euros are not used in the Czech Republic.

\begin{table}
\begin{tabular}{l l}
Metric & Acc. \\
\hline \hline
**EAPrompt-Turbo** & **90.9\%** \\
**GEMBA-DA-GPT4** & **89.8\%** \\
GEMBA-locale-MQM-Turbo & 89.8\% \\
EAPrompt-Turbo & 89.4\% \\
GEMBA-MQM-GPT4 & 89.4\% \\
GEMBA-DA-GPT4 & 87.6\% \\
GEMBA-DA-Turbo & 86.9\% \\
GEMBA-MQM-Turbo & 86.5\% \\
\hline
GEMBA-DA-Turbo & **86.5\%** \\
**MetricX XXL** & **85.0\%** \\
BLEURT-20 & 84.7\% \\
**COMET-22** & 83.9\% \\
**COMET-20** & 83.6\% \\
**UniTE** & **82.8\%** \\
CometKiwi & 78.8\% \\
COMET-QE & 78.1\% \\
**BERTScore** & 71.4\% \\
UniTE-src & 75.9\% \\
MS-COMET-QE-22 & 75.5\% \\
**chrF** & **73.4\%** \\
**BLEU** & **70.8\%** \\
\end{tabular}
\end{table}
Table 2: The system-level pairwise accuracy results for the WMT22 metrics task test set. Gray metrics need reference translations, which are not the focus of the current evaluation.
Table 3: System-level pairwise accuracy results for our internal test set. The first column is for all 15 languages, and the second is Czech and German only. All languages are paired with English.

## 5 Caution with "Black Box" LLMs

Although GEMBA-MQM is the state-of-the-art technique for system ranking, we would like to discuss in this section the inherent limitations of using "black box" LLMs (such as GPT-4) when conducting academic research.

Firstly, we would like to point out that GPT-4 is a proprietary model, which leads to several problems. One of them is that we do not know which data it was trained on; therefore, any published test data should be considered part of its training data (and results on such data possibly tainted). Secondly, we cannot guarantee that the model will remain available, or that it will not be updated in the future, meaning that any results from such a model are relevant only for the specific sampling time. As Chen et al. (2023) showed, the model's performance fluctuated and decreased over the span of 2023.

As this impacts all proprietary LLMs, we advocate for increased research using publicly available models, like LLama 2 (Touvron et al., 2023). This approach ensures that future findings can be compared to "black box" LLMs while also allowing comparison to "open" models.2

Footnote 2: Although LLama 2 is not fully open, its binary files have been released. Thus, when using it as a scorer, we are using the exact same model.

## 6 Conclusion

In this paper, we have introduced and evaluated the GEMBA-MQM metric, a GPT-based metric for translation quality error marking. This technique takes advantage of the GPT-4 model with a fixed three-shot prompting strategy. Preliminary results show that GEMBA-MQM achieves a new state of the art when used as a metric for system ranking, outperforming established metrics such as COMET and BLEURT-20.

We would like to acknowledge the inherent limitations tied to using a proprietary model like GPT. Our recommendation to the academic community is to be cautious when employing GEMBA-MQM on top of GPT models. For future research, we want to explore how our approach performs with other, more open LLMs such as LLama 2 (Touvron et al., 2023). Confirming superior behaviour on publicly distributed models (or at least their binaries) could open the path for broader usage of the technique in the academic environment.

## Limitations

While our findings and techniques with GEMBA-MQM bring promising advancements in translation quality error marking, it is essential to highlight the limitations encountered in this study.

* Reliance on Proprietary GPT Models: GEMBA-MQM depends on the GPT-4 model, which remains proprietary in nature. We do not know what data the model was trained on, or whether the same model is still deployed, and therefore whether results remain comparable over time.
As Chen et al. (2023) showed, the model's performance fluctuated throughout 2023.
* High-Resource Languages Only: As WMT evaluations primarily focus on high-resource languages, we cannot conclude whether the method will perform well on low-resource languages.

## Acknowledgements

We are grateful to our anonymous reviewers for their insightful comments and patience, which have helped improve the paper. We would like to thank our colleagues on the Microsoft Translator research team for their valuable feedback.
2307.01943
Hierarchical Planning and Policy Shaping Shared Autonomy for Articulated Robots
In this work, we propose a novel shared autonomy framework to operate articulated robots. We provide strategies to design both the task-oriented hierarchical planning and policy shaping algorithms for efficient human-robot interactions in context-aware operation of articulated robots. Our framework for interplay between the human and the autonomy, as the participating agents in the system, is particularly influenced by the ideas from multi-agent systems, game theory, and theory of mind for a sliding level of autonomy. We formulate the sequential hierarchical human-in-the-loop decision making process by extending MDPs and Options framework to shared autonomy, and make use of deep RL techniques to train an uncertainty-aware shared autonomy policy. To fine-tune the formulation to a human, we use history of the system states, human actions, and their error with respect to a surrogate optimal model to encode human's internal state embeddings, beyond the designed values, by using conditional VAEs. We showcase the effectiveness of our formulation for different human skill levels and degrees of cooperativeness by using a case study of a feller-buncher machine in the challenging tasks of timber harvesting. Our framework is successful in providing a sliding level of autonomy from fully autonomous to fully manual, and is particularly successful in handling a noisy non-cooperative human agent in the loop. The proposed framework advances the state-of-the-art in shared autonomy for operating articulated robots, but can also be applied to other domains where autonomous operation is the ultimate goal.
Ehsan Yousefi, Mo Chen, Inna Sharf
2023-07-04T22:21:51Z
http://arxiv.org/abs/2307.01943v1
# Hierarchical Planning and Policy Shaping Shared Autonomy for Articulated Robots

###### Abstract

In this work, we propose a novel shared autonomy framework to operate articulated robots. We provide strategies to design both the task-oriented hierarchical planning and the policy shaping algorithms for efficient human-robot interactions in context-aware operation of articulated robots. Our framework for the interplay between the human and the autonomy, as the participating agents in the system, is particularly influenced by ideas from multi-agent systems, game theory, and theory of mind for a sliding level of autonomy. We formulate the sequential hierarchical human-in-the-loop decision making process by extending MDPs and the Options framework to shared autonomy, and make use of deep RL techniques to train an uncertainty-aware shared autonomy policy. To fine-tune the formulation to a human, we use the history of the system states, human actions, and their error with respect to a surrogate optimal model to encode the human's internal state embeddings, beyond the designed values, using conditional VAEs. We showcase the effectiveness of our formulation for different human skill levels and degrees of cooperativeness using a case study of a feller-buncher machine in the challenging tasks of timber harvesting. Our framework is successful in providing a sliding level of autonomy, from fully autonomous to fully manual, and is particularly successful in handling a noisy non-cooperative human agent in the loop. The proposed framework advances the state of the art in shared autonomy for operating articulated robots, but can also be applied to other domains where autonomous operation is the ultimate goal.

_Keywords:_ Shared Autonomy · Human-Robot Interaction · Hierarchical Planning · Policy Shaping · MDP · Deep RL · cVAE · Articulated Robots

## 1 Introduction

### Background and Motivation

Shared autonomy is a framework that enables humans and robots to interact in a shared manner in order to accomplish certain goals. Shared autonomy has been utilized in a wide range of applications, from autonomous driving (Kiran et al. (2021)) to assistive robots (Losey et al. (2022)), in order to extend and enhance human capabilities (Annaswamy et al. (2023)). Indeed, the wide range of its applications attests to the importance of efficient human-robot interactions, as well as to the current state of co-existence between humans and increasingly intelligent robots.

Our interest in shared autonomy is motivated by its potential applications in the context of articulated machines, which are commonly operated by a human operator physically located in the machine. Operating such a machine, typically comprised of a mobile base and a large-scale multi-degree-of-freedom arm, involves multiple levels of hierarchy in the operator's decision making. These range from higher-level strategic decision making, such as path planning for the machine, down to lower-level decision making related to individual joint control for arm manipulation. In essence, this hierarchy is comparable to human decision making when driving a car (Guo et al. (2019)). A schematic of an articulated robot with a mobile base is shown in Figure 1.

In addition to the hierarchy of decision levels described above, the operation of articulated machines, especially those in industrial settings, such as excavators used in construction or feller-bunchers employed in timber harvesting, is also tied to the detailed know-how of their respective application domains.
This is one of the reasons why reaching a high operator skill level to efficiently utilize these machines can take years in some applications (Westerberg (2014), Lofgren (2009)). In this paper, we develop a general task-oriented hierarchical planning framework for the robot/machine, with human and AI interactions in mind, that extends beyond standard robotic planning techniques.

One of the main challenges in multi-level, real-world robotic applications is that, despite having good insight into the different operations, complete knowledge of the relevant application domain needed to achieve a fully autonomous system cannot be assumed. However, this should not prevent us from incorporating whatever knowledge we have into a versatile framework. Moreover, the type of applications considered here, involving large, extremely powerful machines, does not allow for hazardous trial-and-error experimentation in the field. This motivates the central research question addressed in this work: how to design a comprehensive shared autonomy architecture that allows different levels of autonomy in a human-in-the-loop framework for complex, hierarchical, robotic decision-making tasks. It is important to highlight that in this work, the agents, i.e., the human and the autonomy, _co-operate_ on one physical entity, the robot/machine. Once again, this bears similarities to autonomous driving scenarios (Amini et al. (2020)), and is unlike many other human-robot interaction scenarios where the two agents act on/through two separate physical entities (Hong et al. (2023), Dragan (2017)).

### State of the Art

In shared autonomy, the _arbitration_ of human and autonomous action commands, which jointly form the input to the robot/machine system, is of prominent importance. In this regard, the available schemes in the literature can be categorized into two main groups. The first is referred to as _policy blending_, where the human action and autonomous action are treated as two separate signals and an _arbitration function_ is used to decide how to blend these two signals (Dragan & Srinivasa (2013a)). Despite wide application due to its simplicity and efficacy, the policy blending approach has some drawbacks that stem from the fact that it attempts to blend two signals that might be different in nature and meaning (Javdani et al. (2018)). To address the latter issue, in (Losey et al. (2022)), the authors suggested a latent-action representation mapping the human's low-dimensional actions to high-dimensional robot inputs. They then combined the latter with the assistance signal in order to fine-tune the robot behavior. The other limitation is the inherent "predict-then-go" nature of the system architecture used to implement the policy blending approach (Javdani et al. (2018)). In some respects, the resulting autonomous agent in an inherently "predict-then-go" setting can be viewed as a Sisyphus or an _absurd_ hero (Camus (2018)), engaged in a perpetual though successful struggle that resets after every cue from the human agent.

The second group of methods is what we will call _policy shaping_, where the autonomous action (and policy) is shaped by taking into account the human action, as well as other available information, and it is the _only_ input to the robot. In other words, unlike policy blending, where the agents' inputs (i.e., human and autonomous) are combined in parallel, in policy shaping they are in series.

Figure 1: Schematic of an articulated robot with mobile base.
This approach does not suffer from the drawbacks of policy blending; however, it is computationally more expensive, and users report less comfort with it despite its better performance in certain scenarios (Javdani et al. (2018), Reddy et al. (2018)).

One of the strategies in the second category involves conditioning the robot action on the human signal. The authors of (Javdani et al. (2018)) defined an augmented (autonomous) state consisting of the (overall) state of the robot and the _human's goal_. It was assumed that the human policy acting on the augmented state is modeled and known, for which they used the Maximum Entropy (MaxEnt) Inverse Optimal Control (IOC) framework. The autonomous action is based on the overall robotic system state as well as the human action, and is defined such that it minimizes a cost function dependent on the human action and goal. It was furthermore assumed that a goal \(g\) is partially observable and that the human state is the same as the autonomous state. In (Reddy et al. (2018)), the authors developed a deep Reinforcement Learning (RL) algorithm to learn a model-free policy that maps the augmented state of the robot to the (autonomous) action. The augmented state comprised the state of the overall robotic system and the _human signal_. The latter was either the intended goal, inferred using Bayesian inference under an inverse RL scheme if such information was available, or the raw low-level human inputs otherwise. The purpose in (Reddy et al. (2018)) was to find an optimal autonomous action close to the human action so as to deliver high performance while keeping the human as a high-quality input source in the loop. It was demonstrated that incorporating an inference algorithm resulted in better overall performance despite the additional computational cost. Unlike (Javdani et al. (2018)), however, no model of the human policy was assumed in (Reddy et al. (2018)); the human signal simply entered the augmented state on which the model-free policy was conditioned.

### 1.3 Contributions

Our work is based on the premise that the ultimate goal of a fully autonomous system operating an articulated robot/machine is best achieved through a shared autonomy framework. Under such a framework, the _autonomous agent_ can progressively increase the level of autonomy while keeping the human in the loop to handle edge cases and, possibly, to learn from or teach the autonomous agent. We suggest that such a framework is particularly useful for applications which rely heavily on a skilled human to operate the robot/machine, where the operations involve a hierarchy of decision making, and where safety is important. With this perspective, the main contributions of this paper are as follows:

1. Development of a general, task-oriented hierarchical planning formulation for the operation of articulated robots/machines, with human interpretability and shared autonomy in mind;
2. Proposition of a novel shared autonomy architecture for human-in-the-loop tasks and policy shaping; this involves a design of hierarchical interactions and _arbitration_ between the autonomy and the human. Our work towards this contribution is particularly influenced by ideas from multi-agent systems, game theory, and theory of mind (Pynadath & Marsella (2005), Dragan (2017)) for a sliding level of autonomy;
3. Formulation of the MDPs and Options framework to enable deep RL for shared autonomy;
4. Application of the proposed shared autonomy framework to an industrially important application: timber harvesting. Thus, we fine-tune our formulation for the specific tasks of a timber-harvesting machine, the feller-buncher, which is a large-scale, hydraulically actuated, articulated robot with a specialized end-effector (Yousefi et al. (2022)).

This paper is organized as follows: We first introduce the definitions and nomenclature in §2. In §3, we provide the elements of shared autonomy. Then, in §4, we provide our problem statement and points of view on the problem. In §5, we discuss the case study application, timber harvesting, followed by a detailed analysis in §6. In §7, we present our results for the different sections. Finally, §8 concludes our work by reiterating the main ideas presented and suggesting directions for future work.

## 2 Definitions and Notation

To eliminate possible ambiguities and for maximum clarity, we begin by defining the relevant terminology in our work. As much as possible, we use terminology consistent with what is established in the relevant literature; however, we bear in mind that definitions often depend on the specific perspective and background of the authors. Figure 2 graphically depicts the various components of the system and the corresponding terms used to describe them.

**Agent**: This term refers to any entity capable of making decisions. In our problem, we have two agents:

* _Human Agent (HA)_: This term refers to a human operator, driver, or user, as a decision maker.
* _Autonomous Agent (AA)_: This term refers to the artificial high-level intelligence capable of decision making.

**Robot** or machine: the physical entity being operated by an agent; it is operated in the field and interacts with the _environment_. In our application, it is a mobile base with an articulated arm, such as a feller-buncher machine operated in the forest. We might use the term _robotic system_ equivalently, as a robot includes certain components internally, such as sensors and actuators. We use the term _Overall Robotic System_ when we refer to the robotic system and the environment together.

**Autonomous System**: This term refers to the autonomous agent (AA) and the robot together. The term _Overall Autonomous System_ is used when we include the environment as an element in this system.

**Human-Robot System**: This term refers to the human agent (HA) and the robot together. The term _Overall Human-Robot System_ is used when we include the environment as an element in this system.

**System**: This term refers to the human agent (HA), the autonomous agent (AA), and the robot together. The term _Overall System_ is used when we include the environment as an element in this system.

With the above system components clearly delineated, we next introduce the basic terminology for the learning aspect of the framework.

**State**: relevant variables defined for each element of the Overall System, in particular:

* State of the robot, \(s^{R}\): This refers to variables relevant to the robot itself, such as its pose and the remaining capacity of its end-effector, i.e., the end-effector capacity for maneuverability (\(CfM_{ee}\)).
* State of the environment, \(s^{E}\): This state defines the different elements in the environment surrounding the robot, such as the objects and obstacles, as well as certain task-related elements, depending on the type of task. We will discuss these in more detail in the subsequent sections.
* State of the Autonomous Agent, \(s^{A}\): This is the _designed_ representation of the state of the Overall System by the architect of the Autonomous Agent, based on the foregoing state elements as well as task-related elements. This representation forms the basis upon which the autonomous agent acts.
* State of the Human Agent, \(s^{H}\): This refers to the human's representation of the state of the Overall System. We do not assume knowledge of \(s^{H}\). Also, we do not assume equivalency between \(s^{H}\) and \(s^{A}\), as will be discussed later.

**Action**: Each of the decision making agents, i.e., HA and AA, can also _act_ in a shared autonomy setting, depending on the collaboration scheme and level of autonomy. The action is relayed directly to the robot as input. We will use the following terms:

* Action of Autonomous Agent or simply **Autonomous Action**, \(a^{A}\): In a shared autonomy setting, this action will be of assistive nature, and we might refer to it as the _Assistive Signal_.
* Action of Human Agent or simply **Human Action**, \(a^{H}\): This refers to the human action using any input device, such as joysticks and pedals.

Figure 2: Definition of different terms in our work.

**Policy**: Each of the decision-making/acting agents in a shared autonomy setting has a policy according to which they act. We will use Human (Agent) Policy, \(\pi_{H}\), and Autonomous (Agent) Policy, \(\pi_{A}\), to refer to these policies.

## 3 Elements of Shared Autonomy

### 3.1 Hierarchical task-oriented robot planning & design variables

As alluded to earlier, we consider the robot planning problem in terms of tasks and functions: this is advantageous when interfacing multiple agents, including a human in the context of shared autonomy, as well as sliding levels of autonomy (Jorda et al. (2022)). Moreover, this task-oriented approach makes it possible to incorporate the inherent hierarchy of tasks and, consequently, the hierarchy in human-robot interactions (Guo et al. (2019)).

Figure 4: Temporal abstraction of hierarchical robot planning policies. The policies repeat over time and with different time scales as shown with "...".

Figure 3: Hierarchical task-oriented breakdown (black arrows) of robot planning for articulated robots. The green arrows show the inter-dependencies of the (sub-)policies.

The conceptual representation of our task-oriented hierarchical perspective on the robot planning problem for an articulated robot/machine is shown in Figure 3. Here, \(\pi_{RP}\) denotes the overarching, master policy for robot planning, and it is broken down (black arrows) into two general tasks or policies and their associated sub-policies as follows:

\(\pi_{M}\): _policy to move the arm._ This is further categorized into two hierarchical levels:

* \(\pi_{MHL}\): policy for high-level arm manipulations that includes \(n_{m}\) sub-policies, such as end-effector path planning and scheduling of arm motions.
The specific definition of each sub-policy depends on the particular application domain, beyond standard robotic planners;
* \(\pi_{MLL}\): policy for low-level arm manipulations that includes low-level control, e.g., joint control.

\(\pi_{B}\): _policy to move the base of the robot/machine._ This is further categorized into two hierarchical levels, defined similarly to the arm motion policies:

* \(\pi_{BHL}\): policy for high-level motion of the base that includes \(n_{b}\) sub-policies, such as the classic example of hierarchical room-to-room robot planning (Precup & Sutton (1997));
* \(\pi_{BLL}\): policy for low-level motion of the base that includes base motion control.

The green arrows in Figure 3 show the inter-dependencies of the (sub-)policies. It should be noted that there might be policies that require coordinated or combined planning of the robot arm and base. These would also fall under the umbrella of the overarching robot planning policy.

A task-oriented planning (and scheduling) problem can be formulated as a sequential decision making problem that optimizes for certain task-specific metrics (Yousefi et al. (2022)). We use the analogy between an _option_ in the Options framework (Sutton et al. (1999), Pateria et al. (2021)) and a task in our robot planning: just like a task may involve multiple sub-tasks, an option generally involves multiple actions. The Options framework encodes the generalized actions as options. With this analogy, we invoke the Markov decision process (MDP) framework, which provides a model for sequential decision making processes, considering the agents' actions while taking into account the stochasticity of the process. Since the Options framework itself is built on semi-MDPs, which extend the definition of MDPs to include a sense of _time_, it enables our shared autonomy formulation to accommodate robotic tasks with different durations, as well as hierarchies. Moreover, it has been shown that the behavior of a human agent operating an articulated robot can be described by a _well-structured sequence_ of repetitive (sub-)tasks (Westerberg (2014)). Our framework, therefore, is designed to take into account the _spatiotemporal_ aspects of a _shared_ policy in providing the abstraction of shared autonomy. An example of the temporal progression of a sequence of hierarchical tasks is shown in Figure 4.

From a graphical probabilistic model point of view, the policy of an autonomous agent can be depicted as in Figure 5, where \(z_{0}\) encodes the _task_- or _function_-specific state variables, which augment the classic robot-related states \(s\) into \(\hat{s}\). Research on how to define \(z_{0}\) is quite extensive and is also application specific, as illustrated by the work in (Losey et al. (2022), Reddy et al. (2018), Dragan & Srinivasa (2013_b_)). It could be argued that there is neither a unique formulation nor a methodology to define \(z_{0}\), as there is no unique way of performing the same task. In tasks involving pick and place operations, using information about the goal space to define \(z_{0}\) has been shown to be a good choice (Losey et al. (2022)). We will demonstrate the significance of this choice through an example in §7.1. The advantage of our framework is that \(z_{0}\) is defined for shared autonomy by design, which makes the robot operation interpretable as well as efficient. The mutual interpretability attribute is particularly important for tasks involving a human agent in the loop and for those with limited domain knowledge.
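To make the preceding formulation concrete, the following is a minimal Python sketch of how an option \(o=(I,\pi,\beta)\) and its semi-MDP execution can be encoded; the class and function names are illustrative and not part of our implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Option:
    """An option o = (I, pi, beta) from the Options framework."""
    name: str
    initiation: Callable[[Any], bool]   # I: states where the option may start
    policy: Callable[[Any], Any]        # pi: intra-option policy, state -> action
    termination: Callable[[Any], bool]  # beta: predicate for stopping the option

def run_option(env, state, option, max_steps=100):
    """Execute one option until its termination fires: one semi-MDP step.

    A master policy (e.g., pi_RP) would repeatedly select options and call
    this function, accumulating the option-level rewards.
    """
    assert option.initiation(state), f"option {option.name} not available here"
    total_reward, steps = 0.0, 0
    while steps < max_steps:
        action = option.policy(state)
        state, reward, done, info = env.step(action)
        total_reward += reward
        steps += 1
        if done or option.termination(state):
            break
    return state, total_reward, steps
```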
It should be noted that our framework is not necessarily a "human-knowledge-based" method. Although having insight into how humans perform a task helps with the understanding of the task, especially for applications of human-operated machines, this is not a requirement for our framework, but a matter of interpretability to the human who is in the loop.

Figure 5: Graphical probabilistic model for a policy in task-oriented robot planning.

With the understanding presented above, we now discuss how we interface the agents in the system, i.e., the human and the autonomous, given the hierarchy of tasks. Our view of the overall system involving a human and an autonomous agent is that of a multi-agent system with gamified interactions, and we design our shared autonomy architecture accordingly, as discussed next.

### 3.2 Shared autonomy architecture & design variables

A high-level block diagram of the proposed shared autonomy scheme from the control systems point of view is shown in Figure 6. The architecture of each of these blocks and how they interface with each other are the most challenging aspects of shared autonomy design. Figure 7 depicts our proposed viewpoint of a shared autonomy framework as a graphical probabilistic model. With the task-representative variables, \(z_{0}\), as introduced in §3.1, we now introduce the human-representative design variables, \(z_{1}\), which encode the human's internal states. There is substantial literature on conceptualizing the human aspect in the context of shared autonomy, whether through explicit model assumptions (for example, Dragan & Srinivasa (2013_a_)) or model-free approaches (for example, Reddy et al. (2018)). In the literature to date, the inferred human belief over the goals of the specific task is often used as one of the human internal states, even though inference of specific goals may not always be feasible. Arguably, the optimality of the operator or, equivalently, the amount of noise in their actions, is another important characteristic that we are interested in quantifying and utilizing for smooth, user-tuned shared autonomy. Thus, human operator analysis with minimal assumptions about them is an important aspect of our work, to be discussed in §4.1. The flow of information to/from a human agent is shown with blue arrows in Figure 7. The dashed lines are related to the human analysis process that will be discussed in §4.1.

Continuing with Figure 7, we introduce \(z_{2}^{RP,A}\) to encode _pre-training_ related state variables, which represent prior training and knowledge. In our framework, we refer to pre-training as the process of training a fully autonomous agent using a model similar to Figure 5. This is the first step in shared autonomy design, ensuring that the state definition and the model are capable of performing a task autonomously. Moreover, the pre-trained model will allow us to look into the human's internal state and/or their fine-tuning and noise through the lens of a structured task. The information gained with this process enters the model via the green arrows in Figure 7.

Figure 6: Block diagram of the general shared autonomy scheme.

Figure 7: Graphical probabilistic model for shared autonomy. The dashed arrows are related to the cVAE and human analysis discussed in §4.1. The green arrows are related to including the pre-trained model in the shared autonomy operation. The blue arrows designate the flow of signals to the human and our inferred internal state variable, \(z_{1}\).
To summarize, we employ three categories of variables in our shared autonomy framework:

1. \(z_{0}\): task/operation-specific state variables, representing the domain knowledge,
2. \(z_{1}\): human's internal states,
3. \(z_{2}\): pre-training state variables.

These three categories can be considered as pillars of how humans learn to perform a task: we bring in our past knowledge and experiences (category 3), fine-tune those for a particular series of tasks towards optimality based on the task requirements (category 1), and personalize how we proceed with taking actions (category 2). In a shared autonomy context, this equivalence is helpful, as it allows the system to be mutually understandable to both the human and the autonomous agents.

Our proposed shared autonomy framework can also be considered an instance of computational human-robot interaction (Thomaz et al. (2016)). The most important problem that we address is how to design a hierarchical structure for shared autonomy so as to facilitate, and further, to make seamless, this complex interaction of multiple hierarchical systems, as shown in Figure 8.

In analyzing human behavior, it is important to note that, in general, it is not valid to assume that the state definition \(s_{t}\) is the same for the human agent and the autonomous agent. The reason lies in the fact that we do not have access to the internal perception and state definition of a human, i.e., \(s_{t}^{H}\). Therefore, assuming that a human policy maps from \(s_{t}^{A}\) to \(a_{t}^{H}\) is, in general, conceptually inaccurate. In this paper, we assume that a human has an internal state comprised of \(s_{t}^{A}\) and \(z_{1,t}\); hence, \(s_{t}^{H}\triangleq(s_{t}^{A},z_{1,t})\), where \(z_{1,t}\) denotes the partially observable part of the human's state. This point becomes even more important when we have a hierarchy of tasks. The notion of including the human signal in the augmented state definition, as proposed in (Reddy et al. (2018)), makes sense in this light. If the human signal is defined as the low-level actions, then this implicitly enforces the Markovian assumption, i.e., the history is ignored. In contrast, it can be argued that conditioning the _shared policy_ \(\pi_{sh}\) on a rich signal from the human, such as the goal space and, if feasible, the intended goal, is critical, as it effectively reflects the human's history of actions. This argument is also supported by the results in (Reddy et al. (2018)) for an unstructured user input, where raw (i.e., unconditioned) low-level human inputs were used and poor performance was reported. Consequently, the performance of a collaboration scheme highly depends on a relatively successful encoding of the human's internal state variable(s) \(z_{1}\). It is worth noting that goal inference algorithms, in general, are based on a history of human inputs. Goal/intent inference requires knowledge of the goal space, which, in turn, requires domain knowledge. We encode the latter in \(z_{0}\) without assuming direct knowledge of the intended goal, but only as a measure of the goal space. From another point of view, using the human's internal signal as input to the autonomous policy provides a mechanism to _synchronize_ human actions and the resultant autonomous actions over a finite horizon that ends with reaching a goal. Otherwise, the performance of collaboration will be poor, as was the case in the results reported in (Reddy et al. (2018)) when low-level human input was used in the autonomous policy shaping.
Figure 8: Simplified representation of the hierarchical shared autonomy architecture.

## 4 Problem Statement & Modelling

We now present the mathematical model of our shared autonomy architecture in compact form. A typical trajectory \(\tau\) of sequential state-actions in the context of human-robot interaction takes the following form:

\[\tau=\{s_{1}^{A},a_{1}^{H},a_{1}^{A},\ldots,s_{T}^{A},a_{T}^{H},a_{T}^{A}\}, \tag{1}\]

where \(s_{t}^{A}\), \(a_{t}^{A}\), and \(a_{t}^{H}\) denote the defined state, the autonomous agent action, and the human agent action, respectively, at any time-step \(t\); \(T\) denotes the time horizon for the task at hand. The action can be extended to an _option_ wherever needed. In this work, we do not impose a Markovian constraint on the human action, and thus include a history of states in the human policy. Letting \(n_{h}\) represent the number of steps of the human's state history, it can be shown that the probability distribution of the trajectory is given by:

\[p(\tau)=p(s_{1}^{A})\prod_{t=1}^{T}\pi_{H}(a_{t}^{H}|\overline{s}_{t}^{A})\pi_{A}(a_{t}^{A}|a_{t}^{H},s_{t}^{A})p(s_{t+1}^{A}|s_{t}^{A},a_{t}^{A}), \tag{2}\]

where \(\pi_{H}\) and \(\pi_{A}\) are the human and autonomous policies, respectively. The state variable \(\overline{s}_{t}^{A}=\{s_{t}^{A},\ldots,s_{t-n_{h}}^{A}\}\) comprises \(n_{h}\) steps of history of the state trajectory. Note that \(t\geq n_{h}\). The derivation of (2) is given in Appendix A. We take a look at each of the terms in (2) in more detail.

### 4.1 Analysis of Human Agent & Policy \(\pi_{H}\)

As already noted, we do not assume equivalence between the state definitions of the human and autonomous agents. Moreover, we do not assume any direct knowledge of the human's policy or their internal variables, \(z_{1}\). To address this knowledge gap and to analyze the \(\pi_{H}(a_{t}^{H}|\overline{s}_{t}^{A})\) term in (2), we propose to explicitly encode the human's internal state variable \(z_{1}\). This explicit encoding offers a deeper insight into the individual human agent; it helps to encode the differences between human agents and, ultimately, enables a faster tuning of the shared autonomy framework to individual human operators. It also facilitates a model that is more robust against human _noise_ levels. We provide specific examples of this point in §7.3-7.4.

Let \(n_{s}\) denote the dimension of the autonomous state, with \(\overline{S}\subset\mathbb{R}^{n_{h}\times n_{s}}\). We learn an encoder \(\phi_{H}:\{\overline{E},\overline{A},\overline{S}\}\rightarrow\mathbf{Z}_{1}\), with the human's latent state space \(\mathbf{Z}_{1}\subset\mathbb{R}^{d_{z_{1}}}\), \(d_{z_{1}}<(n_{h}\times n_{s})\), from \(\overline{S}\), conditioned on the human's \(n_{h}\) steps of history of actions \(\overline{a}^{H}\subset\overline{A}\) as well as the history of errors of their actions \(\overline{e}^{H}\subset\overline{E}\) with respect to those of a known surrogate optimal agent, which might be another human or a pre-trained model. Considering \(n_{a}\) discrete actions, \(\overline{E}\subset\mathbb{N}_{0}\) and \(\overline{A}\subset\mathbb{N}_{0}\). The error, in general, is defined as the angular difference between the denoted actions, as follows:

\[\angle\boldsymbol{e}^{H}=\arccos\left(\frac{\boldsymbol{a}^{H}\cdot\boldsymbol{a}^{*}}{\|\boldsymbol{a}^{H}\|\,\|\boldsymbol{a}^{*}\|}\right), \tag{3}\]

where \(\boldsymbol{a}^{*}\) and \(\boldsymbol{a}^{H}\) are the vectors representing the action of the surrogate optimal agent and that of the human, respectively.
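As an illustration of (3), the following sketch computes the angular error for discrete actions, assuming the one-hot encoding we use for discrete variables (cf. §7.2); with one-hot vectors the error is binary, either 0 or \(\pi/2\).

```python
import numpy as np

def action_error_angle(a_human: int, a_star: int, n_actions: int = 4) -> float:
    """Angular error of eq. (3) between the human action and the surrogate
    optimal action, with discrete actions represented as one-hot vectors."""
    eh = np.eye(n_actions)[a_human]
    es = np.eye(n_actions)[a_star]
    cos = np.dot(eh, es) / (np.linalg.norm(eh) * np.linalg.norm(es))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# With one-hot encodings the error is 0 on agreement and pi/2 on disagreement:
errors = [action_error_angle(ah, 2) for ah in [2, 0, 2, 3]]  # [0, pi/2, 0, pi/2]
```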
Moreover, we learn a decoder \(\theta_{H}:\mathbf{Z}_{1}\times\overline{S}\rightarrow\{\overline{A},\overline{E}\}\), with the following objective function for the cVAE optimization process:

\[\mathcal{L}_{i,H}=E_{z_{1}\sim q_{\phi_{H}}(z_{1}|\overline{e}_{i}^{H},\overline{a}_{i}^{H},\overline{s}_{i})}\left[\log p_{\theta_{H}}(\overline{e}_{i}^{H},\overline{a}_{i}^{H}|\overline{s}_{i},z_{1})\right]-D_{KL}\left(q_{\phi_{H}}(z_{1}|\overline{e}_{i}^{H},\overline{a}_{i}^{H},\overline{s}_{i})\,\|\,p(z_{1})\right), \tag{4}\]

where \(\phi_{H}\) and \(\theta_{H}\) are the encoder and decoder networks, respectively, as shown in Figure 9. The two terms in (4) are the reconstruction error and the KL-divergence, respectively (Kingma & Welling (2013)).

Figure 9: cVAE architecture to encode a measure of human performance in latent variable \(z_{1}\).

### 4.2 Analysis of Autonomous Agent and Policy \(\pi_{A}\)

Based on (2) and the discussions regarding the encoding of the distribution of \(z_{1}\), the policy for the autonomous agent tuned to the human agent can be written as:

\[\pi_{A}=\pi_{A}(a_{t}^{A}|a_{t}^{H},z_{1,t},s_{t}). \tag{5}\]

Following the logic of §4.1, we now have access to the distribution of \(z_{1}\). Based on (5), we set up our shared autonomy framework, shown as a graphical probabilistic model in Figure 10. It is worth noting that the degree to which a human agent participates in sharing the operation of the system depends on: (a) the level of autonomy desired for the system, (b) domain- and task-specific knowledge, and (c) the extent of human presence. This shared autonomy framework facilitates a sliding level of autonomy. If we have a semi-autonomous agent, a shared autonomy framework is _needed_ to assist the human agent in reaching a goal (Dragan & Srinivasa (2013_a_)). The human agent's involvement, therefore, is in the training phase as well as the testing/operational phases. Moreover, we utilize the gamified human-robot interaction as well as game theoretic approaches in designing the reward function and the interaction architecture. The objective of this shared autonomy setting is to provide a near-optimal input to the robot with respect to a reward function comprised of two contributions:

* \(R_{1}\): Reward from robot planning, which includes task-related and obstacle avoidance rewards,
* \(R_{2}\): Closeness to the human input, depending on the signal \(z_{1}\).

Hence, the general form of the reward function is as follows:

\[R=c_{1}R_{1}+c_{2}R_{2}=\mathbf{c}^{T}\mathbf{R}, \tag{6}\]

where \(\mathbf{c}\) assembles the dynamic level-of-autonomy coefficients, indicating how much autonomy is required and how successful it has been; the choice of the two coefficients allows for an efficient sliding level of autonomy. From a multi-agent perspective, we model the agents' interactions and resolve possible issues in two ways: (1) policy shaping, which considers a serial architecture of the agents, and (2) strategic assignment of the coefficients \(\mathbf{c}\). This is a novel perspective on the problem of multi-agent shared autonomy with a human as an agent in the loop. In summary, Figure 11 shows our point of view on how the closed-loop block diagram of Figure 6 should be designed in a framework that is interface-able with the human as another agent for hierarchical robotic tasks.
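For concreteness, the following is a minimal PyTorch sketch of the cVAE of §4.1, i.e., the encoder \(q_{\phi_H}\), the decoder \(p_{\theta_H}\), and the negative of the objective (4) as a minimization target; the layer sizes and the Gaussian (mean-squared-error) reconstruction term are illustrative assumptions, not our exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HumanCVAE(nn.Module):
    """Sketch of eq. (4): encoder q_phi(z1 | e, a, s), decoder p_theta(e, a | s, z1)."""
    def __init__(self, dim_s, dim_a, dim_e, dim_z=5, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_s + dim_a + dim_e, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, dim_z)
        self.logvar = nn.Linear(hidden, dim_z)
        self.dec = nn.Sequential(
            nn.Linear(dim_s + dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_a + dim_e))

    def forward(self, s, a, e):
        h = self.enc(torch.cat([s, a, e], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z1 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(torch.cat([s, z1], dim=-1))              # decode (a, e) from (s, z1)
        return recon, mu, logvar

def cvae_loss(recon, target, mu, logvar):
    # Reconstruction term plus KL(q(z1|.) || N(0, I)): the negative of eq. (4).
    rec = F.mse_loss(recon, target, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```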
Figure 10: Hierarchical shared autonomy architecture for the complete graphical model of decision making. The middle box shows the main graphical probabilistic representation of the proposed shared autonomy framework. The box on the left shows the processing of the human-tuned variable \(z_{1}\), and the box on the right shows the processing of the pre-training related variable, \(z_{2}\).

## 5 Case Study Application: Timber Harvesting

As mentioned in §1, the motivation for our research on shared autonomy stems from its potential applications to machines employed in the Canadian timber harvesting industry. These machines, such as the feller-buncher machine used in our case study (see Figure 12), are comprised of a mobile base and a crane-like, hydraulically actuated manipulator arm with a specialized end-effector. In the case of the feller-buncher, the latter is designed for cutting trees, picking them up, and depositing them in a storage location. Currently, machines employed for timber harvesting rely heavily on direct operator intervention, sometimes at the level of controlling individual joints of the crane. In fact, the current state of autonomy in the industry is much lower than in other comparable industries, such as mining (Lindroos et al. (2017), Fukui et al. (2017)).

There are several drivers for increasing the autonomy of the machines employed in timber harvesting, such as improving the productivity of the operations, which is significantly affected by human performance: for example, it has been reported that the productivity of a harvester is 25-40% dependent on the skill of the operator (Lofgren (2009)). In addition, the harsh machine and environmental conditions contribute to human fatigue and health issues and compromise operator safety, all of these factors exacerbating the labour shortage for machine operators. Moreover, it can take years of working in the field to achieve the skill level necessary for operating the machine at a high level of productivity. This becomes apparent if one considers, for example, that the operator of a harvesting machine is required to perform, on average, 24 functions per tree and to make 12 decisions (Lofgren (2009)). We suggest that shared autonomy can provide the way forward to address both the issue of productivity and that of operator training. The human-in-the-loop approach also addresses other challenges of complex robotic tasks, in particular the limited knowledge of their details and ensuring a certain level of safety.

We consider the operation of a feller-buncher at a particular fixed location in the forest (i.e., fixed base), as defined by the operation _region_ in the ground plane, illustrated in Figure 14; the region is bounded by the minimum and maximum reach of the robot end-effector, centered at the location of the mobile base. An actual photo of an operation region is shown in Figure 15 for comparison. We use the term Capacity for Maneuverability (\(CfM_{arm}\in[0,1]\)), which is the remaining actuation capacity for a human intervention (Eraslan et al. (2019)). A second capacity is defined for the end-effector, since it can pick up several cut trees at a time: \(CfM_{ee}\in[0,1]\) quantifies the remaining capacity of the end-effector to carry objects.

Figure 11: Complete representation of the hierarchical shared autonomy architecture based on Figure 8, utilizing hierarchical MDPs (hMDP).

With the view of describing the feller-buncher operation as an MDP, we divide each region
into _cells_, which discretely encode the location of the machine end-effector \(p_{EE}\), the objects in the region (i.e., trees), the goal location(s), obstacles, and the storage location.

Figure 12: Example of robot/machine: a feller-buncher machine.

In (Yousefi et al. (2022)), we proposed a human-inspired planning algorithm using a concept we called the _Envelope of Manipulation_, \(E^{M}\), which is a curve connecting _key points_ \(E_{i}\) (see Figure 14), assembled in a set \(\mathcal{E}\).

Figure 14: Setup for robot task planning with details.

Figure 13: (a) Robot performing option \(O_{1}\) at a key point; (b) Robot end-effector trajectory on \(E^{M}\) and to/from it during option \(O_{2}\). \(O_{1}\) encodes motion along the envelope from cell to cell, and \(O_{2}\) encodes operations inside each cell.

Figure 15: Top view photo of a harvester machine (similar in form to a feller-buncher) working in a forest site. The photo shows the analogy to our setup in Figure 14.

Taking the perspective of a human operator, based on our observations in the field, we identified two high-level options in the options space \(\mathcal{O}\): 1) \(O_{1}\in\mathcal{O}\), which encapsulates the motion along the envelope between two cells, and 2) \(O_{2}\in\mathcal{O}\), which encodes the operations inside each cell. The operator may group several objects into a _cluster_ in order to cut and grab several trees together before moving on. The envelope \(E^{M}\) can take any shape; however, based on our field observations, it is well approximated by a circular arc. It is thus possible to encode a sequence of operations as a sequence of the two aforementioned options. The problem of robot planning, therefore, turns into optimizing this sequence of options. The overall task of robot planning includes a hierarchy of subtasks, namely, the envelope options (i.e., \(O_{1}\) and \(O_{2}\)), and moving the arm or crane along the specified trajectories (MA). We will discuss these further in the subsequent sections.

## 6 Shared Autonomy Design for Feller-Buncher Robot

### 6.1 Hierarchical Robot Planning

Following §3.1, we break down the relevant tasks into a hierarchical planning and execution scheme with the levels listed in Table 1, where \(\pi_{\mathcal{E}}\) is an instance of \(\pi_{MHL}\), and \(\pi_{MA}\) is an instance of \(\pi_{MLL}\). As noted in §5, in the current scheme of operations, a human operator uses the arm to manipulate (e.g., cut, grab, and deposit) the objects in the operational region of a particular base location. The three levels are defined as follows:

| Policy level | Description |
| --- | --- |
| \(\pi_{RP}\) | Overarching policy to plan a robot motion in a task-oriented manner |
| \(\pi_{\mathcal{E}}\) | Policy encapsulated in Envelope of Manipulation (\(E^{M}\)) actions, an instance of \(\pi_{MHL}\) |
| \(\pi_{MA}\) | Policy to Move Arm (MA), an instance of \(\pi_{MLL}\) |

Table 1: Breakdown of robot planning tasks into a hierarchical planning and execution scheme with different levels.

#### 6.1.1 \(\pi_{RP}\): Overarching policy to plan a robot motion in a task-oriented manner

This is the global or master policy, which includes the policies of the lower levels and collects the corresponding rewards; this policy is executed once per _region_.
Figure 16: Temporal abstraction of hierarchical shared autonomy policies. This figure shows how the _Shared Policy_ \(\pi_{sh}\) is configured as a higher-level policy on top of \(\pi_{RP}\) in the general spatiotemporal scheme. This is analogous to Figure 4, specialized to the feller-buncher robot/machine related tasks.

#### 6.1.2 \(\pi_{\mathcal{E}}\): Policy for Envelope of Manipulation (\(E^{M}\))

The definition of this level in our hierarchy was motivated by our observations of expert operators: they first implicitly carry out a clustering of trees by grouping subsets of objects into clusters around the machine and, subsequently, interact with the objects in clusters. Each cluster can include multiple objects and span one or more cells. We designate each _cell_ with a _key point_ \(E_{i}\), and we have a set of key points \(\mathcal{E}\), defined as \(\mathcal{E}=\{E_{0},E_{1},\ldots,E_{n}\}\), where \(E_{0}\) and \(E_{n}\) correspond to the initial end-effector location at the start of operation and the storage point, respectively. The planning problem is, in fact, a sequential decision making problem: how to optimally sequence the options \(O_{1}\) and \(O_{2}\), and how to group the objects next to a key point into cluster(s).

#### 6.1.3 \(\pi_{MA}\): Policy to Move Arm (MA)

This is the lowest-level policy in our hierarchy, and it directly interacts with the environment. This policy takes the destination key point as its goal, plans a smooth trajectory for the end-effector to reach it, and executes the motion of the arm. Standard robotic tools can be employed to execute these subtasks. Although we do not design this policy directly in the present implementation, we include the reward terms related to it in robot planning.

To find the optimal policy for Robot Planning, \(\pi_{RP}^{*}\), we build on our previous proof-of-concept formulation in (Yousefi et al. (2022)), combined with the background provided in §5. For that purpose, we construct a Markov Decision Process (MDP) framework, \(MDP^{RP}\), more general than that of (Yousefi et al. (2022)), as follows:

**The Environment or The World**: We have tailored a commonly employed grid world to our specific robotic application, for a generalizable MDP backbone. As schematically shown in Figure 14, this environment is comprised of \(n_{c}\times n_{r}\) cells, circumferentially arranged around the robot base; \(n_{c}\) and \(n_{r}\) are the circular and radial dimensions of the grid, respectively, chosen to accommodate the desired resolution. An important element of the environment definition which directly affects \(\pi_{RP}\) is how to handle the objects surrounding the robot. These objects, depending on the state of the system, can be either obstacles or subgoals, and this categorization changes dynamically. As illustrated in Figure 17, we first construct the relevant _Spaces_ as follows:

* Objects space \(\mathcal{S}_{oj}\): includes all objects.
* Subgoals space \(\mathcal{S}_{sg}\): a subset of \(\mathcal{S}_{oj}\); it includes all objects accessible to the robot (and hence, not blocked by other objects from the robot's reach).
The augmented Subgoal space, \(\hat{\mathcal{S}}_{sg}\), is constructed by adding the storage point, conditioned on the end-effector capacity for maneuverability \(CfM_{ee}\), with the following logic:

* if \(CfM_{ee}=0\), \(\hat{\mathcal{S}}_{sg}=\{E_{n}\}\)
* if \(0<CfM_{ee}<1\), \(\hat{\mathcal{S}}_{sg}=\{\mathcal{S}_{sg},E_{n}\}\)
* if \(CfM_{ee}=1\), \(\hat{\mathcal{S}}_{sg}=\{\mathcal{S}_{sg}\}\).
* Obstacles space \(\mathcal{S}_{ot}\): defined by removing the augmented Subgoal space from the Objects space.

Therefore, the three object spaces are related through:

\[\mathcal{S}_{oj}=\mathcal{S}_{sg}\cup\mathcal{S}_{ot} \tag{7}\]

**Constraints**: In general, the workspace of the robot is limited by (a) the boundaries of the grid, which are in turn defined by the minimum and maximum values of the reachability of the robot, \(CfM_{arm}\), and (b) obstacles located next to the end-effector, as well as those obstructing its arm movement, as depicted in Figure 18. This environment, therefore, includes the following constraints, built in and dynamically updated:

* Robot workspace and related constraints, such as the capacity for maneuverability of the arm and the end-effector, i.e., \(CfM_{arm}\) and \(CfM_{ee}\), respectively. This can be extended to stability-related constraints as well.
* Path-planning related constraints, such as obstacles.

Figure 17: Objects \(\mathcal{S}_{oj}\), obstacles \(\mathcal{S}_{ot}\), and subgoals \(\mathcal{S}_{sg}\) space characterization for the environment in \(\pi_{RP}\).

With the definition of the world, the categorization of object spaces, and the constraints, we are now ready to define the main MDP elements, which in turn encode the path planning problem with obstacle avoidance, as discussed earlier.

**State (or Observation) Space**. As shown in Figure 19, the observation space is a discrete space with three types of observations:

1. \(s_{1}^{RP}\): discrete 2D position of the robot end-effector, [angular position, radial position], in the ranges \(0,\ldots,n_{c}-1\) and \(0,\ldots,n_{r}-1\), respectively,
2. \(s_{2}^{RP}\): payload indicator \(0,\ldots,p_{max}\); it is related to \(CfM_{ee}\) through \(CfM_{ee}=(p_{max}-s_{2}^{RP})/p_{max}\), where \(p_{max}\) is the maximum number of trees/objects that the end-effector is able to carry,
3. \(s_{3}^{RP}\): contains the circular distance from the current cluster/key point to all subgoals with respect to the robot end-effector in the CCW direction. If no subgoal is present at a location, -1 is returned. Therefore, \(s_{3}^{RP}\) effectively augments the state of the robot (comprised of \(s_{1}^{RP}\) and \(s_{2}^{RP}\)) with the information related to the subgoals space. It is, in fact, the variable \(z_{0}\) introduced earlier, since it encodes the goal space information of the task.

Therefore, we denote the state at this level with \(s^{RP}\), defined as follows:

\[s^{RP}\triangleq(s_{1}^{RP},\ s_{2}^{RP},\ s_{3}^{RP}). \tag{8}\]

Together with the spaces defined above, a state \(s^{RP}\) encodes three features, depending on the scenario:

1. obstacle, if \(s^{RP}\in\mathcal{S}_{ot}\),
2. sub-goal, if \(s^{RP}\in\hat{\mathcal{S}}_{sg}\). If the agent is _done_ with the operation overall, reaching the storage point is the _final goal_ and the episode is _done_,
3. normal, otherwise.

**Action Space**. The action space is discrete, consisting of four actions: left, right, front, and back, encoded by 0, 1, 2, and 3, respectively. Note that the directions of these actions are defined with respect to each cell, and not in an absolute sense.
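A small sketch of the dynamic object-space bookkeeping described above, i.e., the construction of the augmented subgoal space and of the obstacle space of eq. (7); the helper names are illustrative.

```python
def augmented_subgoal_space(subgoals: set, storage_point, cfm_ee: float) -> set:
    """Augmented subgoal space S^_sg conditioned on end-effector capacity CfM_ee."""
    if cfm_ee == 0:                     # no remaining capacity: must go store
        return {storage_point}
    if cfm_ee < 1:                      # partially loaded: objects and storage
        return set(subgoals) | {storage_point}
    return set(subgoals)                # empty end-effector: objects only

def obstacle_space(objects: set, subgoals_aug: set) -> set:
    """Obstacles: objects not currently in the (augmented) subgoal space, cf. eq. (7)."""
    return set(objects) - subgoals_aug
```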
**Rewards**. Rewards are defined as follows:

* \(R_{1}^{RP}=-2\): all transitions except a transition to a "sub-goal" or "goal" state,
* \(R_{2}^{RP}=20n_{cut}\) or \(20n_{store}\): transition to one of the "sub-goal" states, for the cases of cutting or storing,
* \(R_{3}^{RP}=400\): transition to the "goal" state; this ends an episode and resets the environment,
* \(R_{4}^{RP}=-20\): collision with an obstacle,
* \(R_{5}^{RP}=-20\): out-of-boundaries action,
* \(R_{6}^{RP}=-5s_{2}^{RP}\): cost of carrying an object,
* \(R_{7}^{RP}=-400\): getting trapped; this also ends an episode and resets the environment.

Figure 19: State definition for \(\pi_{RP}\).

Figure 18: Illustration of obstacles obstructing the movement of the arm in different directions from a cell in the grid, with the arm colored in amber. The pink cells designate "obstacle" cells.

Note that the combined value of the above reward elements forms \(R_{1}\) in (6).

**Policy**. With the architecture shown in Figure 10, we denote the policy for this level with \(\pi_{RP}=\pi_{RP}(o^{RP,A}|s^{RP})\). It is worth noting that, to implement this world efficiently, we have created a custom OpenAI Gym (Brockman et al. (2016)) environment, which we call "adaptive_grid_v0".

### 6.2 Shared Autonomy Setting

As shown in Figures 11 and 16, we define the autonomous agent policy as the highest-level policy, called the _Shared Policy_, \(\pi_{sh}\), and model it as an instance of an MDP with the following elements:

**The Environment or The World**: We designed a higher-level environment for the tasks of shared autonomy, for a generalizable MDP backbone, the attributes of which are defined shortly. In particular, we have created a second custom OpenAI Gym environment, called "assist_AI_v0", which directly communicates with the lower-level environment, adaptive_grid_v0, and acts as a master agent with a master policy incorporating the assistance and/or autonomy protocols.

**State Space**. This is defined based on the state space of \(MDP^{RP}\), but expanded to include \(z_{1}\) and \(a^{H}\). Note that \(a^{H}\) is recorded as -1 if no action is taken by the human agent, to accommodate such instances.

**Action Space**. This is defined similarly to that of \(MDP^{RP}\), and is comprised of four actions: left, right, up, and down, encoded by 0, 1, 2, and 3, respectively.

**Policy**. The policy follows the model presented in Figure 5 and the discussions in §4.2.

**Rewards**. Rewards, as discussed earlier, take the form \(\mathbf{c}^{\top}\mathbf{R}\) of (6).

From another perspective, since the autonomous agent is human-inspired by design, with a shared autonomy mindset, the autonomous actions are comprehensible to the human agent and vice versa. Hence, this shared mental model (Chris et al. (2007), Yousefi (2018)) is not only necessary for proper collaboration but, more importantly, provides a road map to the design of any modern framework involving humans and (semi-)autonomous agents. Our approach has the capability to correct and help train a novice human operator in a safe and efficient manner. On the other hand, with the designed hierarchical learning and planning algorithms, full autonomy is also achievable.
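To illustrate the two-level implementation, the following is a minimal skeleton, in the spirit of adaptive_grid_v0, of the lower-level world as a custom Gym environment; the class name is hypothetical, and the transition logic is elided.

```python
import gym
import numpy as np
from gym import spaces

class AdaptiveGridEnv(gym.Env):
    """Skeleton of the lower-level world: only the MDP^RP interface is sketched."""
    # Reward constants R_1..R_7 from the list above.
    R_STEP, R_SUBGOAL_UNIT, R_GOAL = -2, 20, 400
    R_COLLISION, R_OUT, R_CARRY_UNIT, R_TRAPPED = -20, -20, -5, -400

    def __init__(self, n_c=12, n_r=4, p_max=4):
        super().__init__()
        self.n_c, self.n_r, self.p_max = n_c, n_r, p_max
        # Actions: 0 left, 1 right, 2 front, 3 back (relative to the current cell).
        self.action_space = spaces.Discrete(4)
        # Observation s^RP = (s1: end-effector cell, s2: payload, s3: subgoal distances).
        self.observation_space = spaces.Dict({
            "pos": spaces.MultiDiscrete([n_c, n_r]),
            "payload": spaces.Discrete(p_max + 1),
            "subgoal_dist": spaces.Box(-1, n_c, shape=(n_c,), dtype=np.float32),
        })

    def reset(self):
        ...  # sample object counts (e.g., from a truncated Gaussian) and return s^RP

    def step(self, action):
        ...  # apply the transition and compute the reward from the constants above
```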
## 7 Numerical Results & Experiments

We present numerical results in four parts, progressively adding layers of complexity, in a similar order to the material discussed in §3. The four sets of results also illustrate how we build up and test our shared autonomy framework in the following stages:

* **Stage I**: Pre-Training: this stage produces an autonomous agent trained using deep reinforcement learning (RL) techniques. This model will be considered the baseline to which the behavior of a human operator will be compared. The results for this stage are presented in §7.1. Since our shared autonomy framework is capable of full autonomy by design, i.e., the highest level of autonomy, the overarching goal in this stage is to showcase such capability in training and testing an autonomous agent, given the inherent stochasticity of the environment as well as the challenges of the application.
* **Stage II**: Manual: we let the human take full control authority over the robot. The advantage of a shared autonomy framework is in effectively incorporating the human operator in the loop to learn from and, in general, to switch control if need be. The results of this stage, presented in §7.2, set this important capability in place and provide the algorithm with the necessary data for a human-tuned framework using the formulation presented in §4.1.
* **Stage III**: Shared-Training: we train the shared autonomy policy according to our proposed model, also using deep RL techniques. Our approach in this stage is to look into certain challenging _scenarios_ in training a shared autonomy agent with expert and noisy humans, and to see the effects of the different components of our architecture. The results for this stage are presented in §7.3.
* **Stage IV**: Shared-Testing: we test the trained model with an expert human for a variety of cases. Finally, in this stage, we interface the trained shared autonomy agent with humans with different levels of noise and analyze its performance. The results for this stage are presented in §7.4.

### 7.1 Results for Stage I, Pre-Training

Here, we showcase the training of a fully autonomous robot. In the context of our shared autonomy framework, this will represent a pre-trained agent, to be used as the baseline for computing the human agent's error. With the adaptive_grid_v0 environment described earlier, we use the Stable-Baselines3 library (Raffin et al. (2021)) to train a deep RL policy. During the training process, we sample the objects in the environment from a Gaussian distribution in order to account for different possible variations of the object spaces. More specifically, we draw samples from a truncated Gaussian distribution on the interval \([0,4]\) with mean and standard deviation of 2 and 1, respectively. Accordingly, our formulation is _uncertainty-aware_. We employ the Proximal Policy Optimization (PPO) algorithm (Schulman et al. (2017)), notably with a batch size of 32, a learning rate of \(1\times 10^{-3}\), and \(\gamma\) of 0.99, for a multilayer perceptron (MLP) policy with 2 layers of 64 nodes. Figure 20 shows the performance of the training process; a minimal training sketch is given after the figure captions below. We also provide two examples of output sequences for two object configurations: Figure 21 for a relatively sparse scenario, and Figure 22 for a relatively crowded scenario. The figures include information on the initial state and the output action sequences of the policy. We observe that these are logical and intuitive.

Figure 20: Training of the autonomous policy using a deep RL technique, PPO.

Figure 21: Pre-trained agent Example 1. Green circles are objects (trees), yellow rectangles are clusters, the blue circle is the initial agent location, solid arrows indicate actions, and dashed lines conceptualize the path (simplified to straight lines for ease of illustration). The sequence starts at "i" and continues until the last action "F". \(CfM_{ee}\) is set to 4. For better readability, we use different colors for the two sub-sequences ending at a storage location (red, purple, in that order). The storage is also located at "i".

Figure 22: Pre-trained agent Example 2. Green circles are objects (trees), yellow rectangles are clusters, the blue circle is the initial agent location, solid arrows indicate actions, and dashed lines conceptualize the path (simplified to straight lines for ease of illustration). The sequence starts at "i" and continues until the last action "F". \(CfM_{ee}\) is set to 4. For better readability, we use different colors for the four sub-sequences, each ending at the storage location (red, purple, black, and blue, in that order). The storage is also located at "i".
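A minimal training sketch for this stage with Stable-Baselines3, using the hyperparameters stated above; the environment id assumes the custom world has been registered with Gym and is otherwise hypothetical.

```python
import gym
from stable_baselines3 import PPO

env = gym.make("adaptive_grid_v0")  # assumes the custom env has been registered

model = PPO(
    "MlpPolicy", env,
    batch_size=32,
    learning_rate=1e-3,
    gamma=0.99,
    policy_kwargs=dict(net_arch=[64, 64]),  # MLP policy with 2 layers of 64 nodes
    verbose=1,
)
model.learn(total_timesteps=500_000)
model.save("pretrained_rp_policy")  # later used as the surrogate optimal model
```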
### 7.2 Results for Stage II, Manual

Here, we present the results supporting the encoding of the human agent's internal/latent variable \(z_{1}\). We build up a shared autonomy platform that, at its core, is comprised of our hierarchical MDPs implemented using two Gym environments. We have used Cogment (AI-Redefined et al. (2021)) to enable real-time human-in-the-loop interaction in our platform. The Cogment platform is an open source framework built on a micro-service architecture for running different kinds of RL, multi-agent RL, and human-in-the-loop learning applications. During a test, the human user is presented with a random initialization of the environment, an example of which is shown in Figure 30. The four basic discrete inputs, introduced in §6.1, are mapped to four direction buttons on a regular keyboard. Figure 24 shows our setup for a test. We record the actual human data using similar object randomization and environment configurations as in Stage I. It is also important to note that an explicit goal inference is not feasible in our set-up, except for a myopic one (Dragan & Srinivasa (2013_a_)), which assumes that the intended goal is the closest of the goal space points. This, in essence, is how we defined the variable \(z_{0}\) in this work, which encodes the angular distance to the nearby goals.

Following the formulation presented in §4.1, we train our auto-encoder for a 5D latent variable \(z_{1}\), with 2 history steps (\(n_{h}=2\)), on 40 recorded episodes or trials of a human user interacting with our setup. We randomly divide the dataset into training and validation sets with a ratio of 0.7. Notably, the learning rate and batch size are \(5\times 10^{-4}\) and 5, respectively. To compute the input error of (3), we use the pre-trained model from Stage I as the surrogate optimal model. From a practical point of view, we used a one-hot transformation for our discrete variables, such as the state \(s^{RP}\), and introduced white noise for better training. Figure 23 shows the training and validation process of our cVAE model.
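A minimal sketch of this training procedure, reusing the HumanCVAE model and loss sketched in §4; the tensor names (S_bar, A_bar, E_bar) and the noise scale are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# One-hot encoded state/action/error histories (n_h = 2) assembled from the
# 40 recorded episodes; these tensors are assumed to be prepared beforehand.
dataset = TensorDataset(S_bar, A_bar, E_bar)
n_train = int(0.7 * len(dataset))  # 0.7 train/validation split
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

model = HumanCVAE(dim_s=S_bar.shape[-1], dim_a=A_bar.shape[-1],
                  dim_e=E_bar.shape[-1], dim_z=5)       # 5D latent z1
opt = torch.optim.Adam(model.parameters(), lr=5e-4)

for epoch in range(200):
    for s, a, e in DataLoader(train_set, batch_size=5, shuffle=True):
        noisy_s = s + 0.01 * torch.randn_like(s)        # white noise for robustness
        recon, mu, logvar = model(noisy_s, a, e)
        loss = cvae_loss(recon, torch.cat([a, e], dim=-1), mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # validation on val_set would mirror the above (cf. Figure 23)
```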
### 7.3 Results for Stage III, Shared-Training

Building on Stages I and II, Stage III is the next step in setting up our platform. In this work, we used deep RL to train the shared policies using the PPO algorithm (Schulman et al. (2017)), similar to the pre-trained model. We consider three scenarios of training with different human skill levels and different shared autonomy settings. Each of these scenarios might include one or more hypotheses, i.e., research question(s), followed by a presentation of the results and their assessment. In evaluating the results of the training process, we use the following two measures:

* Reward per time-step: the rewards with respect to the training (simulator) time-step. In all training cases, we train the policy for a total of \(N_{tr}=5\times 10^{5}\) time-steps, which is the horizontal axis in all reward-related plots. Note that we use the subscript \(tr\) for training-related variables.
* Sample Processing Rate (SPR): the number of samples processed, forward and backward, per second. In shared autonomy, the speed of the platform is of crucial importance, since it needs to train an agent for shared autonomy while the task progresses with the human in the loop. SPR is defined as follows:

\[SPR(k)=\frac{\sum_{j=0}^{k}n_{tr}^{j}}{t-t_{0}}, \tag{9}\]

where \(t\) and \(t_{0}\) are the current and initial wall-time in seconds, and \(n_{tr}^{k}\) is the number of training samples processed at time-step \(k\). This also includes the time required by the stochastic gradient descent. We use Adam as our optimizer (Kingma & Ba (2017)); the batch size is 64, with a learning rate of \(1\times 10^{-4}\).
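A running implementation of the SPR measure of (9) can be as simple as the following sketch.

```python
import time

class SampleProcessingRate:
    """Running SPR of eq. (9): cumulative processed samples per wall-clock second."""
    def __init__(self):
        self.t0 = time.time()
        self.n = 0

    def update(self, n_samples: int) -> float:
        self.n += n_samples  # forward + backward samples processed this step
        return self.n / (time.time() - self.t0)
```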
Moreover, we have two expertise levels for the human agent in our trials:

* _Expert Human_: A human agent who is familiar with our setup.
* _Noisy Human_: We deliberately perturb the human's actions by adding noise.

In all cases, if no human action is available, the human agent is considered to be non-cooperative; no action is an action itself.

Figure 23: Training vs validation for the cVAE model.

Figure 24: Our setup for conducting human-in-the-loop tests. We use a visualization layer to show the environment to the user, who can move with 4 basic inputs in 4 directions, a reflection of the inputs defined in our discrete robot planning MDP introduced in §6.

We use the above-mentioned metrics to investigate: (a) whether or not a shared autonomy agent can be trained under different human expertise levels and degrees of collaborativeness, given the proposed MDP structure; (b) to what extent the human-tuned variable \(z_{1}\) affects the training process; (c) how reward terms and their coefficients in (6) affect the training process; and (d) how a trained model performs if interfaced with humans of different expertise levels.

**Scenario 1: Training with Expert Human**

We begin the presentation of the shared autonomy results by demonstrating the training process for the expert human, defined above, using the trials data collected from a human agent familiar with our setup. Arguably, no human-in-the-loop test can cover the complete state space; therefore, we treat the human agent at unseen states as a non-cooperative agent who takes no action. In this scenario, we choose equal weights for \(R_{1}\) and \(R_{2}\) and set \(\mathbf{c}=[10,10]\).

**Hypothesis 1**: _A shared autonomy policy can be trained using our formulation for an expert human, as defined before, who alternates between cooperative and non-cooperative behavior, under the inherent stochasticity of the environment._

Figure 25 shows the training process, confirming the success of our algorithm in training a shared autonomy agent as detailed in Hypothesis 1. Further tests with the trained model will be provided in §7.4.

Figure 25: Training process for Scenario 1: Expert human. The darker red is the smoothed average reward with a window of 50 time-steps. The lighter red shows the raw training rewards.

**Hypothesis 2**: _Under the conditions outlined in Hypothesis 1, we hypothesize that the human-tuned variable \(z_{1}\) results in a more efficient training process._

Figure 26 shows how the sample processing rate (SPR), defined in (9), changes with respect to the training time-step for the expert human _with_ and _without_ \(z_{1}\). In the case without \(z_{1}\), we still pass the signal through the \(z_{1}\) model but zero it out before feeding it to the algorithm, in order to isolate, as much as possible, the mere effect of \(z_{1}\). In the early stages of training, i.e., \(ts\leq 1.6\times 10^{5}\), which is the highly oscillatory stage, we do not see a noticeable difference between the performances of the two cases. However, as the training progresses, the effect of \(z_{1}\) is evident, resulting in more efficient performance. This result partially confirms Hypothesis 2. Intuitively, \(z_{1}\) is effective when the policy is getting closer to convergence. Moreover, it can be concluded that \(z_{1}\) contributes positively to our shared autonomy framework for an expert human.

**Scenario 2: Training with Noisy Human**

In this scenario, we deliberately perturb the human's actions by adding noise, resulting in the noisy human defined above. The process is outlined in Appendix B; an illustrative sketch is also given at the end of this scenario. This is a challenging scenario for our setup for three reasons: (a) at a conceptual level, a noisy human in a shared autonomy setting is, in general, a non-collaborative agent, which makes the task of the autonomous agent accommodating them challenging; as noted in §1, the aspect where _policy shaping_ outshines _policy blending_ is exactly this interfacing with a noisy human in the loop; (b) as we are assessing the limits of our setting, we are still using equally weighted rewards with \(\mathbf{c}=[10,10]\) (we will drop this constraint later); (c) we keep using the \(z_{1}\) variable that is fine-tuned to the expert human. The latter results in a mismatched \(z_{1}\), which makes the process even more challenging.

**Hypothesis 3**: _For a noisy human with a mismatched \(z_{1}\), we expect this variable to negatively affect the training process._

Figure 27 depicts the sample processing rate (SPR) for the training process with a noisy human, for the cases with and without \(z_{1}\). Similar to the previous assessment, this figure partially verifies Hypothesis 3: only towards the later stages of training do we observe the effects of including \(z_{1}\), which negatively affects the performance. Intuitively, this is expected, since \(z_{1}\) is fine-tuned for a different human. The results of the assessments of Hypotheses 2 and 3 are significant in the sense that they confirm the validity of our architecture design and the considered assumptions. From the behavior of the training processes, one realizes the inherently complex dynamics of a shared autonomy setting with a human in the loop, and how finding and incorporating the human's latent variable improves the training process. Another aspect of our framework is the ability to adjust the coefficients through \(\mathbf{c}\). To showcase this, we present results with reduced \(c_{2}\), i.e., a reduced contribution of the human action in the reward function, and use \(\mathbf{c}=[10,5]\) to counteract the high noise in the human actions.
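Since Appendix B is not reproduced here, the following is only a minimal sketch of one plausible perturbation scheme, assuming the human action is replaced by a uniformly random action with a given probability; this is an illustrative assumption, not the exact process of Appendix B.

```python
import numpy as np

def perturb_human_action(a_h: int, noise_level: float, n_actions: int = 4,
                         rng=None) -> int:
    """With probability `noise_level`, replace the human action by a random one
    (illustrative stand-in for the noise process of Appendix B)."""
    rng = rng or np.random.default_rng()
    if rng.random() < noise_level:
        return int(rng.integers(n_actions))
    return a_h
```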
**Hypothesis 4**: _Given a noisy human, reducing the associated coefficient in the objective function, i.e., \(c_{2}\), results in an improved training process._

Figure 26: Sample processing rate (SPR) for Scenario 1: Expert human with and without \(z_{1}\).

Figure 27: Sample processing rate (SPR) of the training process for Scenario 2: Noisy human with and without \(z_{1}\).

Figure 28 compares the training performance for the noisy human with and without the reduced \(c_{2}\) effect. This figure confirms Hypothesis 4 by showing a much less oscillatory training process and earlier convergence. This result also confirms the practical applicability of the human-related coefficient \(c_{2}\) as a design variable to control the performance of shared autonomy.

**Scenario 3: Training with Override Option**

A challenging task for an autonomous agent in shared autonomy is presented when the human agent is given an override option. We tested the scenario in which the human action overrides the autonomous action with a probability of 80%. This scenario is important from a practical point of view for safety-critical applications. The training process is shown in Figure 29, which indicates a much more challenging policy update and a sub-optimal policy. However, we maintain the equal weights on \(R_{1}\) and \(R_{2}\). This is another manifestation of the bottleneck in shared autonomy discussed in Scenario 2. That is, we are imposing our _designed_ definition of a reward function next to what a human _might_ consider (and hence, override). A sketch of the override mechanism is given after the figure captions below.

Figure 28: Training process for Scenario 2: Noisy human with reduced \(c_{2}\) effect.

Figure 29: Training performance for Scenario 3: Expert human with and without the override option.
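The override option can be viewed as a simple stochastic arbitration layer on top of the two policies. The following is a minimal illustration, assuming the override fires with probability 0.8 whenever a human action is present; the helper is hypothetical, not our implementation.

```
import random

def executed_action(a_auto, a_human, p_override=0.8):
    """Return the action actually executed in Scenario 3.

    a_human == -1 encodes a non-cooperative human (no input), in which
    case the autonomous action always goes through; otherwise, the
    human action overrides the autonomous one with probability
    p_override."""
    if a_human == -1:
        return a_auto
    return a_human if random.random() < p_override else a_auto
```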
### Results for Stage IV, Shared-Testing

In this stage, we test the model trained with an expert human, discussed in Scenario 1. We, however, test the model in a challenging scenario, which occurs when shared autonomy interacts with a noisy human agent that might or might not cooperate. In such scenarios, we expect a shared autonomy framework implemented with a policy shaping paradigm to outshine in its performance. For such cases, the burden of successfully carrying out the tasks is on the autonomous agent, while trying as much as possible to follow the human's input. We use the model trained using raw human data, i.e., the expert human, and present results for the environment initialized as shown in Figure 30. We consider the following three cases. For each case, there is a range of possible outcomes due to the stochasticity of the human behavior; we present an illustrative example for each case.

Figure 30: Starting configuration for \(\pi_{sh}\) examples.

* **Case 1**: Random human, i.e., a very noisy or novice human. The results for this case are given in Table 2. In this table, the first row is the sequence of actions of the autonomous agent (_AA sequence_); comparatively, the second row shows the sequence of actions of the human agent (_HA sequence_). The reader is reminded that an action of "-1" denotes a non-cooperative human agent. Moreover, as denoted in the third row, the human agent interacted 18 times out of the total of 26 actions, or 69.2% of the time throughout the episode. Despite having a very noisy human, the shared autonomy managed to follow the human 8 times. This result shows that the autonomous agent ignored the human most of the time and successfully carried out the operation. Table 5 shows the extended results for 10 tests.
* **Case 2**: Medium-level noisy human. The results for this case are given in Table 3, which shows how differently the autonomous agent engaged with the human compared to the previous case. This signifies the capability of the framework to discern humans with different skill levels. Table 6 shows the extended results for 10 tests.
* **Case 3**: Least noisy human, i.e., close to an expert human. The results for this case are given in Table 4, which once more shows how the autonomous agent engaged with the human. It is observed that the autonomous agent managed to follow this human 20 times out of 21. Table 7 shows the extended results for 10 tests.

Table 2: Shared autonomy stats for Case 1

| Quantity | Value |
|---|---|
| AA sequence | [1, 2, 0, 3, 2, 3, 0, 0, 2, 1, 1, 3, 2, 1, 1, 0, 0, 3, 1, 1, 1, 2, 3, 0, 0, 0] |
| HA sequence | [1, 2, -1, 1, 3, -1, 0, -1, -1, -1, 2, 1, 2, 3, 2, -1, -1, -1, 1, 1, 2, 0, 1, 0] |
| HA Interaction | 18 out of 26, i.e., 69.2% |
| AA followed HA | 8 |
| Reward | 18.27 |

Table 3: Shared autonomy stats for Case 2

| Quantity | Value |
|---|---|
| AA sequence | [1, 2, 0, 3, 2, 3, 0, 0, 2, 1, 1, 3, 2, 1, 1, 0, 0, 3, 1, 1, 1, 2, 3, 0, 0, 0] |
| HA sequence | [1, 2, -1, 1, 3, -1, 0, -1, -1, -1, 2, 1, 2, 3, 2, -1, -1, -1, 1, 1, 2, 0, 1, 0] |
| HA Interaction | 18 out of 26, i.e., 69.2% |
| AA followed HA | 8 |
| Reward | 18.27 |

Table 4: Shared autonomy stats for Case 3

| Quantity | Value |
|---|---|
| AA sequence | [1, 2, 0, 3, 2, 3, 2, 0, 0, 1, 1, 3, 1, 1, 2, 0, 0, 3, 0, 0, 0, 2, 0, 0, 0, 3] |
| HA sequence | [2, 2, -1, 3, 2, -1, 2, 0, 0, 1, 1, 3, 1, 1, 2, -1, -1, -1, 0, 0, 2, 0, 0, 0, 3] |
| HA Interaction | 21 out of 26, i.e., 80.8% |
| AA followed HA | 20 |
| Reward | 24.26 |
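The per-episode statistics reported in Tables 2-4 can be recomputed directly from the two action sequences. A minimal sketch follows; the reward column comes from the environment and cannot be reproduced from the sequences alone.

```
def episode_stats(aa_seq, ha_seq):
    """Compute the HA-interaction and AA-followed-HA counts from the
    autonomous-agent (AA) and human-agent (HA) action sequences, where
    -1 in the HA sequence denotes no human input."""
    interactions = sum(1 for a in ha_seq if a != -1)
    followed = sum(1 for a, h in zip(aa_seq, ha_seq) if h != -1 and a == h)
    return {"steps": len(aa_seq),
            "HA interaction": interactions,
            "AA followed HA": followed}

# Toy example (hypothetical sequences):
aa = [1, 2, 0, 3]
ha = [1, -1, 0, 2]
print(episode_stats(aa, ha))
# {'steps': 4, 'HA interaction': 3, 'AA followed HA': 2}
```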
Table 5: Extended stats for Case 1

| Test ID | steps | HA Interaction | AA follow | reward | success |
|---|---|---|---|---|---|
| 0 | 30 | 19 | 5 | 13.96 | 1 |
| 1 | 30 | 21 | 7 | 14.92 | 1 |
| 2 | 26 | 18 | 10 | 19.46 | 1 |
| 3 | 28 | 17 | 5 | 14.21 | 1 |
| 4 | 30 | 21 | 5 | 12.53 | 1 |
| 5 | 26 | 21 | 10 | 18.18 | 1 |
| 6 | 32 | 22 | 5 | 14.44 | 1 |
| 7 | 30 | 22 | 8 | 15.70 | 1 |
| 8 | 30 | 21 | 12 | 19.14 | 1 |
| 9 | 26 | 21 | 10 | 17.20 | 1 |
| avg. | 28.8 | 20.3 | 7.7 | 15.97 | 1 |

Table 6: Extended stats for Case 2

| Test ID | steps | HA Interaction | AA follow | reward | success |
|---|---|---|---|---|---|
| 0 | 26 | 21 | 15 | 21.30 | 1 |
| 1 | 26 | 21 | 14 | 20.08 | 1 |
| 2 | 26 | 18 | 12 | 19.13 | 1 |
| 3 | 26 | 21 | 15 | 20.18 | 1 |
| 4 | 26 | 21 | 13 | 19.08 | 1 |
| 5 | 26 | 21 | 15 | 21.06 | 1 |
| 6 | 26 | 21 | 16 | 21.20 | 1 |
| 7 | 26 | 21 | 14 | 20.70 | 1 |
| 8 | 26 | 21 | 14 | 20.56 | 1 |
| 9 | 26 | 21 | 13 | 20.05 | 1 |
| avg. | 26 | 20.7 | 14.1 | 20.34 | 1 |

Table 7: Extended stats for Case 3

| Test ID | steps | HA Interaction | AA follow | reward | success |
|---|---|---|---|---|---|
| 0 | 26 | 21 | 20 | 24.30 | 1 |
| 1 | 26 | 21 | 18 | 22.17 | 1 |
| 2 | 26 | 21 | 19 | 23.61 | 1 |
| 3 | 26 | 21 | 19 | 23.70 | 1 |
| 4 | 26 | 21 | 17 | 22.26 | 1 |
| 5 | 26 | 21 | 19 | 23.06 | 1 |
| 6 | 26 | 21 | 19 | 23.85 | 1 |
| 7 | 26 | 21 | 18 | 23.16 | 1 |
| 8 | 26 | 21 | 16 | 20.66 | 1 |
| 9 | 26 | 21 | 19 | 23.79 | 1 |
| avg. | 26 | 21 | 18.4 | 23.06 | 1 |

## 8 Conclusions and Future Work

Operating an articulated machine is similar to driving a car in terms of the complexity and hierarchy of the tasks involved, from strategic route planning to low-level controls, and it is highly intertwined with the specific requirements of the application domain. Therefore, as we argued in this paper, design for autonomous operation of such machines requires a careful understanding of the nature of the tasks and their environment. In this work, we proposed a shared autonomy framework to operate articulated robots. We first introduced a hierarchical task-oriented planning formulation for context-aware robot operation. Building on this foundation, as well as on theory of mind and game theory, we proposed a novel shared autonomy framework to facilitate efficient interaction between the human and the autonomy, the two participating agents in this system. We modelled the decision-making process using hierarchical MDPs and Options in an algorithm we called _policy shaping_. In this algorithm, the autonomous system policy is shaped by incorporating design variables contextual to the task, the human's internal state, and pre-training, as well as the human's input. To encode the human's internal state beyond the designed state variables, we used the pre-trained model as the surrogate optimal model, a frame of reference against which the human's input is compared. We employed the associated error as well as the history of states and actions in a conditional Variational Autoencoder (cVAE) architecture to find the human's latent embedding through the lens of the structured task at hand.

To showcase the success of our framework, we fine-tuned it for the operation of a feller-buncher articulated machine in timber harvesting, a series of physically and mentally arduous operations in harsh environmental conditions. Building on our earlier work (Yousefi et al. (2022)) and intricate know-how of the tasks, we proposed a novel, human-inspired path planning algorithm using the _Envelope of Manipulation_ \(\mathcal{E}^{M}\) and the _Envelope actions_ to encode the sequence of decisions/actions in the operations.
We have used this case study as our test-bed to train and test different policies. In training the policies, we used deep RL techniques. Moreover, by using a wide range of available tools, libraries, and packages, we set up a human-in-the-loop test that enabled us to gather actual human trial data. In presenting results, we considered a number of scenarios and cases of importance to a shared autonomy framework. First, we trained a fully autonomous policy capable of carrying out the operations in our setup. We used this model as the surrogate optimal model. By gathering actual human trial data, we were able to train a cVAE network to access a human's internal embeddings. Then, we envisioned and implemented several training scenarios involving a range of human expertise. We assessed the success of our novel platform by forming certain hypotheses regarding the effect of our designed structure and variables. In testing the trained shared autonomy policy, once more we looked at the performance of the model in interacting with human agents with different skill levels and degrees of cooperativeness. The extensive test results demonstrate the success of our platform in a particularly challenging case of interacting with a noisy, non-cooperative human.

The future directions are numerous, given the potential of this novel framework. We, however, propose that future directions should be more in line with autonomous operation/driving scenarios, since this platform offers an alternative point of view into designing a hierarchical planning framework with full autonomy in mind. Moreover, refined training algorithms tuned for shared autonomy and human-in-the-loop scenarios, as well as more structured approaches to encode the human's embeddings, can be considered in future work.

## Appendix A

Here, we show the derivation of (2) for 3 time-steps. For the trajectory \(\tau\), we have:

\[\tau=\{s_{1}^{A},a_{1}^{A},a_{1}^{H},...,s_{3}^{A},a_{3}^{A},a_{3}^{H}\}. \tag{10}\]

The probability over the trajectory is hence given by:

\[p_{\tau}=p(s_{1}^{A},a_{1}^{A},a_{1}^{H},...,s_{3}^{A},a_{3}^{A},a_{3}^{H})=p(a_{1}^{A},a_{1}^{H},...,s_{3}^{A},a_{3}^{A},a_{3}^{H}|s_{1}^{A})p(s_{1}^{A}). \tag{11}\]

Next, we write:

\[p_{\tau}=p(s_{3}^{A}|a_{2}^{A},s_{2}^{A})p(a_{2}^{A},a_{2}^{H},s_{2}^{A},a_{1}^{A},a_{1}^{H}|s_{1}^{A})p(s_{1}^{A})=p(s_{3}^{A}|a_{2}^{A},s_{2}^{A})p(a_{2}^{A},a_{2}^{H}|s_{2}^{A},a_{1}^{A},a_{1}^{H},s_{1}^{A})p(s_{2}^{A},a_{1}^{A},a_{1}^{H}|s_{1}^{A})p(s_{1}^{A}). \tag{12}\]

Next, we have:

\[p_{\tau}=p(s_{3}^{A}|a_{2}^{A},s_{2}^{A})p(s_{2}^{A}|a_{1}^{A},s_{1}^{A})p(a_{2}^{A}|a_{2}^{H},s_{2}^{A})p(a_{2}^{H}|s_{2}^{A},s_{1}^{A})p(a_{1}^{A}|a_{1}^{H},s_{1}^{A})p(a_{1}^{H}|s_{1}^{A})p(s_{1}^{A}), \tag{13}\]

which simplifies to:

\[p(\tau)=p(s_{1}^{A})\prod_{t=1}^{3}\pi_{H}(a_{t}^{H}|\overline{s_{t}^{A}})\pi_{A}(a_{t}^{A}|a_{t}^{H},s_{t}^{A})p(s_{t+1}^{A}|s_{t}^{A},a_{t}^{A}). \tag{14}\]

## Appendix B

Here, we provide the procedure by which we perturb the human's action in the noisy-human cases, written out as runnable Python for clarity (`np.random.randint(0, 4)` draws an integer from {0, 1, 2, 3}):

```
import numpy as np

def perturb_action(output):
    """Algorithm 1: perturb the human input by adding a random offset
    in {0, 1, 2, 3} and wrapping the result back into the action set
    {0, 1, 2, 3}."""
    output = output + np.random.randint(0, 4)
    if output < 0:   # kept from the original pseudocode; cannot occur
        output += 4  # for non-negative inputs and offsets
    if output > 3:
        output %= 4
    return output
```

## Acknowledgement

We would like to thank Professor Dylan P. Losey for his early contributions to this work. This work was supported by the Natural Sciences and Engineering Research Council (NSERC) Canadian Robotics Network (NCRN).
The authors also acknowledge the valuable contributions of AI-Redefined and William Duguay to the development of the shared autonomy setup.
2310.05132
Real-Time Measurements of Photonic Microchips with Femtometer-Scale Spectral Precision and Ultra-High Sensitivity
Photonic integrated circuits (PICs) are enabling major breakthroughs in a number of areas, including quantum computing, neuromorphic processors, wearable devices, and more. Nevertheless, existing PIC measurement methods lack the spectral precision, speed, and sensitivity required for refining current applications and exploring new frontiers such as point-of-care or wearable biosensors. Here, we present the Sweeping Optical Frequency Mixing Method (SOHO), surpassing traditional PIC measurement methods with real-time operation, 30 dB higher sensitivity, and over 100 times better spectral resolution. Leveraging the frequency mixing process with a sweeping laser and custom control software, SOHO excels in simplicity, eliminating the need for advanced optical components and additional calibration procedures. We showcase its superior performance on ultrahigh-quality-factor (Q) fiber-loop resonators (Q = 46M) as well as microresonators realized on a new optical waveguide platform. An experimental spectral resolution of 19.1 femtometers is demonstrated using an 85-meter-long unbalanced fiber Mach-Zehnder interferometer, constrained by noise resulting from the extended fiber length, while the theoretical resolution is calculated to be 6.2 femtometers, limited by the linewidth of the reference laser. With its excellent performance metrics, SOHO has the potential to become a vital measurement tool in photonics, excelling in high-speed and high-resolution measurements of weak optical signals.
Mahdi Mozdoor Dashtabi, Mohammad Talebi Khoshmehr, Hamed Nikbakht, Bruno Lopez Rodriguez, Naresh Sharma, Iman Esmaeil Zadeh, B. Imran Akca
2023-10-08T11:44:50Z
http://arxiv.org/abs/2310.05132v2
## Real-time spectral characterization of photonic microchips with femtometer resolution (SOHO)

_Mahdi Mozdoor Dashtabi, Hamed Nikbakht, Mohammad Talebi Khoshmehr, and B. Imran Akca\({}^{*}\)_

LaserLab, Department of Physics and Astronomy, VU University, De Boelelaan 1081, 1081 HV, Amsterdam, The Netherlands

\({}^{*}\)Corresponding author e-mail: [email protected]

Keywords: optical heterodyne detection, real-time, high resolution, photonic integrated circuits

**Abstract:** Here we present a new measurement method for the characterization of photonic integrated circuits (PICs) with an ultrahigh resolution, speed, and sensitivity that outperforms the existing PIC characterization methods. It is based on the heterodyne process and employs a tunable laser and a fixed-wavelength laser (termed the sweeping optical heterodyne method, SOHO). By varying the wavelength of the tunable laser that is fed into the photonic microchip, the beat frequency is swept from DC to the GHz range. An outstanding advantage of this method is that it does not require any advanced components such as a narrow-linewidth tunable laser or a light modulator. Moreover, there is no need for an additional calibration procedure. We compared the performance of this method with the standard measurement method based on a tunable laser and a photodiode and achieved 100 times higher spectral resolution and 6 dB higher sensitivity. We further advanced this method to operate in real time. A wavelength resolution of <40 fm is verified by measuring the spectrum of an unbalanced fiber Mach-Zehnder interferometer. Finally, using this method, we characterized high-quality-factor microring resonators that are realized in our new hybrid platform. This new measurement approach, which has superior performance parameters compared to existing methods, has the potential to become an essential characterization tool in integrated photonics.

## 1 Introduction

Photonic integration is revolutionizing the field of optics in the same way that integrated circuits revolutionized the world of electronics in the 1960s. Miniature optical circuits (i.e., photonic integrated circuits, PICs) are becoming increasingly common in data centers, artificial neural networks, medical diagnostics, optical sensing, quantum computing, and astronomy. In particular, ultra-low-loss integrated microresonators have been extensively employed in various emerging applications including optical sensing [1-3], photonic processors [4,5], ultra-narrow-linewidth lasers [6,7], precision spectroscopy [8], and quantum computation [9,10]. Their precise optical characterization is the first step in their successful implementation in these applications. For example, microring-based biosensors [1] have been widely used in the detection of various biological and chemical materials, in which very small changes in the surrounding medium cause the resonance wavelength to shift, allowing high-resolution detection. However, the characterization of these devices is often limited by the measurement techniques, i.e., optical spectrum analyzers or tunable lasers, which lack the sensitivity, speed, and sub-picometer resolution necessary for measuring ultra-small shifts of the resonance peaks. On the other hand, sub-MHz narrow-linewidth tunable lasers suffer from long-term (millisecond) frequency instability during the measurement time and, in some cases, need dithering to stabilize [11].
In addition to their high price, their frequency is prone to some uncertainty throughout a sweep. To overcome these problems, different approaches have been developed, such as comparing the resonance width with the sidebands of a modulated laser [12], or with the spectrum of an asymmetric fiber Mach-Zehnder interferometer [13]. Most of the time, the experiments were confirmed with lifetime measurement (cavity ring-down) techniques [14]. Another measurement technique combines a phase modulator with a vector network analyzer [15] at the expense of increased system complexity. A promising method that has been widely used in the field of radio-wave engineering is heterodyne detection, which is based on mixing two signals at two different frequencies using a signal processing technique called heterodyning [16]. In this paper, we present a novel PIC characterization approach termed the "sweeping optical heterodyne detection method" (SOHO), which advances the optical heterodyne detection method using a tunable laser together with dedicated control software. Using this approach, we demonstrated <40 fm spectral resolution over a 6 GHz bandwidth, and by modulating the tunable laser, real-time operation was achieved. Using this method, we characterized microring resonators with high quality factors (Q) and demonstrated that our method can provide more accurate results with higher resolution and sensitivity compared to the commonly used method based on a tunable laser and a photodiode. SOHO holds great promise both for PIC characterization and for high-resolution spectral measurements of weak optical signals.

## 2 Sweeping optical heterodyne method (SOHO)

### Working principle and measurement setup

For microcavity resonators with ultrahigh Q (\(>10^{5}\)), a simple laser scan could result in large measurement errors due to the uncertainties of the laser frequency during the scan [12]. To solve this problem, we use a measurement scheme based on the heterodyne process. As illustrated in Fig. 1, after mixing the two lasers inside the detector, the difference in their optical frequencies can be seen on the electrical spectrum analyzer (ESA). Here, the optical frequency of the sample laser relative to that of the reference laser is mapped into the radio frequency (RF) spectral domain and can be precisely measured using electronics. As long as the frequency of the reference laser is known, the frequency of the sample laser can be deduced from the ESA output. Therefore, one can use a fixed-wavelength laser as a reference for the measurement. Because a non-scanning laser can remain more stable during measurements than a scanning one, a non-scanning laser is used as the reference, and the frequency of the scanning laser is compared against it on the ESA. As a result, there is no need to know the exact frequency of the scanning laser; moreover, the frequency jumps and noise of the scanning laser can be automatically seen on the RF spectrum. A laser with a stable amplitude and a randomly fluctuating frequency can also be employed. Optical heterodyne detection involves two optical signals, whereas the mixing product is an electrical signal. If the device under test is placed after the sample laser, then by sweeping the frequency of the sample laser, its optical transmittance spectrum, \(T(\nu)\), can be extracted from the measured RF spectrum, \(P_{log}(f)\).
In this configuration, the optical intensity after the interference of the two lasers is:

\[I(\nu)\propto I_{R}+I_{S}+2\sqrt{I_{R}\;I_{S}\;T(\nu)}, \tag{1}\]

where \(I_{R}\) and \(I_{S}\) are the intensities of the reference and sample lasers, respectively, and \(T(\nu)\) is the transmittance of the sample as a function of optical frequency, \(\nu\). The detected signal on the photodiode (proportional to the AC part of the intensity) is:

\[V(f)=\alpha\sqrt{I_{R}\;I_{S}\;T(\nu)}, \tag{2}\]

where \(\alpha\) is a constant representing the optical-to-electrical conversion responsivity of the setup and \(f\) is the electrical frequency mapped from the optical heterodyne down-conversion of the lasers' beating, i.e., \(f=\nu_{s}-\nu_{r}\). The ESA measures the electrical power spectral density as a function of \(f\) and displays it on the dBm scale, as formulated below:

\[P_{lin}(f)=\frac{V(f)^{2}}{Z}=\frac{\alpha^{2}I_{R}\;I_{S}\;T(\nu)}{Z}, \tag{3}\]

\[P_{lin}(f)=\frac{10^{P_{log}(f)/10}}{1000}, \tag{4}\]

where \(Z\) is the input impedance of the ESA. By combining these two equations, \(T(\nu)\) is obtained as:

\[T(\nu)=\frac{Z}{1000\;\alpha^{2}I_{R}I_{S}}\;10^{\frac{P_{log}(f)}{10}}. \tag{5}\]

The experimental setup of SOHO is shown in Fig. 1. The tunable laser (EMCORE TTX1995 micro-ITLA, called the sample laser) is coupled to the photonic microchip via a single-mode optical fiber after passing through a polarization controller. The light coming out of the microchip is sent through a 90/10 splitter, with the 90% output sent to an amplified photodiode (Thorlabs, DET08CFC/M). This part of the setup is the commonly used tunable-laser-based characterization setup (termed the "standard method"), and we embedded it into the SOHO setup to be able to make a direct comparison between the two methods. The remaining 10% of the light is combined with the reference laser (Thorlabs, WDM8-C-23A-20nm) through a 50/50 coupler and is sent into a 5 GHz balanced detector (Thorlabs, BDX3BA). A polarization controller is employed after the reference laser. When the frequency difference of the sample and reference lasers is within the bandwidth of the photodetector, a beat note equal to the difference between the frequencies of the two lasers is detected on the balanced photodetector and measured with the electrical spectrum analyzer (ESA, Signal Hound, BB60D). The output signal has an intensity proportional to the product of the amplitudes of the sample and reference lasers. Control of the lasers, reading of the ESA output, and data processing are done using custom code written in C#.
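The chain from a measured ESA trace to the optical transmittance in Eqs. (3)-(5), together with the mapping from beat frequency to wavelength detuning around the reference laser, can be summarized in a short sketch. This is a Python re-implementation of the relations above for illustration, not the authors' C# control code, and the constants are illustrative:

```
import numpy as np

C = 2.998e8  # speed of light, m/s

def transmittance(p_log_dbm, alpha, i_r, i_s, z=50.0):
    """Eqs. (4)-(5): convert an ESA trace in dBm to transmittance.
    p_log_dbm: power spectral density [dBm]; z: ESA input impedance
    [ohm]; alpha: optical-to-electrical conversion responsivity."""
    p_lin = 10.0 ** (np.asarray(p_log_dbm) / 10.0) / 1000.0  # dBm -> W
    return z * p_lin / (alpha**2 * i_r * i_s)

def beat_to_detuning(f_beat_hz, lam_ref=1550e-9):
    """Map the beat frequency f = nu_s - nu_r to an approximate
    wavelength detuning around the reference: dlambda = -lambda^2 df / c."""
    return -(lam_ref**2) * np.asarray(f_beat_hz) / C

# A 6 GHz beat-frequency span at 1550 nm corresponds to about 48 pm:
print(abs(beat_to_detuning(6e9)) * 1e12)  # ~48 pm
```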
### Microresonator design, fabrication

The microring resonators were fabricated using a strip-loaded waveguide platform that we developed earlier [17]. In this platform, we used low-pressure chemical vapor deposition (LPCVD)-deposited Si\({}_{3}\)N\({}_{4}\) as the guiding layer and SU8 as the loading layer. A 140-nm-thick LPCVD Si\({}_{3}\)N\({}_{4}\) film was deposited on an 8-\(\upmu\)m-thick thermally-oxidized silicon wafer and annealed at 1200 \({}^{\circ}\)C in an N\({}_{2}\) environment for 3 hours. An 850-nm-thick SU8 layer was spin-coated on the Si\({}_{3}\)N\({}_{4}\) layer. The thickness values of each layer were calculated using the waveguide design approach that we introduced in Ref. [17]. Air cladding was used in our current devices. The refractive indices of the thermal oxide, SU8, and Si\({}_{3}\)N\({}_{4}\) layers are 1.464, 1.58, and 2.0 at \(\lambda\)=1550 nm, respectively. The waveguide width was \(w\)=1.5 \(\upmu\)m. The devices were fabricated using electron beam writing followed by a simple chemical development step. The schematic of the waveguide structure and the mode profile of the fabricated waveguide are given in Figs. 2a and 2b, and an optical microscope image of the fabricated microring resonators is given in Fig. 2c. The radius of the microring resonator was \(R\) = 150 \(\upmu\)m and the central wavelength was 1550 nm. Fabricated devices were cleaved, but the facets were not polished.

Figure 1: Schematic of the experimental setup of SOHO. The standard method (i.e., tunable laser with photodiode) is incorporated in this setup to make a direct comparison between the two methods. A variable optical attenuator (VOA) as a sample is used for checking the linearity of the measured signal relative to the input power. The output power of the sample port is calibrated using a power meter (not shown). An unbalanced fiber Mach-Zehnder interferometer (MZI) as a sample is used to show the resolution enhancement of sweeping heterodyne detection compared to the conventional method. The response of microring resonators (MR) was also measured using the same setup. ESA: electrical spectrum analyzer. The polarization controllers that are placed after the sample and reference lasers are not shown in this figure.

Figure 2: a) Schematic of the hybrid waveguide structure. b) Mode profile of the fabricated waveguide. c) Optical microscope image of the fabricated microring resonator.

The fabricated microring resonators were characterized using SOHO. The full-width-at-half-maximum (FWHM) of the resonance peak centered at 1550 nm was measured as 4 pm, which corresponds to an intrinsic quality factor of \(Q_{int}\) = 3\(\times\)10\({}^{5}\). By inserting \(R\) and \(Q_{int}\) into Eq. [4] in Ref. [17], the optical loss value in the microring resonator was calculated as 0.9 dB/cm, which is mainly dominated by the absorption loss of the SU8 layer.

### Performance parameters of SOHO

We characterized the SOHO setup and obtained its performance parameters. To check the linearity of the signal amplitude as a function of input power, a variable optical attenuator (VOA) was used as a sample, as demonstrated in Fig. 1. Figure 3a depicts the ESA output (converted to a linear scale using Eq. (4)) relative to the input power measured with a photodiode and calibrated with a power meter. As can be seen, the SOHO method shows a good linear relationship between the input power and the measured signal.

Figure 3: a) Linearized heterodyne signal on the ESA relative to the sample laser power. The red dashed line is a linear fit of the data. b) Long-term linewidth measurement of the tunable laser.

The optical frequency of the sample laser is swept between that of the reference laser and 6 GHz (the ESA bandwidth) away from it in order to measure the transmission spectrum of the sample. Despite having a 5 GHz bandwidth, the balanced detector has sufficient responsivity to be used for 6 GHz measurements. To compare the spectral resolution of both methods, the spectrum of an unbalanced fiber Mach-Zehnder interferometer (MZI) with \(\sim\)52.5 m of length difference was measured using the SOHO setup. While the short-term linewidth of the sample laser is less than 100 kHz, it has frequency dither and, as a result, its long-term (millisecond) linewidth is widened to around 100 MHz (Fig. 3b). This leads to a flat-top spectral shape, which is more suitable for sweeping heterodyne measurements.
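The quoted quality factor and MZI free spectral range can be sanity-checked from the stated parameters. A small worked example follows; the group index value for standard single-mode fiber (~1.468) is an assumption on our part:

```
C = 2.998e8  # speed of light, m/s

# Quality factor from the resonance linewidth: Q = lambda / FWHM
lam = 1550e-9          # resonance wavelength, m
fwhm = 4e-12           # measured linewidth, 4 pm
q = lam / fwhm
print(f"Q ~ {q:.2e}")  # ~3.9e5, the same order as the quoted
                       # intrinsic Q of 3e5 (the intrinsic value
                       # also accounts for coupling)

# FSR of the unbalanced fiber MZI: FSR = c / (n_g * dL)
n_g = 1.468            # assumed group index of the fiber
dL = 52.5              # fiber length difference, m
fsr = C / (n_g * dL)
print(f"FSR ~ {fsr/1e6:.1f} MHz")  # ~3.9 MHz, close to the
                                   # measured 3.8 MHz
```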
Even though the standard method cannot resolve the 3.8 MHz free spectral range (FSR) of the MZI spectrum because of the 100 MHz linewidth of the laser, SOHO resolves it very well, as shown in Fig. 4a. For measuring the transmission spectrum of the microring resonator with the SOHO setup, the reference laser is tuned to a frequency that is 3 GHz below the microring's resonance, and therefore the sample laser frequency is swept from 3 GHz before to 3 GHz after the resonance frequency. A comparison of the spectral measurement results of a silicon microring resonator using both the standard method and SOHO can be seen in Fig. 4b. As sweeping heterodyne detection provides higher resolution and dynamic range, it shows a sharper and deeper resonance dip, whereas the dip is washed out by the low resolution of the standard method; hence, SOHO is proven to be a more accurate spectral measurement method. Note that for the measurements using the standard method, we maximized the fiber-chip coupling efficiency, increased the sample laser power to 10 mW, and amplified the detector signal by 30 dB to be able to read a good signal with this method. Only then could the comparison between SOHO and the standard method be made.

Figure 4: a) Spectrum of a 52.5 m unbalanced fiber MZI. The FSR is 3.8 MHz. b) Spectrum of a microring resonator measured with the standard method (red) and with SOHO (black). Because of the higher resolution and dynamic range of SOHO, it provides a better representation of the resonance spectrum than the standard method.

### Real-time operation of SOHO

We further advanced the SOHO approach to enable real-time spectral measurements. Toward this goal, we connected a function generator to the fine-tune port of the sample laser to modulate its central wavelength. In this way, by increasing/decreasing the applied voltage, the wavelength of the sample laser shifts toward shorter/longer wavelengths. Here, the amplitude of the applied voltage determines the final linewidth, and its frequency determines the rate of wavelength change. For this experiment, we used two types of modulation signals: triangular and sinusoidal. As the dwell time of the laser wavelength is higher near the minimum and maximum of the sinusoidal modulation signal, the spectrum has an M shape, whereas it becomes flatter for a triangular modulation signal, as shown in Fig. 5a. To correct this issue, and also to have a similar detector response time at different heterodyne frequencies on the ESA, the measured spectrum of the chip is divided by a reference spectrum acquired by replacing the chip with a VOA. Correcting such nonlinear changes in wavelength over time is not easy using the standard technique. Moreover, with triangular modulation, the standard method has a nonlinear response at high modulation frequencies. Therefore, SOHO is a much better approach for real-time measurements. In the experiments, sinusoidal and triangular waves with 200 kHz frequency and 50% symmetry were applied. Real-time spectral measurements with a 6 GHz bandwidth (\(\approx\)48 pm) were performed at a frame rate of around 4.15 frames/s (see the supplemental video), which is limited by the capture rate of the ESA. In order to shift the resonance dip of the microring resonator, it was illuminated by a 150 W lamp from above. Eight snapshot images of the real-time spectral measurement video during 4 seconds of the chip illumination are shown in Fig. 5b-i. As can be seen, by increasing the chip temperature, its resonance shifts toward lower heterodyne frequencies, corresponding to shorter wavelengths.

Figure 5: a) Comparison of the spectral shapes of the laser for the sinusoidal and triangular modulation. Here, both plots are normalized to the frequency response of the system, which is measured when the laser wavelength is swept slowly (10 mHz) with the data maximum-hold option of the ESA turned on. b-i) Snapshot images of the real-time video of the microring resonance shift during 4 seconds of illumination by a lamp from above. The video is provided in the Supplementary.
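The dwell-time correction described above amounts to a point-wise normalization of the chip spectrum by a reference trace. A minimal sketch, assuming both traces are captured in dBm on the same ESA frequency grid:

```
import numpy as np

def normalize_spectrum(chip_dbm, ref_dbm):
    """Divide the chip spectrum by the VOA reference spectrum to remove
    the M-shaped (or flat-top) envelope caused by the non-uniform dwell
    time of the modulated laser. Inputs are ESA traces in dBm on the
    same frequency grid; the result is a relative transmission."""
    chip_lin = 10.0 ** (np.asarray(chip_dbm) / 10.0)
    ref_lin = 10.0 ** (np.asarray(ref_dbm) / 10.0)
    return chip_lin / ref_lin
```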
## 3 Discussion

As the SOHO approach is based on the detection of the down-converted frequency on the ESA, the frequency accuracy and stability of the (tunable) sample laser are no longer critical, because they are compared against the frequency of a stable reference laser. For the same reason, the linewidth of the tunable laser does not have any effect on the frequency resolution of the measurement. Employing a laser with a wider linewidth is even beneficial, since it can allow for a faster full scan of the measurement frequency window. For precise measurement and tracking of resonance shifts in sensing applications, one can use a broad-linewidth laser as the sample laser and park its wavelength on the resonance. This configuration makes it possible to view the microring resonator's resonance spectrum without scanning the laser frequency. As a result, it can increase the measurement speed significantly, with the maximum defined by the maximum scan rate of the ESA. Therefore, using this technique, we can use a non-scanning stable laser as a reference in conjunction with a cost-effective free-running tunable laser without frequency stabilization. Another advantage, which stems from the innate amplification property of heterodyne detection, is that it can measure very weak signals with a good signal-to-noise ratio. This is particularly important for PIC devices with low fiber-chip coupling efficiency. Additionally, measurements of nonlinear high-Q microring resonators can benefit from this approach dramatically, as a few microwatts of optical power can excite unwanted effects such as the photorefractive effect [18].

## 4 Conclusion

In this work, we demonstrated a new high-performance PIC characterization method, the sweeping optical heterodyne method (SOHO), that outperforms the currently used measurement methods in terms of spectral resolution, speed, and sensitivity. In contrast to current PIC characterization methods, the SOHO setup is formed using cost-effective components and does not require any calibration. A spectral resolution of 3.8 MHz was achieved, which is limited by the current laser and can be improved to the kHz range by using a laser with a narrower linewidth. We measured the spectral response of high-Q microring resonators that were realized using our new hybrid waveguide technology. One promising application of this method is optical biosensing, where the input and output coupling requirements can be eased because of the high sensitivity of this measurement method, and eventually, disposable optical biosensors can be realized. The sweeping optical heterodyne detection method holds great promise for PIC characterization, and we believe that scientists and industrial companies working in the field of integrated photonics can significantly benefit from this method.

## Supporting Information

Supporting Information is available from the author.
## Acknowledgements

This work was financially supported by the NWO Open Technology Program (COMBO, 18757).